The Panic of 1819 was the first major financial crisis in the United States. The new nation faced a depression in the late 1780s (which led directly to the establishment of the dollar and, perhaps indirectly, to the calls for a Constitutional Convention), and another severe economic downturn in the late 1790s following the Panic of 1797. In those earlier crises, however, the primary cause of economic turmoil originated in the broader Atlantic economy. In contrast, the causes of the Panic of 1819 largely originated within the U.S. economy. The resulting crisis caused widespread foreclosures, bank failures, unemployment, and a slump in agriculture and manufacturing. It marked the end of the economic expansion that had followed the War of 1812. The monetary backdrop had shifted in 1816, when the Second Bank of the United States was founded in response to the spread of private bank notes across the United States and the inflation brought on by the debt from the war.
Explanations from economists range from "boom-bust cycles happen," to a failure of the banking system following the War of 1812 (the Bank of the United States had not been rechartered), combined with depression and overspeculation, to the notion that:
Government borrowed heavily to finance the War of 1812, which caused tremendous strain on the banks' reserves of specie and led inevitably to a suspension of specie payments in 1814 during the war, and again in 1819-1821 during the recession, violating the contractual rights of depositors. The suspension of the obligation to redeem greatly spurred the establishment of new banks and the expansion of bank note issues. This inflation of money encouraged unsustainable investments. It soon became clear the monetary situation was bad, and the Second Bank of the United States was forced to call a halt to its expansion and launch a painful process of contraction. There was a wave of bankruptcies, bank failures, and bank runs; prices dropped and wide-scale urban unemployment began. By 1819, land measures in the U.S. had also reached 3,500,000 acres, and many Americans did not have enough money to pay off their loans.
The Panic was also partially due to international events. European demand for American foodstuffs decreased because European agriculture, which had been decimated by the Napoleonic Wars, was recovering. War and revolution in the New World destroyed the supply line of precious metals from Mexico and Peru to Europe. Without this base of the international money supply, poor Europeans and their governments hoarded all the available specie. This caused American bankers and businessmen to start issuing false banknotes and to expand credit. American bankers, who had little experience with corporate charters, promissory notes, bills of exchange, or stocks and bonds, encouraged the speculative boom during the first years of the market revolution. By the end of 1819, the Bank would call in these loans [to disastrous effect].
Other notable historical panics in the U.S. are the Panic of 1873 and Panic of 1893, together sometimes known as the Long Depression.
Wikipedia describes the events leading up to the Panic of 1873 as follows:
It was precipitated by the bankruptcy of the Philadelphia banking firm Jay Cooke & Company on September 18, 1873, following the crash on May 9, 1873 of the Vienna Stock Exchange in Austria (the so-called Gründerkrach or “founders' crash”). . . . In September 1873, the American economy entered a crisis. This followed a period of post Civil War economic overexpansion that arose from the Northern railroad boom. It came at the end of a series of economic setbacks: the Black Friday panic of 1869, the Chicago fire of 1871, the outbreak of equine influenza in 1872, and the demonetization of silver in 1873.
The Black Friday panic was caused by the attempt of Jay Gould and Jim Fisk to corner the gold market in 1869. They were prevented from doing so by the decision of the administration of President Ulysses S. Grant to release government gold for sale. The collapse of gold premiums culminated in a day of panic when thousands of overleveraged speculators were ruined - Friday, September 24, 1869, popularly called Black Friday. There was great indignation against the perpetrators.
Coming at the height of an extremely dry period, the Chicago fire of October 8-9, 1871, caused a loss of nearly $200 million in property in a blaze that overran four square miles. . . .
The outbreak of equine influenza in 1872 had a pervasive effect on the economy. Called the "Great Epizoötic", it had an effect on every aspect of American transportation. The whole street railway industry ground to a halt. Locomotives came to a halt as coal or wood could not be delivered to power them. Even the United States Army Cavalry was reduced to fighting the Western tribes on foot; their adversaries likewise found their mounts too sick to do battle. The outbreak forced men to pull wagons by hand, while trains and ships full of cargo sat unloaded, tram cars stood idle and deliveries of basic community essentials were no longer being made.
The Coinage Act of 1873 changed the United States policy with respect to silver. Before the Act, the United States had backed its currency with both gold and silver, and it minted both types of coins. The Act moved the United States to the gold standard, which meant it would no longer buy silver at a statutory price or convert silver from the public into silver coins (and stopped minting silver dollars altogether.)
The Act had the immediate effect of depressing silver prices. This hurt Western mining interests, who labeled the Act "The Crime of '73." Its effect was offset somewhat by the introduction of a silver trade dollar for use in the Orient, and by the discovery of new silver deposits at Virginia City, Nevada, resulting in new investment in mining activity. But the coinage law also reduced the domestic money supply, which hurt farmers and anyone else who carried heavy debt loads. The resulting outcry raised serious questions about how long the new policy would last. This perception of instability in United States monetary policy caused investors to shy away from long-term obligations, particularly long-term bonds. The problem was compounded by the railroad boom, which was in its later stages at the time.
The conventional wisdom concerning the Panic of 1893, according to Wikipedia, is that it "was caused by railroad overbuilding and shaky railroad financing, which set off a series of bank failures. Compounding market overbuilding and a railroad bubble was a run on the gold supply and a policy of using both gold and silver metals as a peg for the US Dollar value."
Since then, as measured by the stock market, the most notable economic downturns were the Great Depression, the 1973 oil crisis from January 1973 to October 1974, the tech bust from March 2000 to October 2002, and, of course, the current financial crisis, which reached its pre-collapse stock market peak in October 2007.
Abroad, there are two recent economic crises frequently examined and compared to the U.S. financial crisis that we are in now.
One is the Japanese real estate bust. It began when a bubble in Japanese real estate collapsed in 1989, producing a profound economic slump and deflation. The stock markets, at least, saw a brief recovery from April 2003 to June 2007, but Japanese stocks then fell approximately 50% between June 2007 and December 2008, and the Japanese economy remains less than thriving now, in part due to the global impact of the American-led financial crisis. The Japanese experience is similar in many respects to the crisis we are in now.
The other is the Asian Financial Crisis of 1997, which takes a bit of explaining to make sense of:
The crisis started in Thailand with the financial collapse of the Thai baht caused by the decision of the Thai government to float the baht, cutting its peg to the USD, after exhaustive efforts to support it in the face of a severe financial overextension that was in part real estate driven. At the time, Thailand had acquired a burden of foreign debt that made the country effectively bankrupt even before the collapse of its currency. As the crisis spread, most of Southeast Asia and Japan saw slumping currencies, devalued stock markets and other asset prices, and a precipitous rise in private debt.
Though there has been general agreement on the existence of a crisis and its consequences, what is less clear is the causes of the crisis, as well as its scope and resolution. Indonesia, South Korea and Thailand were the countries most affected by the crisis. Hong Kong, Malaysia, Laos and the Philippines were also hurt by the slump. The People's Republic of China, India, Taiwan, Singapore, Brunei and Vietnam were less affected, although all suffered from a loss of demand and confidence throughout the region.
Foreign debt-to-GDP ratios rose from 100% to 167% in the four large ASEAN economies in 1993-96, then shot up beyond 180% during the worst of the crisis. In Korea, the ratios rose from 13% to 21%, and then as high as 40%, while the other Northern NICs (Newly Industrialized Countries) fared much better. Only in Thailand and Korea did debt service-to-exports ratios rise.
Although most of the governments of Asia had seemingly sound fiscal policies, the International Monetary Fund (IMF) stepped in to initiate a $40 billion program to stabilize the currencies of South Korea, Thailand, and Indonesia, economies particularly hard hit by the crisis. The efforts to stem a global economic crisis did little to stabilize the domestic situation in Indonesia, however. After 30 years in power, President Suharto was forced to step down in May 1998 in the wake of widespread rioting that followed sharp price increases caused by a drastic devaluation of the rupiah. The effects of the crisis lingered through 1998. In the Philippines growth dropped to virtually zero in 1998. Only Singapore and Taiwan proved relatively insulated from the shock, but both suffered serious hits in passing, the former more so due to its size and geographical location between Malaysia and Indonesia. By 1999, however, analysts saw signs that the economies of Asia were beginning to recover.
Deeper analysis of what was behind these panics, presented in the thumbnail, economic-history-101 sketches from Wikipedia above (to save me the trouble of rewriting well written prose there), will have to be saved for my upcoming paper on the financial crisis at the Law and Society Conference in Denver this May, but these tidbits of research are topical and useful, so they deserve an early release.
Before CFRB came on the air in Toronto on February 19, 1927, radio stations around the world relied on Direct Current (D.C.) for their power supplies as provided by batteries and/or motorized generators. The Alternating Current (A.C.) from power systems could not be used for heating filaments in the audio circuits because it produced an obnoxious hum - hence low-voltage batteries were commonly used. Further, to produce the high-voltage direct current and/or the low filament voltage required by transmitting tubes, electric A.C. motors were coupled mechanically to D.C. generators. This set-up served the purpose but it was expensive and unreliable.
Earlier, another Canadian, Reginald Fessenden, had demonstrated in 1900 that radio signals, until then limited to dots and dashes, could be "modulated" to carry voice, music and other sounds.
Then, in 1902, Fessenden went on to patent another invention of his - the "heterodyne principle" which, as it was perfected, made it easier to tune a radio by using only one instead of as many as four knobs.
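As a rough modern illustration of the principle (not Fessenden's own formulation), mixing an incoming signal at frequency $f_{RF}$ with a local oscillator at frequency $f_{LO}$ produces sum and difference frequencies:

$$\sin(2\pi f_{RF}t)\,\sin(2\pi f_{LO}t) = \tfrac{1}{2}\big[\cos\big(2\pi(f_{RF}-f_{LO})t\big) - \cos\big(2\pi(f_{RF}+f_{LO})t\big)\big]$$

Because adjusting only $f_{LO}$ shifts any station to a single fixed difference (intermediate) frequency that the rest of the receiver is built to amplify, one tuning knob can do the work that formerly required several.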
In 1925, Edward S. (Ted) Rogers of Toronto scored a breakthrough in radio receivers. His achievement resulted from his experiments in developing radio tubes employing an "indirectly-heated cathode". This proved to be a major advancement for radio - particularly so for receivers. Before Rogers' invention, alternating current could not be used to heat the filaments of tubes because of the severe hum it caused in the receiver. These early tubes used a filament (as in a lamp) for the "cathode" element (when it was activated, it became hot and emitted electrons). Rogers' invention shielded the filament with a metal sleeve so that the sleeve was heated by the filament inside it. That sleeve became the "cathode" element, and the filament was renamed a "heater" for the cathode. This development eliminated the hum when the heater was supplied with alternating current.
Edward (Ted) Rogers, Sr.
In addition, while rectifier tubes that could convert A.C. to D.C. had been invented, they were not used in broadcasting - neither for home receivers nor for transmitters. Ted Rogers applied their principles to develop a rectified A.C.-power supply to substitute for the high-voltage "B+" batteries in receivers. Thus, all batteries were eliminated. Neither the 6-volt "A" batteries nor the 45-volt "B" batteries were necessary.
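As an illustrative sketch only (not Rogers' actual circuit - the waveform and component values below are assumptions), the following shows the general idea of a rectified "B+" supply: alternating current is rectified and then smoothed so that only a small ripple remains where batteries once provided a perfectly steady voltage.

```python
import numpy as np

# Illustrative only: ideal full-wave rectification of a 60 Hz AC waveform,
# followed by a first-order RC smoothing filter standing in for the filter
# network in a "B+" supply. All values are assumptions for the sketch.
fs = 100_000                                # samples per second
t = np.arange(0, 0.1, 1 / fs)               # 100 ms of signal
v_ac = 45.0 * np.sin(2 * np.pi * 60 * t)    # 45 V peak AC (arbitrary choice)

v_rect = np.abs(v_ac)                       # ideal full-wave rectifier

rc = 0.05                                   # assumed R*C time constant, 50 ms
alpha = (1 / fs) / (rc + 1 / fs)            # discrete-time low-pass coefficient
v_out = np.empty_like(v_rect)
v_out[0] = v_rect[0]
for i in range(1, len(v_rect)):
    v_out[i] = v_out[i - 1] + alpha * (v_rect[i] - v_out[i - 1])

# Measure the remaining ripple after the filter has settled (second half).
ripple = np.ptp(v_out[len(v_out) // 2:])
print(f"Peak-to-peak ripple after smoothing: {ripple:.2f} V")
```

In a real receiver the rectifier was a vacuum tube and the filter an LC or RC network, but the effect is the same: plate voltage drawn from the light socket rather than from 45-volt "B" batteries.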
In August of 1925, the first Rogers Batteryless Radio Receiver was publicly unveiled at the Canadian National Exhibition in Toronto. For the ensuing two years, these Rogers radio receivers were the only batteryless radios manufactured in North America.
After developing this revolutionary receiving tube, Ted Rogers and his brother Elsworth started a new company to make receivers using these tubes and the rectified power supplies. Their father, Albert Rogers, had financed the operations with a holding company incorporated as Standard Radio Manufacturing Corporation Limited.
(In 1928, the name was changed to Rogers Majestic Corporation Limited). Standard controlled both Rogers Radio Tube Company and Rogers Batteryless Radio Company - the latter, the manufacturer of the receivers. Models were produced under the names Rogers Majestic and DeForest-Crossley.
In his book Broadcast Policy Development (1982), Frank Foster recalled that the invention of the batteryless radio by E.S. Rogers had an indirect effect on the development of policy for broadcasting. "His batteryless radio increased the popularity of radio broadcasting. With an increase in the number of radio listeners there was a corresponding increase in the demands for a distinctive Canadian system".
Ted Rogers Sr. went on to serve this demand.
The History of CFRB
There were 92,000 radio receiving sets in Canada, but Ted Rogers changed all of that when he produced a radio tube that converted ordinary alternating electric power into direct current that could be used by radio. In the summer of 1925, patrons of the CNE saw the new Rogers "batteryless" set in action.
On February 19th, to respond to the public's desire for more sources of radio entertainment and to demonstrate to the world that broadcasting stations could operate solely from alternating current power lines, Standard Radio Manufacturing Corporation Limited obtained an experimental license for "9RB". Using tubes that he had developed, Ted Rogers built the world's first radio broadcasting transmitter operating from power lines, without batteries and motorized D.C. converters.
At 9:00 p.m. on February 19, conductor Jack Arthur raised his baton and a symphony orchestra put music into the air. The cast included, among others, Frank Oldfield (bass baritone), Luigi Von Kunitz (director of Toronto Symphony Orchestra), the Gilson Trio, Aeolian Male Quartette and Ben Hokea's Hawaiian Quartette, and Freddie Tee (singer).
On February 19, 1927, 9RB became CFRB (Canada's First Rogers Batteryless), broadcasting on 1030 kHz with 1,000 watts. CFRB was operated by Rogers Radio Broadcasting Company. CFRB shared the 1030 frequency and airtime with CKGW and CJYC. CFRB's first studios were situated in the mansion built by the Massey family which had been converted to accommodate the Ryan Art Galleries (Jarvis Street, near Wellesley). The transmitter was sited north of Toronto in Aurora on Bloomingdale sideroad - later named "CFRB Sideroad". The station used two 98 foot high wooden towers - on the highest point of land in the Toronto area - on a hill in Aurora - 1,040 feet above sea level.
In March, CFRB's position on the dial became 960 kHz, and the air-time and the frequency were shared only with CKGW.
Lindbergh's dramatic flight in the spring sparked world imagination and showed the place of radio in the reporting of news events.
George Wade and his Cornhuskers joined CFRB shortly after it went on the air.
CFRB was only a month old when Denton Massey aired his "York Bible Class" on the station.
Alexander Chuhaldin, leader of the Imperial Grand Theatre in Moscow, fled Russia in 1924 and became CFRB's first musical director in 1927.
Ernest Bushnell assumed joint managership of CFRB.
Early in the year, Ted Rogers suggested CFRB have newscasts direct from the editorial offices of the Toronto Globe. Station engineer Jack Sharpe and vice-president Ellsworth Rogers designed a compact remote control amplifier for the Globe offices.
On April 25, using a new transmitter and still on 960, CFRB's power was increased from 1,000 watts to 5,000 watts.
Standard Radio Manufacturing Corp. Ltd. became Rogers Majestic Corp. Ltd.
Wes McKnight joined the station as Sports Director, doing sports interviews and live coverage of the King's Plate horse races. In 1934 he developed "Sportsviews" - aired before the Imperial Esso Hockey broadcast on Saturday nights - a program that continued for 40 years. He also did "Sports Commentary" at 6:40 p.m. following the news on week-nights, and was the voice of the Toronto Argonauts and of live Grey Cup coverage for over 30 years.
CFRB used its powerful transmitter to beam special broadcasts to an expedition in Hudson Strait, above the Arctic Circle.
Toronto Conservatory of Music graduate Ernest Bushnell started out as a singer, formed one of the first agencies to sell radio time, and joined CFRB this year, working with the station's first station manager Charles Shearer.
Kathleen Stokes was an organist at Loew's Theatre when she got her first radio audience with CFRB.
CFRB moved to new offices and studios in a two-story building at 37 Bloor Street West. At the time, the studios were ranked as the largest in Canada, with 2,000 square feet of floor space and a large auditorium to accommodate audiences for live shows. These studios were used in the production of a large number of sponsored Canadian programs which were fed to networks of stations in Montreal and various Ontario cities selected by advertising agencies.
The station's original management was on hand for the opening of the new facility: H.E. Mott (chief engineer), Sam Rogers (legal counsel), H.S. McDougall (president), Harry S. Moore (secretary-treasurer), Charles Shearer (studio director), Jack Sharpe (studio engineer), Wes McKnight (chief announcer) and Walter Kiehn (sales manager).
On April 29, CFRB became affiliated with CBS, the Columbia Broadcasting System, and carried as its first CBS feature The Majestic Theatre of the Air.
CKGW moved to another frequency, leaving CFRB the only station in Toronto on 960 kHz.
CKGW was re-positioned on 960, and again, CKGW and CFRB "shared".
CFRB was the first Canadian station to originate a program on an American network (CBS).
Rex Frost joined CFRB.
Ernest Bushnell left CFRB to manage the Canadian National Carbon Company's (Eveready Batteries) CKNC.
Announcer Wes McKnight took over the first regular sports program - on CFRB.
On November 2nd, CFRB's frequency was changed from 960 kHz to 690 kHz, an international clear Class I-A channel allocated to Canada. Power was increased to 10,000 watts.
The Canadian Pacific Railway, which had been authorized to use the "phantom" call-sign CPRY, leased certain periods of transmitter time for presentation of programs from studios it established in the Roof Garden of the CPR-owned Royal York Hotel.
Ted Rogers said, "It has been our constant aim to keep in the forefront of radio development."
Anne Jamieson (of Guelph) joined CFRB where she greatly impressed station producers.
To keep program quality and station policy up to technical standards, and to have CFRB turn a profit, Ted Rogers persuaded a young Famous Players Canadian Corp. (the biggest theatre holding organization in Canada) executive, Harry Sedgwick, to take over management of CFRB. Harry had never been in a radio studio and had never been interested in the business, except as a listener - and as a listener, he knew what he liked. Sedgwick would bring to an end the reliance on free talent and amateur musicians, something common in Canadian radio in the early years. The station would now air such programs as Sunday afternoon concerts by the Canadian National Railways Symphony, Imperial Oil Symphony Concert, Canadian General Electric Vagabonds, C.I.L.'s Opera House of the Air, the Rogers Majestic musical program under Luigi Romanelli's baton, General Motors Hockey broadcasts, and then, Wes McKnight's sports column - the first regular sports program on Canadian radio.
Andrew Allen joined CFRB from the University of Toronto. He would announce, write and produce.
Harry Sedgwick became Managing Director of Rogers Radio Broadcasting Ltd., beginning 15 years of leading CFRB by developing talent and programming that made the station one of the most respected in Canada. At the same time he helped the Canadian Association of Broadcasters through its toughest years, many of them as its President.
Two 300-foot steel towers for a 600-foot "flat-top" antenna were installed at the Aurora plant.
Kate Aitken joined the CFRB air-staff.
In the early morning hours of August 20, fire swept through CFRB's control room. The fire was disastrous and costly. Bell Telephone worked closely with station engineers to set up a temporary control panel and CFRB was able to open its morning show right on schedule.
1932-1935: CFRB 690 vs 500,000 watt WLW 700
CFRB (690 kHz) expressed concern with Washington (via the Canadian government) about the experimental operation of WLW Cincinnati at 700 kHz, using a power of 500,000 watts. Because of interference from WLW, CFRB increased its power. CFRB said its power increase succeeded in diminishing interference between it and WLW. However, CFRB feared that if the experimental operation of WLW with 500,000 watts were successful, the experimental restriction might be removed, with the result that interference would be caused to CFRB. The Canadian government suggested a transfer of WLW to a channel at least 50 kHz away from 690 kHz and from any other channel used in Ontario. This, of course, would not happen. The actual separation between WLW and CFRB was 400 miles.
Canada contacted the U.S. Government again. Canada said that with WLW operating at 500,000 watts, the service area of CFRB was reduced to little more than the City of Toronto itself, and 50 miles out, the signals from Toronto were completely obliterated. WLW was told it could operate with a power of 500,000 watts during the day and only 50,000 watts at night - or 500,000 watts at night, provided such a radiating system was employed that the effective signal delivered in the area between Niagara Falls, N.Y., Lockport, N.Y., and Lake Ontario, did not exceed the effective signal delivered in that area when operating with 50,000 watts. In 1932, the governments of the U.S. and Canada entered into an agreement by which 690 kHz was allocated exclusively for the use of a Canadian station located at Toronto, with the right reserved to Canada to increase the power thereof to 50,000 watts. With that in mind, it was very difficult for the U.S. to allow WLW to continue its unrestricted 500,000 watt night-time experiment.
Harry Sedgwick hired many new personalities in the 1930s, many of them women. Claire Wallace was one of the most outstanding. "Tea Time Topics", 15 minutes daily just before the news, became very popular.
Recognizing the need for a stronger news presence, CFRB hired Jim Hunter, who was writing for the Telegram newspaper; he read the news directly from the Telegram's editorial room. His newscasts became very popular, introduced by the sound of a coach horn playing "A-Hunting We Will Go". His newscasts continued until his death in 1949.
When the Moose River mine collapsed near Halifax, CFRB put newscaster Jim Hunter on the air every 20 minutes for 129 consecutive hours to cover the event. The collapse had trapped three men for ten days.
A short-wave transmitter, licensed as CFRX, was installed at the Aurora transmitting site, utilizing 1,000 watts on 6070 kHz. It carried CFRB's programs.
Rogers Radio Broadcasting Co. Ltd. was granted an experimental licence for a frequency modulation (FM) station to simulcast CFRB's programming as VE9AK. A 50-watt transmitter, built at the Rogers Radio Tube plant, operated in the original FM band - around 42 MHz. A vertical antenna was mounted on the roof of the CFRB studio building at 37 Bloor Street West - with a height of only 60 feet above ground level.
Dick McDougal left CFRB for the CKCL announce staff. Claire Wallace was CFRB topics commentator. Andrew Allan was an announcer. Bob Kesten left CKCL to free-lance but was heard mainly on CFRB.
Slogans: 3 million Canadians can hear us tonight! And they do listen! 12 years of continuous service built this listener appeal. / By popular vote THE most popular station in Canada's wealthiest market! CFRB, Toronto.
Edward Samuel (Ted) Rogers, founder of CFRB, died. His son, E.S. (Ted) Rogers junior, who was to become one of the most dynamic persons in Canadian broadcasting and telecommunications in the 20th century, had yet to reach his 6th birthday. This obituary appeared in Broadcasting Magazine (American): EDWARD S. (Ted) ROGERS, president of CFRB, Toronto, and prominent Canadian radio manufacturer, died May 6 after a severe internal hemorrhage. He was 38 years old. Well-known as a radio engineer and executive, Mr. Rogers started radio as a hobby when a youth and in 1921 was the first Canadian amateur to successfully broadcast a transatlantic signal. His original amateur station, 3BP, grew into the present CFRB, and his early receiver construction efforts into Canada's largest radio and tube plants. He is credited with having developed the first commercial light-socket radio receiver in 1925. His widow and a son survive.
J. E. Rogers was named president of Rogers Broadcasting Co., owners of CFRB and CKLW, succeeding his brother E. S. Rogers, who died May 6. Mr. Rogers also succeeded his brother as president of the Rogers-Majestic Corp., parent company of all the Rogers interests in radio and tube and set manufacturing.
CFRB signed a full term contract with British United Press.
On May 19 a violent wind and rain storm hit southern Ontario, blowing down one of the 360 foot towers of CFRB at Aurora, 20 miles north of Toronto. CFRB was off the air nearly five hours (5:40 to 10:20 p.m.) while engineers rigged up a temporary tower for the loose end of the T antenna. The west tower was a crumpled heap of steel on the ground. Damage was estimated at around $10,000.
CFRB placed an order for a 300 foot steel tower with Canadian Bridge Co. Ltd. of Walkerville, to replace the tower blown down in a wind storm on May 19. The T-type antenna was now supported only by one 300 foot tower and a small temporary tower.
CFRB was granted an experimental FM licence by the Radio Branch of the Department of Transport. 25 watts of power would be used at 43.4 MHz. CFRB manager Harry Sedgwick said no equipment had been purchased yet and an opening date was not known at this point. It was expected the FM transmitter would be located at the CFRB studio building.
Ad slogan: Canada's Foremost Radio Buy (CFRB).
Al Savage did the morning show. Fred Haywood joined the CFRB announcing staff. He had worked with stations out west, CKSO Sudbury and CHML in Hamilton. Bob Kesten was a free-lance announcer for both CFRB and CKCL.
As a result of the terms of the Havana Treaty of 1937, the Canadian Broadcasting Corporation (CBC), the regulator of Canadian broadcasting, instructed CFRB to move from 690 to another Canadian Class I-A channel - 860 kHz. This move was made to accommodate CBF - the CBC's new 50-kilowatt station in Montreal which had been assigned to 690 kHz. Thus, on March 29, CFRB's transmitter plant at Aurora was re-adjusted to operate on 860 kHz.
Through negotiations with W.C. Thornton Cran, the manufacturing assets of Rogers Majestic Corporation Limited (Rogers Radio Tubes Ltd. and Rogers Batteryless Radio Co. Ltd.) were sold to Small Electric Motors Limited. Rogers Majestic Corporation Limited changed its name to Standard Radio Limited, and retained control through Rogers Radio Broadcasting Company Limited, which held the licence for CFRB.
John Collingwood Reade was now heard on CFRB. Claire Wallace left CFRB for the CBC. Earl Dunn joined the station.
A special broadcast marked the 2,000th program of Rex Frost's Farm Broadcast on March 13. The program had been running continuously since 1933 as a sponsored market report and general farm discussion feature.
The Ontario Government started a tourist program on CBS, April 20. The half-hour variety show originated at Toronto's Hart House Theatre. It was carried in Canada only on CFRB, Toronto's CBS outlet. While the commercials on Ontario tourist attractions were heard on U.S. stations, listeners to CFRB were told how best they can receive American visitors to Canada and what Americans expect of their Canadian hosts.
VE9AK (FM) left the air until the end of WW II.
Harry (Red) Foster worked on-air at CFRB. Lloyd Moore, Bob Morrison, Rai Purdy and Al Savage also worked at CFRB.
Wes McKnight's Bee Hive Sports Views entered its 10th year of uninterrupted broadcasting. The show aired at 6:40 p.m. daily and on Saturdays, was fed to a network of 39 stations across the country.
Todd Russell was at CFRB. Announcer Fred Heywood reported for training and was replaced at the station by Loy Owens. Gordon Fraser left CFRB's engineering department for the National Film Board in Ottawa. Wib Perry joined CFRB in the fall.
Harry Sedgwick was president of CFRB and chairman of the board of the Canadian Association of Broadcasters. He took over CFRB in 1931. Roy Locksley was program director.
Jack Dennett
John Collingwood Reade ended four years of commentary on CFRB on October 14. Jack Dennett joined CFRB from CKRC Winnipeg (and CFAC Calgary before that). He took over the 11:00 p.m. newscast from Reade and would also do another nightly 10 minute newscast. Eventually Dennett would take over the 8:00 a.m. and 6:30 p.m. newscasts, replacing the popular Jim Hunter. He wrote, edited and read these newscasts for 26 years. As a member of the Hot Stove League, he presided over that group prior to the Saturday Night Hockey radio broadcast which went coast to coast.
An advertising slogan of the day: In the Heart of Ontario - CFRB Toronto - The listener's choice.
Wally Armour was appointed CFRB's musical director, replacing Roy Locksley.
Armour had been in radio since 1926 and succeeded Roy Locksley, now with "The Navy Show".
Some of the on-air names: Rex Frost, Jim Hunter, Cy Strange (joined this year).
Standard Radio Ltd., the holding company for CFRB and Windsor's CKLW, showed a profit according to its 1941-42 financial statement.
Gordon Sinclair joined CFRB on June 6. While working at the Toronto Star, Sinc had actually had his first taste of radio in 1942 with CFRB when he did a number of feature reports. Wes McKnight was program director. Staff announcer Maurice "Bod" Boddington left CFRB to free-lance. He had been with CFRB for 13 years and worked at CKGW before that. Wib Perry left for CJBC.
Eddie Luther joined CFRB as junior announcer.
John Collingwood Reade returned to CFRB following a period of political activities in Ottawa. He would now do a news commentary on the station, three days a week at 10 p.m. Eddie Luther joined the CFRB announce staff. He was a retired air force pilot and for the past two years had been a flying instructor. He was the brother of well known New York freelance announcer, Paul Luther.
Small Electric Motors Limited, with its former Rogers' manufacturing units, were sold - to become the Canadian nucleus of the Dutch-owned Philips Electronics Ltd.
Bill Deegan joined the CFRB announcing staff from Sudbury's CKSO. Keith Dancy was hired as an announcer at CFRB.
From an ad - "Some of the people who have helped put CFRB first among Ontario listeners" - John Collingwood Reade with fresh material from European battle fronts...Roly Young with inside dope on stage and screen... Gordon Sinclair, globe-trotting reporter, writer and newscaster...Jim Hunter and Jack Dennett also reporting the news...Rex Frost, with his farm broadcast and news analysis...Kate Aitken with informal talks to women...Wes McKnight quizzing the hockey stars and giving his "Sportviews"...Foster Hewitt, with breathless descriptions of N.H.L. games...Mide Ellis discussing the "Woman's World"...Barry Wood, genial host and emcee with his "Top of the Morning"...Ann Adam of the "Homecrafters", with variations in recipes and menus.
Wishart Campbell was named CFRB's new musical director. Jack Sharpe was chief studio engineer. E.L. Moore was manager. John S. Hall hosted a gardening show on CFRB Thursdays at 7:45 p.m. It was noted that former CFRB announcer Loy Owens was now with the public relations branch of the Canadian Army.
Slogan: CFRB - Where most of the favourites are!
George Retzlaff joined CFRB from Winnipeg's CKRC where he was chief operator. Librarian Allan Acres left CFRB for CKEY.
Four Canadian stations had their applications for 50,000 watts of power turned down: CFRB, CKAC Montreal, CFCN Calgary and CKY Winnipeg.
CFRB celebrated its 19th anniversary on February 19. Three of the original staff members - engineer Jack Sharpe, program director Wes McKnight and chief operator Bill Baker - were still with the station in 1946.
Velma Rogers, the widow of Ted Rogers Sr., sold her shares in Standard Radio Limited. Argus Corp. Ltd. which had been formed in 1945 by E.P. Taylor, J.A "Bud" McDougald and his brother-in-law W.E. Phillips, acquired control of CFRB through the purchase of shares of Standard Radio Ltd.
In March, the CBC Board of Governors declared that, as a result of the Havana Treaty of 1937, all Class 1-A frequencies would be reserved for the CBC's use. They then made a formal application for 3 Class I-A frequencies that were being occupied by private stations. One of those channels was 860 kHz in Toronto - used by CFRB for several years. The stations were notified on April 18th that the CBC would be requiring the use of the channels by June, 1947. The CBC's second Toronto station CJBC (ex-CBY) was operating as a Class II channel with 5,000 watts on 1010 kHz, directional. The CBC's plan was to move CJBC to 860 kHz with 50,000 watts, non-directional, jointly using CBL's antenna site and tower at Hornby. A lengthy controversy over CFRB's forced eviction from 860 kHz developed and continued into 1947. The other private stations being booted from clear channels were CFCN Calgary and CKY in Winnipeg.
Mornings on CFRB would never be the same! On November 1, Wally Crouter started a 50 year stint on the station. He always had something new going on, special guests, controversial topics - all with the listener in mind. Typical of Wally - when Hurricane Hazel hit Southern Ontario in the 1950's, killing 81 people and leaving thousands homeless, Wally somehow made it to the studio by 6:00 a.m., and, yes, the lights were on, as was the transmitter. He was on the air until noon, with non-stop messages about school and office closings, and helping to organize volunteers to help those in trouble. He would retire 50 years later to the day, in 1996, with a big retirement party attended by many of his fans. Before coming to CFRB, Wally had been with CHEX in his hometown of Peterborough. Crouter had done some work at CFRB once before - in 1940, he was a vocalist at the station.
On his discharge from the RCAF, Jack Dawson, who had come from CJCA Edmonton in 1939, rejoined the announcing staff.
Joan Baird was a women's commentator at CFRB. The Toronto Better Business Bureau's A.R. Haskell had been hosting a program (Facts about Rackets) on CFRB for 11 years now.
Slogan: First for Information! First for Entertainment! First for Inspiration!
Before the Special Committee on Radio Broadcasting, CFRB's Harry Sedgwick compared the programming of his station with that of CJBC, the station that would take over CFRB's 860 kHz frequency. He used the week of June 30 to July 6 for his comparison. In religious broadcasts, CFRB offered 3 hours and 25 minutes, 2 church services, organ music, choir singing, hymns, daily "Victorious Living". This compared to CJBC's one program of religious music. Sustaining public service broadcasts on 'RB (not including spot announcements) amounted to 5 hours, Columbia symphony orchestra, outdoor programs, Report from Parliament Hill, etc. CJBC offered 2 hours and 15 minutes, including Operation Crossroads and High School News. When it came to sustaining news and news commentaries, CFRB had six hours and CJBC, four hours and 48 minutes. U.S. network commercials accounted for 11 hours and 45 minutes on CFRB and 9 hours and 30 minutes for CJBC. Canadian ads used 31 hours and 20 minutes on 'RB and 30 minutes on 'BC. CJBC used 34 hours of American sustaining programs to fill its schedules while CFRB used 27 hours and 40 minutes. CFRB used 47 hours and 50 minutes of recorded programs against CJBC's 46 hours and 5 minutes. CFRB was on the air 127 hours and 5 minutes of the week and CJBC was on 117 hours and 36 minutes. When it came to ratings (Elliott Haynes for June, 1946), Harry Sedgwick said CFRB had 19.9% of all radio sets tuned in in the Toronto area for the 9 a.m. to 6 p.m. time period. CJBC had 7.7%. In the evenings, he said 27.9% were tuned to CFRB while CJBC rated 7.3%. A test of signal strength conducted by RCA on June 26, at ten scattered points in the city, showed the strength of CJBC was over 2 1/2 times that of CFRB. Sedgwick said CJBC's lack of audience in the Toronto area was not due to any lack of signal, but could only be due to their program policies.
Freelancer Michael FitzGerald joined the CFRB announcing team in August, replacing Cy Strange. He had worked in the past at CKTB St. Catharines. Strange left CFRB to work in the British film business.
CFRB marked its 20th anniversary in February. A special broadcast to mark the event featured station staffers: Jack Sharpe (chief engineer since day one), Bill Baker (chief operator - he was with the Rogers factory that built the Rogers Battery-less station before he joined CFRB itself), Wishart Campbell (voice known to CFRB listeners for many years, joined the station as musical director on release from the RCAF), Wes McKnight (program director - joined the station in 1928. His "Sportsviews" were still heard to this day and were the first daily sportscasts in the country), Lloyd Moore (station manager - joined 'RB in the early 1930's... started his career at Hamilton's CKOC), and Harry Sedgwick (president of CFRB and chairman of the board of the Canadian Association of Broadcasters). The broadcast also featured William S. Paley (chairman of Columbia - CBS), Gordon Sinclair, Greg Clark, John Collingwood Reade, Jim Hunter (just recently did his ten thousandth newscast), Rex Frost, Claire Wallace, Andrew Allen (CBC), Joan Baird, Roy Ward Dickson, Maurice Boddington, Grace Matthews and Todd Russell. The special program also paid tribute to those who had contributed to the CFRB program schedule over the years: Anne Jamieson, Jack O'Donnell, Bill Kemp, Denton Massey, John Holden, Margueretta Nuttal, Reginald Stewart, the late Luigi Romanelli, Ernest Seitz, Eddie Bowers, Al and Bob Harvey, Charlie Shearer (former CFRB manager), Alexander Chahallen, A.S. Rogers, Edgar Stone and Foster Hewitt.
As noted above, in January, Jim Hunter delivered the news for the 10,000th time. For 16 years, 11 months annually, 6 days a week, twice a day, he had presented the news over CFRB. Originally Hunter was the Toronto Evening Telegram's talking reporter. He started in radio in 1929 at Chicago's WBBM. He joined CHML Hamilton in 1930, moved to CKCR Kitchener in 1931 and then joined CFRB a short time after that.
Faced with the loss of 860 kHz, CFRB searched for property where, using 1010 kHz, it could continue to principally serve Toronto. A site was selected near Clarkson, southwest of Toronto. Knowing it didn't have a chance of winning against the CBC for 860, CFRB formally applied to change frequency to 1010 kHz. The CBC approved the application that would see CFRB on 1010 kHz with a power of 10,000 watts, using four 250 foot towers. Subsequently, CFRB's engineering consultant was able to develop adjustments to his antenna design whereby protection was maintained to other channels, with the station using higher power. CFRB then applied for an increase in power to 50,000 watts. The daytime radiation pattern would use two of the four towers. The night pattern would use all four towers. Before the year came to an end, the CBC Board of Governors agreed that in moving from a Class I-A frequency to a Class II frequency and to a new transmitter site, CFRB would be allowed to increase its power to 50,000 watts from 10,000 watts "to maintain the present coverage".
CJBC 1010 was expanding into the commercial field, handling national spot business as well - with the assumption it would continue as key station for the CBC's Dominion network. It seemed CJBC planned to walk in on CFRB's market before taking over that station's 860 kHz frequency, expected in June.
Rogers Radio Broadcasting Co. Ltd. received an FM licence for CFRB at Toronto - 99.9 MHz with 250 watts of power.
Ken Marsden was CFRB's publicity director. Jaff Ford was a CFRB announcer. Aubrey Wice joined CFRB as record librarian. He had worked in the past for the CBC and CKEY. Free-lancer Vic Growe was now narrating a Hollywood news-type presentation over CFRB. Waldo Holden joined CFRB as sales manager. He had held the same position at CKEY. Loy Owens returned to the CFRB announce staff. Helen Quinn was CFRB's new women's commentator. Wally Crouter joined CFRB from CHEX, replacing Michael Fitzgerald who resigned to do freelance work.
Slogan: Co-operation in every project of community interest. Featured personalities on all topical subjects. Regular newscasts - local, Canadian and world. Balanced programming that holds a loyal audience.
CFRB-FM signed on, replacing experimental station VE9AK.
In December, CFRB received approval to increase power to 50,000 watts on 1010 kHz (directional). This was the CBC's first approval of 50,000 watts for a private station. The CBC said: "The object of this recommendation is to maintain the extent of the coverage of station CFRB as near as possible to its present coverage at a power of 10 kW on the frequency of 860 kc."
The frequency exchange for CFRB and CJBC had now been planned for July 1st, but both stations were experiencing some installation problems. It was mutually agreed to delay the cut-over until September 1st. Thus, at midnight August 31st, the Aurora transmitter was shut down, and on the new day, CFRB began broadcasting on 1010 from Clarkson. Similarly, at midnight, CJBC shut down its Port Credit plant, restarting on September 1st from Hornby at 50 kW. CFRB was the first private station in the British Commonwealth with power of this magnitude and to employ two directional patterns (DA-2). The CFRX shortwave transmitter plant was also moved to the Clarkson site.
Bill Valentine joined CFRB in February from Winnipeg's CKRC. At ‘RB, he became the first rep to join sales manager Waldo Holden. Gord Atkinson began hosting "Club Crosby" in September. The program featured Bing Crosby records.
CFRB was considering discontinuing the CFRX shortwave service. Thanks to hundreds of letters, the service would continue. The change of CFRB's frequency would possibly result in a temporary silencing of CFRX though. Early in the year, work was underway at the new 1010 kHz / 50 kW transmitter site at Clarkson, 20 miles west of Toronto, just off the Lakeshore Highway. The new RCA BTA-50-F transmitter was expected to be operational on 1010 kHz this summer.
Harry Sedgwick, founder of the Canadian Association of Broadcasters in its present form (1948) and the chairman of the board since 1934, stepped down in favour of CFRN's Dick Rice. Rice had been honorary president. Sedgwick would remain as a CAB director. CKAC's Phil Lalonde became honorary president. The post of general manager, made vacant by the resignation of Harry Dawson, went to Jim Allard, who started in radio in 1935 on CJCA, where he worked under Percy Gayner, Tiny Elphicke and Gordon Henry.
Slogans: CFRB - The Listener's Choice! / Ontario's Favourite Radio Station.
CFRB requested an extension, to August 31, 1948, of the licence for its 860 kHz operation. The switch for CFRB to 50 kW on 1010 kHz and CJBC to 860 kHz with 50 kW had been set for July 1. A new switch date now had to be set.
Applications to bring television to Canada, starting with Toronto and Montreal were turned down by the CBC Board of Governors. Applicants included CFRB, CKEY, Al Leary and Famous Players Canadian Corp. at Toronto; and CFCF and CKAC for Montreal. The applications were shelved because the CBC had no money to enter the television game.
As CFRB prepared to move to 1010 kHz and increase power to 50,000 watts, the station held a series of activities at the Canadian National Exhibition (CNE) from August 27 to September 11. The station used a portable shortwave transmitter to relay the programs from the Ex to the studios. September 3 was "Radio Day" at the CNE. An open house was also held at the Clarkson transmitter site.
Opening ceremonies for 1010 included an "Open House" special that aired from 10:10 a.m. to 10:10 p.m. on September 1.
CFRB and CJBC officially went to 50,000 watts and switched dial positions on September 1. CJBC moved from 1010 to 860 and CFRB switched from 860 to 1010. Opening ceremonies were held for CJBC at Hornby on September 1 and for CFRB at Clarkson, on the 2nd. CFRB reps attended the CJBC function and CBC officials were on hand for the CFRB festivities.
Running up to and after the move to 1010 and 50 kW, CFRB went into heavy promotion mode with ad slogans such as: Follow Your Favourites To 1010 / A New High In Good Listening. / CFRB Becomes The Most Powerful Independently Owned Radio Station In The British Commonwealth!
The Clarkson transmitter site: It cost CFRB about $500,000.00 to make the move to Clarkson, change to 1010 kHz and increase power to 50 kW. The site used 93 acres of land and was located on the Lower Middle Road (also known as the Lakeshore Highway), 18 miles from downtown Toronto. There were four 250 foot uniform cross-section towers. The transmitter building was modern, yellow brick, and floodlit at night. There would soon be a brilliant neon sign and landscape gardening. The main floor contained reception, lobby and offices, transmitter room, tube storage department, tuning/phasing room, and quarters for the staff. All equipment at Clarkson was RCA. The site also included a 10,000 watt standby transmitter and emergency gas-driven power unit. 36 miles of copper wire was used for the ground system - 120 lengths laid out radially from each tower. CFRB and RCA engineers worked around the clock to install and adjust the equipment in the record time available of 40 days and nights. The Clarkson transmitter received audio from the Bloor Street studios by use of telephone lines. Pat Bayly was the consulting engineer. Bill Carter was the architect. Technical supervision came from CFRB's Ellsworth Rogers, and assistant Clive Eastwood. For the record, the old Aurora transmitter site consisted of a small wooden hut to house the transmitter, and a single strand of copper wire between wooden poles which acted as the antenna.
Some of the management team: Harry Sedgwick (president), Ellsworth Rogers (vice president), Lloyd Moore (station manager), Wes McKnight (program director, sports commentator and news), Wishart Campbell (musical director), Jack Sharpe (chief engineer), and Bill Baker (studio engineer). On-air names included: Katharine Bard, Wes McKnight, Beth Corrigan, Eddie Luther, Jack Dawson, Jim Hunter, Wishart Campbell, Grace Matthews, Kate Aitken, Rex Frost, Loy Owens, Wally Crouter (Top O' The Morning), Jack Smith, Frank Grant, Gordon Cooke, Dorothy Shay, Bob Crosby, Jack Dennett, Jane Froman, Gordon Sinclair, Argos football. It should be noted that Clive Eastwood became chief engineer on his wedding day - September 15. He had been an engineer at CFRB since 1945.
CFRB became the first station in Canada to use movie promotion (about itself) in commercial theatres.
When Joseph Atkinson (Toronto Star) died in 1948 it was said several thousand shares of Standard Radio Ltd. (CFRB) were found in his safe. He closed his CFCA-AM in 1933 (opened in 1922) because he was apparently convinced radio was a passing fad. His shares in Standard were acquired by a holding company headed by E.P. Taylor, whose Argus Corp. had quietly held effective control of SRL ever since.
Angus Wilfred Donald (Don) McEachern joined CFRB's engineering department and remained with the station into the early 1990's.
Royce Frith began hosting "Home on the Range" on CFRB. Ken Marsden was promotion manager. Jaff Ford hosted Sketches on Music. Fred Cripps, with CKEY since 1945, left the station to do freelance work. One of his new jobs involved the noon newscast on CFRB. Roy Ward Dickson hosted Fun Parade. Frank Willis was heard on CFRB. Newscaster Jim Hunter died suddenly on June 6. John Collingwood Reade succeeded Jim Hunter on the 8 a.m. and 6:30 p.m. newscasts, as of June 27. Gordon Cook had been doing the newscasts since Hunter's death. Ray Harrison was an operator. Gord Atkinson hosted "Club Crosby".
The applications for new television stations (CKEY, CFRB, Famous Players and Al Leary for Toronto; and CFCF and CKAC for Montreal) were again deferred by the CBC Board of Governors.
E.L. Moore was manager and Waldo Holden was commercial manager.
Ken Marsden moved from publicity to CFRB sales as of May 1. Allan McFee and Cy Strange were on-air at CFRB. John Bradshaw was hosting farm programs. More on-air names: Jack Dennett, Loy Owens, Kate Aitken, Fred Cripps, Wes McKnight, Wally Crouter, John Collingwood Reade, Barry Wood, Gordon Sinclair and Gord Cook.
When CFRB started in 1927, it had three newscasts daily, coming from the old Toronto Globe office. All were unsponsored. Now, the station had 18 newscasts daily from its own newsroom - one every hour, and each one was available for sponsorship.
Slogan: The Number One Station in the Number One Market...CFRB - 1010 on your dial.
Slogan: As ever, the No. 1 buy in Canada's No. 1 market.
Jack Dennett and John Collingwood Reade swapped time slots. Dennett's Salada news which had been heard at 11 p.m. would now be aired at 8 a.m. and 6:30 p.m. Reade, who had been doing the double shot for Mutual Benefit, returned to the 11 p.m. slot that he popularized during the war (for Eaton's). His newscast would now be sponsored by Shell Oil.
Wes McKnight handled Toronto Argonauts football (CFL) for CFRB. The games were also heard on CJKL Kirkland Lake, CKGB Timmins, CKSO Sudbury and CFCH North Bay. Barry Wood hosted CFRB's Midnight Merry-Go-Round program.
From 1927 to 1929, Rex Frost directed and announced the "Castrol Hour" on CFRB and CKGW. Since 1933, he had been doing CFRB's daily farm broadcast. For many years he did a nightly commentary on 'RB - The Armchair Club.
CFRB signed a long term lease on the 37 Bloor Street West premises. As a result, the station was renovating its space there, expanding into the entire second floor.
CFRB marked 25 years on the air in February. It was one of the few Canadian stations to survive the quarter century under the same call letters and management. It was one of only two 50,000 watt independent stations in the country (CKLW Windsor being the other). Over the years, CFRB had originated coast-to-coast hockey broadcasts and employed the first pack transmitter that an announcer could strap on and broadcast a play-by-play account of a golf tournament. CFRB was there to cover the Moose River mine disaster (see 1936, above) and the Dionne Quintuplets (fed to CBS) - the station moved half a ton of remote equipment to Callander, Ontario for the three-times-a-week broadcasts. CFRB was also serious about public service. In some 125,000 hours of broadcasting, the station had given about 12,500 hours of free time to various causes...Boy Scouts, Red Cross and the Fresh Air Fund as examples. Going way back, pioneer artists on CFRB included: Myrtle Hare, Joy Fawcett and Alice Blue, "The Hollingworth Girl." Charles Bodley, whose CFRB symphony orchestra was once rated among the world's best, conducted four orchestras at one period on CFRB. CFRB was once a key station in the Canadian Radio Corporation's network of 26 stations.
Farm director John Bradshaw hosted his own daily show between 4:45 and 6:30 a.m. He was also garden editor for the Toronto Star and now had a new program from S.W. Caldwell Ltd. that aired on 12 Ontario stations.
Claire Wallace returned to CFRB from the CBC.
Barry Wood left the Midnight Merry-Go-Round program. Jerry Wiggins joined the announce staff on October 1. He took over the Midnight Merry-go-Round program. He had been with CKFH.
CFRB began breaking its major 11 p.m. newscast into three segments. The 15 minute package would start with Gordon Cook doing the news, with an accent on local items. John Collingwood Reade would follow with his news analysis. Cook would then return to wrap up the package with a short newscast.
At a Parliamentary Committee, an MP complained that radio stations like CFRB and CKLW were merely American stations on Canadian soil. CFRB legal counsel Joseph Sedgwick, replied that U.S. programs accounted for 18% of CFRB's broadcasting time and 5% of its revenue.
Hurricane Hazel hit southern Ontario between October 15 and 17. On the first night, after the 11 p.m. news, Jerry Wiggins stayed on the air to 3 a.m., well past his sign-off time, to keep listeners fully informed. After CFRB signed off, Wiggins and operator Ray Harrison went to the streets with a portable tape recorder. CFRB was back on the air at 5:30 Saturday morning, 15 minutes early, with John Bradshaw's Breakfast on the Farm program, keeping listeners up to date with news and public service messages. Wally Crouter followed at 6:30 with the assistance of Jack Dennett, Ed Luther, Mike Fitzgerald and Loy Owens. Even though it was a Saturday, the office staff started to show up for emergency duty. Vice president Elsworth Rogers set up a shortwave receiving set on the station's roof. This allowed CFRB to become a nerve centre for a network of amateur radio operators. Late in the day, CFRB fed storm reports to CJAD, CFRA, CFAC, CJOB and CBS.
The Trull Sunday Hour was in its 21st year on CFRB (sponsored by the Trull Funeral Home).
Slogan: CFRB is the radio station that covers Canada's most profitable market, Ontario, completely.
Bob Aiken joined CFRB as retail sales manager. He had been assistant manager at CJIB Vernon.
Ownership of Rogers Radio Broadcasting Co. Ltd.: Standard Radio Ltd. 98.9%, E. W. Bickle 0.1%, A. B. Matthews 0.1%, M. W. McCutcheon 0.1%, J. A. McDougald 0.1%, W. E. Phillips 0.1%, J. H. Ratcliffe 0.1%, J. E. Rogers 0.1%, S. Rogers 0.1%, H. Sedgwick 0.1%, E. L. Moore 0.1% and V. McGlennon 0.1%.
Executive and staff personnel included: President, General Manager - Harry Sedgwick; Station Manager - E. Lloyd Moore; Program, News and Sports Director - Wes McKnight; Music Director - Wishart Campbell; Women's Director - Kate Aitken; Gardening Affairs - John Bradshaw; Production Manager - Earl Dunne; Chief Engineer - Clive Eastwood; Chief Operator - Bill Baker. Announcers - Jack Dawson, Wally Crouter, Keith Rich, Bill Deegan, Eddie Luther. Newscasters - Gordon Sinclair, Jack Dennett. Program Host - Walter Kanitz.
Ray Sonin & Noel Coward
Betty Kennedy was appointed Director of Public Affairs; Bob Hesketh joined the news staff as an alternate to Gordon Sinclair. Jack Dawson became chief announcer and program director. Mary Falconer was traffic manager. CFRB had two ethnic shows - Canadians All on Saturday nights, and the Sunday-to-Friday Continental Concert with Walter Kanitz. Bob Aiken left CFRB's sales department for CJMS in Montreal. Waldo Holden was sales manager. Jack R. Kennedy and William V. Stoeckel were appointed CFRB sales reps. Ray Sonin, publisher and editor of Music World, was now writing and producing his own 45 minute program - Calling All Britons - over CFRB, Saturdays from 5:05 to 5:50 p.m. Sonin had broadcasting experience with the BBC in the past. His CFRB broadcast debuted on September 13.
According to Elliott-Haynes, CFRB had a total of 933,803 adult listeners every day.
Under the new Broadcasting Act (which created the Board of Broadcast Governors), a broadcasting station had to be 75% Canadian owned, but the restrictions would not apply to existing stations. A Conservative senator pointed out that CFRB was owned by Standard Radio Ltd., a public limited liability company whose shares were traded on the Toronto Stock Exchange, and that the company had no control over the ownership of its stock, which could be purchased by persons of any country.
With the Board of Broadcast Governors replacing the CBC as regulator, many parties were awaiting the lifting of the TV ban...in Toronto one channel was available and the following parties had plans to file applications: Joel Aldred of Fifeshire Productions; John Bassett (publisher of the Toronto Telegram and head of Baton Broadcast Inc.); Spence Caldwell; Jack Kent Cooke (CKEY); Famous Players Canadian Corp.; Foster Hewitt (CKFH) and Standard Radio Ltd. (CFRB).
CFRB was one of seven unsuccessful applicants to the Board of Broadcast Governors for a license for the first Toronto private television station.
Harry Sedgwick, who since 1933 had been President and Managing Director of Rogers Radio Broadcasting Company Limited and General Manager of CFRB, died at the age of 64.
He was succeeded temporarily by Elsworth Rogers, the brother of the late Ted Rogers Sr., who had founded CFRB.
Wes McKnight was appointed Station Manager, succeeding Lloyd Moore who had retired. Bill Stephenson took over as Sports Director; Jack Dawson was appointed Chief Announcer and Program Director, Torben Wittrup joined the CFRB news staff.
On May 15, 1959, W. C. Thornton (Winks) Cran was appointed President of Rogers Radio Broadcasting Company Limited, which became Standard Radio Limited. The Chairman of the Board was J.A. (Bud) McDougald.
Additional on-air names: John Bradshaw and Jack Dennett. Betty Kennedy (public affairs editor), Bill Stephenson, Bob Hesketh, and Torben Wittrup joined the CFRB team.
CFRB applied for and obtained permission to change CFRB-FM's transmitter site from 37 Bloor Street West to 35 King Street West, with a new antenna atop the Canadian Bank of Commerce Building, then the tallest building in the British Empire. The high-gain antenna and the powerful 20-kW transmitter provided an effective radiated power (ERP) of 200 kW, making it at that time the most powerful FM station in Canada.
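The 200 kW figure is consistent with the usual ERP arithmetic: transmitter power multiplied by the antenna system's net gain. A minimal sketch follows; the implied net gain of roughly ten times (about 10 dB) is an inference from the figures above, not a documented CFRB specification.

```python
# Minimal sketch of the ERP arithmetic implied above. The 20 kW and 200 kW figures come from
# the text; the ~10 dB net gain (antenna gain minus feedline loss) is inferred, not documented.
def effective_radiated_power_kw(tx_power_kw: float, antenna_gain_db: float, line_loss_db: float = 0.0) -> float:
    """ERP in kW from transmitter power and net gain/loss expressed in dB."""
    net_db = antenna_gain_db - line_loss_db
    return tx_power_kw * 10 ** (net_db / 10)

print(effective_radiated_power_kw(20, 10))  # 200.0 kW, matching the figure quoted above
```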
Gil Murray joined the staff as an on-air news reporter and editor. During the following eight years he covered Queen's Park on a daily basis, and in 1965 served as President of the Ontario Press Gallery.
Some of the staff at this time: Bill Baker, Betty Kennedy, Wally Crouter, Jack Dennett, Rex Frost, Gordon Sinclair, Ken Marsden, Wishart Campbell, Brian McFarlane, Jack Dawson, Bill Deegan, Keith Rich, Alan Small, Art Collins, Eddie Luther.
Bill Stoeckel, sales rep, was promoted to sales manager of CFRB's new retail sales department. Jill Loring joined as continuity editor. She had been with CFCF and CHML in the past. Ed Welch joined the sales department. He had worked in the past at CJRH and CHUM. Ken Marsden was named assistant to the president of CFRB. He joined the station in 1946 after service with the RCAF and had been promotion manager for his entire time with the station until this appointment.
Rogers Radio Broadcasting Ltd. (CFRB) was one of several applicants seeking to use channel 9 for a new Toronto television station. CFRB was not the lucky applicant and ran this ad after the BBG's decision: "Naturally we're disappointed...but we heartily congratulate the Baton Group, controlled by The Toronto Telegram, who have been recommended for the second TV licence and wish them and their station the greatest success. CFRB...1010 - 50,000 watts. Still Ontario's Family Radio Station."
Ad - Pick a spot on ... ‘RB - the interesting station for interested people!
W.T. "Bill" Valentine, after 12 years as national sales rep for CFRB, resigned to become sales manager at CKRC Winnipeg. Waldo Holden, director of sales, appointed Bill Brennan to replace Valentine. Brennan had been president and part owner of CKPT Peterborough. Patti Lewis was a singer at CFRB.
W.C. Thornton Cran, president of Rogers Radio Broadcasting Co. Ltd. announced the following CFRB appointments: William Baker would be operations director in charge of all aspects of production of special events and outside broadcasts. Baker had been with the station over 30 years. Jack Dawson was promoted from chief announcer to program director. He had been an announcer at the station for 21 years. Eddie Luther moved up from staff announcer to chief announcer. Ken Marsden, in addition to being assistant to the president, resumed his old position of promotion and publicity director. He would be assisted by a committee of three - Wally Crouter, Bill Stoeckel and Betty Kennedy.
Rogers Radio Broadcasting Co. Ltd., owner of CFRB, was among the many applicants for channel 9 - the second television licence for Toronto. The Board of Broadcast Governors awarded the licence to Baton Aldred Rogers Broadcasting Ltd.
Some major construction caused big-time traffic congestion in June. CFRB rented a helicopter at $95.00 an hour so that Eddie Luther could keep listeners up to date.
J. Elsworth Rogers, director of Standard Radio Ltd. died in June. In 1939, he succeeded his late brother, Edward S. Rogers, as president of Rogers Broadcasting Co.
In September, Standard Radio Ltd. concluded an agreement with J. Arthur Dupont for the purchase of CJAD Ltd., subject to BBG approval. Dupont would continue as a director and consultant to CJAD (Montreal).
CFRB received a Golden Microphone Award to mark 30 years of affiliation with CBS Radio. W. C. Thornton Cran accepted the award from Frank Stanton, president of CBS, and Arthur Hull Hayes, president of CBS Radio.
Standard Radio Limited purchased Montreal radio station CJAD from its founder J. Arthur Dupont.
Standard Radio Limited established Standard Broadcast Sales as a national sales representation company - initially for CFRB and CJAD but also to act for leading stations across Canada. Appointed President was CFRB's Sales Manager Waldo Holden.
CFRB staff appointments: Promotion Manager - Jerry McCabe; News Director - Bill Hutton.
Earl Warren departed CFRN Edmonton to host CFRB's 10:00 a.m. - 2:00 p.m. shift.
CFRB was the first station in Canada to introduce airborne traffic reports. The station's Eddie Luther became Canada's first airborne traffic reporter. (CHML Hamilton says it had airborne reports long before this time)
On January 3, Rogers Radio Broadcasting Limited became CFRB Ltd.
Canadian content of CFRB's musical programs received a stimulus with the introduction in October of the first ten Canadian Talent Library albums, produced during the summer by Lyman Potts. The project was jointly funded with sister station CJAD in Montreal; the concept of using live talent budgets to employ Canadians to make stereo recordings for broadcasting was warmly welcomed by the Board of Broadcast Governors when it approved CJAD's application for its FM licence.
Donald W. Insley joined CFRB. Fred K. Ursel joined Standard Broadcast Sales from CFRB where he had been in the retail sales department since 1961. He had been with CKSL London before that.
Lyman Potts, CJAD's program director and manager of CJFM-FM (both of Montreal), moved to head office in Toronto. He was appointed assistant to the president, W.C. Thornton Cran, to undertake a number of specific assignments. As a start, he was to co-ordinate and be responsible for any station activities with the Department of Transport, the Board of Broadcast Governors and the Canadian Association of Broadcasters. He was also made responsible for the joint operation of the two FM stations, and he would continue to develop the Canadian Talent Library. Potts had been with CJAD since 1958 and was responsible for the launching of CJFM.
Print Ads: Keep on top of the news. Listen here: CFRB 1010. / Remember you get Results with CFRB 1010. Personalities like Earl Warren (10:05 a.m. to 2:00 p.m.) keep the big CFRB family of mature listeners (& buyers) entertained and informed...More people listen to CFRB than any other radio station in Canada. "Ontario's Family Station".
A print ad listed the many people involved in CFRB's news department in 1963: Eddie Luther (traffic), Bill Hutton (news director), Gil Murray (Assistant News Director), Gordon Sinclair (11:50 a.m. and 5:50 p.m. news and comment), Bob Hesketh (1 and 5 p.m. news), Jack Dennett (8 a.m. and 6:30 p.m. news), Torben Wittrup (noon to six news), Bill Gilmour (6:30 a.m. and noon news), Tom McKee (bulletins), Jim Fleming (newsman), Hartley Hubbs (6 p.m. to midnight news), Bill Stephenson (Sports Director), Ron McAllister (reporter/Press Information Officer), Gerry Farkas (news writer), and John Collingwood Reade (10:50 p.m. news & comment).
Veteran Toronto radio commentator, newspaper reporter and public speaker John Collingwood Reade died in January at the age of 58. He had been a freelance news analyst on CFRB since 1936, and recently made many appearances on CHCH-TV and CFTO-TV. During the war, in addition to his 11:00 o'clock news on CFRB, he worked with The Globe and Mail newspaper.
Wayne Van Exan moved from CKFM-FM to replace Bill McVean on overnights, as McVean took on daytime assignments.
CFRB's on-air lineup: Wally Crouter (6-10), Earl Warren (10-2), Eddie Luther (2-3), Betty Kennedy (3-4), Bill Deegan (4-8), Continental Concert hosted by Walter Kanitz (8-9), Starlight Serenade hosted by George Wilson (9-11), Bill Deegan (11:30-midnight), Bill McVean (midnight-6 a.m.). There were also long breaks for major news packages. John Bradshaw had a show on the station.
CFRB-FM became CKFM-FM. Except from midnight to 6 am, its programs were now totally separate from CFRB.
CFRB opposed CHUM's application to increase power to 50,000 watts. CHUM's proposed transmitter site was in close proximity to CFRB's - both at Clarkson. CFRB operated on 1010 kHz and CHUM on 1050 kHz. CFRB felt there was a strong probability that CHUM's signal would interfere most seriously with CFRB's. At the Board of Broadcast Governors public hearing, CFRB noted that it did not oppose CHUM's application, provided undue interference was not caused to CFRB. There were concerns that CFRB would require 500 to 1,000 hours of off-air time to install the requisite traps in its antenna array to rematch the networks and readjust the patterns as a result of CHUM being nearby. CFRB also stated that any further difficulties which might be experienced by CHUM in establishing its pattern would involve CFRB being faced with additional off-air periods. CFRB's engineering consultants stated that this problem could be resolved to the satisfaction of both parties if CHUM should select a site and pattern which would significantly reduce the signal which they would radiate into the CFRB antenna system. CHUM's application for a power increase and change of antenna site was later approved.
The Canadian Talent Library was set up by Standard Radio as a non-profit trust, and an invitation was extended to all stations to participate in the project by financially supporting the funds devoted to increasing the amount of Canadian programming thus available for their programming. Standard Radio, while financing and increasing its funding, pledged that all new income would be dedicated to hiring musicians and singers for the making of more records. (Over the next 25 years, 265 albums were produced, containing 3,000 Cancon performances.)
Standard Radio Ltd., CFRB, CKFM-FM and Standard Broadcast Sales moved from 37 Bloor Street West into new state-of-the-art premises at 2 St. Clair Avenue West.
President of CFRB Ltd. was W.C. Thornton Cran; CFRB Manager - Wes McKnight; Assistant Manager and Program Director - Jack Dawson; Production Manager - Earl Dunne; Chief Announcer - Eddie Luther; Sales Manager - Bill Brennan; Traffic Manager - Mary Falconer; Studio Supervisor - Bill Baker; News Director - Bill Hutton; Sports Director - Bill Stephenson; Women's Director - Betty Kennedy; Chief Engineer - Clive Eastwood; Chief Operator - Don McEachern; Music Librarian - Art Collins. On-air names included: Wally Crouter, Jack Dennett, Earl Warren, Gordon Sinclair, Betty Kennedy, Bill Deegan, Bill Stephenson, Bob Hesketh and Ray Sonin.
Charles Doering and Bill McVean were new to CFRB this year. Doering had been with CKPC in Brantford until last year.
On June 1st, Donald H. Hartford was appointed General Manager of CFRB Ltd. Subsequently, when Mr. Cran was named Chairman of the Board, Donald Hartford was appointed President and General Manager of CFRB Ltd., which included CFRB and CKFM. Jack Dawson was appointed Vice-President and Station Manager of CFRB. Wes McKnight, a Vice-President, was appointed Director of Public Relations. He had been with "Ontario's Family Station" for almost 38 years.
The corporate name of the company became Standard Broadcasting Corporation Limited.
Standard obtained the Canadian rights to NBC Radio News, formerly held by the CBC. Forming a physical network to extend it across Canada was financially prohibitive, and it would not have fit with the spirit of the Broadcasting Act. However, the service did contain news stories, actualities and fast-breaking world events that would enhance the news coverage of Canadian stations.
CN-CP Telecommunications was just perfecting the use of broadband on its Canada-wide system, and Lyman Potts was able to establish what became known as "Standard Broadcast News". CN-CP offered a service whereby one could "dial up" a predetermined list of stations and, like a long-distance call, pay only for the time used. Potts then approached stations in major cities across Canada and offered them a service which would include items taped from NBC Radio News, the CFRB-CJAD News Bureau on Parliament Hill and Canadian news from participating Canadian stations. Permission had already been received from the BBG to take actualities directly from NBC as news was being made. Initially, the network comprised stations in 13 cities from Vancouver to St. John's. It rose over the years to include 27 stations.
This was also the year that Lyman Potts was appointed President of a new subsidiary - Standard Broadcast Productions (SBP), designed to produce and market Canadian programs of value to other stations. These programs were distributed on tapes or discs and mailed to stations requesting them. They also provided wider exposure and new income for Canadian writers, actors and other types of performers. Examples: Bob Hesketh's 5-per-week "The Way I See It" which became the longest-running syndicated radio program in Canada; actor Don Harron's series as "Charlie Farquarson"; Guy Gisslason's "Centennial Flashback".
SBP also was an "umbrella" for Standard Broadcast News, The Canadian Talent Library, and two music publishing firms - Deer Park Music and Conestoga Music.
Wayne van Exan was now at CFRB doing the overnight show (Music ‘Till Dawn).
On March 16, Charles Templeton and Pierre Berton began "Dialogue" on CFRB. It was a daily commentary exchange between the two personalities.
Pierre Berton & Charles Templeton were doing commentary on CFRB. David Craig joined CFRB's news department.
CFRB had plans to move TV channel 13 to Toronto so that it could operate a TV station in the city. The BBG was to hear the application in early 1968. The move would be predicated entirely upon firm contracts between CFRB, CKCO Kitchener, WOKR Rochester and the CBC.
CFRB had a new Jet Ranger chopper for traffic reports.
W.C. Thornton Cran was president of Standard Radio. VPs were Don Hartford and Jack Dawson. Dick Shatto was retail sales manager. Perry Anglin was named chief of the Ottawa Bureau.
SRL subsidiary CFRB Ltd. made formal application for a licence to use channel 13 at Toronto on April 5. CFRB entered into long-term and exclusive contracts with the Rochester and Kitchener stations using channel 13, as well as the CBC, regarding the exchange of channels needed for the move, and would defray the costs of such moves. Approval had already been given for erection of a 640' tower, but the plan still required BBG approval. The Board of Broadcast Governors delayed the hearing.
Art Cole was appointed News Director, succeeding Bill Hutton. After working dayshifts at sister station CKFM-FM, Fred Napoli moved over to CFRB to do an overnight program, Music Till Dawn. Fred's interpolation of his original short stories and essays between the easy listening music tracks was to build him a major cult following. Ray Sonin, host of the Saturday program "Calling All Britons" added the weeknight show "Down Memory Lane" to his workload.
CFRB bought exclusive radio rights to all home and away games of the CFL's Toronto Argonauts, according to vice president and general manager Jack Dawson. Bill Stephenson would do play-by-play. Dick Shatto would handle colour commentary. Bill Baker would handle technical operations.
W.O. (Bill) Crompton became television consultant for CFRB. He had been vice president and general manager of CFTO-TV.
Slogan: Ontario's Family Station.
Eddie Luther was CFRB's airborne traffic reporter. Luther and CFRB introduced air traffic reporting to Toronto listeners in 1961, using a fixed-wing aircraft for special reports in connection with its news service. CFRB was still using this plane in connection with its helicopter on occasion, for important news coverage.
It was announced on June 26 that Standard Radio Ltd. would be renamed Standard Broadcasting Corp. Ltd.
James Wesley (Wes) McKnight died at age 59. He had retired two years earlier but remained as a consultant to CFRB up to the time of his death. Wes joined CFRB in 1928, became sports director and finally, station manager, in 1959.
Donald H. Hartford, vice president and general manager of CFRB Ltd., announced the appointment of Arthur L. Cole as CFRB/CKFM news director as of August 1.
Rex Frost died at age 71. He had joined CFRB in 1931.
Gerald F. MacCabe was appointed vice president of advertising and public relations. He had been director of advertising since joining CFRB Ltd. in 1961.
After 20 years or so with CFRB, Eddie Luther left for CHFI.
At this time, CFRB had a staff of 130, eleven studios and 23,000 square feet of operating space.
According to BBM, CFRB reached the largest audience of any station in the country.
Program director Donald Insley said the top name announcers were given almost complete freedom on the air. The format stressed easy listening and adhered to the Middle of the Road approach. The station had never seriously considered talk shows, as two or three other Toronto stations were already doing them. Insley said CFRB had "Sinc", who was about as controversial as you could get. The station also had "Dialogue" with Pierre Berton and Charles Templeton.
Arthur L. Cole, news director of Standard Radio News System, appointed Ralph L. Errington as Ottawa bureau chief. He had been SRN Parliament Hill reporter for the past year. Before that, he was city hall correspondent for CFRB and joined the station in 1963.
CFRB's plan for a TV station on channel 13 was blocked by the CRTC's moving of channels in Southern Ontario. If CFRB had gotten channel 13, CKCO Kitchener would have moved to channel 6 once CBLT moved to channel 5.
Jack Dennett marked 25 years at CFRB.
Veteran broadcaster and newspaper woman Claire Wallace died September 22 at the age of 68.
Former CFRB announcer Maurice B. Bodington died December 30. He was 84.
The government again was taking applications for new television stations in Toronto and Montreal. Standard Broadcasting was among the Toronto applicants - for UHF channel 25. Niagara Television Ltd. (CHCH Hamilton), Toronto Star Ltd. (in partnership with Montreal Star Co. Ltd.) and Canadian Film Industries (Leslie Allen of CHIC) were among the other applicants. If approved, this would be the first UHF station in Canada, even though TV sets in this country were not equipped to receive UHF. It was noted that adapters were available for about fifty dollars, and after June 1, 1969, all sets in Canada would be required to have UHF. Speaking on behalf of Standard, W.C. Thornton Cran said its bid for channel 25 was made with the intent to operate a local Toronto station without network commitments. Programming would be community-oriented and would emphasize news and public affairs. Cran also said his company's previous bid for a Toronto VHF channel was never dealt with. The applicants would be heard at a public hearing beginning February 4, 1969.
On July 25, CFRB Ltd, received approval to change CFRB's pattern on 1010 kHz with 50 kW.
Art Cole was hosting "Let's Discuss It". He had joined the station in 1968. Neal Sandy (overnight news) came to the station in June. Henry Shannon joined CFRB.
Ad: Each week 60% of the people in Metro Toronto, 18 years and over, listen to CFRB.
CFRB had ten Ryerson and Carleton University students on staff for 13 weeks this summer to be "Good News" reporters.
At this time, CFRB was offering 29 newscasts a day, consisting of approximately 180 minutes of news, weather, sports and traffic, supplemented by three news cars and one jet helicopter. CFRB also subscribed to the Standard Broadcast News service. SBN received direct feeds from NBC New York by broadband.
Don Hartford was vice president and general manager. Jack Dawson was vice president and station manager.
Slogan: Canadians 25 and over are the biggest spenders. Each week over one million of these big spenders listen to us. CFRB 1010.
Standard Broadcasting Corporation Ltd. applied to the CRTC for permission to purchase CHML and CKDS-FM in Hamilton. At the time, the rules prohibited licensees from acquiring stations in close proximity to ones they already owned.
The application was denied as it was decreed that Hamilton and Toronto were "too close".
On-the-air staff included: Wally Crouter, Jack Dennett, Earl Warren, Gordon Sinclair, Bill McVean, Betty Kennedy, Bill Deegan, Ray Sonin, Wayne Van Exan, Art Cole, Bob Hesketh, Charles Doering, Prior Smith (Reporter), and Neal Sandy. CKEY lured away commentators Pierre Berton & Charles Templeton.
Don Hartford was appointed president and general manager of CFRB Ltd., effective April 1. He had been VP & GM. Hartford joined CFRB in 1965.
J.A. McDougald was chairman of the board of Standard Broadcasting. W.C. Thornton Cran was president of Standard and chairman of the board of CFRB Ltd.
William O. Crompton left CFRB to open a TV consulting service. As VP and GM of CFTO, he was credited with putting that station into a profitable position.
Slogan: ON CFRB, all the time is prime.
Charles Templeton and Pierre Berton moved to CKEY from CFRB on September 7. They had been doing their daily "Dialogue" on CFRB since March 16, 1966. CFRB General Manager Jack Dawson said 'RB was not able to match CKEY's better offer and time slot for the duo.
Donald W. Insley was appointed Vice President of Programming at CFRB. He joined the station in 1962 and had latterly been Program Director. Pat Hurley, Vice President and Sales Manager of CJAD, was named Vice President of CFRB and CKFM, succeeding Wally Shubat who resigned. George Daniels, Vice President of the Montreal office of Standard Broadcast Sales, succeeded Hurley as Vice President and Sales Director at CJAD.
At the CFRB Clarkson transmitter site, two 550-foot towers were erected in place of 2 of 4 original 250-foot towers, greatly strengthening CFRB's ability to serve Toronto.
David Craig joined CFRB news. Neal Sandy became Queen's Park bureau chief. Former CFRB personality Kate Aitken died December 11.
On the retirement of Art Cole, Don Johnston left CHML Hamilton to accept the position of News Director of CFRB.
Standard Broadcasting Corporation Limited established Standard Broadcasting Corporation (U.K.) Limited in London, England to serve as a consultant to prospective applicants for commercial radio licenses. Retaining his role as President of Standard Broadcast Productions, Lyman Potts was appointed Managing Director of SBC-UK.
Bob Greenfield joined CFRB's news department.
John Spragge became CFRB's program director. He had been with the Radio Sales Bureau and with 1050 CHUM before that.
Don Insley was appointed Station Manager of CFRB, succeeding Jack Dawson who had retired.
Sue Prestedge was a CFRB ‘Good News Reporter'. Terry Glecoff was also a Good News Reporter. His stint at CFRB that summer led him to his first TV job - a reporter at CFTO. He would go on to be a news anchor at CTV Halifax, CBC Calgary and Vancouver and then CBC Newsworld International. Another member of the Good News reporting team was Valerie Whittingham, who later married and became Valerie Pringle. Henry Shannon was doing traffic. Eric Thorson was in the news department.
W.C. Thornton Cran, President of Standard Broadcasting Corporation Limited, retired. He was succeeded by H.T. (Mac) McCurdy, the President of CJAD Ltd.
The CRTC approved the purchase by Standard Broadcasting Corporation of the controlling shares of Bushnell Communications, the licensee of CJOH-TV in Ottawa.
Connie Smith was a ‘Good News Reporter'. She came from CKOC Hamilton and then left for CKVR-TV in Barrie.
W.C. Thornton Cran died in June. Torben Wittrup was in CFRB's news department. Bill Anderson joined the air staff. Jack Dennett died August 27. His daily news and commentary broadcasts were the most listened to in Canada.
Donald H. Hartford was elected a member of the board of Standard Broadcasting.
Prior Smith was in the news department. Lyman MacInnis joined CFRB to do political and business commentary.
Don Hartford became president of CJAD Ltd. (Montreal).
The Jack Dennett Microphone Collection went on permanent display in the lobby of the CN Tower. Donated by CFRB to mark the station's 50th Anniversary, the collection included 14 microphones and a granite etching of the news broadcaster.
In February, CFRB celebrated 50 years of service with "What's 50 Years Between Friends".
George Wilson hosted "Starlight Serenade". Other on-air names: John Dolan, John Bradshaw (gardening), Richard Needham & Caroline Carver (feature), Bob Greenfield (news), Tony Andras (reporter). Ray Sonin's wife Eileen died. Andy Barrie joined CFRB from CJAD Montreal. Donald Insley was appointed Vice-President and General Manager of CFRB. Bill Hall was named Station Manager.
Brothers G. Montagu Black and Conrad Black gained control of CFRB Limited when they acquired Ravelston Corporation from the widows of J.A. McDougald and W.E. Phillips. Ravelston, in turn, held the controlling interest in Argus Corporation, formed in 1945 by E.P. Taylor, J.A. McDougald and W.E. Phillips. Argus was the controlling shareholder of Standard Broadcasting Corporation Limited, the owner of CFRB Limited.
Bruce Dingwall joined the CFRB engineering department. David Taffler was doing business reports and Dave Hodge was in the sports department. Don Johnston was news director. Don Insley was named vice president and general manager. Bev Cudbird, a regular contributor to "This Business of Farming", was now the station's meteorologist.
Programming: 5:30 - World At Dawn, 6:00 - Wally Crouter, 10:00 - Earl Warren, 11:45 - Gordon Sinclair, 12:00 - Earl Warren, 1:00 - Bill McVean, 2:00 - Betty Kennedy, 3:00 - John Dolan, 5:45 - Gordon Sinclair, 6:00 - John Dolan, 6:30 - News with Torben Wittrup, 7:00 - Andy Barrie, 8:00 - Ray Sonin / Dave Hodge (Mondays), 9:00 - George Wilson, 11:00 - World Tonight, 11:20 - Bill McVean, 12:00 - Wayne van Exan. Weekends - John Bradshaw (Gardening), Bill Deegan, Dr. David Ouchterlony, Bob MacLean, Bill Anderson, Rod Doer, Paul Kellogg, Art Cole (Let's Discuss It). News - David Craig, Peter Dickens, Liz MacDonald, Eric Thorson, Chris Wilson, Charles Doering, Bill Rogers, Bob Hesketh, Bob Greenfield, Bill Walker, Torben Wittrup, Neal Sandy. Reporters - Sidney Margales, Jim Munson, Prior Smith, Wilf List (labour), Neal Sandy. Others - John Spragge (staff announcer/program director), Henry Shannon (traffic), David Taffler (business), Allen Spragget (horoscopes), Ron Singer (entertainment), Bev Cudbird (weather), Anita Burn (traffic), Valerie Pringle, Lyman MacInnis (commentary), John Stall. Sports - Fred Locking, Dave Hodge, Bill Stephenson, Doug Beeforth.
Notes - Valerie Pringle took over 7-8 p.m. Bill Deegan moved from PM drive to weekends in February, replaced by John Dolan. Bob MacLean joined.
Donald Insley was upped to Vice-President for Radio for Standard Broadcasting. Bill Hall succeeded him as Vice-President and General Manager of CFRB. Program Director John Spragge was made a Vice-President.
Joining the news department were Peter Dickens, Chris Wilson and Brian Wrobel. Bev Cudbird became CFRB's weatherman.
Dominion Stores made Gordon Sinclair an honorary director in recognition of his significant contribution to the favourable development of the company's relations with the public. Dominion had been a long-time sponsor of some of his broadcasts.
For the record, Gordon Sinclair was heard twice daily on CFRB...11:45 a.m. with "Let's Be Personal" and 5:45 p.m. with "Show Business". Each broadcast was followed by ten minutes of news and comment. It was on "Let's Be Personal" that Sinclair broadcast the extremely popular "The Americans" commentary in 1973.
Bob Hesketh presented commentary in "The Way I See It". The program proved highly popular, was syndicated to other stations across Canada, and ran for many years.
J. Lyman Potts, C.M., Vice-President of Standard Broadcasting Corporation Limited, retired. With 30 years of radio experience behind him when he was appointed Assistant to the President of Standard in 1963, he played a formidable role in the growth of the company for two decades.
CFRB replaced the original transmitter building and equipment at its Clarkson transmitter site, but continued to use the existing antenna system. Two new Continental 317C-2 50,000-watt transmitters were installed, providing the ultimate in redundancy and reliability. The old RCA 50 kW (and 10 kW back-up) transmitter installed in 1948, when the station was forced to move from 860 to 1010 kHz, was retired. A new programmable logic controller was added. It would be used for various control and surveillance functions, and was the first application of the system in radio broadcasting. This allowed for unattended operation and everything could be controlled from the studios. A diesel generator could handle the total building load for 72 hours in the event of a power failure. The new building was one-quarter the size of the old one. The $1 million project was completed in the summer. Engineers Kirk Nesbitt, Bruce Dingwall and Clive Eastwood worked on the project.
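For readers curious how such unattended operation generally works, the controller watches site status and switches to backups without anyone at the transmitter. The sketch below is purely illustrative, with invented status names and logic; it is not documentation of CFRB's actual installation.

```python
# Illustrative sketch only: the kind of supervisory logic a programmable logic controller can
# provide at a transmitter site (fail over to the standby transmitter, start the generator on a
# mains failure). Status names and behaviour are invented for this example.
from dataclasses import dataclass

@dataclass
class SiteStatus:
    main_tx_ok: bool
    standby_tx_ok: bool
    mains_power_ok: bool
    on_air_tx: str = "main"
    generator_running: bool = False

def supervise(status: SiteStatus) -> SiteStatus:
    """One pass of the control loop; a real controller repeats this continuously."""
    if not status.mains_power_ok and not status.generator_running:
        status.generator_running = True           # carry the site on diesel power
    if status.on_air_tx == "main" and not status.main_tx_ok and status.standby_tx_ok:
        status.on_air_tx = "standby"              # fail over to the second transmitter
    return status

# Example: a main-transmitter fault during a mains outage.
print(supervise(SiteStatus(main_tx_ok=False, standby_tx_ok=True, mains_power_ok=False)))
```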
George Daniels became vice president and general sales manager for CFRB. Art Cole retired. Frank Lehman, chief technician at CFRB, retired after 33 years of service. He participated in the work on the new transmitter building earlier in the year. On-air names: Neal Sandy (reporter), Earl Warren (middays), Valerie Pringle (returned to the station), Henry Shannon (traffic), Gene Kirby (joined from CKEY), Wayne Van Exan, Bill Deegan, Paul Kellogg, and John Stall (joined from CKTB).
CFRB had Argos football and Dave Hodge was part of the station's broadcast team. There was a problem, though: the Argos wanted Hodge gone, so it was goodbye to Dave Hodge on CFRB. Art Cole (Let's Discuss It) retired. John Dolan resigned due to health problems.
Announcer John Dolan left to do news at CHFI-FM. Hal Vincent was a reporter. John Bradshaw, CFRB's gardening expert for 32 years, left in June to pursue other interests. He was replaced by Art Drysdale. Announcer Paul Kellogg left for CKEY.
A new transmitter for short-wave station CFRX was put into service at Clarkson, using two 50-foot vertical towers to form a directional pattern for better service to Northern Ontario.
Kenneth W. Whitelock became CFRB's general sales manager. W.E. (Bill) Hall of CFRB was named a vice president of parent company Standard Broadcasting.
Don Hartford retired from Standard Radio. Gordon Sinclair dropped his Monday broadcasts. He was still heard Tuesday through Friday.
CFRB was testing two of the four AM Stereo systems: Motorola and Magnavox.
John Spragge was still a vice president at CFRB. Bill Auchterlonie was now on the air at CFRB as a swing announcer. He had worked for the station in the early 1970's as an operator. Lyman MacInnis was a financial expert on CFRB.
On May 17, Gordon Sinclair passed away at age 83. Gordon was a news commentator at CFRB for over 40 years. For much of that time, he presented the news and his comments at 11:50 a.m. and 5:50 p.m., as well as "Let's Be Personal" at 11:45 a.m. and "Show Business" at 5:45 p.m. "Sinc" was also a panellist on the weekly CBC Television quiz program, "Front Page Challenge". Bob Hesketh took over the news and commentary segments that Sinc had done at 11:50 and 5:50.
On July 30, stereo broadcasting over CFRB's AM transmitter was increased to 24 hours a day. This followed the completion of the station's new master control room (replacing 18-year-old equipment). The production control room was used as the interim MCR while construction work was underway. This allowed programming to continue without interruption. Over the past year, stereo broadcasting had taken place on CFRB between 9:00 p.m. and 5:30 a.m. while all equipment was upgraded to stereo. CFRB used the Motorola C-Quam AM stereo system. McCurdy Radio Industries supplied the new master control equipment.
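For readers unfamiliar with the system, the basic C-QUAM idea is that the ordinary AM envelope still carries the mono (L+R) audio, so mono receivers are unaffected, while the stereo difference (L-R) rides on the carrier's phase. The sketch below illustrates only that textbook relationship; it is not a model of CFRB's Motorola or McCurdy equipment.

```python
# Illustrative sketch of the C-QUAM relationship: envelope = 1 + L + R (mono-compatible),
# phase = arctan((L - R) / (1 + L + R)). Carrier frequency, sample rate and tone levels are
# arbitrary choices for the example, not CFRB parameters.
import numpy as np

def cquam_modulate(left: np.ndarray, right: np.ndarray, fc: float, fs: float) -> np.ndarray:
    """Return a C-QUAM style RF waveform for equal-length L/R audio arrays (levels well below 0.5)."""
    t = np.arange(len(left)) / fs
    envelope = 1.0 + left + right                 # what an ordinary envelope detector recovers (mono)
    phase = np.arctan2(left - right, envelope)    # stereo difference carried in the carrier phase
    return envelope * np.cos(2 * np.pi * fc * t + phase)

# Toy usage with two test tones.
fs = 48_000.0
t = np.arange(0, 0.01, 1 / fs)
rf = cquam_modulate(0.3 * np.sin(2 * np.pi * 400 * t),
                    0.3 * np.sin(2 * np.pi * 1000 * t),
                    fc=10_000.0, fs=fs)
```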
At this time Bruce Dingwall was chief technologist in charge of studios and transmitters for CFRB.
Ray Sonin, host of "Calling All Britons" for more than 25 years, received an MBE (Member of the Order of the British Empire) from Queen Elizabeth II at Buckingham Palace. The Saturday afternoon program provided a large audience with music and news from Britain.
Ray and June Sonin at Buckingham Palace
CFRB weatherman Bev Cudbird passed away.
Peter Shurman was named president of the radio division (Standard Radio) of Standard Broadcasting.
Neal Sandy left in November.
Valerie Pringle left CFRB after 11 years, to co-host CBC-TV's Midday program. She was replaced by John Stall. Promotions manager Betty Abrams left, and was succeeded by former copy director Peter Henderson. Newsman Neal Sandy left to do government work. Jacqueline Holt became copy editor, succeeding Peter Henderson who was now advertising and promotion director. Andy Barrie rejoined CFRB.
Long considered a middle-of-the-road music station, CFRB embarked on a gradual transition to adult-contemporary music-and-talk. By the early 1990's CFRB was all "news-talk".
Ralph Lucas became CFRB's program director. He had been vice president and general manager at sister station CJAD in Montreal.
On November 14, the CRTC approved the sale of Standard Broadcasting Corp. Ltd. to Slaight Broadcasting Inc. Slaight purchased Standard from Hollinger Argus Ltd. (G. Montegu Black III and Conrad M. Black) and other shareholders. Slaight Broadcasting was a privately owned company. Slaight in turn sold his Toronto stations (CFGM and CILQ) to Westcom Radio Group Ltd. Standard became a privately held company.
| Allan Slaight
680 CFTR broke the long-standing record of CFRB as having the largest radio audience in Canada. CFRB hoped that programming changes made this year would help the station recapture the number one spot.
CFRB 1010 altered its night-time radiation pattern to improve service to the northwest.
John Burgess was an entertainment reporter at CFRB. Kirk Nesbitt left CFRB to become director of engineering for Rogers Radio. Murray Smith joined CFRB as entertainment editor. He had been at CKO-FM. Occasional host of "Starlight Serenade", David Ouchterlony, moved on to CFMX-FM. John Spragge left CFRB. John Donabie was now at CFRB. Both Bill Stephenson and Bob Hesketh marked 25 years with CFRB.
On September 16, Betty Kennedy's show time changed to 12:00 to 1:00 p.m., Bill McVean moved to 1:00 to 5:30 p.m. and The World Today moved to 5:30 to 7:00 p.m. Other Notes: Andy Barrie returned, Bill Anderson left, John Dolan returned. Paul Kellogg left for CJCL. Neal Sandy left. John Donabie joined.
With the change of ownership of Standard Broadcasting, Larry Nichols was no longer president of the company.
Bill Deegan (veteran CFRB announcer) and Mac McCurdy retired. Mac had been president and then deputy chairman of the company. He remained as a director. Peter Dickens was replaced after many years on major breakfast newscasts by David Craig. Dickens would now co-ordinate morning newsroom activities. He would also continue to anchor the 12:30 p.m. newscast. Charles Doering moved from the 7:00 and 9:00 a.m. newscasts to do five minutes of news and comment at 6:30, 7:30 and 8:30 a.m. Bill McVean marked 20 years at CFRB.
M. Earl Dunn died April 21 at age 70. He was retired operations manager of CFRB. Dunn joined the station in 1941, was with the RCAF from 1943-46 then returned to ‘RB. He was involved in many station broadcasts in his 40-year career.
Former CFRB reporter Mark Sherwin was now a reporter for NBC News. After nine years with CFRB news, Ken Cox left for CKO. Bill Hall retired as vice president and general manager at CFRB. Peter Henderson left CFRB where he had been promotion manager. He was replaced by Perry Goldberg. Paul Rogers left CFRB news for CFTO-TV. Don Johnston was news director. Hal Vincent was Queen's Park reporter. Toronto Star police reporter Jock Thomas stopped doing his reports on CFRB after 25 years. Wally Crouter marked 40 years as CFRB's morning man.
On August 15, Betty Kennedy did her last show for CFRB. John Stall and Barbara Land took over the noon-1 p.m. time slot (Stall continued to host The World Today as well). Other Notes: Jocko Thomas retired. Mark Sherwin left for NBC News. Ken Cox left for CKO-FM. Paul Rogers left for CFTO-TV. Ed Needham joined for evenings from CJSB Ottawa. Ray Sonin's "Down Memory Lane" was cancelled. Ray continued to broadcast "Calling All Britons" on Saturdays. Fred Napoli joined.
Doris Dicks, who had a show on CFRB in the 1930s and 1940s (as Doris Scott), died at age 68. She was also a singer with the Percy Faith Orchestra.
The Toronto Argonauts signed a new three year contract with CFRB, renewing exclusive radio rights to all home games.
CFRB marked 60 years on the air in February.
Former CFRB announcer Dean Aubrey Hughes passed away at age 79. He joined CFRB in 1935, moved to Associated Broadcasting Co. in 1937, and was then with the CBC from 1939 to 1965.
| Gary Slaight
Gary Slaight became president of Standard Broadcasting. He had been manager of CILQ-FM.
David Ouchterlony died at the age of 73. He had been an early broadcaster with CBC Television and hosted radio programs at CJBC, CKFH, CKEY, CFRB and CFMX-FM over the years. Ralph Lucas resigned as vice president of programming at CFRB. Perry Goldberg left CFRB. He had been marketing director. After 15 years as news director, Don Johnston left CFRB. He was succeeded by John McFadyen, who had been news director at sister station CKFM. Pat Marsden joined CFRB as afternoon sportscaster.
Pat Marsden joined the sports department on August 31. John McFadyen joined from CKFM-FM. John Donabie left. Former CFRB announcer Cy Strange died February 12 at age 72.
George Ferguson became CFRB's vice president and general manager. He had been with CKWW/CJOM in Windsor. Don Costello was operations supervisor.
Sales manager Patrick Hurley left for CJCL 1430 to become general sales manager. Bob Hesketh retired from full-time duties at CFRB. Wally Crouter signed a new five-year contract with CFRB. To mark the occasion, president Allan Slaight presented Crouter with the keys to a new Porsche 928.
Tayler Parnaby was now CFRB's news director. He was also hosting a Sunday night news program on Global TV. Ron Hewatt succeeded Wolf von Raesfeld as CFRB's general sales manager. von Raesfeld moved on to CKOC in Hamilton. Bob Bratina left CFRB. He had been PM drive announcer and voice of the Argos when CFRB carried the games. He would end up back in Hamilton - this time at CKOC. After 17 years, newsman David Craig left CFRB for CJEZ-FM. News director Tayler Parnaby took over the 7 and 8 a.m. newscasts when Craig left. Wayne McLean was hired to replace Bob Bratina. McLean had been working in Ottawa. Murray Smith was now hosting CFRB's afternoon drive show. Murray Johns became retail sales manager at CFRB. He had been an account director at the station for the past six years.
Telemedia picked up the Argos broadcasts from CFRB.
Bob MacLean joined from CJSB Ottawa. Bob Bratina left for CKOC Hamilton. Suneel Joshi left for TSN, replaced by Dave Quinn on the sports talk show. David Craig left for CJEZ-FM. Tayler Parnaby joined. Bob Hesketh retired after 28 years with CFRB. Charles Doering replaced Hesketh for the 11:50 News & Comment. Dick Beddoes joined from CHCH-TV Hamilton.
Bob Greenfield left CFRB news.
Marlane Oliver left in March, replaced by Jacquie Perrin. John Keogh joined CFRB as program director in August. He had been PD at CHML Hamilton. On September 11, talk programming was added 9-10 p.m. with Jeremy Brown, Barbara Klish, Bill McVean, John Turro (different shows each night). Starlight Concert followed. In December, George Wilson left for CJEZ-FM, replaced by Fred Napoli. Jeremy Brown also left for CJEZ-FM. Other notes: Murray Smith moved from PM Drive to Weekends, replaced in afternoons by Joe Cannon from CJAD Montreal. Bob Greenfield & John McFadyen left.
Beth Kidd was appointed promotions co-ordinator.
CFRB began the transition to a news talk format, airing newscasts every half hour, 24 hours a day, replacing the last hour of Wally Crouter's morning show with a news magazine, and extending the 11:00 p.m. news to a full hour.
In January, Dick Beddoes joined CFRB for a Sunday night sports talk show.
On August 7, Wally Crouter began ending his show an hour earlier. He was now on-air 5:30 to 9:00 a.m. John Stall & Marlane Oliver took over the 9-10 a.m. hour. Stall was replaced on The World Today by David Bent & Jacquie Perrin. Perrin was replaced on The World At Noon by Jane Hawtin who moved over from CKFM.
In October, Joe Cannon left for CJCL. The PM drive show was taken over by swing announcers Terry McElligott / Pat Marsden.
Jason Roberts joined from CKOC Hamilton on November 26. Jeremy Brown returned to CFRB from CJEZ in November.
Other program notes of the year: The World Tonight (11 pm) expanded to an hour. Sports-talk was added from 10 to 11 p.m. with Dave Quinn / Dick Beddoes. Jacquie Perrin left for CBL and was replaced on The World At Noon by Marlane Oliver. Pat Marsden filled in for Wally Crouter from time to time. Terry McElligott joined from CHFI-FM. David Lennick left for CFMX-FM, but returned a short time later. Lyman MacInnis left. He had done political and business commentaries since 1976. He also filled in on occasion for Betty Kennedy and other talk show hosts.
John Bradshaw passed away at age 75. He had been CFRB's garden show host and farm director from 1950 to 1982.
John Crawford joined CFRB from CKFM in January.
On February 14, Ed Needham left. He was replaced February 18 by Larry Solway and others for a new 7-9 p.m. shift. The World Today moved to the 5-7 p.m. slot.
In March, Bill Anderson (who had just returned to 'RB a short time ago for PM drive) left for CKYC. On March 25, Donna Tranquada joined from CBL.
In May, Ed Needham returned...in a way. He was now living in Florida and sent a daily commentary to the station.
On August 12, John Oakley joined for the new 11:30 p.m. to 3:00 a.m. talk show. The program was followed by repeats of other 'RB programs between 3 and 5 a.m.
Ray Sonin passed away August 20, followed by Dick Beddoes on the 24th. Sonin was 86. He hosted "Calling All Britons" on Saturdays; his weeknight program "Down Memory Lane" ran for 18 years until 1986.
In September, Fred Locking left CFRB sports for TSN. He was replaced by CKYC's Glen Crouter (Wally's son). Ted Woloshyn joined from CKFM for commentaries and fill-in talk. Charles Adler joined for weekend talk and news on September 21.
On December 2, Charles Adler moved from weekends and news to the 7-10 p.m. shift. Larry Solway moved to fill-in talk. Pat Marsden left. Ted Woloshyn moved from Friday evening talk to weekend talk.
Other 1991 program notes: David Lennick left for CJEZ-FM and was replaced by Fred Napoli.
Roy Hennessey was general manager.
Wally Crouter celebrated 45 years as CFRB's morning man with thousands of listeners joining him for breakfast on October 30 at the Sheraton Centre. In September, when Fred Napoli announced on-air that CFRB was cancelling his all-night show Music Till Dawn, listener reaction was so great that the show was extended for a further year, finally ending in September 1993.
On-air: World At Dawn with John Elston (5-5:30), Wally Crouter (5:30-9), The World This Morning with John Stall and Marlane Oliver (9-10), Andy Barrie (10-11:50), News and comment with Charles Doering (11:50-noon), The World At Noon with Jane Hawtin (12-1), Wayne McLean (1-3), Jason Roberts (3-5), The World Today with David Bent & Donna Tranquada (5-7), Charles Adler (7-10), Dave Quinn (10-11), The World Tonight with Torben Wittrup (11-11:30), John Oakley (11:30-2), Larry King. Notes: On Fridays, Needham was off; Oakley did the evening show and Larry King started early at 11. By fall, Weekends were all talk too, with Mark Cullen, Gary Lansberg, Larry Grossman, and Alan Silverstein; some programs came from CJAD and elsewhere. Catharine Pope left. Jason Roberts left PM drive for CKLH Hamilton. He was replaced by Joe Cannon, who returned from CJCL. Weekends: Jim Bohanon, John Eakes, Joe Cannon, Dr. Gary Lansberg, Dave Cornwell, Mark Cullen, Mark Breslin, Lowell Green, Pat Blandford, Brian Linehan, Lynn Pickering, Allan Gould, Mike Stafford, Larry Grossman, Phyllis Walker, Ed Needham, Fred Napoli, Terry McElligott, Ron Hewatt, John Dolan, Ted Woloshyn, Bill Deegan, Larry Solway, Bill McVean. (Cullen and Cornwell handled the weekend gardening show and McVean had a travel show)
Some of the other voices heard on 'RB at this time - News: Mike Stafford, Mark Coutts, Brent Copin, Chris Wilson, Catherine Pope, Hal Lowther, Jane Hawtin, Anne Winstanley, Torben Wittrup, Avery Haines, Bob Comsick, Bill McDonald, David Bent, John Elston, Dave Agar, Tayler Parnaby, Donna Tranquada, Charles Doering, Bruce Rogers, Arnis Peterson, Danna O'Brien, John Crawford. Sports: Bill Stephenson, Dave Quinn, Mike Hannifan, Tom McKee, Glen Crouter. Reporters: John Crawford, Hal Vincent. Traffic: Monica Desantis, Henry Shannon, Dianne Pepper. Business: Pat Blandford, Arnis Peterson, Brian Costello. Commentary: Ed Needham, Bob Hesketh. Entertainment: Brian Linehan, Jeremy Brown, Lisa Brandt. Food: Jeremy Brown. Events: Linda Kitigowa.
On March 7, Ed Needham returned for Weekends.
Dave Quinn's evening sports-talk show was dropped April 24. On April 27, The World Tonight moved from 11 to 10 p.m., Oakley from 11:20 to 10:20, and The Best Of 'RB followed. On April 30, Charles Adler left and was replaced by Larry Solway (ex-weekends).
In April or May, Larry Grossman (former Ontario PC leader) was added to Weekends.
On May 9, Phyllis Walker joined for Weekends.
On August 21, Wayne McLean left for CKWW in Windsor. On August 24, the following changes were made: John Stall was now on the air from 9-9:30 a.m., with Andy Barrie from 9:30 to 11:50. John Stall & Marlane Oliver hosted The World At Noon from 12-2 p.m. with Jane Hawtin following from 2-4. Dave Agar joined David Bent and Donna Tranquada for The World Today - now heard from 4 to 7 p.m. On August 25, Larry King was added from 2:00 to 5:00 a.m. Ed Needham returned for 7-10 p.m. from Weekends on August 31. Fred Napoli's weekend program was cancelled August 23.
On September 12, Fred Napoli returned (weekends) due to popular demand. As of September 20, Fred Davis & Judy Webb hosted a new weekend nostalgia music show - Real Radio.
On December 9, Tayler Parnaby took over the 9:00 to 9:30 a.m. shift, while John Stall concentrated on The World At Noon. Torben Wittrup retired December 31. Bill Deegan also retired this year. Lisa Brandt joined at some point in the year, from CKFM.
Bill Baker died on December 21 at the age of 85. He started his broadcasting career in 1924. Baker was with sportscaster Wes McKnight for many years as engineer-operator, broadcasting sporting events from all over. Bill Baker retired around 1973, but remained an authority on the history of radio.
On February 1, Larry King moved from night-time to daytime then the program was dropped.
Eddie Luther died February 16. He was Canada's first air-borne traffic reporter, working at CFRB in 1961. He had started at the station as an announcer in the 50's.
On June 7, Harold Hosein (CITY-TV weather) began doing weather reports for CFRB.
On August 27, Ed Needham left and was replaced temporarily by John Oakley, then by Al Rae.
On September 7, Paul & Carol Mott joined for 2-4 p.m. from CKTB St. Catharines, and Jane Hawtin moved to 1-2 p.m. In September, Mike Inglis joined for swing work from CJCL. John Crawford switched from reporter to newscasting duties. Fred Napoli left in September.
In October, Marlane Oliver left for CFTR (now all-news), John Stall was now heard from 9:05 to 9:30 a.m. Jeremy Brown left. Lisa Brandt took over his regular entertainment reports and Brian Linehan took over his weekend features.
Other 1993 program notes: Ed Needham was off on Friday nights, and John Oakley filled in, with Larry King starting at 11 p.m. TSN Sports was added for weekends & evenings. Diane Pepper left. Early in the year, John Majhor was doing some on-air work for both CFRB and CKFM.
Former CFRB announcer Gene Kirby (Gene J. Smith) died September 10, in his 64th year.
In February, Lowell Green left for CFRA.
On May 13, newsman Peter Dickens retired.
In September, Steven Reuben left for WHAM Rochester. John Crawford left for CKLH Hamilton. Mike Cleaver joined from CHUM. Other fall program notes: Glen Crouter moved to weekends from sports, and Mike Inglis moved to sports from weekends. John Stall was now on from 9-10 a.m.
In December, John Hesselink (ex-CHOG/CILQ) was doing Weekends.
Other 1994 program notes: Pat Marsden covered Blue Jays spring training. Chris Wilson left for CFTR. Guy Valentine (traffic) joined from CHUM. Susan Rogers joined from CFTR. Lisa Brandt left for CFCA Kitchener.
On February 6, John Hesselink moved from weekends to overnights (12-4 a.m.) weekdays and was replaced on weekends by Scott Robbins.
In March, Steven Reuben returned from WHAM Rochester for a Sunday night law show.
Kim Mason, a 35-year-old registered nurse, was chosen from almost 1,200 listeners who auditioned for their own talk show in the "It's My Show" contest. Kimberly did her first CFRB show on April 16 and would continue to be heard from midnight to 4:00 a.m. on weekends.
On May 31, Jane Hawtin left (Noon-2 p.m.) for CHCH-TV and CHOG-AM, replaced 12-1 by Donna Tranquada, and the Motts show was extended to 1-4 p.m.
Former CFRB newscaster Bill Walker passed away on June 25. He was 72. Long-time CFRB traffic reporter Henry Shannon retired June 30. He had been with the station since 1969 and was officially replaced by Guy Valentine. Henry would continue to do fill-in traffic reports.
While on summer vacation (July), Andy Barrie announced that he would be joining CBL 740's Metro Morning as of September 5.
CFRB became the radio voice of the NBA's Toronto Raptors. The team was owned in part by Allan Slaight, CFRB's owner.
CFRB's licence was renewed by the CRTC for only nine months. The Commission wanted to assess its guidelines to ensure high standards and appropriate responses to complaints.
In August, Brian Linehan left. John Oakley replaced Andy Barrie from 10 a.m. to noon. Oakley was replaced in the 7-10 p.m. shift by weekender Michael Coren. Paul Kellogg returned for weekends on August 21.
On September 11, Dr. Joy Browne was added from 3:00 to 5:00 a.m.
Other 1995 program notes: Anne Winstanley left for CJEZ-FM. Don McDonald joined from CJEZ. Bill McVean's travel show moved to CFMX-FM. Mike Cleaver left. John Donabie returned. CFRB hired former Toronto police chief Bill McCormack as a regular contributor and commentator. Steve Kowch was Operations Manager. Toronto Argo football broadcasts moved from CFRB to AM640.
November 1 was Wally Crouter Day in Toronto. This was the day the 73-year-old Crouter retired as CFRB's morning host. He had been with the station for 49 years and some 10,000 shows, longer than any other morning announcer in North America. His last show on November 1 marked the beginning of his 50th year, and followed a month-long countdown of special events and on-air highlights from his career.
On February 9, Shelley Klinck joined from CHOG. On February 24, Cecil Foster joined for weekends.
Jeremy Brown retired (but later showed up at CFMX-FM) on March 2.
In August, John Hesselink left, replaced with more of Dr. Joy Browne.
Former (longtime) CFRB personality Bill Deegan died October 8 at the age of 70.
November 1 saw the retirement of Canada's longest serving radio morning man, Wally Crouter. Ted Woloshyn took over the morning show on November 4. He had been doing weekends. Mike Stafford replaced him in weekends. Charles Adler returned for the 9 a.m. to noon shift, The Motts were now heard from Noon to 2 p.m. and John Oakley from 2 to 4 p.m. Money Matters with Brian Costello and Carol Mott aired Fridays from 1 to 2 p.m.
CFRB/CKFM-FM general sales manager Christopher Grossman agreed to purchase CFBG-FM Bracebridge from Telemedia.
In January, The World Today expanded from 4-7 to 4-8 p.m., and Michael Coren's show moved from 7-10 to 8-10 p.m. Jeff Pevere joined CFRB.
In April, Elly Sedinski left for CFTR.
On June 16, John Dickie joined for midnight to 3 on weekends.
In July, Gene Valaitis joined for fill-in talk (he left CHOG in May).
On August 29, Monica Di Santis left for WBBM-AM Chicago.
On September 9, Jane Hawtin Live (rebroadcast of the WTN TV show) was added at 9 p.m. Martina Fitzgerald joined from TSN Sportsradio for Traffic and Entertainment, replacing Di Santis. Also in September, Jim Richards joined from CILQ-FM to do 12-3 a.m. Al Navis also joined for the 12-3 a.m. shift. John Stall left and was replaced by Gene Valaitis. Bill Carroll joined for news from CHOG.
Charles Doering retired on August 28, following 33 years at the station. After Gordon Sinclair died in 1984, Bob Hesketh took over the 11:50 a.m. news and commentary slot. When Hesketh was on vacation, Doering would fill in for him. After Hesketh retired, Doering took over the 11:50 a.m. time slot.
CFRB listeners were once again asked to audition their talents for the chance to win their own talk show. The station had done a similar hunt for new talent about five years earlier.
Dave Agar stepped down as news director in favour of retaining his duties as morning newscaster. Bill Carroll was now news director.
Dick Smyth, who retired from CFTR last year, began offering commentary on CFRB. He was also a regular on the station's 9 a.m. Free-for-All Roundtable.
On March 14, Allan Mayer joined CFRB.
On May 31, Bruce Rogers retired.
In June, Bill Carroll became Entertainment Editor.
On August 28, Charles Doering retired. He was replaced by Tayler Parnaby for the 11:50 a.m. news and commentary. Charles Adler left for CJOB Winnipeg. Program schedule as of August 31: Ted Woloshyn (5-9), Bill Carroll (9-10), The Motts (10-11:50), Tayler Parnaby (11:50-12), The Motts (12-1), John Oakley (1-4), Mike Stafford with David Bent & Donna Tranquada (4-7), Mike Stafford with Michael Coren & Jane Hawtin (7-8), Michael Coren (8-9), Jane Hawtin (9-10), World Tonight (10-11), CFRB Replay (11-12), Jim Richards (12-3), CFRB Replay (3-5).
Eric Hollo joined in September. Evening program changes as of September 7: Stafford-Bent-Tranquada 4 to 7, Stafford with Coren & Susan G. Cole 7 to 7:30, Coren 7:30 to 9, World Tonight 9 to 10, A Day In The Life (Best of CFRB) 10 to 11, Richards ("Coping" 11-12, "Nightside" 12-3). On September 8, Dick Smyth joined for commentary & the morning free-for-all (9-10 a.m.) a couple of days a week. Smyth had retired from CFTR earlier in the year. On September 20, Wally Crouter returned for the Sunday (8-9 p.m.) show "Memory Theatre".
In December, Gene Valaitis left. Bill Carroll took over his Saturday show while Mike Stafford took over the Sunday show.
In January, Karen Horsman joined (most recently at CHOG).
In August, Michael Coren left for CHOG, and was replaced by Mike Stafford from 7-10 p.m.
Bill Stephenson retired in September and was replaced in AM drive sports by Bruce Barker (Stephenson still did fill-in work). Mark Bunting joined for sports. Dick Smyth left.
In November, Harold Hosein (weather) left for CFTR. Avery Haines left for CTV. Jane Brown joined for news from CFTR.
Other 1999 program notes: Karen Horsman started out in news and eventually wound up with the 1-5 p.m. talk slot on weekends. Donna Tranquada left for CBL. Jaymz Bee joined for Saturday night talk. Dave Trafford joined from CHOG. Marianne Somers joined for news from CFTR. Bob Durant joined from CJCL. Marty Galin and Avrum Rosensweig joined for Saturday nights from 6-7 (they were last heard on CHOG). Allan Mayer left and was replaced by Generation Next (Jackie Mahon, Ryan Doyle, Mike van Dixon, Rachael Shaw). Strange Days Indeed, a show about UFOs, was added on Sunday nights. Hal Vincent, longtime Queen's Park reporter, left. Dan Reynish, Al Navis, and Eric Hollo left.
CFRB's new operations manager was CJAD program director Steve Kowch. Bob Mackowycz left that post to pursue other interests.
Dave Trafford joined CFRB news. He had been news director and assistant program director at Toronto's Talk640.
June Sonin died at the age of 68. She was the widow of Ray Sonin, who originated Calling All Britons on CFRB. Some time after his death, June took the program to CHWO Oakville and hosted it there. June was often heard on-air with Ray when the program aired on CFRB.
During the Ontario election campaign, CFRB put headsets on a cardboard picture of Liberal leader Dalton McGuinty. The station said it tried for a month to get McGuinty to take part in an open-line show as the two other leaders had done. When he refused, the station had callers pose questions to the "dummy" which were, of course, met with silence.
After 19 years of service, one of CFRB's two 50 kW Continental transmitters was replaced by a solid-state Nautel XL-60 transmitter.
On January 17, Michael Kane and Deirdre McMurdy's business show was added to the CFRB program line-up. Program line-up as of January 24: Ted Woloshyn (5:30-9), Bill Carroll with Laurie Goldstein & Jane Hawtin (9-11), The Motts (Brian Costello 1-2 Fri) (11-11:50), Tayler Parnaby (11:50-12), The Motts (12-2), John Oakley (2-5), World Today with Mike Stafford, David Bent, & Debra Hurst (5-7), Mike Stafford (7-9), A Day In The Life (9-10), Evening Business Hour (10-11), Jim Richards (11-2), RB Replay (2-5:30).
In February, John Dickie left; he was replaced on February 6 by Richard Syrett, John Oakley's producer (Sunday night/Monday morning).
On February 25, former newscaster/commentator Bob Hesketh passed away at age 76. Bob was a news commentator at CFRB from July 1959 (when he was hired as a summer replacement for Gordon Sinclair) until his retirement in 1987. His daily commentary, "The Way I See It", was syndicated across Canada for 19 years. It was the longest-running syndicated show in private Canadian broadcasting.
On February 27, Karen Horsman left on maternity leave. She returned later in the year.
On March 4, The Weather Network's forecasts were added.
In August, Mike Stafford left (5-9 p.m.) and was replaced by Randy Taylor (last at CKTB).
In September, Michael Coren returned for Sunday nights. Karen Horsman returned from maternity leave. The U.S. syndicated Mitch Albom Show was added to the Sunday line-up.
Other 2000 program notes: Arnis Peterson was on long-term disability. Dan Gallagher joined for Saturday afternoons. Jaymz Bee left and was replaced by Errol Bruce-Knapp. Rob Graham and Michael Kane joined from CFTR. Laurie Goldstein left the 9 am free-for-all and was replaced by Dave Agar (The Free-For-All was reduced to half an hour from 9-9:30, then Bill Carroll on his own from 9:30-11). Mike Kirby, station voice, left. Jackie Delaney joined for traffic and fill-in talk from CJCL and CJEZ. Dr. Amanda Glew joined for pet show on Saturday afternoons (simulcast on CJAD in Montreal). The World Today and The World at Noon were cancelled. The World Today was replaced by Mike Stafford (and then Randy Taylor) from 5-9 p.m.
Other voices heard on CFRB: Weekends: John Donabie, Tom Fulton, Mark Cullen (garden), Dave Cornwell (garden), John Dickie, Errol Bruce-Knapp, John Caras, Karen Horsman, Jaymz Bee, Jackie Mahon, Mike Bendixen, Ryan Doyle, Rachael Shaw, Marty Galin, Avrum Rosensweig, Dan Gallagher, Dr. Amanda Glew, Hannah Sung, Dr. Mickey Lester, Mitch Albom, Jerry White, Michael Coren, Richard Syrett. News: Bob Durant, Jackie Mahon, Tayler Parnaby, Dave Agar, Jane Brown, Connie Sinclair, David Bent, Deborah Hurst, Dave Trafford, John Elston, Brent Coppin, Bill McDonald, Dana O'Brien, Al Michaels. Reporters: Brent Coppin, Bob Komsic, Lisa Nakarado, Dana O'Brien, Myla Koskitalo, James Fitzmorris, Claude Beaulieu. Sports: Bruce Barker, Bill Stephenson, Dave Quinn, Martina Fitzgerald. Traffic: Martina Fitzgerald, Sheila Walsh, Guy Valentine, Neil Bansel, Myla Koskitalo, Jennifer Reed, Henry Shannon (fill-in), Jackie Delaney. Business: Mark Bunting, Jerry White, Brian Costello, Conrad Forest, Rob Graham, Michael Kane, Arnis Peterson. Others: Lorne Hailstone (events), Susan G. Cole (Free-for-all fill-in), Mike Kirby (station voice). Weather: from the Weather Network. Entertainment: Sheila Walsh, Martina Fitzgerald, John Donabie, John Moore. Commentary: Tayler Parnaby, Christie Blatchford, various announcers. Evening Business Hour hosted by Michael Kane & Deirdre McMurdy.
Former CFRB sports commentator Jim Coleman, 89, died January 14.
Weekend personality Dan Gallagher died January 21 at age 43. He was found in his Toronto home by family after he failed to show up for his January 20 show.
Well-known broadcaster, journalist and evangelist Charles Templeton passed away June 7 at age 85. He and Pierre Berton had hosted a commentary feature on CFRB in the past.
In February, Erica Ehm joined. Dan Turner joined from CHUM.
On March 27, Jackie Mahon left for CITY-TV.
In April, Larry Silver joined from CFYI.
More 2001 program notes: Larry Silver left CFRB news for the Corus Radio Network (where he had already been doing some work in addition to his CFRB stint). Spider Jones joined for some fill-in work. He had been with CFYI.
On January 11, a new antenna system was put into operation. This was the third new array since the transmitter plant was moved from Aurora to Clarkson. Four towers, each 690 feet tall, were erected further south on the site, replacing two 50-foot towers and two 550-footers. The aircraft warning lights on the new towers, using LED technology, were the first of that design to be authorized in Canada.
Former on-air personality Earl Warren (Earl Warren Seagal) died October 19. He was 69.
Former swing announcer Tom Fulton died on December 9 at age 58.
H.T. (Mac) McCurdy, President of Standard Broadcasting Corporation from 1974 to 1985, died on September 3rd.
Daryl Wells (Daryl Frederick Wille), the Voice of Racing heard for many years on 'RB, died December 12 at age 81.
Art Cole, the former host of "Let's Discuss It" died January 5 at age 87.
On September 27, Astral Media Radio G.P. received CRTC approval to acquire the assets of the radio and TV undertakings owned by Standard Radio Ltd., subject to certain conditions. The purchase included CFRB-AM, CKFM-FM and CJEZ-FM. Astral Media took ownership of the Standard stations on October 29.
John Spragge passed away on December 16 at the age of 71. At one time, he was CFRB's program director, a position he held for 13 years. He joined the station after a ten-year run at CHUM, followed by a few years with the Radio Sales Bureau and Standard Broadcast Sales. Jack Dawson, who had joined CFRB in 1939 as an announcer and retired in 1973 as Vice-President & General Manager, passed away at the age of 91.
Chief correspondent Tayler Parnaby retired in January. This also ended the only connection with CFRB's past...the 11:50 a.m. newscast which had belonged to Gordon Sinclair. Parnaby's retirement was part of Astral Media Radio's restructuring at a number of stations across Canada. A number of other CFRB news personnel lost their jobs - David Bent, Jane Brown, Bill McDonald and John Elston. Talk host Richard Syrett was also let go.
On August 27, 12 employees were let go at CFRB. Sherry O'Neil, general manager of Astral Media Radio's Toronto cluster, said the station was "evolving the product on air" to boost ratings. She added the move had nothing to do with the economy and that any money saved would be put back into the station. Midday talk hosts Paul & Carol Mott were among those let go. News staffers, including newscaster Kris McKusker, and some producers were also let go. Michael Coren and Jacqui Delaney were among the other casualties.
On August 28, the CRTC renewed the transitional digital radio licence of CFRB-DR-2.
After Steve Kowch was let go by CFRB, Astral offered him a six-month contract with sister station CJAD in Montreal. He accepted, becoming program and news director there.
When Steve Kowch was moved to CJAD-AM Montreal, that station's program director, Mike Bendixen, took up the same post at CFRB. Bendixen had been with CFRB before being moved to CJAD a few years earlier.
In September, CFRB announced major program line-up changes to take effect on October 5. The station would also become known simply as "Newstalk 1010". John Moore would move from afternoon drive to mornings (5:30-9). Morningman Bill Carroll would return to the mid-day shift (9-1). Jim Richards would handle the 1-4 p.m. shift. Former Ontario Conservative leader John Tory would move from a weekend shift to afternoon drive (4-7). Ryan Doyle would remain in the 7-10 p.m. time slot. John Moore would be joined in the mornings by Rick Hodge (sports), former Breakfast Television host Liza Fromer and Tarek Fatah on international affairs.
Sportscaster Rick Hodge was no longer with Astral Media Radio Toronto. His job at EZ Rock was eliminated as was fellow morning show staffer Kim Stockwood’s position. Hodge had also been doing commentary on sister station CFRB.
Ron Hewat, after 51 years in the radio business, retired December 31 from his Specialty Sales Manager's job at CFRB. Hewat had done play-by-play for the Maple Leafs and the Canada Cup, helped build the Toronto Blue Jays' first radio network and hired play-by-play announcer Tom Cheek. He got his start in 1958, when Foster Hewitt hired him to work 12-hour weekend shifts. Shortly afterwards, Hewat began doing intermission interviews at Leafs and Marlies games and eventually became the colour man on Leafs radio broadcasts. His big break came in 1968, when he was handed the play-by-play job.
Ronald Adam Krochuk died at age 73. Krochuk held sales and marketing positions at such stations as CJOB Winnipeg, CJAD Montreal, CFRB Toronto and at the now-Corus Radio Hamilton stations.
Bill Carroll, the 9 a.m. to noon talker at CFRB, moved to KFI Los Angeles to do the noon to 2 p.m. talk show, as of February 22. Program director Mike Bendixen said Carroll would continue to be heard on CFRB during the Live Drive with John Tory. He would also join 'RB hosts on a regular basis to provide his unique opinions on the news and social issues of the day.
Jerry Agar succeeded Bill Carroll in the 9 a.m. to 1 p.m. slot. He began his career at hometown CKDM Dauphin doing the overnight show. Agar had worked at WABC New York, KMBZ Kansas City, WLS Chicago and until now, had been hosting weekends at WGN Chicago.
Gwyn 'Jocko' Thomas passed away at age 96. He was hired by the Toronto Star in October of 1929. He went on to win three National Newspaper Awards and was inducted into the Canadian News Hall of Fame in 1995. From the early '60s, Thomas was also heard on CFRB, where he would end his news reports with his distinctive sign-off: "This is Jocko Thomas of the Toronto Star reportin' to CFRB from police headquar-r-r-rters".
Astral Toronto let a number of CFRB people go in June: Eileen Berardini (assignment editor), Bob Komsic (evening anchor), Melissa Boyce (promotions) and weekend announcer John Donabie.
In September, Mike Bullard moved to Newstalk 1010...noon to one p.m. He had been at CHAM Hamilton until that station changed back to a country format.
Former CFRB morning man Ted Woloshyn returned to the station this year to handle Saturday afternoons.
Fred Ursel died at 76. He had worked in the 1960s with CFRB sales and with Standard Broadcast Sales.
David Lindores became the Promotion Director at KiSS 92.5/98.1 CHFI on June 15, moving from Astral Media Radio Toronto where he'd been for about three years.
Daniel Proussalidis of CFRB's news department was promoted to Ottawa Bureau Chief.
Val Meyer, Vice President of Sales at Astral Outdoor in Toronto, succeeded Sherry O'Neil as Vice President and General Manager of Astral Radio Toronto (CFRB/boomfm/Virgin), effective December 6. O'Neil became Astral's corporate Vice President of Planning and Transformation.
Former CFRB on-air personality John Dolan passed away.
On August 31, the CRTC administratively renewed the licence for CFRB-DR-2 to April 30, 2012.
Bill Herz, vice president of sales at Astral Media Radio and based in Toronto, announced his retirement. He'd been with the operation, through three owners, for 45 years beginning with CFRB in 1965. In 1975, he tried TV sales at Baton Broadcasting, then went back to Standard Broadcasting. In 1995, he worked at CHUM for a short time and then returned to Standard.
Lorie Russell was now managing sales for the entire Astral Radio Toronto cluster. She added Newstalk 1010 after Scott Johns was promoted to revenue director for English Canada stations. Also at Astral Radio Toronto, retail sales supervisors Brett Dakin and Brian Labonte were promoted to retail sales managers.
Traffic reporter Rob Valentine left CFRB to join 680NEWS (CFTR), December 7.
Dave Trafford, News Director at CFRB, moved September 15 to Global TV where he became Managing Editor.
Former CFRB talk host Larry Solway passed away January 9 at age 83.
On April 23, the CRTC administratively renewed the broadcasting licence for digital radio programming undertaking CFRB-DR-2 until August 31, 2012.
Tom McKee passed away at age 76. McKee was a sports broadcaster with CBC, CTV, ABC, TSN, CFRB and TV Labatt, and was the first face and voice on the Toronto Blue Jays' 1977 inaugural telecast.
A guest's comments on homosexuality on Mayor Rob Ford's weekly talk show on CFRB were found by the Canadian Broadcast Standards Council to have violated the CAB's Code of Ethics and Equitable Portrayal Code.
Astral Radio announced the return of "Humble" Howard Glassman and Fred Patterson. On January 21, the Humble & Fred Radio Show began airing weekday evenings on Funny 820 Hamilton, Funny 1410 London, and News Talk 1010 CFRB Toronto.
Clive Eastwood, Vice-President, Engineering (ret.1986), CFRB Limited
J. Lyman Potts, Vice-President (ret. 1981), Standard Broadcasting Corporation
Bill Dulmage, CCF, Ross McCreath, CCF
Updated May 2013.
For 200 years the FUR TRADE dominated the area known as Rupert's Land. Settlement, particularly from eastern Canada and eastern Europe, eventually created a sound agricultural tradition. Postwar political and economic efforts have enabled the economy to diversify industry and develop primary resources, while maintaining agricultural strength.
Land and Resources
The regions of Manitoba are derived chiefly from its landforms. Since the final retreat of the continental ice sheet some 8000 years ago, many physical forces have shaped its surface into 4 major physiographic regions: the Hudson Bay lowland, Precambrian upland, Lake Agassiz lowland and Western upland.
Manitoba provides a corridor for the Red, Assiniboine, Saskatchewan, Nelson and Churchill rivers. Three large lakes, Winnipeg, Winnipegosis and Manitoba, cover much of the Lake Agassiz lowland. They are the remnants of Lake AGASSIZ, which occupied south-central Manitoba during the last ice age. The prolonged duration of this immense lake accounts for the remarkable flatness of one-fifth of the province, as 18-30 m of sediments were laid on the flat, preglacial surface.
Antecedent streams, such as the Assiniboine, Valley and Swan rivers, carved the southwestern part of the province (Western upland) into low plateaus of variable relief, which with the Agassiz lowland provide most of Manitoba's arable land. The Precambrian upland is composed of hard granite and other crystalline rocks that were subject to severe glacial scouring during the Ice Age; its thin soil, rock outcrop and myriad lakes in rock basins are inhospitable to agriculture but are amenable to hydroelectric power sites, freshwater fishing, metal mines and some forestry.
Flat sedimentary rocks underlie the Hudson Bay lowland, and the climate is extremely cold. Little development or settlement exists other than at CHURCHILL, Manitoba's only saltwater port. A line drawn from southeastern Manitoba to Flin Flon on the western boundary separates the arable and well-populated section to the south and west from the sparsely inhabited wilderness to the north and east. The latter comprises about two-thirds of the area of the province.
The bedrock underlying the province varies from ancient Precambrian (Archean) to young sedimentary rocks of Tertiary age. The former has been identified as 2.7 billion years old, among the oldest on Earth, and forms part of the Canadian Shield, a U-shaped band of Precambrian rocks tributary to Hudson Bay. It consists principally of granites and granite gneisses in contact with volcanic rocks and ancient, metamorphosed sedimentary rocks. Contact zones often contain valuable minerals, including nickel, lead, zinc, copper, gold and silver - all of which are mined in Manitoba.
Along the flanks of and overlying the ancient Precambrian rocks are sedimentary rocks ranging from Palaeozoic to Tertiary age. The Lake Agassiz lowland comprises a surface cover of lacustrine sediments superimposed on early Palaeozoic rocks of Ordovician, Silurian and Devonian age, from which are mined construction limestone, gypsum, clay, bentonite, sand and gravel. In favourable structures petroleum has also been recovered from rocks of Mississippian age.
West of the Agassiz lowland rises an escarpment of Cretaceous rocks, which comprise the surface formations of the Western upland. For long periods the escarpment was the west bank of glacial Lake Agassiz. East-flowing rivers such as the Assiniboine, the Valley and the Swan once carried the meltwaters of retreating glaciers, eroding deep valleys (spillways) that opened into this lake. The former lake bottom and the former valleys of tributary streams were veneered with silts and clays, which today constitute the most fertile land in western Canada.
Both the Western upland and the bed of Lake Agassiz comprise the finest farmlands of Manitoba. In the southwest the geologic structures of the Williston Basin in North Dakota extend into Manitoba and yield small amounts of petroleum. A vast lowland resting on undisturbed Palaeozoic sediments lies between the Precambrian rocks of northern Manitoba and Hudson Bay. Adverse climate, isolation and poorly drained peat bogs make this region unsuitable for agriculture.
Minor terrain features of Manitoba were formed during the retreat of the Wisconsin Glacier at the close of the last ice age. The rocks of the Shield were severely eroded, leaving a marshy, hummocky surface threaded with a myriad of lakes, streams and bogs. Relief is rolling to hilly.
Much of the Agassiz lowland, the largest lacustrine plain in North America (286 000 km2), is suitable for irrigation. Much is so flat that it requires an extensive drainage system. Its margins are identified by beach ridges. The Western upland is now covered by glacial drift. Rolling ground moraine broken in places by hilly end moraines has a relief generally favourable to highly productive cultivated land.
Since southern Manitoba is lower than the regions to the west, east and south, the major rivers of western Canada flow into it. Including their drainage basins, these are the SASKATCHEWAN RIVER (334 100 km2), the Red (138 600 km2), the ASSINIBOINE (160 600 km2) and the WINNIPEG (106 500 km2). Lakes Winnipeg, Manitoba and Winnipegosis receive the combined flow of these basins. In turn the water drains into Hudson Bay via the NELSON RIVER. These, together with the CHURCHILL, HAYES and other rivers, provide a hydroelectric potential of 8360 MW.
Climate, Vegetation and Soil
Situated in the upper middle latitudes (49° N to 60° N) and at the heart of a continental landmass, Manitoba experiences large annual temperature ranges: very cold winters and moderately warm summers. The southward sweep of cold, dry arctic and maritime polar air masses in winter is succeeded by mild, humid maritime tropical air in summer. Nearly two-thirds of the precipitation occurs during the 6 summer months, the remainder appearing mostly as snow. The frost-free period varies greatly according to local conditions, but as a general rule the average 100-day frost-free line extends from Flin Flon southeast to the corner of the province.
Spring comes first to the Red River valley, which has a frost-free period of about 120 days, and spreads to the north and west. As a result, the mean number of growing degree days (above 5° C) varies from 2000 to 3000 within the limits defined. Snowfall tends to be heaviest in the east and diminishes westward. Around Winnipeg the average snowfall is 126 cm per year. Fortunately, 60% of the annual precipitation accompanies the peak growing period for grains: May, June and July. Late August and early September are dry, favouring the harvest of cereal grains.
Subarctic conditions prevail over northern Manitoba. Churchill occupies a position on Hudson Bay where abnormally cold summers are induced by sea temperatures. Manitoba's climate is best understood with reference to air masses. During the winter, low temperatures and humidities are associated with the dominance of continental Arctic and continental Pacific air. During spring abrupt seasonal changes introduce maritime tropical air from the south, which is unstable and warm. The usual sequence of midlatitude "lows" and "highs" brings frequent daily temperature changes. Some Pacific air moves east, moderating at intervals the extreme cold of winter.
Manitoba's natural vegetation ranges from open grassland and aspen in the south to mixed forest in the centre, typical boreal forest in the north and bush-tundra by Hudson Bay. In the south high evaporation rates discourage the growth of trees, which are replaced by prairie. Both tall-grass and mixed-grass species were extensive before settlement. Elm, ash and Manitoba maple grow along stream courses, and oak grows on dry sites. With increase in latitude and reduced evaporation, mixed broadleaf forest replaces parkland.
The northern half of the province is characteristically boreal forest, consisting of white and black spruce, jack pine, larch, aspen and birch.
This pattern continues with decreasing density nearly to the shores of Hudson Bay, where the cold summers and short growing period discourage all but stunted growth of mainly spruce and willow and tundra types of moss, lichens and sedges. Spruce, fir and pine are processed for lumber and pulp and paper products. Large mills are found at Pine Falls (newsprint), The Pas (lumber and pulp and paper) and Swan River (oriented strandboard).
In general the province's soil types correlate closely with the distribution of natural vegetation. The following soil descriptions are in order of decreasing agricultural value. The most productive are the black soils (chernozems), corresponding to the once dominant prairie grassland of the Red River valley and southwestern Manitoba. They differ in texture from fine in the former to medium in the latter. Coarse black soils are found in the old Assiniboine delta and the Souris Valley, the former extending from Portage la Prairie to Brandon. Sand dunes are evident in places.
In areas of transition to mixed forest, degraded black soils and grey-wooded soils are common, notably in the area from Minnedosa to Russell south of Riding Mountain. Large areas of the former Lake Agassiz, where drainage is poor, are termed "degraded rendzina" because of high lime accumulation. Soils derived from the hard granites and other rocks of the Shield, typically covered with coniferous forest, are described as grey wooded, podsol and peat; they are rated inferior for agriculture.
Manitoba's principal resource is fresh water. Of the 10 provinces it ranks third, with 101 590 km2 in lakes and rivers, one-sixth its total area. The largest lakes are WINNIPEG (24 387 km2), WINNIPEGOSIS (5374 km2) and MANITOBA (4624 km2). Other freshwater lakes of more than 400 km2 are SOUTHERN INDIAN, Moose, Cedar, Island, Gods, Cross, Playgreen, Dauphin, Granville, Sipiwesk and Oxford. Principal rivers are the Nelson, which drains Lake Winnipeg, and the Red, Assiniboine, Winnipeg, Churchill and Hayes. Lake Winnipeg is the only body of water used today for commercial transportation, but the Hayes, Nelson, Winnipeg, Red and Assiniboine rivers were important during the fur trade and early settlement eras.
The network of streams and lakes today is a source of developed and potential hydroelectric power; its installed generating capacity is 4498 MW. Possessing 70% of the hydroelectric potential of the Prairie region, Manitoba promises to become the principal contributor to an electric grid that will serve Saskatchewan and Alberta as well as neighbouring states of the US.
Flooding along the Red River and its principal tributaries, the Souris and Assiniboine, has affected towns as well as large expanses of agricultural land. Major flood-control programs have been undertaken, beginning with the Red River Floodway and control structures completed in 1968. A 48 km diversion ditch protects Winnipeg from periodic flooding. Upstream from Portage la Prairie a similar diversion was built between the Assiniboine River and Lake Manitoba. Associated control structures include the Shellmouth Dam and Fairford Dam. Towns along the Red River are protected by dikes.
Agricultural land is the province's second major resource, with over 4 million ha in field crops in addition to land used for grazing and wild hay production. Based on "census value added," agriculture leads by far all other resource industries; mining follows in third place after hydroelectric power generation. Nickel, copper, zinc and gold account for about three-quarters by value of all minerals produced. The fuels, mainly crude petroleum, are next, followed by cement, sand, gravel and construction stone. Of the nonmetallics, peat and gypsum are important.
Most of Manitoba's productive forestland belongs to the Crown. The volume of wood cut averages 1 600 000 m3 annually, from which lumber, plywood, pulp and paper are produced. Manitoba's freshwater lakes yield large quantities of fish; the leading species by value are pickerel, whitefish, perch and sauger. Hunting and trapping support many native people.
Conservation of resources has been directed mainly to wildlife. Fur-bearing animals are managed through trapping seasons, licensing of trappers and registered traplines. Hunting is managed through the Wildlife Act, which has gone through a series of revisions since 1870. The Endangered Species Act (1990) enables protection of a wider variety of species.
In 1961 a system of wildlife management areas was established and now consists of 73 tracts of crown land encompassing some 32 000 km2 to provide protection and management of Manitoba's biodiversity. Manitoba is on the staging route of the North American Flyway and these wildlife areas protect land which many migratory birds use.
Hunting of all species of game is closely managed and special management areas have been established to provide increased protection for some game, nongame and endangered species and habitats. Hunting and fishing are also closely managed in provincial parks and forest reserves.
Forest conservation includes fire protection, insect control, controlled cutting and reforestation programs. Surveillance of forest land by aircraft and from numerous widely dispersed fire towers reduces significantly the incidence and spread of forest fires. Insects and disease are controlled by aerial spraying, tree removal and regulated burning. Among the more virulent pests are jack pine budworm, spruce budworm, aspen tortrix, forest tent caterpillar and birch beetle. Winnipeg is fighting desperately to contain dutch elm disease.
Each year millions of seedlings, mainly jack pine, red pine and white spruce, are planted for REFORESTATION. To ensure future supplies of commercial timber, operators must make annual cuttings by management units on a sustained yield basis.
RIDING MOUNTAIN NATIONAL PARK, on the Manitoba escarpment, was the province's only national park until 1996, when Wapusk National Park near Churchill was established. Manitoba has over 100 provincial parks of various types. The natural and recreational parks are the most commonly used and include WHITESHELL PROVINCIAL PARK in the east and Duck Mountain in the west. The province's first wilderness park, Atikaki, was opened in 1985 and is Manitoba's largest park.
The Manitoba Fisheries Enhancement Initiative was announced in 1993 to fund projects that protect or improve fish stocks or enhance the areas where fish live. Projects have included rock riffles for fish spawning, artificial walleye spawning shoals, stream bank protection and habitat enhancement and a fish way. The FEI encourages cooperation with other government and nongovernment agencies. This ensures that fisheries values are incorporated in other sectors; eg, agriculture, forestry and highways.
Between 1682, when YORK FACTORY at the mouth of the Hayes River was established, and 1812, when the first Selkirk settlers came to Red River, settlement consisted of fur-trading posts established by the HUDSON'S BAY COMPANY (HBC), the NORTH WEST COMPANY and numerous independent traders. As agriculture spread along the banks of the Red and Assiniboine rivers, radiating from their junction, the RED RIVER COLONY was formed. In 1870 the British government paid the HBC $1.5 million for control of the vast territory of RUPERT'S LAND and opened the way for the newly formed Dominion of Canada to create the first of 3 Prairie provinces. Manitoba in 1870 was little larger than the Red River valley, but by 1912 its current boundaries were set. Settlement of the new province followed the Dominion Lands Survey and the projected route of the national railway. The lands of the original province of Manitoba were granted to settlers in quarter-section parcels for homesteading purposes under the Dominion Lands Act of 1872.
The remainder of what is now Manitoba was still the North-West Territories at the time. After 1878 settlers could obtain grants of quarter-section parcels of land in those areas provided they managed to improve the land. By 1910 most of southern Manitoba and the Interlake and Westlake areas were settled. Railway branch lines brought most settlers within 48 km (30 mi) of a loading point from which grain could be shipped to world markets. Rural population peaked in 1941, followed by a steady decline resulting from consolidation of small holdings into larger farm units, retreat from the submarginal lands of the frontier because of long, cold winters and poor soils, and the attraction of the larger cities, especially Winnipeg.
Overpopulation of submarginal lands in the Interlake and the Westlake districts and along the contact zone with the Shield in the southeast caused a substantial shift from the farm to the city. Hamlets and small towns have shrunk or disappeared; large supply centres are more easily reached with modern motor vehicles, and children are bused to schools in larger towns and cities. Elimination of uneconomic railway branch lines also has left many communities without services.
Manitoba's population is disproportionately distributed between the "North" and the "South." A line drawn from lat 54° N (north of The Pas) to the southeast corner of the province sharply divides the continuous settled area, containing 95% of the people, from the sparsely populated north. Settlement of the north is confined to isolated fishing stations and mining towns, scattered native reserves and Churchill, a far north transshipment centre on the shores of Hudson Bay.
Until 1941 the rural population component exceeded the urban. The rural population subsequently declined in absolute and relative terms until 2001, when it was 28% of the total. "Rural" includes farm and nonfarm residents and people living in towns and hamlets that have populations under 1000.
Centres designated as "urban" (more than 1000) now comprise 72% of the total. Almost 77% of the urban total live in Winnipeg, which together with its satellite, Selkirk, accounts for nearly 60% of the total provincial population.
WINNIPEG began in the shadow of Upper Fort Garry. In the 1860s free traders, in defiance of the HBC monopoly, located there and competed for furs. After 1870 the tiny village rapidly became a commercial centre for the Red River colony. Located at "the forks" of the Red and Assiniboine rivers, it commanded water and land travel from the west, south and north and became the northern terminus of the railway from St Paul, Minn, in 1878.
Following the decision to have the CANADIAN PACIFIC RAILWAY cross the Red River at Winnipeg (1881), the centre became the apex of a triangular network of rail lines that drew commerce from Alberta eastward, and it eventually became a crossroads for east-west air traffic. Since World War II Winnipeg has experienced modest growth and commercial consolidation in a reduced hinterland. It is the provincial centre of the arts, education, commerce, finance, transportation and government.
Although Winnipeg's pre-eminence is unchallenged, certain urban centres dominate local trading areas. BRANDON, Manitoba's second city, is a distribution and manufacturing centre for the southwest, as is the smaller PORTAGE LA PRAIRIE, set in the Portage plains, one of the richest agricultural tracts in the province. In the north, THOMPSON and FLIN FLON service the mining industry.
The major towns of SELKIRK, DAUPHIN and THE PAS were founded as fur-trading forts and today serve as distribution centres for their surrounding communities. LYNN LAKE, LEAF RAPIDS and Bissett are small northern mining centres.
A network of smaller towns in southwestern Manitoba fits the "central place theory" modified by the linear pattern of rail lines emanating from Winnipeg. Grain elevators approximately every 48 km (30 mi) became the nuclei of hamlets and towns. Eventually, with the advent of motor transport, branch lines were eliminated, and with them many place names that once stood for thriving communities. The present pattern is a hierarchy of central places, from hamlets to regional centres, competing to supply a dwindling farm population.
Since 1961 Manitoba's population growth has been slow but steady, rising from 921 686 in 1961 to 1 119 583 in 2001, despite a fairly constant amount of natural increase of 6000 to 7000 per year. The significant factor in population growth during this period has been migration. During periods of economic health, Manitobans have been less likely to move away, and in fact often return home from other provinces. When the economy is in decline, Manitobans tend to migrate, primarily to Ontario and to the other western provinces.
These cyclical periods, normally 3 to 5 years, either negated or enhanced the natural population growth so the population has experienced short periods of growth followed by short periods of decline, resulting in very slow overall population growth.
The labour participation rate (5-year average 2000-04) is higher for men (74.9%) than for women (62.2%), although the figure for women has increased steadily since the latter part of the 20th century. The unemployment rate was also higher for men (5.3%) than for women (4.8%). When Winnipeg is considered separately, its unemployment rate (5.3%) was slightly higher than in the rural areas (4.6%) and than the provincial average (5.1%). Compared with other provinces, Manitoba has had one of the lowest unemployment rates over the last 25 years.
Manitoba's largest employers of labour by industry are trade (85 200), manufacturing (69 100) and health care and social assistance (78 000). The average annual income for individuals in Manitoba in 2001 was $28 400, about 90% of the national average of $31 900.
The dominant "mother tongue" in 2001 was English (73.9%). Other prevalent languages are German, French, Ukrainian and Aboriginal languages. The concentration of those reporting their "mother tongue" as English is higher in urban centres than in rural areas. The reverse is true for French, Ukrainian and German, the latter mainly because of the large MENNONITE farming population. In 1870 the Manitoba Act gave French and English equal status before the courts and in the legislature. In 1890 a provincial act made English the only official language of Manitoba. This act was declared ULTRA VIRES in 1979, and since 1984 the provincial government has recognized both English and French as equal in status.
In schools the Français program provides instruction entirely in French for Franco-Manitobans and the French-immersion program gives all instruction in French to students whose mother tongue is not French. Some schools offer instruction in the majority of subjects in a minority tongue, eg, Polish, Ukrainian, German.
The mother tongues of native peoples are Ojibway, Cree, Dene and Dakota. The native people of the north speak mainly Cree; Ojibway is the mother tongue of most bands in the south, although English is most often spoken.
Manitoba contains a large diversity of ethnic origins. Most Manitobans trace their ancestries to one or more of the following ethnic groups: British, Canadian, GERMAN, Aboriginal, UKRAINIAN, and FRENCH. British descendants have decreased proportionately since 1921; numerically they are strongest in urban areas, whereas the minorities are relatively more numerous in rural areas. The distribution of the larger ethnic groups, especially in rural areas, is related to the history of settlement. In the 2001 census, about 8% of the population listed their sole ethnic origin as Aboriginal. There are also significant populations of those who listed Polish, DUTCH, FILIPINO, RUSSIAN and Icelandic ancestries.
The Mennonites (German and Dutch) are concentrated in the southern Red River valley around ALTONA, STEINBACH and WINKLER; Ukrainians and POLES live in the Interlake district and along the frontier. Many French live south of Winnipeg close to the Red River. Those of ICELANDIC origin are found around the southwestern shore of Lake Winnipeg. The Filipino population is concentrated in Winnipeg. FIRST NATIONS live mainly on scattered reserves, primarily in central and northern Manitoba, although some have moved to a very different lifestyle in Winnipeg.
To some extent religious denominations reflect the pattern of ethnicity. Three groups comprise about half of the population (2001 census): UNITED CHURCH (16%), Roman CATHOLIC (26.5%) and ANGLICAN (7.8%). Most Ukrainians are members of the Ukrainian Catholic (2.7%) and Orthodox (1.0%) churches. Those of German and Scandinavian backgrounds support mainly the Lutheran faith (4.6%), and 4.7% are Mennonite. Nearly 19% of the population claimed to have no affiliation with any religion.
Hunting and trapping constitute Manitoba's oldest and today's smallest industry. For 200 years the HBC dominated trade in furs across western Canada as far as the Rocky Mountains. Alongside the fur trade, buffalo hunting developed into the first commercial return of the plains; native people, Métis and voyageurs traded meat, hides and PEMMICAN, which became the staple food of the region.
Until 1875 the fur trade was the main business of Winnipeg, which was by then an incorporated city of 5000 and the centre of western commerce. In the city the retail/wholesale and real estate business grew in response to a new pattern of settlement and the development of agriculture. Red Fife wheat became the export staple that replaced the beaver pelt.
After the westward extension of the main CPR line in the 1880s, farmers and grain traders could expand into world markets and an east-west flow of trade began, with Winnipeg the "gateway" city. Over the next 20 years, this basically agricultural economy consolidated. Lumbering, necessary to early settlement, declined and flour mills multiplied.
During the boom years, 1897 to 1910, there was great commercial and industrial expansion, particularly in Winnipeg, and agriculture began to diversify. The following decades of depression, drought, labour unrest and 2 world wars sharpened the realization that the economy must diversify further to survive, and since WWII there has been modest growth and commercial consolidation.
Today, manufacturing leads all industrial groups, followed by agriculture, the production of hydroelectric power and mining. The primary industries (including electric power generation) represent about half of the total revenue derived from all goods-producing industries. Manufacturing and construction account for the rest.
Agriculture plays a prominent role in the provincial economy. There are diverse sources of income from agriculture. In 1997 farm cash receipts for crops amounted to $1.7 billion compared with livestock at $1.2 billion. Wheat cash receipts are 4 times those from barley and oats combined. Hay crops are important because of a secondary emphasis on livestock production.
Cash receipts from livestock were highest from hogs ($478 million), followed by cattle ($301 million), dairy products, poultry and eggs. Wheat is grown throughout southern Manitoba, primarily where there are medium- to fine-textured black soils, especially in the southwest. Barley used as prime cattle feed is tolerant of a range of climatic conditions, but is intensively grown south and north of Riding Mountain and in the Swan River valley. CANOLA, used as a vegetable oil and as high-protein cattle feed, is also grown throughout the province. In the late 1990s, its importance rivalled that of wheat. Prime malting barley prefers the parkland soils and cooler summer temperatures. Cultivation of oats is general and concentrated in areas of livestock farming; it is frequently tolerant of less productive soil. Flax is grown mostly in the southwest on black soil, and canola is significant on the cooler lands near the outer margin of cultivation.
Specialized crops, including sugar beets, sunflowers, corn (for both grain and silage) and canning vegetables are concentrated in the southern Red River valley, where heating degree days are at a maximum and soil texture is medium. Beef cattle are raised on most farms in western Manitoba but are less important in the Red River valley.
Dairy cattle are raised mainly in the cooler marginal lands, which extend in a broad arc from the southeast to the Swan River valley. Poultry is heavily concentrated in the Red River valley, but hogs have a much wider distribution, influenced by a surplus of barley and fresh milk. Market gardening occupies good alluvial soil around Winnipeg and the Red River, from which water is obtained for irrigation during dry periods.
Neighbouring farmers set up cooperatives, which vary in scope and purpose from the common purchase of land and machinery to processing and marketing members' products. Two large cooperatives, Manitoba Pool Elevators and United Grain Growers, were founded to handle and market grain, and now deal in livestock and oilseeds and provide members with reasonably priced farm supplies. Manitoba's 8 marketing boards are producer bodies that control stages in the marketing of specific commodities. Wheat, oats and barley for export must be sold to the national CANADIAN WHEAT BOARD.
Agriculture is never likely to expand beyond the limits imposed by shortness of growing season (less than 90 days frost-free) and the poor podsolic soils associated with the Shield. Plans for irrigating the southwestern Red River valley, known as the Pembina Triangle, are under study. Periodic flooding of the upper Red River (south of Winnipeg) has damaged capital structures and reduced income. Approximately 880 000 ha of farmland are under drainage, mostly in the Red River Valley and the Interlake and Westlake districts. The Prairie Farm Rehabilitation Act encourages conservation of water through check dams and dugouts.
Mining contributed $1 billion to the provincial economy in 1996. Of Manitoba's income from all minerals, over 80% is derived from metals, chiefly nickel, copper, zinc, cobalt and gold, with minor amounts of precious metals. All metals are found in the vast expanse of Canadian Shield.
Diminishing amounts of petroleum are recovered from sedimentary rocks of Mississippian age in the southwest corner of the province near Virden and Tilston. Industrial minerals, principally quarried stone, gravel and sand, account for 8%. The famous Tyndall stone is a mottled dolomitic limestone quarried near Winnipeg and distributed across Canada. Gypsum is mined in the Interlake district near Gypsumville and in the Westlake area near Amaranth. Silica sand comes from Black Island in Lake Winnipeg.
Manitoba's most productive metal mines are at Thompson. Reputed to be the largest integrated (mining, smelting and refining) operation in North America, Thompson accounts for all of Manitoba's nickel production. The province's oldest mining area, dating from 1930, is at Flin Flon; along with its satellite property at Snow Lake, it is a major producer of copper and zinc and small amounts of gold and silver. Other major centres include Lynn Lake, where until 1989 copper and nickel were mined and now gold has taken their place, and Leaf Rapids, where nickel and copper are mined.
Other than a small amount of petroleum, the province's resources in energy are derived from hydroelectric power. Thermal plants depend mostly on low-grade coal imported from Estevan, Sask, and on diesel fuel. Manitoba Hydro, a crown corporation, is the principal authority for the generation, development and distribution of electric power, except for Winnipeg's inner core, which is served by Winnipeg Hydro, a civic corporation. Hydro power plants were first built along the Winnipeg River and 6 of these plants still operate.
The availability of cheap power within 100 km of Winnipeg has made the city attractive to industry for many years. Since 1955 hydroelectric development has been expanding in the north. In 1960 a plant was commissioned at Kelsey on the Nelson River, and in 1968 the Grand Rapids plant was built near the mouth of the Saskatchewan River. Increased demand led to the construction of 4 additional plants on the Nelson: Jenpeg, Kettle Rapids, Long Spruce and Limestone. Downstream another plant at Limestone with a 1330 MW capacity, the largest in Manitoba, was completed by 1992. In addition, 2 thermal plants powered by coal from Estevan are located at Brandon and Selkirk; they supplement hydro sources at peak load times.
Installed generating capacity in 1994 was 4912 MW with a further hydro potential of 5260 MW. Manitoba sells surplus power, mostly during the summer period, to Ontario, Saskatchewan, Minnesota and North Dakota. Its transmission and distribution system exceeds 76 000 km. Manitoba Hydro serves some 400 000 customers and Winnipeg serves another 90 000, who consumed 27 102 GWh in 1993. Natural gas from Alberta, which is used mainly for industrial and commercial heating, supplies one-third of Manitoba's energy requirements.
In its primary stage (logging), FORESTRY accounts for very little of the value of goods-producing industries. The most productive forestlands extend north from the agricultural zone to lat 57° N; north and east of this line timber stands are sparse and the trees are stunted, gradually merging with tundra vegetation along the shores of Hudson Bay. The southern limit is determined by the northward advance of commercial agriculture. On the basis of productivity for forestry, 40% of the total provincial land area is classified as "productive," 29% as nonproductive and over 30% as nonforested land.
Of the total productive forestland of 152 000 km2, 94% is owned by the provincial government. From 1870 to 1930 lands and forests were controlled by the federal government; after the transfer of natural resources in 1930, the province assumed full responsibility. In 1930 there were 5 forest reserves; today there are 15 provincial forests totalling more than 22 000 km2.
In order of decreasing volume, the most common commercial tree species are black spruce, jack pine, trembling aspen (poplar), white spruce, balsam poplar and white birch. Other species common to Manitoba include balsam fir, larch, cedar, bur oak, white elm, green ash, Manitoba maple and red and white pine.
Timber-cutting practices are restricted around roads, lakes and rivers. The government proposes annual cuts for each management unit on a sustained yield basis. In addition to its reforestation program, the government provides planting stock to private landowners for shelterbelts and Christmas trees.
The commercial inland fishery has been active in Manitoba for over 100 years. Water covers nearly 16% of Manitoba, of which an estimated 57 000 km2 is commercially fished. Two-thirds of the total catch comes from the 3 major lakes - Winnipeg, Manitoba and Winnipegosis - and the balance is taken from the numerous smaller northern lakes. The total value of the 1997-98 catch was $15 million. The catch is delivered to 70 lakeside receiving stations located throughout the province and then transported to the Freshwater Fish Marketing Corporation's central processing plant in Winnipeg. All the commercial catch is processed at this plant. The US and Europe account for most of the corporation's annual sales.
Thirteen commercial species, dressed and filleted, include whitefish, pike, walleye and sauger. Sauger, pike, walleye, trout and catfish are principal sport fish. The Manitoba Department of Natural Resources maintains hatcheries for pickerel, whitefish and trout.
Today, Manitoba has a firm base in its processing and manufacturing industries, as shown by the value of production: over 61 000 people were employed in producing nearly $11 billion (1998) worth of goods. About two-thirds of the value of industrial production comes from the following industries: food processing, distilling, machinery (especially agricultural); irrigation and pumps; primary metals, including smelting of nickel and copper ores, metal fabricating and foundries; airplane parts, motor buses, wheels and rolling-stock maintenance; electrical equipment; computers and fibre optics.
There are also the traditional industries: meat packing, flour milling, petroleum refining, vegetable processing, lumber, pulp and paper, printing and clothing. Winnipeg accounts for 75% of the manufacturing shipments. Half of all manufactured goods are exported, one-third to foreign countries.
Winnipeg's strongest asset has always been its location. In the heart of Canada and at the apex of the western population-transportation triangle, this city historically has been a vital link in all forms of east-west transportation.
The YORK BOATS of the fur trade and the RED RIVER CARTS of early settlers gave way first to steamboats on the Red River, then to the great railways of the 19th and early 20th centuries. Subsequently, Winnipeg provided facilities for servicing all land and air carriers connecting east and west. Today, rail and road join the principal mining centres of northern Manitoba. During the long, cold winter, the myriad of interconnected lakes creates a network of winter roads. Major northern centres are linked to the south via trunk highways. The Department of Highways manages over 73 000 km of trunk highways and 10 700 km of provincial roads (mainly gravel).
Since 1926 BUSH FLYING has made remote communities accessible; several small carriers serve the majority of northern communities. Transcontinental routes of Air Canada and Canadian Airlines International pass through Winnipeg and Greyhound Air began flying between Ottawa and Vancouver with a stop in Winnipeg in the summer of 1996. NWT Air connects Winnipeg with Yellowknife and Rankin Inlet, Nunavut. Canadian Airlines International serves northern Manitoba with its partner CALM Air. Perimeter Airlines also serves northern points.
Air Canada operates daily flights south to Chicago, Ill, connected with the United Airlines network; and Northwest Airlines provides service to Minneapolis, Minn. Canadian Airlines International, Air Canada and charter airlines, Canada 3000 and Royal, provide direct flights from Winnipeg to Europe and various winter sunspot vacation destinations.
Because Winnipeg is Canada's principal midcontinent rail centre, both CNR and CPR have extensive maintenance facilities and marshalling yards in and around the city. Wheat has the largest freight volume, but diverse products from petroleum and chemicals to motor cars and lumber are transported by rail. The CNR owns Symington Yards, one of the largest and most modern marshalling yards in the world. At Transcona it maintains repair and servicing shops for rolling stock and locomotives, and at GIMLI, a national employee training centre. In addition to repair shops and marshalling yards, the CPR has a large piggyback terminal; Weston shops, one of 3 in its trans-Canada system, employs some 2500 people.
Via Rail operates Canada's passenger train service, which uses the lines of the 2 major railways and provides direct service between Vancouver and Halifax and Saint John.
In 1929 the HUDSON BAY RAILWAY, now part of the CNR system, was completed to the port of Churchill, where today major transshipment facilities handle on average annually some 290 000 t of grain between July 20 and October 31. Formerly an army base, Churchill is also a research centre and a supply base for eastern arctic communities.
Government and Politics
On 15 March 1871 the first legislature of Manitoba convened; it consisted of an elected legislative assembly with members from 12 English and 12 French electoral districts, an appointed legislative council and an appointed executive council who advised the government head, Lieutenant-Governor Adams G. ARCHIBALD. By the time the assembly prorogued, systems of courts, education and statutory law had been established, based on British, Ontarian and Nova Scotian models. The legislative council was abolished 5 years later.
Since 1871 the province has moved from communal representation to representation by population and from nonpartisan to party political government. Today the LIEUTENANT-GOVERNOR is still formal head of the provincial legislature and represents the Crown in Manitoba. The government is led by the PREMIER, who chooses a CABINET, whose members are sworn in as ministers of the Crown. Her Majesty's Loyal Opposition is customarily headed by the leader of the party winning the second-largest number of seats in a given election. Laws are passed by the unicameral legislative assembly, consisting of 57 elected members. See MANITOBA PREMIERS: TABLE; MANITOBA LIEUTENANT-GOVERNORS: TABLE.
The judiciary consists of the superior courts, where judges are federally appointed, and many lesser courts that are presided over by provincial judges. The RCMP is contracted to provide provincial police services and municipal services in some centres; provincial law requires cities and towns to employ enough police to maintain law and order. Manitoba is federally represented by 14 MPs and 6 senators.
Local government is provided by a system of municipalities. Manitoba has 5 incorporated cities (Winnipeg, Brandon, Selkirk, Portage la Prairie and Thompson), 35 incorporated towns and 40 incorporated villages. (An incorporated municipality has a greater degree of autonomy, especially in taxing and borrowing power.) There are over 100 rural municipalities ranging in size from 4 to 22 TOWNSHIPS, many of which contain unincorporated towns and villages. Locally elected councils are responsible for maintaining services and administering bylaws.
In remote areas where population is sparse, the government has established 17 local government districts with an appointed administrator and an elected advisory council. The Department of Northern Affairs has jurisdiction over remote areas in northern Manitoba and uses the community council as an advisory body. Community councils are elected bodies, mostly in Métis settlements, through which the government makes grants. Each has a local government "coordinator" to represent the government.
For the fiscal year ending 31 March 1998, the province had revenues of $5.8 billion and expenditures of $5.7 billion. Income taxes garnered $1.6 billion and other taxes, including a 7% sales tax and gasoline and resources taxes, totalled another $1.6 billion. Unconditional transfer payments and shared-cost receipts from federal sources covering education, health and economic development were $1.7 billion. More than 50% of government expenditures go toward education, health and social services.
Health and Welfare
The Manitoba Health Services Commission, with generous support from Ottawa, provides nonpremium medical care for all its citizens. A pharmacare program pays 80% of the cost of all prescription drugs above $75 ($50 for senior citizens). The province and Winnipeg each have a free dental care program for all elementary-school children.
The departments of Health and of Community Services and Corrections provide services in public and mental health, social services, probations and corrections. The government is responsible for provincial correction and detention facilities and through the Alcoholism Foundation administers drug and alcohol rehabilitation facilities.
Manitoba has over 80 provincially supported hospitals, including 10 in Winnipeg, and over 100 personal care homes in addition to elderly persons' housing. Winnipeg is an important centre for medical research; its Health Sciences Centre includes Manitoba's chief referral hospitals and a number of specialist institutions, among them the Children's Centre and the Manitoba Cancer Treatment and Research Foundation.
While Manitoba's system of RESPONSIBLE GOVERNMENT was maturing during the 1870s, communal loyalties rather than party politics dominated public representation. As the 1880s advanced, however, a strong Liberal opposition to John NORQUAY'S nonpartisan government developed under Thomas GREENWAY. After the election of 1888, Greenway's Liberals formed Manitoba's first declared partisan government, which held office until defeated in 1899 (on issues of extravagance and a weak railway policy) by an invigorated Conservative Party under Hugh John MACDONALD. When Macdonald resigned in 1900, hoping to return to federal politics, R.P. ROBLIN became premier, a position he held until 1915, when a scandal over the contracting of the new legislative buildings brought down the government in its fifth term.
In 1920, against the incumbent Liberal government of T.C. NORRIS, the United Farmers of Manitoba first entered provincial politics and returned 12 members to the legislative assembly, heralding a new era of nonpartisan politics. The promise was fulfilled in the election of 1922, when the UFM won a modest majority and formed the new government. Manitoba was returning to its roots, reaffirming rural virtues of thrift, sobriety and labour to counter rapid change, depression and the aftereffects of war.
The farmers chose John BRACKEN as their leader, and he remained premier until 1943 despite the UFM withdrawal from politics in 1928. Bracken then formed a coalition party, the Liberal-Progressives, which won a majority in the assembly in 1932, but only gained a plurality in the 1936 election, surviving with Social Credit support. He continued as premier in 1940 over a wartime government of Conservative, Liberal-Progressive, CCF and Social Credit members.
Bracken became leader of the federal Conservatives in 1943 and was replaced by Stuart S. Garson. In 1945 the CCF left the coalition, the Conservatives left it in 1950 and the Social Credit Party simply faded. From 1948 the coalition was led by Premier Douglas CAMPBELL, although after 1950 it was predominantly a Liberal government.
From 1958 the Conservatives under Duff ROBLIN governed the province until Edward SCHREYER's NDP took over in 1969 with a bare majority. His government survived 2 terms; during its years in office, many social reforms were introduced and government activity in the private sector was expanded.
In 1977 Sterling LYON led the Conservative Party to victory on a platform of reducing the provincial debt and returning to free enterprise, but his government lasted only one term. In 1981 the NDP returned to power under Howard PAWLEY and was re-elected in 1985. The Lyon government, in fact, was the only one-term government in Manitoba's history to that time, as the political tradition of the province has been notable for its long-term stability, particularly during the era of the UFM and later coalition governments.
Pawley's NDP were ousted in 1988 when Gary Filmon led the Conservatives to an upset minority victory. Filmon's government was precarious, and the Liberal opposition was extremely vocal in its opposition to the MEECH LAKE ACCORD (see MEECH LAKE ACCORD: DOCUMENT). Debate over the accord dominated the provincial agenda until the accord was finally killed by procedural tactics led by NDP native MLA Elijah HARPER. Filmon went to the polls immediately following the death of the accord in 1990 and eked out a slim majority victory. This majority enabled Filmon to finally dictate the legislative agenda, and he began concentrating his government's efforts on bringing the province's rising financial debt under control. His government's success in this endeavour won Filmon an increased majority in April 1995.
The denominational school system was guaranteed by the Manitoba Act of 1870 and established by the provincial School Act of 1871: local schools, Protestant or Roman Catholic, might be set up on local initiative and administered by local trustees under the superintendence of the Protestant or Roman Catholic section of a provincial board of education. The board was independent of the government but received grants from it, which the sections divided among their schools. Until 1875 the grants were equal; disparity in the population and the ensuing Protestant attack on dualism in 1876 made it necessary to divide the grants on the basis of enrolment in each section.
After 1876 the British (predominantly Protestant) and French (Roman Catholic) coexisted peaceably and separately, until agitation against the perceived growing political power of the Catholic clergy spread west from Québec in 1889. A popular movement to abolish the dual system and the official use of French culminated in 1890 in the passage of 2 provincial bills. English became the only official language and the Public Schools Act was altered. Roman Catholics could have private schools supported by gifts and fees, but a new department of education, over local boards of trustees, was to administer nondenominational schools.
French Catholic objections to violations of their constitutional rights were ignored by the Protestant Ontarian majority, who saw a national school system as the crucible wherein an essentially British Manitoba would be formed. Intervention by the courts and the federal government eventually produced the compromise of 1897: where there were 40 (urban) or 10 (rural) Catholic pupils, Catholic teachers were to be hired; where at least 10 pupils spoke a language other than English, instruction was to be given in that language; school attendance was not compulsory, since Catholics were still outside the provincial system.
After 20 years of decreasing standards and linguistic chaos, the Public Schools Act was amended in 1916; the bilingual clause was removed and the new School Attendance Act made schooling compulsory for Catholics and Protestants alike, whether publicly or privately educated.
Since 1970, Franco-Manitobans have been able to receive instruction entirely in French through the Français program; as well, non-French students in French immersion are taught all subjects in French. Instruction in a minority tongue in the majority of subjects is possible in some schools. Both English- and French-medium schools are organized in 48 school divisions, each administered by an elected school board, under the Department of Education.
In order to meet Manitoba's constitutional obligations and the linguistic and cultural needs of the Franco-Manitoban community, a new Francophone School Division was established and was in place for the 1994-95 school year.
There are 14 school districts, of which 6 are financed mainly from sources other than provincial grants and taxes; these include private schools sponsored by church organizations and by the federal government. School boards are responsible for maintaining and equipping schools, hiring teachers and support staff and negotiating salaries. The Manitoba Teachers Federation negotiates with the boards.
In 1994 enrolment in the elementary/secondary schools of the province totalled 221 610, and 14 500 teachers were employed, of whom 12 675 were full-time. Elementary schools consist of kindergarten and grades 1 to 8. Secondary schools, grades 9 to 12, offer a varied curriculum with core subjects and several options.
Special, practically oriented programs are available at 35 vocational-industrial schools, and vocational-business training is given in 106 schools. There are also special services for the disabled, the blind, the deaf and those with learning disabilities.
COMMUNITY COLLEGES provide a wide variety of career-oriented adult educational and vocational programs, and day, evening and extension programs - full-time and part-time - are offered in more than 120 communities. Assiniboine Community College operates in and outside Brandon. Responsible for all community college agricultural training in the province, it offers 16 certificate courses and 11 diploma courses. Keewatin College offers 16 certificate courses of one year or less, and 4 diploma courses, mostly in northern Manitoba. Red River College, located in Winnipeg, provides 33 diploma courses as well as 28 certificate courses, including courses in applied arts, business administration, health services, industrial arts and technology.
During 1993-94 there were 3900 full-time and 1646 part-time students enrolled in community colleges in Manitoba. The community colleges, previously operated by the province, were incorporated under appointed boards of governors in April 1993. The community colleges are now funded by an annual grant from the province. Manitoba spent over $54 million on community colleges in 1993-94.
In 1877 St Boniface (French, Roman Catholic), St John's (Anglican) and Manitoba (Presbyterian) united as UNIVERSITY OF MANITOBA. Later, they were joined by other colleges, but in 1967 a realignment of the constituents resulted in 3 distinct universities. The University of Manitoba is one of the largest universities in Canada, with numerous faculties and 4 affiliated colleges: St John's (Anglican), St Paul's (Roman Catholic), St Andrew's (Ukrainian Orthodox) and St Boniface, the only college providing instruction entirely in French. In 1994-95, 17 905 full-time and 6062 part-time students were enrolled at the U of Man.
BRANDON UNIVERSITY offers undergraduate programs in arts, science, education and music and masters degrees in education and music, with an enrolment of 1541 full-time and 1956 part-time students (1994-95). The UNIVERSITY OF WINNIPEG, located in central Winnipeg, provides primarily undergraduate instruction, teacher training and theological studies for 2679 full-time and 7387 part-time students (1994-95). Teachers are trained at all 3 universities and at Red River College.
To a large degree, Manitoba's cultural activities and historical institutions reflect the varied ethnic groups that comprise its fabric. The provincial government, through its Department of Culture, Heritage and Citizenship, subsidizes a wide range of cultural activities. Many annual FESTIVALS celebrate ethnic customs and history: the Icelandic Festival at Gimli, the Winnipeg Folk Festival, National Ukrainian Festival at Dauphin, Opasquia Indian Days and the Northern Manitoba Trappers' Festival at The Pas, Pioneer Days at Steinbach, Fête Franco-Manitobaine at La Broquerie, the midwinter Festival du voyageur in St Boniface, and Folklorama sponsored by the Community Folk Art Council in Winnipeg.
Manitoba's historic past is preserved by the Museum of Man and Nature (Winnipeg), considered one of the finest interpretive museums in Canada; by the Living Prairie Museum, a 12 ha natural reserve; by the St Boniface Museum, rich in artifacts from the Red River colony; and the Provincial Archives and Hudson's Bay Company Archives, all located in Winnipeg. Also in Winnipeg is the Planetarium, one of the finest in North America, and Assiniboine Park Zoo, which has a collection of more than 1000 animals.
The Manitoba Arts Council promotes the study, enjoyment, production and performance of works in the arts. It assists organizations involved in cultural development; offers grants, scholarships and loans to Manitobans for study and research; and makes awards to individuals. The Winnipeg Symphony Orchestra, ROYAL WINNIPEG BALLET, Manitoba Theatre Centre, Le Cercle Molière, Manitoba Opera Association, Manitoba Contemporary Dancers and Rainbow Stage all contribute to Winnipeg's position as a national centre of the performing arts.
Among well-known and respected Manitoban writers are the novelists Margaret LAURENCE and Gabrielle ROY, essayist, historian and poet George WOODCOCK and popular historian Barry Broadfoot. The Winnipeg Art Gallery, in addition to traditional and contemporary works, houses the largest collection of Inuit art in the world.
Among the fine historic sites associated with the settlement of the West is the HBC's Lower Fort Garry (see FORT GARRY, LOWER). Situated on the Red River 32 km northeast of Winnipeg, this oldest intact stone fort in western Canada was built in 1832 and preserves much of the atmosphere of the Red River colony. The Forks, a waterfront redevelopment and national HISTORIC SITE, is the birthplace of Winnipeg. Located at the junction of the Red and Assiniboine rivers, this site has been used as a trade and meeting place for over 6000 years. Today, it is again a place where recreational, cultural, commercial and historical activities bring people together. Upper Fort Garry Gate, the only remnant of another HBC fort (see FORT GARRY, UPPER), is nearby.
Among a number of historic houses is Riel House, home of the Riel family; York Factory, located at the mouth of the Nelson River and dating from 1682, was a transshipment point for furs. The partially restored PRINCE OF WALES FORT (1731-82) at the mouth of the Churchill River was built by the HBC and destroyed by the French. Other points of historical significance are St Boniface Basilica, the oldest cathedral in western Canada and the site of Louis RIEL's grave; Macdonald House, home of Sir H. J. MACDONALD; Fort Douglas; Ross House; Seven Oaks House; and the Living Prairie Museum.
Manitoba has 5 daily newspapers: the Winnipeg Free Press, the Winnipeg Sun, the Brandon Sun, the Portage la Prairie Daily Graphic and the Flin Flon daily Reminder. Sixty-two weekly and biweekly papers service suburban Winnipeg and rural areas, with emphasis on farming, and several trade and business journals are published. The French-language weekly La Liberté is published in St Boniface, and Winnipeg produces more foreign-language newspapers than any other centre in Canada.
The province has 20 AM radio stations (all but 4 are independent), including the French-language station CKSB, and 7 FM radio stations. As well, the CBC has 28 English- and French-language rebroadcasters. Four television stations operate from Winnipeg and one from Brandon, and CABLE TELEVISION is available in most centres. The Manitoba Telephone System, a crown corporation, provides telecommunications facilities for all of Manitoba. North America's first publicly owned system, it was established in 1908 after the provincial government began expropriating Bell Telephone because of high rates and inefficiency.
Trading posts were soon established along the shores: Fort Hayes (1682), Fort York (1684), Fort Churchill (1717-18), Prince of Wales Fort (1731). Henry KELSEY, an HBC employee, penetrated southwest across the prairies 1690-92. The LA VÉRENDRYE family travelled west via the Great Lakes, building Fort Maurepas on the Red River (1734), then 4 other posts within the present area of Manitoba. The subsequent invasion by independent traders of lands granted to the HBC stimulated an intense rivalry for pelts, which ended only with the amalgamation of the HBC and the North West Co in 1821. About 20 forts existed at various times south of lat 54° N, but the early explorers left little permanent impression on the landscape.
Agricultural settlement began in 1812 with the arrival of Lord SELKIRK's settlers at Point Douglas, now within the boundaries of Winnipeg. Over the next 45 years, the Red River Colony at Assiniboia survived hail, frost, floods, grasshoppers, skirmishes with the Nor'Westers and an HBC monopoly. Expansionist sentiment from both Minnesota and Upper Canada challenged the HBC's control over the northwest and the Red River Colony.
In 1857 the British government sponsored an expedition to assess the potential of Rupert's Land for agricultural settlement; the PALLISER EXPEDITION reported a fertile crescent of land suitable for agriculture extending northwest from the Red River valley. That same year the Canadian government sent Henry Youle HIND to make a similar assessment. The conflict between agricultural expansion and the rights of the Métis broke out in 2 periods of unrest (see RED RIVER REBELLION; NORTH-WEST REBELLION).
Eventually the HBC charter was terminated and the lands of the North-West were transferred to the new Dominion of Canada by the Manitoba Act of 1870; quarter sections of land were then opened to settlement. It was soon evident that the diminutive province needed to expand. Settlers were rapidly moving to the northwest and spilling over the established boundaries.
In 1881, after years of political wrangling with the federal government, the boundaries were extended to their present western position, as well as being extended farther east, and to lat 53° N. Between 1876 and 1881, 40 000 immigrants, mainly Ontario British, were drawn west by the prospect of profitable wheat farming enhanced by new machinery and milling processes.
Mennonites and Icelandic immigrants arrived in the 1870s, the former settling around Steinbach and Winkler, the latter near Gimli and Hecla. Immigration then slowed until the late 1890s and it was limited mostly to small groups of Europeans.
Between 1897 and 1910, years of great prosperity and development, settlers from eastern Canada, the UK, the US and eastern Europe - especially Ukraine - inundated the province and the neighbouring lands. Subsequent immigration was never on this scale.
From 1897 to 1910 Manitoba enjoyed unprecedented prosperity. Transportation rates fell and wheat prices rose. Grain farming still predominated, but mixed farms prospered and breeders of quality livestock and plants became famous.
Winnipeg swiftly rose to metropolitan stature, accounting for 50% of the increase in population. In the premier city of the West, a vigorous business centre developed, radiating from the corner of Portage Avenue and Main Street: department stores, real estate and insurance companies, legal firms and banks thrived. Abattoirs and flour mills directly serviced the agricultural economy; service industries, railway shops, foundries and food industries expanded.
Both the CPR and the Canadian Northern Railway (later CNR) built marshalling yards in the city which became the hub of a vast network of rail lines spreading east, west, north and south. In 1906 hydroelectricity was first generated at PINAWA on the Winnipeg River, and the establishment of Winnipeg Hydro 28 June 1906 guaranteed the availability of cheap power for domestic and industrial use.
The general prosperity ended with the depression of 1913; freight rates rose, land and wheat prices plummeted and the supply of foreign capital dried up. The opening of the Panama Canal in 1914 ended Winnipeg's transportation supremacy, since goods could move more cheaply between east and west by sea than overland.
During WWI, recruitment, war industry demands, and cessation of immigration sent wages and prices soaring; by 1918 inflation seemed unchecked and unemployment was prevalent. Real wages dropped, working conditions deteriorated and new radical movements grew among farmers and urban workers, culminating in the WINNIPEG GENERAL STRIKE of May 1919.
The ensuing depression, followed by an industrial boom in the late 1920s, tilted the economic seesaw again. By 1928 the value of industrial production exceeded that of agricultural production; the long agricultural depression continued into the 1930s, aggravated by drought, pests and low world wheat prices, and the movement from farm to city and town accelerated. Cities were little better off: industry flagged and unemployment was high.
To eliminate the traditional boom/bust pattern, attempts have been made to diversify the economy. The continuing expansion of mining since 1911 has underlined the desirability of broadening the basis of the economy. The demands of WWII reinforced Manitoba's dependency on agriculture and primary production, but the postwar boom gave the province the opportunity to capitalize on its established industries and to broaden the economic base.
Since WWII, the Manitoba economy has been marked by rapid growth in the province's north. The development of rich nickel deposits in northern Manitoba by Inco Ltd led to the founding of the City of Thompson, whose fluctuating fortunes have mirrored swings in world commodity prices. The region has been the site of several "megaprojects," including the Manitoba Forest Resources operation at The Pas and the huge Limestone hydroelectric generating plant on the Nelson River. The economic future of Manitoba is thus a mixed one: a continuing agricultural slump, offset by growth in light industry, publishing, the garment industry and the export of power to the US.
The 20 years from 1970 to 1990 saw a dramatic realignment of provincial politics, with the virtual disappearance of the provincial Liberal Party and the rise to power of the New Democratic Party under Edward Schreyer and Howard Pawley. Typical of the social democratic initiatives of the NDP were the introduction of a government-run automobile insurance plan and the 1987 plan to purchase Inter-City Gas Co. The government's attempt to increase bilingual services within the province aroused old passions, however, and was abandoned. The Conservative government of Filmon in the 1990s faced the same problems of public debt and economic recovery as the rest of Canada.
Author T.R. WEIR
J. Brown, Strangers in Blood (1980); K. Coates and F. McGuinness, Manitoba, The Province & The People (1987); W.L. Morton, Manitoba: A History (2nd ed, 1967); G. Friesen, Prairie West (1984); X. McWillams, Manitoba Milestones (1928); Alan Artibise, Winnipeg: An Illustrated History (1977).
Links to Other Sites
The website for the Historica-Dominion Institute, parent organization of The Canadian Encyclopedia and the Encyclopedia of Music in Canada. Check out their extensive online feature about the War of 1812, the "Heritage Minutes" video collection, and many other interactive resources concerning Canadian history, culture, and heritage.
Government of Manitoba
The official website for the Government of Manitoba. Click on "About Manitoba" for information about Manitoba's geography, history, climate, and more.
Symbols of Canada
An illustrated guide to national and provincial symbols of Canada, our national anthem, national and provincial holidays, and more. Click on "Historical Flags of Canada" and then "Posters of Historical Flags of Canada" for additional images. From the Canadian Heritage website.
Manitoba Parks and Natural Areas
The website for Manitoba Parks and Natural Areas.
Manitoba Heritage Network
Explore Manitoba's history at this website for the Manitoba Heritage Network.
Library and Archives Canada
The website for Library and Archives Canada. Offers searchable online collections of textual documents, photographs, audio recordings, and other digitized resources. Also includes virtual exhibits about Canadian history and culture, and research aids that assist in locating material in the physical collections.
Festivities of the Living and the Dead in the Americas
A multimedia tour of major festivals across Canada and throughout the Americas. Describes the origins and unique features of each event. From the Virtual Museum of Canada.
A well-illustrated online guide to natural geological processes related to plate tectonics, earthquakes, and related events. From Natural Resources Canada.
Maps of provinces and territories from "The Atlas of Canada," Natural Resources Canada.
Hudson's Bay Company Archives
A comprehensive information source about the history of the Hudson’s Bay Company and the fur trade in Canada. A Manitoba Government website.
Geographical Names of Canada
Search the "Canadian Geographical Names Data Base" for the official name of a city, town, lake (or any other geographical feature) in any province or territory in Canada. See also the real story of how Toronto got its name. A Natural Resources Canada website.
Manitoba Agricultural Hall of Fame
Check out the life stories of people who have contributed to agriculture and the historical overview of agriculture in Manitoba.
An overview of the major issues and events leading up to Manitoba's entry into Confederation. Includes biographies of prominent personalities, old photos and related archival material. From Library and Archives Canada.
The Société Historique de Saint-Boniface
The Heritage Centre conserves and promotes resources which have cultural, heritage, judicial and historical value - the product of more than 250 years of Francophone presence in Western Canada and Manitoba. Their website is a great source for information about Louis Riel, Le "Voyageur," and other Manitoba history topics.
The website for Travel Manitoba highlights popular tourist destinations and events throughout the province.
A history of the "census" in Canada. Check the menu on the left for data on small groups (such as lone-parent families, ethnic groups, industrial and occupational categories and immigrants) and for information about areas as small as a city neighbourhood or as large as the entire country. From the website for Statistics Canada.
The Historic Resources Branch of Manitoba Culture, Heritage and Tourism. A reference source for genealogists, historians, archaeologists, students and interested laypersons.
The Rat Portage War
A fascinating account of the 19th Century border dispute involving Manitoba and Ontario. From the Winnipeg Police Service website.
OurVoices - Stories of Canadian People and Culture
A superb online audio collection of traditional stories about the Omushkego (Swampy Cree) people of northern Manitoba and Ontario. Presented in Cree and in English by Louis Bird, storyteller and elder. Also features printed transcripts and other resources. From the Centre for Rupert's Land Studies at the University of Winnipeg.
Manitoba Historical Society
An extensive online resource devoted to the history of Manitoba. Features biographies of noteworthy residents, articles from the journal “Manitoba History,” and much more.
Aboriginal Place Names
This site highlights Aboriginal place names found across Canada. From the Department of Aboriginal Affairs and Northern Development.
Find out about the intriguing origins of some of Manitoba’s historic place names. From the Manitoba Historical Society.
Mining in Manitoba
Scroll down to “The Flin Flon Mine” section to learn about the origin of the name “Flin Flon” and the accidental geological discovery that led to the establishment of the Flin Flon mine. This article also digs into the history of other Manitoba mining sites. From the Manitoba Historical Society.
An extensive biography of Edgar Dewdney, civil engineer, contractor, politician, office holder, and lieutenant governor. Provides details about his involvement with Indian and Métis communities in the North-West Territories, the settlement of the West, the construction of the transcontinental railway, and related events. From the “Dictionary of Canadian Biography Online.”
Archives Canada is a gateway to archival resources found in over 800 repositories across Canada. Features searchable access to virtual exhibits and photo databases residing on the websites of individual archives or Provincial/Territorial Councils. Includes documentary records, maps, photographs, sound recordings, videos, and more.
Four Directions Teachings
Elders and traditional teachers representing the Blackfoot, Cree, Ojibwe, Mohawk, and Mi’kmaq share teachings about their history and culture. Animated graphics visualize each of the oral teachings. This website also provides biographies of participants, transcripts, and an extensive array of learning resources for students and their teachers. In English with French subtitles.
National Inventory of Canadian Military Memorials
A searchable database of over 5,100 Canadian military memorials. Provides photographs, descriptions, and the wording displayed on plaques. Also a glossary of related terms. A website from the Directorate of History and Heritage.
The Société franco-manitobaine supports and promotes programs that preserve and enhance French language and culture in Manitoba.
Manitoba Association of Architects
The website for the Manitoba Association of Architects.
The Flour Milling Industry in Manitoba Since 1870
An illustrated article about the history of the flour milling industry in Manitoba. From the Manitoba Historical Society.
The Manitoba Museum is the province’s largest heritage centre renowned for its combined human and natural heritage themes. The institution shares knowledge about Manitoba, the world and the universe through its collections, exhibitions, publications, on-site and outreach programs, Planetarium shows and Science Gallery exhibits.
Names of the provinces and territories
Abbreviations and symbols for the names of the provinces and territories. From the website for Natural Resources Canada.
University of Manitoba : Archives & Special Collections
The website for Archives & Special Collections at the University of Manitoba.
North Eastman Region of Manitoba
This site offers profiles of communities in the North Eastman region of Manitoba.
With One Voice: A History of Municipal Governance in Manitoba
A synopsis of a book that covers topics such as daylight saving time, taxes, rural electrification, the impact of gophers and other farm pests, lottery terminals, and more. From the Association of Manitoba Municipalities.
Louis Riel Day
An information page about Manitoba's "Louis Riel Day." Check out the menu on the left side of the page for more on the origins of this holiday. From the website for the Government of Manitoba.
Province loses 'tremendous premier'
An obituary for former Manitoba premier Dufferin Roblin. From the winnipegfreepress.com website.
An Immense Hold in the Public Estimation
A feature article about Manitoba men and women who played hockey in the late 19th and early 20th centuries. From the Manitoba Historical Society.
Field Guide: Native Trees of Manitoba
An online guide to Manitoba’s ecozones and native coniferous and deciduous trees in Manitoba. With photographs showing identifying features of various species and biological keys. A Government of Manitoba website.
Agriculture in French Manitoba
This interactive exhibit features maps, images, and stories about the history of agriculture in French Manitoba.
Hearing loss is the total or partial inability to hear sound in one or both ears.
See also: Hearing loss of aging
Decreased hearing; Deafness; Loss of hearing; Conductive hearing loss; Sensorineural hearing loss
Minor decreases in hearing are common after age 20.
Hearing problems usually come on gradually, and rarely end in complete deafness.
There are many causes of hearing loss. Hearing loss can be divided into two main categories:
- Conductive hearing loss (CHL) occurs because of a mechanical problem in the outer or middle ear. The three tiny bones of the ear (ossicles) may not conduct sound properly, or the eardrum may not vibrate in response to sound. Fluid in the middle ear can cause this type of hearing loss.
- Sensorineural hearing loss (SNHL) results when there is a problem with the inner ear. It most often occurs when the tiny hair cells (nerve endings) that transmit sound through the ear are injured, diseased, do not function properly, or have prematurely died. This type of hearing loss is sometimes called "nerve damage," although this is not accurate.
CHL is often reversible. SNHL is not. People who have both forms of hearing loss are said to have mixed hearing loss.
HEARING LOSS IN CHILDREN
Screening for hearing loss is now recommended for all newborns. In children, hearing problems may cause speech to develop slowly.
Ear infections are the most common cause of temporary hearing loss in children. Fluid may stay in the ear following an ear infection. The fluid can go unnoticed, or it can cause significant hearing problems in children. Any fluid that remains longer than 8 - 12 weeks is cause for concern.
Preventing hearing loss is more effective than treating it after the damage is done.
- Acoustic trauma such as from explosions, fireworks, gunfire, rock concerts, and earphones
- Barotrauma (differences in pressure)
- Skull fracture (temporal bone)
- Traumatic perforation of the eardrum
- Aminoglycoside antibiotics
- Ethacrynic acid - oral
- Working around loud noises on a continuous day-to-day basis can damage the nerve cells responsible for hearing. Increased attention to conditions in the work environment has greatly decreased the chances of work-related hearing loss. See: Occupational hearing loss
Temporary hearing loss can be caused by:
Wax build-up can frequently be flushed out of the ear (gently) with ear syringes (available in drug stores) and warm water. Wax softeners (like Cerumenex) may be needed if the wax is hard and impacted.
Care should be taken when removing foreign bodies. Unless it is easy to get to, have your health care provider remove the object. Don't use sharp instruments to remove foreign bodies.
Call your health care provider if:
- Hearing problems interfere with your lifestyle
- Hearing problems are persistent and unexplained
- There is sudden, severe hearing loss or ringing in the ears (tinnitus)
- You have other symptoms, such as ear pain, along with hearing problems
What to expect at your health care provider's office:
The health care provider will take your medical history and do a physical examination.
Medical history questions documenting hearing loss in detail may include:
- Is the hearing loss in both ears or one ear?
- Is the hearing loss mild or severe?
- Is all of the hearing lost (inability to hear any sound)?
- Is there decreased hearing acuity (do words sound garbled)?
- Is there decreased ability to understand speech?
- Is there decreased ability to locate the source of a sound?
- How long has the hearing loss been present?
- Did it occur before age 30?
- What other symptoms are also present?
- Is there tinnitus (ringing or other sounds)?
- Is there ear pain?
- Is there dizziness or vertigo?
The physical examination will include a detailed examination of the ears.
Diagnostic tests that may be performed include:
A hearing aid or cochlear implant may be provided to improve hearing.
Baloh RW. Hearing and Equilibrium. In: Goldman L, Ausiello D, eds. Cecil Medicine. 23rd ed. Philadelphia, PA: Saunders Elsevier; 2007: chap 454.
Wrightson AS. Universal newborn hearing screening. Am Fam Physician. 2007; 75(9):1349.
After 16 years of planning, research, excavation and development, New Yorkers recently celebrated the opening of the African Burial Ground Memorial in lower Manhattan. Speakers including poet Maya Angelou and Mayor Michael Bloomberg recognized the significance of remembering the long-forgotten burial site where some 15,000 to 20,000 people were interred. Remains from the burial site were uncovered in 1991 during construction of the federal office building on Duane Street and Broadway. A portion of the site is now a National Historic Landmark and National Monument.
But while Bloomberg acknowledged that “for two centuries slavery was widespread in New York City,” the ceremony and the flurry of press reports about it seemed to treat the burial site only as a reminder of past injustice and not as an opportunity to understand and reflect on the powerful historical connections to the present. Put in its true context, the burial grounds serves as a reminder of the powerful role that real estate has always played in displacing African Americans.
How The Burial Grounds Were Created
During the period of Dutch and English settlement, New York City was one of the nation’s largest urban centers for the slave trade and served as a financial patron of the plantation economy in the South. The city also had a sizeable population of slaves and freed blacks. In the Dutch colony, as many as 40 percent of the population were slaves.
Slaves had no choice of residence and were themselves treated like a commodity, first displaced from their homes in Africa and then traded in a public marketplace at the foot of Wall Street. But even freed blacks found their options limited by official and unofficial discrimination and by changes in the city. When land values and rents went up in the more populated areas of lower Manhattan, blacks were forced to find housing uptown. They kept being pushed uptown in successive waves until the formation of black Harlem. Those who could afford to buy property were often forced to seek land outside Manhattan in settlements like Weeksville, Brooklyn. This constant displacement, experienced in different ways by all people of modest means including many immigrant groups, was especially onerous for black people because discrimination in the real estate market limited their choices.
The African Burial Grounds in lower Manhattan were a result of this displacement process and associated discrimination. At the end of the 17th century, Trinity Church formally banned blacks from its cemetery in lower Manhattan as land in the fortified city became scarce. This led to the creation of the African burial grounds outside the walls of the old city.
By the end of the 18th century, however, the city had expanded beyond the walls and the African burial ground became desirable real estate. It was filled and leveled and then sold by the property owners. Given the exclusion of blacks from government, the public sector was blind to the importance that African culture places on the remembrance of ancestors, including the places where they lived and are buried.
Black Displacement and the Segregated City
The official barriers to racial equality in the city were removed in stages, but “enlightened” New York wasn’t far ahead of the South. In 1827, slavery was abolished in New York State, but until 1854 blacks were banned from “whites-only” streetcars. The Draft Riots of 1863, and other white-led race riots in 1900, targeted black communities and contributed further to their displacement.
In the 20th century, the federal urban renewal program resulted in the destruction of many black and mixed communities, leading to its being dubbed “Negro removal.” At the same time, federal mortgage guarantees did not apply in black neighborhoods. Thus blacks who were forced to move would not have the many options available to other groups. And the neighborhoods, and burial grounds, they left behind were quickly swept away.
Now, displacement of black communities continues to be complicated by the limited choices available, resulting in constant resegregation. In studies of racial segregation, New York consistently comes out as among the most divided cities. While new immigrant communities tend to be more diverse, the black/white divide in New York remains prominent, as revealed in a recent study. Real estate agents steer blacks away from mixed neighborhoods in many subtle ways and whites away from neighborhoods where blacks are moving in (this is known as blockbusting). Gentrification of traditionally black neighborhoods prices people out of the areas they have worked for generations to improve.
Are No Sites Sacred?
As blacks have been forced out of communities, the neighborhoods and burial grounds they left behind have quickly been swept away. New York’s African Burial Ground provides a vivid example of this.
The similarities and differences between the official responses to rebuilding the World Trade Center site and to the African Burial Ground provide some perspective on this. After 9/11, Governor George Pataki led the pack of officials calling for a rapid restoration of the 13 million square feet of office space at Ground Zero. There was ample support for this in the media and among the city’s powerful real estate community. The Real Estate Board of New York declared: “We know it is important that downtown remain and grow as a powerful engine of the city’s, region’s and nation’s economies. The best living memorial to those who perished in the World Trade Center attack is to make sure that lower Manhattan emerges from this tragedy as a spectacular center of the global economy.”
The survivors’ families did not agree. Their powerful, organized voices led the state to scrap its original plans to rebuild Ground Zero without a memorial. The families formed multiple organizations, and while some demanded (with support from former mayor Rudolph Giuliani) that the entire site remain open and dedicated as a memorial to 9/11, they all insisted that there had to be a significant memorial. The use of the footprints of the two World Trade Center towers for that purpose eventually became a part of the final plan.
The Ground Zero story may illustrate the powerful role of real estate interests to retain prime office space. But it also demonstrates how that role can be limited when people organize and find powerful patrons in government.
In the case of the African Burial Ground, African American groups faced initial resistance from the federal government. It took a good deal of active organizing by the groups to push for development of the memorial, and the process dragged on for 16 years. Most amazing, however, is the fairly limited size of the memorial footprint. The federal government never seriously considered canceling its office tower project for a more prominent memorial. Construction of the federal building was hardly delayed, and it now casts a long shadow over the moving “Ancestral Libation Chamber.” Despite the National Park Service’s quality programming for the site, it is easy to lose sight of this treasure among the shadows of downtown towers.
This seems to be part of a larger pattern. This city, which has always had a sizeable African American population, has a disproportionately small number of historic landmarks in African American communities. In part this is a result of the traditional emphasis in preservation circles on noble buildings by star architects (of European descent) and the resistance by traditional preservationists to preserving places as memorials to important historical events and cultural practices. (A good antidote to this thinking is Place Matters.) Beyond that, though, it appears that every memorial site in the Big Apple needs vocal advocates to balance the powerful pull of the real estate industry.
• • • • •
Land Use readers may be interested in the recently released community plan for the Vanderbilt Yards, an alternative to Forest City Ratner’s stalled megaproject. Go to www.unityplan.org for information and press reports.
Tom Angotti is Professor of Urban Affairs and Planning at Hunter College, City University of NY, editor of Progressive Planning Magazine, and a member of the Task Force on Community-based Planning.
What is... inflation?
Put simply, inflation is a general rise in prices.
When prices of goods and services are on average rising, inflation is positive. Note that this does not mean that all prices are rising, or that they are all rising at the same rate. In fact, if enough prices fall, the average may fall too, resulting in negative inflation, which is also known as deflation.
Inflation 101 – how it is measured
Inflation is typically measured as the percentage change in a representative collection of prices. The most well-known collection is the ‘consumer price index’ (CPI), a measure of the prices of the goods and services that consumers buy each month. The inflation rate is typically quoted as the percentage change on the level of prices from the same month a year ago.
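For readers who like to check the arithmetic, the year-on-year figure is just a percentage change between two index levels. The short sketch below uses invented CPI values purely for illustration; they are not actual published figures.

```python
# Year-over-year inflation from a consumer price index (CPI).
# The index levels below are invented for illustration only.
cpi_a_year_ago = 112.4   # index level in the same month last year
cpi_now = 115.2          # index level this month

inflation_pct = (cpi_now / cpi_a_year_ago - 1) * 100
print(f"Annual inflation: {inflation_pct:.1f}%")   # about 2.5%
```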
How inflation works in the shops
An inflation rate of 5% per year means that if your shopping costs you $100 today, it would have cost you about only $95 a year ago. If inflation stays at 5%, the same basket of shopping will cost you $105 in a year’s time. If inflation stays at 5% for ten years, this same shopping will cost you $163.
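Those figures come straight from compound growth: the price level is multiplied by 1.05 for every year that passes. A minimal sketch of the calculation:

```python
# A $100 basket of shopping under a steady 5% annual inflation rate.
price_today = 100.0
rate = 0.05

print(f"A year ago: ${price_today / (1 + rate):.2f}")   # about $95
for years in (1, 10):
    future = price_today * (1 + rate) ** years
    print(f"In {years} year(s): ${future:.2f}")          # $105.00, then $162.89 (about $163)
```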
The winners and losers with inflation
Inflation is generally bad news for:
- Consumers - because it means the cost of living is rising. This means that money is losing value, or purchasing power.
- Savers - because it means that the value of savings is going down. When inflation is high, savings will buy less in the future.
It’s good news for:
- Borrowers – because it means that the value of debt is being reduced. The higher the inflation rate, the smaller the burden that future interest payments will place on borrowers’ future spending power.
Strategies to handle inflation
Make sure that you take inflation into account when thinking about your money. Inflation is more disruptive when it is unexpected. When everyone expects it, the impact can be reduced by factoring it into pay deals and interest rates.
For example, let’s say inflation is expected to be 2%. In this case, workers and consumers will not be so worried if their pay is rising at a 5% rate. This is because their spending power is still rising by more than inflation. Similarly, if the interest rate of your savings account is 6%, this still leaves savers’ incomes running ahead of the 2% inflation rate.
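In real (inflation-adjusted) terms, the gain is the nominal rate discounted by inflation. This small sketch reuses the 2% inflation, 5% pay rise and 6% savings rate from the example above:

```python
# Real (inflation-adjusted) growth: nominal growth discounted by inflation.
def real_rate(nominal, inflation):
    return (1 + nominal) / (1 + inflation) - 1

inflation = 0.02
print(f"Pay rising 5%:      real gain {real_rate(0.05, inflation) * 100:.1f}%")   # about 2.9%
print(f"Savings earning 6%: real gain {real_rate(0.06, inflation) * 100:.1f}%")   # about 3.9%
```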
The differences between inflation and pay increase and interest rates may seem small but they have a big effect over time. As a result, they can have serious implications for how much money you will have in retirement.
For example, if inflation is only 2% (a rate considered appropriate by many governments) but your wages do not increase, prices will be about 22% higher in 10 years’ time, 49% higher in 20 years and 81% higher in 30 years. The same pay will therefore buy steadily fewer goods: roughly 18%, 33% and 45% fewer, respectively.
Given that most people work for 30 years or more, the effect of inflation on their standard of living over time can be dramatic. Conversely, if you borrow money at a fixed rate for a long period and inflation moves above the interest rate that you are paying, you can save a great deal of money.
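The 10-, 20- and 30-year figures above follow from the same compounding, assuming inflation stays at exactly 2% and the wage never changes (percentages rounded):

```python
# Purchasing power of a fixed wage under steady 2% annual inflation.
inflation = 0.02

for years in (10, 20, 30):
    price_level = (1 + inflation) ** years    # prices relative to today
    buying_power = 1 / price_level            # goods a fixed wage still buys
    print(f"{years} years: prices +{(price_level - 1) * 100:.0f}%, "
          f"a fixed wage buys {(1 - buying_power) * 100:.0f}% less")
# 10 years: prices +22%, a fixed wage buys 18% less
# 20 years: prices +49%, a fixed wage buys 33% less
# 30 years: prices +81%, a fixed wage buys 45% less
```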
The problem for consumers and savers comes when inflation is higher than expected. If inflation jumps to 7% when your pay is only rising at 5% and your savings are only earning 6%, your spending power is declining in what is called “real” terms. As a worker, if you can, ask for more pay, work longer hours or find a higher paid job. As a saver, look for savings products that keep pace with, or grow faster than inflation.
But for borrowers, higher than expected inflation is good news. This is because the interest rates that you are paying may not keep up with inflation. Better still, the trick is to try to lock into low interest rates when they happen to be low by borrowing at fixed rates. That way, you are shielded from any subsequent pick-up in inflation.
Weird World – when inflation goes extreme
At the extreme, inflation can turn into hyper-inflation. When inflation starts to rise at rates of 100%, 1,000% or 10,000%, people rush to spend money before it becomes worthless.
A famous example is Germany in 1923. At the peak of its hyperinflation, prices were doubling every 4 days. The central bank’s printing presses were struggling to keep up by over-printing ever higher denomination bank notes: the highest was the 100,000,000,000,000 Mark note! The resulting economic chaos is widely seen as one of the factors behind the rise of Hitler.
A more recent example of hyperinflation is Zimbabwe, which at its peak saw prices double roughly every day, with a monthly inflation rate estimated at 79,600,000,000%. An interesting table showing the highest inflation rates in history is given in this short article from the Cato Institute. www.cato.org
By Chris Dillow – Investors Chronicle writer and economist, for eZonomics.
What is eZonomics?
eZonomics by ING is an online platform about money and your life. eZonomics combines ideas around financial education, personal finance and behavioural economics to produce regular and practical information about the way people manage their money – and how this can affect their lives.
eZonomics is funded by ING and produced in ING’s global economics department. Our aims are strongly influenced by ING’s mission statement: “To set the standard in helping our customers manage their financial future.”
Evolution of Capital Punishment
Written by: Masone4718
Capital punishment can be defined as the penalty of death for the commission of a crime. The death sentence has been a traditional form of justice through time, but time, trade and geography have altered its form. In many countries today, capital punishment is a fundamental part of criminal justice systems. The death sentence is a major way of ensuring respect and instilling fear in people. It was not until recent times that the punishment of death was reserved for murder and other major offences. Throughout time, capital punishment has evolved from an extremely gruesome public display to a more painless and serene type of penalty.
Capital punishment has been around since approximately 1500 BCE (Laurence 2). The condemned criminal was found "guilty of magic" and was sentenced to death. The exact mode of his death was "left to the culprit, who was his own executioner." (Laurence 2). In England, there is no record of capital punishment earlier than 450 BCE, when it was "the custom to throw those condemned to die in a quagmire." (Laurence 2). A quagmire is soft, wet, yielding land. The Mosaic law is "full of mention of the punishment of death. . . the principal mode of execution being stoning". (Laurence 2). Forms of capital punishment that were common in early times include the pouring of molten lead on the criminal, starvation in dungeons, and tearing to death by "red-hot pincers and sawing asunder", plus many others. (Laurence 2).
The death penalty was not always reserved for major offences. In the Twelve Tables of the Roman Republic, from approximately 450 BCE, many misdemeanors were recognized to be punished by death. Some of these include: "Publishing libels and insulting songs. . . burning a house or a stack of corn near a house. . . cheating, by a Patron, of his client. . . making disturbances in the City at night". Many of the penalties were carried out by burning at the stake, or in one instance, being "clubbed to death". (Laurence 3). In the time of Paul, around 60 CE, crucifixion, burning and decapitation were in use. One of the "cruelties inflicted by Nero on those sentenced to death was impalement". (Laurence 3). Such an atrocity was practiced as late as 1876 in the Balkan peninsula, "while under Charles V criminals were thrown into their open graves and impaled by pointed stakes". (Laurence 3).
Capital punishment in the Roman Empire for parricides, people who killed their own parents, was malicious, yet strange: they were "thrown into the water in a sack, which contained a dog, a cock, a viper and an ape". Parricide has always been selected for special punishment in all countries and ages. The Romans also inflicted the death penalty by "drowning at sea; precipitation from the Tarpeian rock; burial alive and burning to death". (Laurence 3).
In all countries capital punishment "was, with comparatively rare exceptions, public." (Laurence 4). When criminals are executed, the "most public places are chosen, where there will be the greatest number of spectators, and so the most for the fear of punishment to work upon them." (Laurence 4). In other words, most executions were held in public so that fear could be instilled in every onlooker, discouraging them from committing a crime. Crucifixion was a practice that "originated with the Persians and was later passed on to the Carthaginians and the Phoenicians." The Romans perfected it as "a method of capital punishment, which caused maximum pain and suffering over a significant period of time." (Crucifixion).
In crucifixion, the criminal would have his or her hands and feet nailed to a cross in the shape of a ‘T’ and left to die. This form of execution is “widely associated with Christianity” because it was “the way in which Jesus Christ was put to death.” (Forms of Execution). Burning at the stake dates back to the Christian era, where in 643, an edict declared it illegal to burn witches. This was a popular form of death that was used mostly for heretics, witched and suspicious women. (Bobit). The suspected offender was tied to a wooden stake which was encircled by sticks at the bottom. The sticks were then lit, with flames engulfing the culprit. An old practice was that “before the faggots were lighted round them they were strangled at the stake.” (Laurence 3). The Iron Maiden was a form of capital punishment in medieval times. The offender would be placed inside female effigies constructed of iron, sometimes wood, with the inside hollowed out and filled with sharp iron spikes. The person would then be embraced by the iron maiden, being impaled by the stakes. Often, the effigy would be opened, pulling out the spikes, and then closed again, causing more pain and suffering. (Forms of Execution). The statutory punishment for treason in England from 1283 to 1867 was drawing and quartering. First the prisoner was drawn to the place of the execution on a hurdle, a type of sledge. Originally he was dragged behind a horse. Then he was hanged. He was then cut down while still alive, and disemboweled and his entrails were burned before his eyes. As a final point, the condemned was beheaded, and his body cut into quarters. The remains were often put on display as a warning to others. (Adams). One of the earliest and easiest forms of execution was beheading. The easiest and most uncivil form of beheading was by axe of sword. This form of execution was quite popular in Germany and England during the 16th and 17th centuries, “where decapitation was thought to be the most humane form of capital punishment. An executioner. . . would chop off the person’s head with an axe or sword.” The last beheading by axe took place in 1747. (Bobit). In Scotland, the Maiden was used for beheading. The Maiden was an early form of the guillotine. A blade or axe, “moving in grooves. . . was fixed in a frame about ten feet in height. The blade was raised. . . and then released, severing the victim’s head from his body.” (Laurence 40). Its use was discontinued in 1710. (Laurence 99). “From Hell, Hull and Halifax, good Lord deliver us” is a popular Yorkshire saying. The Halifax referred to the Halifax gibbet, a “form of guillotine” which flourished in the sixteenth century. (Laurence 99). The Halifax, as described by William Harrison, was a “square block of wood. . . which does ride up and down in a slot. . . between two pieces of timber. . . In the. . . sliding block is an axe. . . there is a long rope fastened that cometh down among the people. . . when the offender hath made his confession and hath laid his neck over the nethermost block, every man there. . . take hold of the rope. . . pulling out the pin. . . wherein the axe. . . doth fall down with such a violence that. . . the neck. . . should be cut. . . at a stroke and roll from the body.” If the offender was apprehended for any such cattle, the cattle, or other of its same kind, had the rope tied to them so that they draw out the pin, executing the offender. The last documented execution by the Halifax Gibbet was in April of 1650. (Laurence 38-39, 99). 
The guillotine became a popular form of execution in France during the late 1700s. Dr. Joseph Guillotin proposed that all criminals should be executed by the same method. Decapitation was thought to be the least painful and most humane method of execution at the time. (Bobit). The prototype of the guillotine was tested on sheep and calves, and then on human corpses. After the blade was perfected, the first execution by guillotine took place in 1792. The guillotine was widely used during the French Revolution, where many executions took place in public outside the prison of Versailles. The last documented use of the guillotine was in 1977 in France. (Bobit). Hanging was a popular way of both executing and torturing a person. The condemned person stands on a platform. A noose is placed around his neck. When the platform drops, the person falls 6 to 8 feet before the rope tightens and fractures, or sometimes dislocates, the upper spinal column. This breaks, or badly bruises, the spinal cord. This form of execution br
| http://www.mannmuseum.com/evolution-of-capital-punishment/ | 13
20 | Interactive Ear
Sound travels into the ear in waves, or vibrations.
The Outer Ear "funnels" sound vibrations to the tympanic membrane (eardrum). Blockage of the outer ear can lead to conductive hearing loss.
The Tympanic Membrane, otherwise known as the eardrum, vibrates when the sound waves hit it. If the eardrum is not vibrating properly then hearing will be affected. This is a conductive hearing loss.
The Middle Ear contains the three smallest bones in the human body: the malleus, incus and stapes, otherwise known as the hammer, anvil and stirrup due to their shapes. The malleus attaches to the tympanic membrane, so when it is vibrated by sound the vibration is passed through the chain of bones and on to the oval window. Fluid can build up in this area due to a blockage of the eustachian tube, causing conductive hearing loss. The three bones can become detached through a traumatic incident, or fused through disease, also causing conductive hearing loss.
The Eustachian Tube is responsible for keeping the middle ear healthy. It connects to the throat and drains excess fluid. Blockage of the eustachian tube results in a middle ear infection due to a build-up of fluid, which causes conductive hearing loss.
The Oval Window is where the stapes joins to the cochlea. This is the border of the middle and inner ear.
The Inner Ear houses the cochlea and the semicircular canals.
The Cochlea is a fluid filled spiral. It converts the vibrations delivered through the oval window into electrical signals which it delivers to the auditory nerve. Different parts of the cochlea are responsible for the hearing of different frequencies (pitches). Damage to the cochlea results in sensorineural hearing loss and can be caused by noise exposure.
The Auditory Nerve delivers the electrical sound signals from the cochlea to the auditory processing area in the brain where they are interpreted.
The Semicircular Canals are not related to hearing. They are part of the vestibular system. The vestibular system is responsible for balance.
Preventing Hearing Loss
Here are some simple steps you can take to protect your hearing:
- Always use hearing protection when participating in noisy activities, e.g. mowing the lawn, trimming hedges, using power tools etc. Use ear plugs, ear muffs, or both in extremely loud situations.
- If you come across loud noise in unusual situations when ear protection is not available, e.g. road works or construction sites, use your hands to protect your ears. Some protection is better than none at all.
- Keep music at a reasonable level. This is especially important if you are listening through headphones or in the car, as enclosed spaces accentuate noise levels.
- Do not put small objects, such as cotton buds or hair pins, into your ears. These can damage the sensitive skin in the ear canal, or the eardrum itself. If you are bumped or get a sudden fright, the implement could pierce your eardrum, causing serious long-term damage.
- Get regular hearing checks if you work in a loud environment or participate in noisy activities.
- If you or anyone close to you suspects you have a hearing loss, see a specialist immediately.
Sounds louder than 80 decibels can damage your hearing. Some common sounds and their decibel levels are:
- Rock concerts, Firecrackers - 140 decibels
- Chainsaw - 110 decibels
- Wood shop - 100 decibels
- Lawnmower, motorcycle - 90 decibels
- Busy city traffic noise - 80 decibels
- Normal conversation - 60 decibels
- Refrigerator humming - 40 decibels
- Whispered voice - 20 decibels
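For context, the decibel scale is logarithmic: every 10 dB step is roughly a tenfold increase in sound intensity. Here is a minimal Python sketch comparing some of the levels listed above with normal conversation (the comparison levels are simply taken from the list; the code itself is only an illustration of the scale):

def intensity_ratio(db_a, db_b):
    # How many times more intense sound A is than sound B (each 10 dB = 10x)
    return 10 ** ((db_a - db_b) / 10.0)

conversation = 60  # decibels, from the list above
for name, level in (("Chainsaw", 110), ("Lawnmower", 90), ("Busy city traffic", 80)):
    print(f"{name} is about {intensity_ratio(level, conversation):,.0f} times "
          f"the intensity of normal conversation")

With these figures, a chainsaw is roughly 100,000 times the intensity of normal conversation, which is why hearing protection matters so much at the top of the list.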
Types of Hearing Loss
Hearing loss comes in many forms and can have many causes. The two main types of hearing loss are "conductive" hearing loss and "sensorineural" hearing loss.
"Conductive" hearing loss involves damage to, or a blockage of, the ear canal and/or middle ear cavity. This can be caused by numerous factors including, ear infections, fluid retention in the middle ear cavity or trauma that damages the structures within the ear.
"Sensorineural" hearing loss affects the structures of the inner ear and/or auditory nerve. It can be caused by things such as prolonged exposure to loud noise.
A person can have a mixed hearing loss in which both conductive hearing loss and sensorineural hearing loss are present.
Another common hearing complaint is Tinnitus.
Tinnitus is a condition where there is a sustained buzzing, ringing or hissing heard in the ears. Tinnitus is often accompanied by hearing loss and is thought in many cases to be due to damage to the auditory system.
Solutions to Hearing Loss
Any of the hearing problems listed above can be treated or compensated for by our team of knowledgeable and professional clinicians.
Our clinicians utilise a range of the very best technological devices to ensure that your needs are met effectively and efficiently in a stress free environment. Our team of highly trained professionals will ensure that you gain the maximum hearing possible to enhance your enjoyment of everyday life. | http://www.tritonhearing.co.nz/Information/20 | 13 |
17 | Just as the indomitable Sacramento City was beginning to cope with and protect itself from the common natural disasters of flooding, man had a hand in placing new obstacles in the path of this growing city.
The Gold Rush brought population, prosperity and even the state Capitol to Sacramento, but it also resulted in new environmental challenges and a new source of flooding that ultimately led to dramatic changes in flood control.
These changes began with increasing the heights of the levees, filling in creeks and sloughs, rechanneling tributaries and expanding the breadth of the Sacramento River through the creation of weirs and bypasses.
The property and economic devastation of the flood of 1861-62 left the people of Sacramento with a feeling that nature and the rivers had done their worst. And then the unthinkable happened, as the American River rose to its highest level in 1867.
This same flood caused the Sacramento River and its many tributaries to overflow their newly created levees and destroy the hastily prepared dams and modifications that were put in by local districts and private citizens.
These new high water marks established throughout the region called for a more coordinated flood control effort on the part of cities and agricultural areas within the Sacramento Valley.
One of the first big engineering endeavors was to take the big bend out of the west end of the American River that flowed into Sutter Lake, near the confluence of the American and Sacramento rivers. This is part of the current location of the Union Pacific railyard, which is located north of the California State Railroad Museum.
The rechanneling project began in 1864 and was completed four years later. As a result of this new channel, the American River met with the Sacramento River one mile further north. Even after raising the levees and rechanneling the American River, the city experienced another flood.
The citizenry was perplexed as to how the rainfall could be less, the snowmelt slower, the levees higher, and yet the river could still overflow its banks.
The answer to this conundrum was found in the very phenomenon that gave the city its existence.
Gold brought wealth, people, and then it brought floods.
As the easy to reach placer deposits of gold dried up and deep hard rock mining became expensive, the miners turned to water power to seek their fortunes.
Hydraulic mining was used in small scale ventures in the 1850s, but by the following decade and into the 1870s, huge companies used enormous water cannons known as monitors to demolish large hills and even small mountains in their quest for gold.
After the gold was removed, the rest of the detritus was sent into streams, which flowed into larger waterways that filled the channels of the Sacramento River and its tributaries.
It became apparent to the engineers and many others that it was not rising waters that were causing the floods, but it was instead rising river bottoms choking the channels, causing the flooding and impacting navigation.
According to the 1957 book, “The Geography of the Sacramento-San Joaquin Delta, California,” by John Thompson, “By 1866, debris had ended the infamous side-by-side steamboat races along the Sacramento River.”
It also had a dramatic effect upon the farmers and their land, because the mining refuse left from the floods was not the same as the rich alluvium left by the natural annual rise and fall of the river that enriched the soil and increased production.
Instead what came down from the mines were rock fragments of varying sizes and elements. These waters carried mercury, cyanide and other poisons, which could sterilize the soil, kill crops and harm animals and even people.
Despite the obvious harm from hydraulic mining, the companies refused to halt or even limit this activity.
The hydraulic monitors allowed mine owners to hire a few men to perform work that once required hundreds of workers.
The friction created by this conflict of ideas caused a rift and debate among miners, farmers, environmentalists, navigation companies and recreationalists that lasted for decades.
Not everyone was going to be able to realize their objectives, so something would have to change.
The financially powerful mining industry and its strong political lobby was able to ignore the pleas of a concerned citizenry based on the concept that California and its Sacramento Valley were a state and a region born of the Gold Rush.
But as the waterways continued to fill with debris and mining slush, and levees failed and agricultural production decreased, it became apparent that channels, overflows and drains could not solve the problems created by hydraulic mining.
The unnatural flooding of the Sacramento River and its tributaries became a national, rather than a regional problem.
The mining interests were so powerful that they were able to defeat all legislative attempts to control the pollution and destruction. But 1878 became the proverbial “last straw.”
A city that had already endured several inundations and had gone to great lengths to protect itself from more flooding, once again found itself underwater, as Sacramento experienced another major flood on Feb. 1, 1878.
The 1880 book, “History of Sacramento County, California,” presented various details about this flood.
Included in the book were the following words: “At 2 o’clock on the morning of that day, a break was reported in the levee near Lovedall’s (sic) Ranch, on the Sacramento River, between the city and Sutterville. Almost immediately thereafter, a section of the levee, some twelve feet in width, washed out, having been completely honey-combed by gophers. The noise of the torrent pouring through the crevasse could be heard distinctly at a great distance. (That evening), the Sacramento (River) was twenty-five feet, 2 inches above the low water mark, higher than ever before known.”
Sacramentans were tired of floods, tired of mining – which was no longer the center of the economy – and tired of politics and politicians who thwarted meaningful attempts to control these unnatural inundations.
Concerned citizens found a way to circumvent the powerful mining lobby by controlling navigation rather than extraction to stop the devastation of the hydraulic mining. But it took another six years to accomplish.
How the city finally controlled the problem and one of the most exotic solutions of how Sacramento tried to deal with the problem will be covered in the next article of this series.
Editor’s Note: This is part three in a series about the history of the Sacramento River.
When presenting a history of the city’s rivers, it is important to not only provide details about major floods, but also measures that were made to combat potential floods.
The 1880 book, “History of Sacramento County, California,” notes that prior to the great flood of January 1850, “nothing had been attempted in the matter of protection from flood or high water.”
Capt. John Sutter and the Indians, who showed him where to build his fort, recognized that the proposed location for the new Sacramento City was in a natural flood plain that was regularly inundated in the winter months.
Flood control became an immediate concern of the citizenry and politicians.
The Saturday, Jan. 19, 1850 edition of the Placer Times included the following words: “A week ago last night, our city experienced one of the most terrific southeast storms known in this region, which had the effect of swelling the Sacramento (River) by Wednesday afternoon, so that the water commenced running over the slough on I Street, at various points between First and Third (streets). On Thursday morning, the entire city, within a mile of the embarcadero, was under water. The damage to merchandise and to buildings and the losses sustained by persons engaged in trade is very great – vast quantities of provisions and goods having been swept away by the rushing waters. The loss in livestock is almost incalculable; many persons have lost from 10 to 50 yoke of cattle each, and horses and mules have been carried down the stream in great numbers.”
It was obvious to all people concerned that flooding in the area needed to be stopped and the waters held at bay.
But there were some people who found a “gold lining” in the inundation.
The Times also reported in its Jan. 19, 1850 edition that “large numbers (of people) have been washing gold within the limits of our city during the week, without any great degree of success.”
It was also noted in the 1880 county history book that “waters had scarcely begun to recede from the city (following the January 1850 flood) when surveyors were employed to survey lines for and make a location of the proposed levee.”
A levee commission was established on Jan. 29, 1850 and one of the commissioners was Hardin Bigelow, who on April 1, 1850 became Sacramento’s first elected mayor, largely because of his support of building levees.
The need for building levees was immediate, but the funds for doing so were nonexistent.
Bigelow arranged for the city to borrow funds beyond the city’s $10,000 limit, and he also provided $6,000 from his personal assets.
With this money, the city was able to construct temporary embankments, which held off the anticipated second flood of 1850 and demonstrated the need and efficacy of levees.
On April 29, 1850, voters approved a special $250,000 tax assessment for a permanent levee that was built between September and December 1850.
The contract for the levee was given to Irwin, Gay & Co. on Sept. 6, 1850 and the labor began several days afterward.
Although the levee was not yet completed by Oct. 25, 1850, on that date, the San Francisco newspaper, the Daily Alta California referred to Sacramento City as “our sister, the Levee City.”
The levee, which commenced to the south at the high ground near Sutterville, ran for about nine miles along the northern and western boundaries of the city. And with this levee, the people of Sacramento City felt safe.
But less than a year and a half later – on March 7, 1852 – new raging waters broke through the sluice gate at Lake Sutter, breached the levee and once again inundated the city.
As a result, Sutter’s Fort, the knoll at the current site of Cesar Chavez Plaza and Poverty Ridge on the southeast side of the city stood as islands in a lake that in low spots reached 12 feet deep.
While once again the economic devastation was extensive, according to an article, titled “Sacramento defies the River: 1850-1878” by Marvin Brienes, “No lives were lost, and warnings before the levees gave way enabled many Sacramentans to remove their most valuable goods to high ground.”
Three days after the city was flooded, Mayor James Richmond Hardenbergh called for a new levee to be constructed on I Street, from the Front Street levee to 5th Street, from 5th Street along the edge of Lake Sutter and then to the levee of 1850, along the American River.
The proposal was adopted by the common council and this $50,000 project was completed after about two months of labor in November 1852.
Although local citizens were once again feeling safe in the Levee City, this feeling lasted only three weeks, as the American River levee was broken on Dec. 19, leaving a 40-foot-wide crevice.
Eventually, 150 feet of the levee was destroyed and Sacramento City was under water.
In its Dec. 25, 1852 edition, the Daily Alta California reported the following: “The water was running through Eighth Street, some six feet deep. Several lives were supposed to have been lost. One man was seen floating down the river on the top of his house. At the foot of L Street, a whole block is afloat; the Eagle Saloon is washed away and is floating round.”
As mentioned in the previous article of this series, on New Years Day 1853, the water level of the Sacramento River was 22 feet above the low water mark and two feet higher than the great flood of 1850.
By Jan. 2, 1853, floodwaters once again entered the heart of the city.
Frustrations mounted for the city’s “burned out and flooded citizens,” as one local man described the area’s residents.
In an early January 1853 letter to the editors of The Sacramento Union, the man wrote: “Our city government has been in operation nearly three years, has expended more than two hundred thousand dollars upon the levee, and very large sums for other purposes. Our taxes have been greater perhaps than those of any other city in the world; our city debt is now very large; and after all this taxation and expenditure, the city has not received a benefit commensurate with the costs. We have received nothing like a fair equivalent for our money.”
On July 29, 1853, a city ordinance “for widening, altering and improving the levee, and providing for the payment of the expense” was approved by the mayor and common council.
The cost was set at no more than $50,000 and the work, which was completed by the latter part of 1853, was paid for in scrip known as the “Levee Scrip.” The levee along Burns Slough at the eastern end of the city and down R Street was separate from this approximate sum and was paid for through a loan.
The levee system, which later underwent various improvements, proved to be a successful barrier against major floods in the city for several years. But that level of prosperity quickly changed on Dec. 9, 1861.
On Saturday, Sept. 15, about 2,500 volunteers are expected to take part in the American River Parkway Foundation’s annual Great American River Clean Up.
According to Stacy Springer, volunteer manager for the American River Parkway Foundation, which is based in Carmichael, these volunteers will spend three hours that morning cleaning up 20 site locations along the American River of trash and other debris. “And that does not even include the huge kayak and dive teams that go out and address the shoreline and deeper water channels,” she said.
Springer said it’s easy to volunteer for the Great American River Clean Up – volunteers just need to register on the Foundation’s website, www.arpf.org, and then show up on the day of the clean up wearing closed-toe shoes and long pants, plus sunblock and hat if the day is sunny and warm.
A site captain, such as Heidi Steger, a Sacramento resident who has been a site captain for the Great American River Clean Up for the past four years, mans each clean up location. Steger oversees the Howe Avenue river access location, which she said covers from Sacramento State upstream to the Watt Avenue location.
On the day of the clean up, Steger is in charge of putting up signs, handing out gloves to those who don’t have their own, distributing trash bags, waters and snacks, and giving some basic safety instructions to about 100 volunteers at her location.
She said there are both paved and unpaved portions of the Parkway, so volunteers can feel comfortable depending on their abilities. “If you’re the kind of person who doesn’t feel bad about walking through high grass or even under trees and through some brambles, that’s fine, but if you’re the kind that wants to stick to the path, that’s fine, too,” she said.
When out on the Parkway, Steger said volunteers are asked to pick up everything from cigarette butts to car tires. “There’s cans, bottles, paper trash, paper bags, plastic containers – it’s a mix,” she said.
There’s an emphasis on picking up cigarette butts at the clean up site location of Michael Rebensdorf, who has been a site captain for almost 10 years. At his site at Sailor Bar – just below Nimbus Dam, across the river from the fish hatchery – Rebensdorf holds a contest for picking up the most cigarette butts. “People walk by the little trash – they want to get the big trash to fill their bag up,” he said.
Having an Impact
Rebensdorf said through his years as a site captain at Sailor Bar, he has seen the amount of trash picked up each year decline significantly. “It’s probably 25 percent now from when I first started,” he said.
He said this is because people are more conscious of not throwing things on the ground and littering. “At Sailor Bar, there’s an entry point for fisherman to the river and I think they’ve become a lot more aware,” he said. “It’s more conscious in people’s minds that if you come out here and throw your things on the ground, you’re not going to be able to come out and fish anymore.”
Steger said the clean up also helps community members get a feel for what the riverbanks are like. “They figure out if you want to be able to enjoy this wonderful gift of the American River, you’ve got to take care of it a little bit,” she said.
And Springer said volunteers leave with an awareness that everyone is responsible for their backyard regardless of where the trash comes from. “As good citizens and good Samaritans, we want to make sure that if we have to pick up somebody else’s trash because it’s laying there, then that’s what we do – it’s taking on a higher level of responsibility,” she said.
In addition to the Great American River Clean Up, the American River Parkway Foundation has volunteers that help keep the Parkway clean all year long through various programs. One of these programs is the Volunteer Mile Steward program, where individuals and groups adopt a mile of the Parkway and commit to 20 hours of service per quarter to help keep it clean, according to Springer.
“Every mile is a little different – 99 percent of it is trash removal, but we have graffiti removal issues at times,” Springer said. “That’s a very popular program and we have very dedicated volunteers.”
Two of those volunteers are residents Theresa and Steve Graham. About seven years ago they adopted Mile 4, which starts behind the REI in the Arden area and runs down to Cal Expo. Steve said he and Theresa decided to adopt that particular mile because as an avid bicyclist he was using it all the time and realized he should give back.
Steve said he and Theresa go out two hours about twice a month to pick up trash on their mile and report any graffiti or encampments they encounter. “We pick up every little bit because I don’t want an animal stepping in this or eating this, so even if it’s a flip-top from a can it all comes up because we’ve got to keep this clean for the animals as they are there all the time,” he said.
Theresa enjoys their work on the mile as she enjoys being outside and exploring the nature in the area, as well as the flexibility the program offers. “You can do it at what time-frame works for you – you can make it work into your schedule, which really works for us,” she said.
And she also likes the good internal feeling volunteering gives her. “You feel like you’re contributing to the good of the society,” she explained. “I think everybody should do something for the good of their community – it just gives you pride in it.”
For more information on the Great American River clean up or volunteering with the American River Parkway Foundation, visit www.arpf.org.
About 60 percent of the world’s population does not have access to fresh drinking water. By making simple changes, everyone can make a big impact on water consumption.
This concept was conveyed at a water conservation workshop presented by the city of Sacramento Department of Utilities Water Conservation Office on July 14 at 2260 Glen Ellen Circle.
Vincent Smelser, water conservation specialist for the city of Sacramento, began the morning by explaining the city ordinances in effect to save water. Smelser let folks know there are many ways to save on their water bill. He pointed out enforcement comes in the form of citations and fines can get up to $500.
Water use around the home
Smelser suggested when washing the car, use a shut-off nozzle. Running hoses are no longer allowed, he said.
Another way to save on water is sweeping the patio or sidewalk instead of hosing it down.
Smelser said that per city ordinance, the only time water is allowed for cleaning a sidewalk is if there is an unsanitary event, but be careful not to wash animal excrement or chemicals into the gutter, as that also constitutes a fine.
When to water
Watering is allowed between 7 a.m. and 4 p.m. For spring through fall, odd number addresses water on Tuesdays, Thursdays and Saturdays. Even number addresses water on Wednesdays, Fridays and Sundays.
During winter, (when daylight saving time ends) folks are allowed to water only one day a week, either Saturday or Sunday.
Smelser said that improperly functioning sprinklers often waste a lot of water.
Older toilets are another water waster. The city has a rebate program of up to $100 for toilets installed prior to 1992. The city also offers free showerheads and aerators for the sink.
On average, a person saves 25 gallons of water in the first 10 minutes of their shower using a water-saving showerhead, he said. “The courthouse on Bicentennial Circle saved 300,000 gallons of water a year just by replacing the aerators,” Smelser said. “Just by using a water-efficient toilet, one can save 12,000 gallons of water a year.”
The city of Sacramento makes water-wise house calls for folks within city limits. A trained water conservation specialist will visit the home or office to identify potential water savings both inside the home and outside. If needed, the city will analyze and make suggestions on how to improve the soil, keeping water costs down.
Smelser said the city is able to identify leaks through smart meter technology. The water department is able to tell by looking at a residential water bill online where the leaks are located. Consumers now have the option of looking at their bill online to see where their water is being used most frequently.
Smelser demonstrated various methods used for watering; spray, hose and drip. The city provides information on the best watering system for different types of landscapes.
Smelser said to keep sprinklers in good repair. There are proper designs to keep sprinkler heads from breaking. Pop ups should be even with the ground. A good timer is essential to saving water.
“Seventy percent of water goes to landscaping in the summer, and switches to bathrooms in winter,” Smelser said. “27 to 1,000 gallons of water per irrigation is used for a typical landscape.”
A water-efficient yard
David Campbell, Siegfried Engineering and designer of the city of Sacramento’s water efficient demonstration garden, gave a presentation discussing drought tolerant plants, shrubs and grasses used for landscaping. He also discussed efficient ways to design yards and water saving irrigation systems.
Campbell, a licensed landscape architect, said when designing a landscape around saving water, there are specific things to think about.
The function and design of outdoor landscaping, turf alternatives and how efficiently the water is delivered are important in designing a water saving landscape.
“When thinking about what your yard is used for, turf is not the only answer,” Campbell said. “Grass is the cheapest, but not the most water efficient way to landscape a yard.”
Landscapes may include gardens, a place to escape to, or a place to attract birds and butterflies. Campbell said often yards are used for screening or buffering the home from busy streets and noise.
Types of plants
Campbell discussed a variety of plants, ornamental grasses, shrubs and groundcovers that are drought tolerant. He said some landscapes change throughout the year with the seasons and some folks enjoy seeing their landscape change.
There are many types of grasses that do not need constant mowing, watering, aerating, or fertilizing. He said ornamental grasses are not meant for foot traffic.
“A group called WUCOLS (Water Use Classification of Landscape Species) now has empirical data on how much water certain types of landscapes use,” Campbell said. “The information can be accessed online through the University of California Extension.”
The irrigation system
Campbell explained there are different types of conversion kits people can use to update and improve their irrigation system. In general, overhead sprays are 30 to 55 percent efficient, rotators and rotors are 65 to 75 percent efficient, bubblers and micro sprays are 80 to 85 percent efficient and drip is 85 to 90 percent efficient.
All who came to the meeting left with buckets full of free goodies to improve water use in the home and information on how to conserve water with an efficient landscape.
For more information on water savings, visit www.cityofsacramento.org/utilities or call 311.
On the first Saturday of October, more than 50 volunteers converged around Duck Lake, William Land Park’s largest pond, armed with rakes, gloves and a determination to clean up the park’s pond and surrounding areas.
These folks are called the Land Park Volunteer Corps and they meet each month to take part in what they call “park work days.” The group was created after the City of Sacramento had to cut Department of Parks and Recreation employees by more than 60 percent in the last three years. Neighbors and city residents decided to step up and do their part to keep their local parks running green.
“I think it’s wonderful what the volunteers are doing because it maintains the ecology of the area, and it’s vitally important when you live in such a crowded area that you have a place you can take a walk or have a picnic in,” said Greenhaven resident, Alessia Wood.
Every month, for over two hours, the environmentally aware group cuts, prunes, plants, and fills garbage bags with debris. But overgrown bushes, roots and leftover picnic garbage are not the only things this group picks up. Land Park Corps organizer Craig Powell said there are times when volunteers also see dead fish and birds around the big pond area.
“Some of our volunteers use extension nets and weed around the border of the pond. It’s a dark, murky pond. It’s very difficult for anyone to look at to see what’s in it,” Powell said. “Besides the concern of the appearance of Duck Lake, our main concern is that there are a lot of migratory birds, like the Canada geese, and families who fish there every single day for food for their table. We are not aware of anybody testing the quality of this water to see if it’s safe to eat the fish from there.”
Duck Lake was established in the early 1920s, and is located in the western-most part of the park, along Land Park Drive. Duck Lake was drained, dredged and widened in the winter of 1959. In 1998, it was stocked with 370 trout.
Powell claims that at one time he saw 15 to 20 dead fish floating on top of the pond, and that he called and alerted the City.
“That should raise some alarm; there is something going on,” he said. “The response I got back from the City is, ‘it just happens sometimes.’”
Powell suspects that run-off from the street is the cause. He believes the City has failed to put in new plumbing pipes to resolve the problem.
City leaders say that is not the case. While no testing has been done on the water by either the City or the Volunteer Corps, officials said there are a number of potential reasons for the issues the neighbors are concerned about at Duck Lake.
“Duck Lake is filled with well water from the park’s ground water wells,” said Jessica Hess, City of Sacramento Department of Utilities spokesperson. “Ponds such as this do not have natural filtration systems and tend to become polluted from the wildlife they attract. And the hot summertime temperatures are another issue; the water is relatively stagnant.”
According to Hess, the pond gets run-off from two sources: the golf course and a drain. The golf course is the main source of run-off. This water flows through some grassy areas which act as a filter to help extract any potential contaminants from the run-off. The drain in the parking lot on 15th Ave, which runs alongside Fairytale Town, sends water into the botanical garden.
“This botanical garden acts as a natural filter for the urban runoff from the parking lot,” Hess explained. “As the urban runoff goes through the garden, the plants and small ponds within the garden act like ‘nature’s soap’ and allow the contaminants to settle.”
Then what about the dead fish and birds seen around the pond area?
Some say it could be caused by people pouring liquids and throwing trash and debris into the pond or on the ground nearby – where it can then flow into the water.
“These, too, can impact the amount of available oxygen which can impact water clarity,” said Hess.
Susan Helay, Birds Exhibit supervisor at the Sacramento Zoo, suspects human error can also be to blame, particularly among those who fish out of Duck Lake.
“We get a lot of the ducks that have swallowed fishing hooks, or their necks are tied up in left-over fishing lines,” Helay said. “Sometimes we can’t catch the birds to help them because they fly away. Not to mention, many of these animals and fish get old and die off naturally as well.”
Helay did say that if there were several fish or birds found dead at one period of time then there should be concern, but they have not seen anything like that recently.
“Sometimes the animals’ waste in the water can impact the amount of oxygen available which can impact the clarity,” she said.
Helay added that the well-water that is provided at the pond is considered safe and is used at the Zoo as well.
Councilman Robert Fong said he is aware of the Volunteer Corps concern about the District 4 Duck Lake and surrounding area. He said that the City is doing everything they can to keep the park and ponds safe and clean.
“The water in the pond is being filled with well-water, the same water we use in City drinking fountains,” said Councilman Fong. “I’ve been going to William Land Park as a kid, it’s one of our crown jewels, and we would never do anything to hurt one of our natural beauties.”
Residents and Businesses may only water one day a week under City’s Irrigation Ordinance
The City of Sacramento Department of Utilities reminds residents and businesses when changing their clocks on Nov. 7, to change their irrigation schedules as well.
The City’s current irrigation rules, found in the Water Conservation ordinance, state that at the conclusion of daylight savings time, residents and businesses may water on either Saturday or Sunday only. There is no watering allowed on weekdays.
For more information about water conservation and the City’s conservation ordinance, please visit www.sparesacwater.org.
The Regional Water Authority and local water providers launched a new public service campaign April 14 in Land Park that promotes landscape water efficiency in the Sacramento region.
With the Sacramento region’s hot, dry climate and long summer season, more than 65 percent of a household’s yearly water consumption typically goes toward landscape irrigation. Of that, 30 percent is lost due to overwatering or evaporation.
The kick-off event for the new public service campaign was held at Sara Shultz’s Land Park home. Shultz and her 3-year-old daughter will be featured in the television advertisements demonstrating how they earned a “Blue Thumb.”
For more information on neighborhood water conservation, visit www.bewatersmart.info. | http://www.valcomnews.com/?tag=water | 13 |
17 | Introduction to coaxial cables
A coaxial cable is one that consists of two conductors that share a common axis. The inner conductor is typically a straight wire, either solid or stranded and the outer conductor is typically a shield that might be braided or a foil.
Coaxial cable is a cable type used to carry radio signals, video signals, measurement signals and data signals. Coaxial cables exist because we can't run open-wire line near metallic objects (such as ducting) or bury it. We trade signal loss for convenience and flexibility. Coaxial cable consists of an insulated center conductor which is covered with a shield. The signal is carried between the cable shield and the center conductor. This arrangement gives quite good shielding against noise from outside the cable, keeps the signal well inside the cable and keeps the cable characteristics stable.
Coaxial cables and systems connected to them are not ideal. There is always some signal radiating from coaxial cable. Hence, the outer conductor also functions as a shield to reduce coupling of the signal into adjacent wiring. More shield coverage means less radiation of energy (but it does not necessarily mean less signal attenuation).
Coaxial cables are typically characterized by their impedance and cable loss. The length has nothing to do with coaxial cable impedance. Characteristic impedance is determined by the size and spacing of the conductors and the type of dielectric used between them. For ordinary coaxial cable used at reasonable frequencies, the characteristic impedance depends on the dimensions of the inner and outer conductors. For an air-dielectric line, the characteristic impedance of a cable (Zo) is determined by the formula 138 log b/a, where b represents the inside diameter of the outer conductor (read: shield or braid), and a represents the outside diameter of the inner conductor; with a solid dielectric, the result is divided by the square root of the dielectric constant.
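As a quick illustration of that formula, here is a minimal Python sketch. The dimensions and dielectric constant below are rough, assumed values in the spirit of the RG-58 row in the tables further down, not datasheet figures:

import math

def coax_impedance(b, a, er=1.0):
    # b:  inside diameter of the outer conductor (shield or braid)
    # a:  outside diameter of the inner conductor (same units as b)
    # er: relative dielectric constant of the insulation (1.0 = air)
    return (138.0 / math.sqrt(er)) * math.log10(b / a)

# Roughly RG-58-like geometry with solid polyethylene (er about 2.3)
print(round(coax_impedance(2.95, 0.9, er=2.3), 1))  # about 47, i.e. a nominal 50 ohm cable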
The most common coaxial cable impedances in use in various applications are 50 ohms and 75 ohms. 50 ohm cable is used in radio transmitter antenna connections, many measurement devices and in data communications (Ethernet). 75 ohm coaxial cable is used to carry video signals, TV antenna signals and digital audio signals. There are also other impedances in use in some special applications (for example 93 ohms). It is possible to build cables at other impedances, but those mentioned earlier are the standard ones that are easy to get. There is usually no point in trying to get something slightly different for some marginal benefit, because standard cables are easy to get, cheap and generally very good. Different impedances have different characteristics. For maximum power handling, somewhere between 30 and 44 ohms is the optimum. An impedance somewhere around 77 ohms gives the lowest loss in a dielectric-filled line. 93 ohm cable gives low capacitance per foot. It is practically very hard to find any coaxial cables with impedance much higher than that.
Here is a quick overview of common coaxial cable impedances and their main uses:
- 50 ohms: 50 ohm coaxial cable is very widely used in radio transmitter applications. It is used here because it matches nicely to many common transmitter antenna types, can quite easily handle high transmitter power and is traditionally used in this type of application (transmitters are generally matched to a 50 ohm impedance). In addition to this, 50 ohm coaxial cable can be found in coaxial Ethernet networks, electronics laboratory interconnections (for example high frequency oscilloscope probe cables) and high frequency digital applications (for example ECL and PECL logic matches nicely to 50 ohm cable). Commonly used 50 ohm constructions include RG-8 and RG-58.
- 60 Ohms: Europe chose 60 ohms for radio applications around the 1950s. It was used in both transmitting applications and antenna networks. The use of this cable has been pretty much phased out, and nowadays RF systems in Europe use either 50 ohm or 75 ohm cable depending on the application.
- 75 ohms: The characteristic impedance of 75 ohms is an international standard, based on optimizing the design of long distance coaxial cables. 75 ohm video cable is the coaxial cable type widely used in video, audio and telecommunications applications. Generally all baseband video applications that use coaxial cable (both analogue and digital) are matched for 75 ohm impedance cable. Also RF video signal systems like antenna signal distribution networks in houses and cable TV systems are built from 75 ohm coaxial cable (those applications use very low loss cable types). In the audio world, digital audio (S/PDIF and coaxial AES/EBU) uses 75 ohm coaxial cable, as do radio receiver connections at home and in the car. In addition to this, some telecom applications (for example some E1 links) use 75 ohm coaxial cable. 75 ohms is the telecommunications standard because, in a dielectric-filled line, somewhere around 77 ohms gives the lowest loss. Common 75 ohm cables are RG-6, RG-11 and RG-59.
- 93 Ohms: This is not much used nowadays. 93 ohm cable was once used for short runs such as the connection between computers and their monitors because of its low capacitance per foot, which reduced the loading on circuits and allowed longer cable runs. In addition, this was used in some digital communication systems (IBM 3270 terminal networks) and some early LAN systems.
The characteristic impedance of a coaxial cable is determined by the relation of the outer conductor diameter to the inner conductor diameter and by the dielectric constant of the insulation. The impedance of a coaxial cable changes somewhat with frequency. Impedance changes with frequency until the resistance is a minor effect and the dielectric constant is stable. Where it levels out is the "characteristic impedance". The frequency where the impedance settles to the characteristic impedance varies somewhat between different cables, but this generally happens at a frequency of around 100 kHz (can vary).
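To see this leveling-out numerically, here is a minimal Python sketch using the general transmission-line expression Z = sqrt((R + jωL) / (G + jωC)). The per-metre primary constants below are assumed, illustrative numbers for a generic 75 ohm cable, not figures from any datasheet:

import cmath, math

# Assumed primary constants per metre for a generic 75 ohm cable (illustrative only)
R = 0.16      # ohm/m, conductor resistance (DC value; skin effect ignored)
L = 370e-9    # H/m
G = 0.0       # S/m, dielectric leakage neglected
C = 67e-12    # F/m

for f in (1e3, 10e3, 100e3, 1e6, 10e6):
    w = 2 * math.pi * f
    z = cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    print(f"{f/1e3:8.0f} kHz  |Z| = {abs(z):6.1f} ohm")

print("high-frequency limit sqrt(L/C) =", round(math.sqrt(L / C), 1), "ohm")

With these numbers the magnitude falls from several hundred ohms at 1 kHz toward the sqrt(L/C) limit of roughly 74 ohms as the frequency rises, which is the leveling-out described above.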
Essential properties of coaxial cables are their characteristic impedance and its regularity, their attenuation, as well as their behaviour concerning the electrical separation of cable and environment, i.e. their screening efficiency. In applications where the cable is used to supply voltage for active components in the cabling system, the DC resistance has significance. Also the cable velocity information is needed in some applications. The coaxial cable velocity of propagation is determined by the dielectric. It is expressed as a percentage of the speed of light. Here is some data on common coaxial cable insulation materials and their velocities:
- Polyethylene (PE): 66%
- Teflon: 70%
- Foam: 78..86%
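The velocity factor translates directly into propagation delay and into the physical length of a wavelength inside the cable. Here is a minimal Python sketch using the figures above (the 0.82 foam value and the 100 MHz test frequency are arbitrary examples picked for illustration):

C0 = 299_792_458  # speed of light in vacuum, m/s

def delay_ns_per_m(velocity_factor):
    # One-way propagation delay in nanoseconds per metre of cable
    return 1e9 / (C0 * velocity_factor)

def wavelength_in_cable_m(freq_hz, velocity_factor):
    # Physical length of one wavelength inside the cable, in metres
    return C0 * velocity_factor / freq_hz

for name, vf in (("Polyethylene", 0.66), ("Teflon", 0.70), ("Foam", 0.82)):
    print(f"{name:12s} {delay_ns_per_m(vf):.2f} ns/m, "
          f"{wavelength_in_cable_m(100e6, vf):.2f} m per wavelength at 100 MHz")

A solid-PE cable, for example, works out to about 5 ns of delay per metre, which is the kind of number needed when trimming cable lengths or matching delays.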
Return loss is one number which shows cable performance, meaning how well the cable matches its nominal impedance. Poor cable return loss can indicate cable manufacturing defects or installation defects (cable damaged during installation). With a good quality coaxial cable in good condition you generally get better than -30 dB return loss, and you should generally not get much worse than -20 dB. Return loss expresses the same thing as the VSWR figure used in the radio world, only in a different form (15 dB return loss = 1.43:1 VSWR, 23 dB return loss = 1.15:1 VSWR, etc.).
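The conversion between the two figures is straightforward: the reflection coefficient magnitude is 10^(-RL/20) and VSWR = (1 + Γ) / (1 − Γ). A minimal Python sketch that reproduces the values quoted above:

def return_loss_to_vswr(rl_db):
    # rl_db: return loss as a positive dB value
    gamma = 10 ** (-rl_db / 20.0)   # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

for rl in (15, 20, 23, 30):
    print(f"{rl} dB return loss -> VSWR {return_loss_to_vswr(rl):.2f}:1")
    # 15 dB gives 1.43:1 and 23 dB gives 1.15:1, matching the figures above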
Often used coaxial cable types
General data on some commonly used coaxial cables compared (most data from http://dct.draka.com.sg/coaxial_cables.htm, http://www.drakausa.com/pdfsDSC/pCOAX.pdf and http://users.viawest.net/~aloomis/coaxdat.htm):
Cable type: RG-6, RG-59 B/U, RG-11, RG-11 A/U, RG-12 A/U, RG-58 C/U, RG-213U, RG-62 A/U
Impedance (ohms): 75 75 75 75 75 50 50 93
Conductor material: Bare Copper Bare Tinned Tinned Tinned Bare Copper Copper Planted Copper Copper Copper Copper Copper Planted Steel Steel
Conductor strands: 1 1 1 7 7 19 7 1
Conductor area (mm2): 0.95 0.58 1.63 0.40 0.40 0.18 0.75 0.64
Conductor diameter: 0.028" 0.023" 0.048" 0.035" 0.089" 0.025" 21AWG 23AWG 18AWG 20AWG 13AWG 22AWG
Insulation material: Foam PE PE Foam PE PE PE PE Pe PE (semi-solid)
Insulation diameter: 4.6 mm 3.7 mm 7.24 mm 7.25 mm 9.25 mm 2.95 7.25 3.7 mm
Outer conductor: Aluminium Bare Aluminium Bare Base Tinned Bare Bare polyester copper polyester copper copper copper copper copper tape and wire tape and wire wire wire wire wire tin copper braid tin copper braid braid braid braid braid braid braid
Coverage: Foil 100% 95 % Foil 100% 95% 95% 95% 97% 95% braid 61% Braid 61%
Outer sheath: PVC PVC PVC PVC PE PVC PVC PVC
Outside diameter: 6.90 mm 6.15 mm 10.3 mm 10.3 mm 14.1 mm 4.95 mm 10.3 6.15 mm
Capacitance per meter: 67 pF 67 pF 57 pF 67 pF 67 pF 100 pF 100 pF
Capacitance per foot: 18.6 20.5 16.9 20.6 20.6 pF 28.3 pF 30.8 13.5 pF
Velocity: 78% 66% 78% 66% 66% 66% 66% 83%
Weight (g/m): 59 56 108 140 220 38
Attenuation (dB/100m):
50 MHz: 5.3 8 3.3 4.6 4.6 6.3
100 MHz: 8.5 12 4.9 7 7 16 7 10
200 MHz: 10 18 7.2 10 10 23 9 13
400 MHz: 12.5 24 10.5 14 14 33 14 17
500 MHz: 16.2 27.5 12.1 16 16 20
900 MHz: 21 39.5 17.1 24 24 28.5
NOTE: The comparison table above is for information only. There is no guarantee of correctness of the data presented. When selecting cable for a certain application, check the cable data supplied by the cable manufacturer. There can be some differences in the performance and specifications of different cables from different manufacturers. For example, the insulation rating of cables varies. Many PE insulated coax cables can handle several kilovolts, while some foam insulated coax cables can handle only 200 volts or so.
NOTE: Several of the cables mentioned above are available with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).
Cable type: RG-6, RG-59 B/U, RG-11, RG-11 A/U, RG-12 A/U, TELLU 13, Tasker RGB-75
Impedance (ohms): 75 75 75 75 75 75 75
Impedance accuracy: +-2 ohms +-3 ohms +-2 ohms +-3%
Conductor material: Bare Copper Bare Tinned Tinned Bare Bare Copper Planted Copper Copper Copper Copper Copper Steel
Conductor strands: 1 1 1 7 7 1 10
Conductor strand (mm2): 0.95 0.58 1.63 0.40 0.40 1mm diameter 0.10mm diameter
Resistance (ohm/km): 44 159 21 21 22 210
Insulation material: Foam PE PE Foam PE PE PE PE Foam PE
Insulation diameter: 4.6 mm 3.7 mm 7.24 mm 7.25 mm 9.25 mm
Outer conductor: Aluminium Bare Aluminium Bare Base Copper Tinned polyester copper polyester copper copper foil under copper tape and wire tape and wire wire bare copper tin copper braid tin copper braid braid braid braid braid
Coverage: Foil 100% 95 % Foil 100% 95% 95% Foil ~95% braid 61% Braid 61% Braid 66%
Resistance (ohm/km): 6.5 8.5 4 4 12 ~40
Outer sheath: PVC PVC PVC PVC PE PVC (white) PVC
Outside diameter: 6.90 mm 6.15 mm 10.3 mm 10.3 mm 14.1 mm 7.0 mm 2.8 mm
Capacitance per meter: 67 pF 67 pF 57 pF 67 pF 67 pF 55 pF ~85 pF
Capacitance per foot: 18.6 20.5 16.9 20.6 20.6 pF
Velocity: 78% 66% 78% 66% 66% 80% 66%
Screening factor: 80 dB
Typical voltage (max): 2000V 5000V 1500V
Weight (g/m): 59 56 108 140 220 58
Attenuation (dB/100m):
5 MHz: 2.5 1.5
50 MHz: 5.3 8 3.3 4.6 4.6 4.7 19.5
100 MHz: 8.5 12 4.9 7 7 6.2 28.5
200 MHz: 10 18 7.2 10 10 8.6 35.6
400 MHz: 12.5 24 10.5 14 14 12.6 60.0
500 MHz: 16.2 27.5 12.1 16 16 ~14 ~70
900 MHz: 21 39.5 17.1 24 24 19.2 90.0
2150 MHz: 31.6
3000 MHz: 37.4
NOTE: The numbers with a ~ mark in front of them are approximations calculated and/or measured from cables or cable data. Those numbers are not from manufacturer literature.
NOTE 2: Several of the cables mentioned above are available in special versions with foam insulation material. This changes the capacitance to a somewhat lower value and gives a higher velocity (typically around 0.80).
General coaxial cable details
The dielectric of a coaxial cable serves but one purpose - to maintain physical support and a constant spacing between the inner conductor and the outer shield. In terms of efficiency, there is no better dielectric material than air. In most practical cables cable companies use a variety of hydrocarbon-based materials such as polystyrene, polypropylenes, polyolefins and other synthetics to maintain structural integrity.
Sometimes coaxial cables are also used for carrying low frequency signals, like audio signals or measurement device signals. In audio applications especially, the coaxial cable impedance does not matter much (it is a high frequency property of the cable). Generally coaxial cable has a certain amount of capacitance (50 pF/foot is typical) and a certain amount of inductance, but it has very little resistance.
General characteristics of cables:
- A typical 50 ohm coaxial cable is pretty much 30 pF per foot (this doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 50 ohm coaxial cables are used in most radio applications, in coaxial Ethernet and in many instrumentation applications.
- A typical 75 ohm coaxial cable is about 20 pF per foot (this doesn't apply to miniature cables or big transmitter cables; check a cable catalogue for more details). 75 ohm cable is used for all video applications (baseband video, monitor cables, antenna networks, cable TV, CCTV, etc.), for digital audio (S/PDIF, coaxial AES/EBU) and for telecommunication applications (for example for E1 coaxial cabling).
- A typical 93 ohm cable is around 13 pF per foot (does not apply to special cables). This cable type is used for some special applications.
Please note that these are general statements. A specific 75 ohm cable could be 20pF/ft. Another 75 ohm cable could be 16pF/ft. There is no exact correlation between characteristic impedance and capacitance.
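That said, for a low-loss line the three quantities are tied together by Z0 = 1/(vp · C), where vp is the propagation velocity and C the capacitance per unit length. Here is a minimal Python sketch checking the rule-of-thumb pF-per-foot figures above; the velocity factors are assumed, typical values (solid PE for the first two, a higher-velocity dielectric for the 93 ohm case), not data for any specific cable:

C0 = 299_792_458  # speed of light in vacuum, m/s
FT_TO_M = 0.3048

def z0_from_capacitance(c_pf_per_ft, velocity_factor):
    # Z0 of a low-loss line from capacitance per foot and velocity factor
    c_per_m = c_pf_per_ft * 1e-12 / FT_TO_M
    return 1.0 / (C0 * velocity_factor * c_per_m)

print(round(z0_from_capacitance(30, 0.66)))  # about 51 ohm -> the "50 ohm" class
print(round(z0_from_capacitance(20, 0.66)))  # about 77 ohm -> the "75 ohm" class
print(round(z0_from_capacitance(13, 0.84)))  # about 93 ohm -> the "93 ohm" class

Because the velocity factor varies with the dielectric, capacitance alone does not pin down the impedance, which is exactly the point made above.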
In general, a constant impedance cable (including its connectors), when terminated at both ends with the correct load, represents a pure resistive loss. Thus, cable capacitance is immaterial for video and digital applications.
Typical coaxial cable constructions are:
- Flexible (Braided) Coaxial Cable is by far the most common type of closed transmission line because of its flexibility. It is a coaxial cable, meaning that both the signal and the ground conductors are on the same center axis. The outer conductor is made from fine braided wire, hence the name "braided coaxial cable". This type of cable is used in practically all applications requiring complete shielding of the center conductor. The effectiveness of the shielding depends upon the weave of the braid and the number of braid layers. One of the drawbacks of braided cable is that the shielding is not 100% effective, especially at higher frequencies. This is because the braided construction can permit small amounts of short wavelength (high frequency) energy to radiate. Normally this does not present a problem; however, if a higher degree of shielding is required, semirigid coaxial cable is recommended. In some high frequency flexible coaxial cables the outer shield consists of a normal braid and an extra aluminium foil shield to give better high frequency shielding.
- Semirigid Coaxial Cable uses a solid tubular outer conductor, so that all the RF energy is contained within the cable. For applications using frequencies higher than 30 GHz a miniature semirigid cable is recommended.
- Ribbon Coaxial Cable combines the advantages of both ribbon cable and coaxial cable. Ribbon coaxial cable consists of many tiny coaxial cables placed physically side by side to form a flat cable. Each individual coaxial cable consists of the signal conductor, dielectric, a foil shield and a drain wire which is in continuous contact with the foil. The entire assembly is then covered with an outer insulating jacket. The major advantage of this cable is the speed and ease with which it can be mass terminated with the insulation displacement technique.
Often you will hear the term shielded cable. This is very similar to coaxial cable except the spacing between center conductor and shield is not carefully controlled during manufacture, resulting in non-constant impedance.
If the cable impedance is critical enough to worry about correctly choosing between 50 and 75 Ohms, then the capacitance will not matter. The reason this is so is that the cable will be either load terminated or source terminated, or both, and the distributed capacitance of the cable combines with its distributed inductance to form its impedance.
A cable with a matched termination resistance at the other end appears in all respects resistive, no matter whether it is an inch long or a mile. The capacitance is not relevant except insofar as it affects the impedance, already accounted for. In fact, there is no electrical measurement you could make, at just the end of the cable, that could distinguish a 75 Ohm (ideal) cable with a 75 Ohm load on the far end from that same load without intervening cable. Given that the line is terminated with a proper 75 ohm load (and if it's not, it damn well should be!), the load is 75 ohms resistive, and the lumped capacitance of the cable is irrelevant. The same applies to other impedance cables when terminated to their nominal impedance.
There exists an effect whereby the characteristic impedance of a cable changes with frequency. If this frequency-dependent change in impedance is large enough, the cable will be impedance-matched to the load and source at some frequencies, and mismatched at others. Characteristic impedance is not the only detail in a cable, however; there is another effect that can cause loss of detail in fast-risetime signals: frequency-dependent losses in the cable. There is also a property of controlled impedance cables known as dispersion, where different frequencies travel at slightly different velocities and with slightly different loss.
In some communications applications a pair of 50 ohm coaxial cables is used to transmit a differential signal on two non-interacting pieces of 50-ohm coax. The total voltage between the two coaxial conductors is double the single-ended voltage, but the net current in each is the same, so the differential impedance between two coax cables used in a differential configuration is 100 ohms. As long as the signal paths don't interact, the differential impedance is always precisely twice the single-ended impedance of either path.
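A trivial sketch of that arithmetic, with assumed 50 ohm values, just to make the doubling explicit:

    # Two uncoupled single-ended lines used differentially: the differential
    # impedance is simply the sum of the two single-ended impedances.
    def differential_impedance(z_a, z_b):
        return z_a + z_b

    print(differential_impedance(50.0, 50.0))   # -> 100.0 ohms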
RF coax(ial) connectors are a vital link in the system which uses coaxial cables and high frequency signals. Coax connectors are often used to interface two units such as the antenna to a transmission line, a receiver or a transmitter. The proper choice of a coax connector will facilitate this interface.
Coax connectors come in many impedances, sizes, shapes and finishes. There are also female and male versions of each. As a consequence, there are thousands of models and variations, each with its advantages and disadvantages. Coax connectors are usually referred to by series designations. Fortunately there are only about a dozen or so groupings or series designations. Each has its own important characteristics. The most popular RF coax connector series, not in any particular order, are UHF, N, BNC, TNC, SMA, 7-16 DIN and F. Here is a quick introduction to those connector types:
- "UHF" connector: The "UHF" connector is the old industry standby for frequencies above 50 MHz (during World War II, 100 MHz was considered UHF). The UHF connector is primarily an inexpensive all purpose screw on type that is not truly 50 Ohms. Therefore, it's primarily used below 300 MHz. Power handling of this connector is 500 Watts through 300 MHz. The frequency range is 0-300 MHz.
- "N" connectors: "N" connectors were developed at Bell Labs soon after World War II so it is one of the oldest high performance coax connectors. It has good VSWR and low loss through 11 GHz. Power handling of this connector is 300 Watts through 1 GHz. The frequency range is 0-11 GHz.
- "BNC" connctor: "BNC" connectors have a bayonet-lock interface which is suitable for uses where where numerous quick connect/disconnect insertions are required. BNC connector are for exampel used in various laboratory instruments and radio equipment. BNC connector has much lower cutoff frequency and higher loss than the N connector. BNC connectors are commonly available at 50 ohms and 75 ohms versions. Power handling of this connector is 80 Watts at 1 GHz. The frequency range is 0-4 GHz.
- "TNC" connectors are an improved version of the BNC with a threaded interface. Power handling of this connector is 100 Watts at 1 GHz. The frequency range is 0-11 GHz.
- "SMA" connector: "SMA" or miniature connectors became available in the mid 1960's. They are primarily designed for semi-rigid small diameter (0.141" OD and less) metal jacketed cable. Power handling of this connector is 100 Watts at 1 GHz. The frequency range is 0-18 GHz.
- "7-16 DIN" connector: "7-16 DIN" connectors are recently developed in Europe. The part number represents the size in metric millimeters and DIN specifications. This quite expensive connector series was primarily designed for high power applications where many devices are co-located (like cellular poles). Power handling of this connector is 2500 Watts at 1 GHz. The frequency range is 0-7.5 GHz.
- "F" connector: "F" connectors were primarily designed for very low cost high volume 75 Ohm applications much as TV and CATV. In this connector the center wire of the coax becomes the center conductor.
- "IEC antenna connector": This is a very low-cost high volume 75 ohm connector used for TV and radio antenna connections around Europe.
Tomi Engdahl <Tomi.Engdahl@iki.fi> | http://www.epanorama.net/documents/wiring/coaxcable.html | 13 |
23 | * In the 1960s there was an international competition to build a supersonic transport (SST), which resulted in the development of two supersonic airliners, the Anglo-French "Concorde" and the Soviet Tupolev "Tu-144". Although the SST was seen as the way of the future, that wasn't how things actually turned out. This document provides a short history of the rise and fall of the supersonic transport.
* With the push towards supersonic combat aircraft during the 1950s, aircraft manufacturers began to think about developing a supersonic airliner, what would eventually become known as a "supersonic transport (SST)". In 1961, Douglas Aircraft publicized a design study for an SST that would be capable of flying at Mach 3 at 21,350 meters (71,000 feet) and could be flying by 1970. Douglas forecast a market for hundreds of such machines.
At the time, such a forecast seemed realistic. During the 1950s, commercial air transport had made a radical shift from piston-powered airliners to the new jetliners like the Boeing 707. Going to an SST was simply the next logical step. In fact, as discussed in the next section, Europe was moving even faster down this road than the US. In 1962 the British and French signed an agreement to actually build an SST, the "Concorde". With the Europeans committed to the SST, of course the Americans had to follow, and the US Federal Aviation Administration (FAA) set up a competition for an SST that would be faster, bigger, and better than the Concorde.
In 1964, SST proposals from North American, Lockheed, and Boeing were selected as finalists. Although North American had built the two XB-70 Valkyrie experimental Mach 3 bombers, which had a configuration and performance similar to that of an SST and were used as testbeds for SST concepts, the company was eliminated from the competition in 1966. Lockheed proposed the "L-2000", a double-delta machine with a capacity of 220 passengers, but the winner was Boeing's "Model 2707", the name obviously implying a Mach 2 aircraft that would be as significant as the classic Boeing 707. Boeing was awarded a contract for two prototypes on 1 May 1967.
The 2707 was to be a large aircraft, about 90 meters (300 feet) long, with a maximum load of 350 passengers. It would be able to cruise at Mach 2.7 over a range of 6,440 kilometers (4,000 miles) with 313 passengers. At first, the 2707 was envisioned as fitted with variable geometry "swing wings" to permit efficient high-speed flight -- with the wings swept back -- and good low-speed handling -- with the wings extended.
Powerplants were to be four General Electric GE-J5P afterburning turbojet engines, derived from the GE J93 engines used on the XB-70, with a maximum afterburning thrust of 267 kN (27,200 kgp / 60,000 lbf) each. The engines were to be fitted into separate nacelles under the wing. Further work on the design demonstrated that the swing-wing configuration was simply too heavy, and so Boeing engineers came up with a new design, the "2707-300", that had fixed wings.
However, America in the late 1960s was all but overwhelmed by social upheaval that involved questioning the need to come up with something bigger and better, as well as much increased concerns over the environment. Critics massed against the SST, voicing worries about its sonic booms and the possible effects of its high-altitude cruise on the ozone layer. The US Congress finally zeroed funds for the program on 24 March 1971 after the expenditure of about a billion USD on the project. There were 121 orders on the books for the aircraft when it was canceled. SST advocates were dismayed, but later events would prove that -- even ignoring the arguments over environmental issues -- the SST was simply not a good business proposition and proceeding with the project would have been a big mistake.
* As mentioned, the British and French were actually ahead of the US on SST plans. In 1955, officials of the British aviation industry and British government agencies had discussions on the notion of an SST, leading to the formation of the "Supersonic Transport Aircraft Committee (STAC)" in 1956. STAC conducted a series of design studies, leading to the Bristol company's "Bristol 198", which was a slim, delta-winged machine with eight turbojet engines designed to cross the Atlantic at Mach 2. This evolved into the somewhat less ambitious "Bristol 223", which had four engines and 110 seats.
In the meantime, the French had been conducting roughly similar studies, with Sud-Aviation of France coming up with a design surprisingly similar to the Bristol 223, named the "Super Caravelle" after the innovative Caravelle twinjet airliner developed by Sud-Aviation in the 1950s. Given the similarity in the designs and the high cost of developing an SST, British and French government and industry officials began talks in September 1961 to see if the two nations could join hands for the effort.
After extensive discussions, on 29 November 1962, the British and French governments signed a collaborative agreement to develop an Anglo-French SST, which became the "Concorde". It was to be built by the British Aircraft Corporation (BAC), into which Bristol had been absorbed in the meantime, and Rolls-Royce in the UK; and Sud-Aviation and the SNECMA engine firm in France. The original plan was to build a 100-seat long-range aircraft for transoceanic operations and a 90-seat mid-range aircraft for continental flights. In fact, the mid-range aircraft would never be built.
The initial contract specified the construction of two flight prototypes, two static-test prototypes, and two preproduction aircraft. BAC was responsible for development and production of:
Sud-Aviation was responsible for development and production of:
Design of the automatic flight control system was subcontracted by Aerospatiale to Marconi (now GEC-Marconi) in Britain and SFENA (now Sextant Avionique) in France. Final assembly of British Concordes was at Filton and of French Concordes was at Toulouse.
Airlines began to place options for purchase of Concordes in June 1963, with service deliveries originally expected to begin in 1968. That proved a bit over-optimistic. Prototype construction began in February 1965. The initial "001" prototype was rolled out at Toulouse on 11 December 1967, but it didn't perform its first flight for another 15 months, finally taking to the air on 2 March 1969, with a flight crew consisting of Andre Turcat, Jacques Guignard, Michel Retif, and Henri Perrier. The first flight of the "002" prototype took place from Filton on 9 April 1969. Flight trials showed the design to be workable, though it was such a "bleeding edge" machine that there were a lot of bugs to be worked out. First supersonic flight by 001 wasn't until 1 October 1969, and its first Mach 2 flight wasn't until 4 November 1970.
The first preproduction machine, "101", performed its initial flight from Toulouse on 17 December 1971, followed by the second, "102", which performed its initial flight from Filton on 10 January 1973. The first French production aircraft, "201", performed its initial flight from Toulouse on 6 December 1973, by which time Sud-Aviation had been absorbed into Aerospatiale. The first British production machine, "202", performed its initial flight from Filton on 13 February 1974, both machines well exceeding Mach 1 on their first flight. These two production machines were used for flight test and never entered commercial service.
14 more production machines were built, the last performing its initial flight on 20 April 1979, with seven Concordes going into service with British Airways and seven into service with Air France. The Concorde received French certification for passenger operations on 13 October 1975, followed by British certification on 5 December 1975. Both British Airways and Air France began commercial flights on 21 January 1976. The Concorde was finally in service.
There has never been a full accounting of how much it cost the British and French governments to get it there, but one modern estimate is about 1.1 billion pounds in 1976 values, or about 11 billion pounds or $18.1 billion USD in 2003 values. Of the 20 Concordes built, six never carried any paying passengers. In fact, only nine of the production machines were sold at "list value". The other five were simply given to British Airways and Air France for literally pocket change, apparently just to get them out of the factories.
* The initial routes were London to Bahrain, and Paris to Rio de Janeiro via Dakar. Service to Washington DC began on 24 May 1976, followed by flights to New York City in December 1977. Other routes were added later, and there were also large numbers of charter flights, conducted mostly by British Airways.
The manufacturers had obtained options for 78 Concordes, most prominently from the US carrier Pan-American, but by the time the aircraft was ready to enter service interest had evaporated. Sonic boom ensured that it could not be operated on overland routes, a consideration that had helped kill off the mid-range Concorde, and even on the trans-Atlantic route the thundering noise of the four Olympus engines led to restrictions on night flights to New York City, cutting the aircraft's utilization on the prime trans-Atlantic route in half.
The worst problem, however, was that the 1970s were characterized by rising fuel prices that rendered the thirsty SST clearly uneconomical to operate. It required 3.5 times more fuel to carry a passenger in the Concorde than in a Boeing 747 with its modern, fuel-efficient high-bypass turbofans. The Americans had been sensible to kill off the Boeing 2707-300: even if the environmental threat of the machine had been greatly exaggerated, the 2707-300 would have never paid itself off.
There was some muttering in Britain and France that Pan-Am's cancellation of its Concorde orders and the restrictions on night flights into New York City were part of a jealous American conspiracy to kill the Concorde, but Pan-Am brass had simply done the numbers and wisely decided the Concorde didn't make business sense. Pan Am had analyzed use of the Concorde on trans-Pacific flights, such as from San Francisco to Tokyo, and quickly realized that its relatively limited range meant refueling stops in Honolulu and Wake Island. A Boeing 747 could make the long-haul trip without any stops, and in fact would get to Tokyo faster than the Concorde under such circumstances. First-class customers would also have a much more comfortable ride on the 747.
The Port Authority of New York & New Jersey was mainly worried about irate townspeople raising hell over noisy Concordes waking them up in the middle of the night. These "townspeople" were assertive New Yorkers, after all, and they had been pressuring the Port Authority with various complaints, justified or not, over aircraft operations from Idlewild / Kennedy International Airport since 1958. In fact there were few jetliners noisier than the Concorde, and in another unfortunate irony the new high-bypass turbofans used by airliners such as the 747 were not only much more fuel-efficient than older engines, they were much quieter, making the Concorde look all that much worse in comparison.
Some Europeans were not surprised by the Concorde's problems. In 1966, Henri Ziegler, then head of Breguet Aviation of France, commented with classic French directness: "Concorde is a typical example of a prestige program hastily launched without the benefit of detailed specifications studied in partnership with airlines."
Ziegler would soon become the first boss of Airbus Industries, which would rise to effectively challenge mighty Boeing for the world's airliner market. Airbus was established on the basis of such consultations between aircraft manufacturers and airlines. The Concorde program would have important lessons for Airbus, though mostly along the lines of how not to do things. The full duplication of Concorde production lines in the UK and France was seen as a particular blunder that substantially increased program costs. Airbus took the more sensible strategy of having different elements built in different countries, then transporting them to Toulouse for final assembly and flight check.
* The Concorde was a long, dartlike machine with a low-mounted delta wing and four Olympus afterburning turbojets, with two mounted in a pod under each wing. It was mostly made of aircraft aluminum alloys plus some steel assemblies, but featured selective high-temperature elements fabricated from Inconel nickel alloy. It was designed for a cruise speed of Mach 2.2. Higher speeds would have required much more extensive use of titanium and other high-temperature materials.
The pilot and copilot sat side-by-side, with a flight engineer behind on the right, and provision for a fourth seat. The crew flew the aircraft with an automatic flight control system, guiding their flight with an inertial navigation system backed up by radio navigation systems. Avionics also included a suite of radios, as well as a flight data recorder.
The nose was drooped hydraulically to improve the forward view during takeoff and landing. A retractable transparent visor covered the forward windscreen during supersonic cruise flight. There were short "strake" flight surfaces beneath the cockpit, just behind the drooping nose, apparently to help ensure airflow over the tailfin when the aircraft was flying at high angles of attack.
Each of the four Rolls-Royce / SNECMA Olympus 593 Mark 10 engines was rated at 169.3 kN (17,255 kgp / 38,050 lbf) thrust with 17% afterburning. The engine inlets had electrical de-icing, variable ramps on top of the inlet throat, and auxiliary inlet / outlet doors on the bottom. Each engine was fitted with a bucket-style variable exhaust / thrust reverser. The Olympus had been originally developed in a non-afterburning form for the Avro Vulcan bomber, and a Vulcan had been used in trials of the Concorde engines. The Concorde used afterburner to get off the ground and up to operating speed and altitude, and then cruised at Mach 2 on dry (non-afterburning) thrust. It was one of the first, possibly the first, operational aircraft to actually cruise continuously at supersonic speeds. Interestingly, at subsonic speeds the aircraft was inefficient, requiring high engine power that drained the fuel tanks rapidly.
Total fuel capacity was 119,786 liters (26,350 Imperial gallons / 31,645 US gallons), with four tanks in the fuselage and five in each wing. Fuel trim was maintained by an automatic system that shuttled fuel between trim tanks, one in the tail and a set in the forward section of the wings, to maintain the proper center of gravity in different flight phases.
The wing had an elegantly curved "ogival" form factor, and a thickness-to-chord ratio of 3% at the wing root, and featured six hydraulically-operated elevon control surfaces on each wing, organized in pairs. The tailfin featured a two-section rudder, apparently to provide redundancy and improve safety. The Concorde had tricycle landing gear, with a twin-wheel steerable nosewheel retracting forward, and four-wheel bogies in a 2-by-2 arrangement for the main gear, retracting inward. The landing gear featured carbon disk brakes and an antiskid system. There was a retractable tail bumper wheel to protect the rear of the aircraft on takeoff and landing.
Maximum capacity was in principle 144 passengers with a high-density seating layout, but in practice seating was not more than 128, and usually more like 100. Of course all accommodations were pressurized and climate-controlled, and the soundproofing was excellent, resulting in a smooth and quiet ride. There were toilets at the front and middle of the fuselage, and galleys front and back. Customer service on the flights placed substantial demands on the stewards and stewardesses because at cruise speed, the Concorde would reach the limit of its range in three hours.
AEROSPATIALE-BAC CONCORDE:
   _____________________   _________________   _______________________

   spec                    metric              english
   _____________________   _________________   _______________________

   wingspan                25.56 meters        83 feet 10 inches
   wing area               385.25 sq_meters    3,856 sq_feet
   length                  62.10 meters        203 feet 9 inches
   height                  11.40 meters        37 feet 5 inches
   empty weight            78,700 kilograms    173,500 pounds
   MTO weight              185,065 kilograms   408,000 pounds
   max cruise speed        2,180 KPH           1,345 MPH / 1,175 KT
   service ceiling         18,300 meters       60,000 feet
   range                   6,580 kilometers    4,090 MI / 3,550 NMI
   _____________________   _________________   _______________________
The two prototypes had been slightly shorter and had been fitted with less powerful Olympus engines. A "Concorde B" was considered, with airframe changes -- including leading edge flaps, wingtip extensions, modified control surfaces, and 4.8% more fuel capacity -- plus significantly improved Olympus engines that provided incrementally better fuel economy, allowing a nonstop trans-Pacific flight, and greater dry thrust, allowing takeoffs without noisy afterburner. However, the Concorde B still couldn't operate over land and it still couldn't compete with modern subsonic jetliners in terms of fuel economy. It never got off the drawing board.
* On 25 July 2000, an Air France Concorde was departing from the Charles de Gaulle airport outside Paris when one of its tires hit a piece of metal lying on the runway. The tire disintegrated and a piece of rubber spun off and hit the aircraft, setting up a shockwave that ruptured a fuel tank. The airliner went down in flames and crashed near the town of Gonesse, killing all 109 people aboard and four people who had the bad luck to be in the impact area. All 12 surviving Concordes were immediately grounded pending an investigation.
Safety modifications were made to all seven British Airways and all five surviving Air France Concordes. The bottom of the fuel tanks, except those in the wing outboard of the engines, was fitted with flexible Kevlar-rubber liners to provide them with a limited "self sealing" capability; minor safety modifications were made to some electrical systems; and new "no blowout" tires developed by Michelin were fitted. British Airways also implemented a previously planned update program to fit their seven aircraft with new passenger accommodations.
The Concorde returned to flight status on 7 November 2001, but it was a hollow triumph. The economics of even operating the Concorde, let alone developing it, were marginal, and with the economic slump of the early 21st century both Air France and British Airways were losing money on Concorde flights. In the spring of 2003, Air France announced that they would cease Concorde operations as of 31 May 2003, while British Airways would cease flights by the end of October 2003. The announcement led to unprecedented levels of passenger bookings for the final flights.
Air France's most worked aircraft, named the "Fox Alpha", had performed 5,845 flights and accumulated 17,723 flight hours. One Air France technical manager claimed that the British and French Concorde fleets had accumulated more supersonic time than all the military aircraft ever built. That may be an exaggeration -- how anyone could compile and validate such a statistic is a good question -- but it does illustrate the unique capabilities of the aircraft. Interestingly, spares were never a problem, despite the age and small numbers of Concordes, since large inventories of parts had been stockpiled for the machines.
It was a sign of the Concorde's mystique that the aircraft were in great demand as museum pieces. Air France CEO Jean-Cyril Spinetta said: "We had more requests for donations than we have aircraft." One ended up on display at the Charles de Gaulle Airport near Paris, while another found a home at the US National Air & Space Museum's Steven F. Udvar-Hazy Center at Dulles International Airport in Washington DC. In something of an irony, one of the British Concordes was given to the Museum of Flight at Boeing Field in Seattle, Washington.
The last operational flight of the Concorde was on 24 October 2003, with a British Airways machine flying from New York to London. British aviation enthusiasts flocked to Heathrow to see the arrival. As it taxied off the runway it passed under an honorary "water arch" created by the water cannons of two fire engines. During the type's lifetime, Air France had racked up 105,000 hours of commercial flight operations with the Concorde, while British Airways had run up a tally of 150,000 hours.
On 25 November 2003, a Concorde that had landed at Kennedy on 10 November was hauled up the Hudson river on a barge past the Statue of Liberty for display at New York City's Intrepid Air Museum. New Yorkers turned out along the waterfront to greet the arrival.
The very last flight of a Concorde was on 26 November 2003, when a British Airways Concorde took off from Heathrow, performed a ceremonial loop over the Bay of Biscay and then flew back to Filton, where it was to be put on display. The aircraft performed a "photo op" by flying over Isambard Kingdom Brunel's famous chain suspension bridge at Clifton, not far from Filton; as the crew taxied the airliner after landing, they hung Union Jacks out the windows and raised the nose up and down to please the crowd of 20,000 that was on hand. When the Olympus engines were shut down for the very last time, the crew got out and handed over the flight logs to HRH Prince Andrew in a formal ceremony.
* Of course, during the 1960s the Soviets and the West were in competition, and anything spectacular the West wanted to do, the Soviets wanted to do as well. That included an SST.
The Soviet Tupolev design bureau developed the USSR's answer to the Concorde, the Tupolev "Tu-144", also known by the NATO codename "Charger". The Tu-144 prototype performed its first flight on 31 December 1968, with test pilot Eduard Elyan at the controls, beating the Concorde by three months. 17 Tu-144s were built, the last one coming off the production line in 1981. This sum includes one prototype; two "Tu-144C" preproduction aircraft; and 14 full production machines, including nine initial-production "Tu-144S" aircraft, and five final production "Tu-144Ds" with improved engines.
* The Tu-144 got off to a terrible start, the second Tu-144C preproduction machine breaking up in midair during a demonstration at the Paris Air Show on 9 June 1973 and the debris falling into the village of Goussainville. All six crew in the aircraft and eight French citizens on the ground were killed, 15 houses were destroyed, and 60 people were injured. Since the initial reaction of the crowd watching the accident was that hundreds of people were likely to have been killed, there was some small relief that the casualties were relatively light. The entire ghastly accident was captured on film.
The details of the incident remain murky. The Concorde had put on a flight display just before the takeoff of the Tu-144, and a French air force Dassault Mirage fighter was in the air, observing the two aircraft. The Concorde crew had been alerted that the fighter was in the area, but the Tu-144 crew had not. The speculation is that the pilot of the Tu-144, M.V. Kozlov, saw the Mirage shadowing him. Although the Mirage was keeping a safe distance, Kozlov might have been surprised and nosed the Tu-144 down sharply to avoid a collision. Whatever the reason for the nosedive, it flamed out all of the engines. Kozlov put the aircraft into a dive so he could get a relight and overstressed the airframe when he tried to pull out.
This scenario remains speculation. Other scenarios suggest that Kozlov was trying too hard to outperform the Concorde and took the machine out of its envelope. After a year's investigation, the French and Soviet governments issued a brief statement saying that the cause of the accident could not be determined. Some suspect a cover-up, but it is impossible to make a credible judgement given the muddy trail, particularly since the people who could have told exactly what had happened weren't among the living any more.
* The Tu-144 resembled the Concorde, sometimes being called the "Concordski", and there were accusations that it was a copy. Many Western observers pointed out that there were also similarities between the Concorde and American SST proposals, and there was no reason to believe the resemblances between the Concorde and the Tu-144 were much more than a matter of the normal influence of published design concepts on organizations -- as well as "convergent evolution", or the simple fact that two machines designed separately to do the same task may out of simple necessity look alike.
The truth was muddier. Building an SST was an enormous design challenge for the Soviet Union. As a matter of national prestige, it had to be done, with the Soviet aircraft doing it first, and since the USSR was behind the West's learning curve the logical thing to do was steal. An organization was established to collect and analyze open-source material on SSTs from the West, and Soviet intelligence targeted the Concorde effort for penetration.
In 1964, French counterintelligence got wise to this game and sent out an alert to relevant organizations to beware of snoops and to be careful about releases of information. They began to keep tabs on Sergei Pavlov, the head of the Paris office of Aeroflot, whose official job gave him legitimate reasons for obtaining information from the French aviation industry and put him in an excellent position to spy on the Concorde effort. Pavlov was not aware that French counterintelligence was on to him, and so the French fed him misinformation to send Soviet research efforts down dead ends. Eventually, on 1 February 1965, the French arrested him while he was going to a lunch date with a contact, and found that he had plans for the Concorde's landing gear in his briefcase. Pavlov was thrown out of the country.
However, the Soviets had another agent, Sergei Fabiew, collecting intelligence on the Concorde effort, and French counterintelligence knew nothing about him. His cover was finally blown in 1977 by a Soviet defector, leading to Fabiew's arrest. Fabiew had been highly productive up to that time. In the documents they seized from him, they found a congratulations from Moscow for passing on a complete set of Concorde blueprints.
* Although the Soviets did obtain considerable useful intelligence on the Concorde, they were traditionally willing to use their own ideas or stolen ideas on the basis of which seemed the best. They could make good use of fundamental research obtained from the Concorde program to avoid dead ends and get a leg up, and they could leverage designs of Concorde subsystems to cut the time needed to build subsystems for the Tu-144.
In other words, the Tu-144 was still by no means a straight copy of the Concorde. The general configuration of the two aircraft was similar, both being dartlike delta-type aircraft with four afterburning engines paired in two nacelles; a drooping nose to permit better view on takeoff and landing; and a flight crew of three. Both were mostly built of conventional aircraft alloys. However, there were many differences in detail:
The Tu-144 was powered by four Kuznetsov NK-144 afterburning turbofans with 196.2 kN (20,000 kgp / 44,100 lbf) afterburning thrust each. The engines had separate inlet ducts in each nacelle and variable ramps in the inlets. The Tu-144D, which performed its first flight in 1978, was fitted with Kolesov RD-36-51 engines that featured much improved fuel economy and apparently uprated thrust. Production machines seem to have had thrust reversers, but some sources claim early machines used drag parachutes instead.
TUPOLEV TU-144:
   _____________________   _________________   _______________________

   spec                    metric              english
   _____________________   _________________   _______________________

   wingspan                28.80 meters        94 feet 6 inches
   wing area               438.00 sq_meters    4,715 sq_feet
   length                  65.70 meters        215 feet 6 inches
   height                  12.85 meters        42 feet 2 inches
   empty weight            85,000 kilograms    187,395 pounds
   MTO weight              180,000 kilograms   396,830 pounds
   max cruise speed        2,500 KPH           1,555 MPH / 1,350 KT
   service ceiling         18,300 meters       60,000 feet
   range                   6,500 kilometers    4,040 MI / 3,515 NMI
   _____________________   _________________   _______________________
The Tu-144 prototype was a bit shorter and had ejection seats, though production aircraft did not, and the prototype also lacked the retractable canards. The engines fitted to the prototype had a lower thrust rating and were fitted into a single engine box, not a split box as in the production machines. Pictures of the preproduction machines show them to have had a production configuration, though no doubt they differed in minor details.
* The Tu-144 was not put into service until 26 December 1976, and then only for cargo and mail transport by Aeroflot between Moscow and Alma Ata, Kazakhstan, for operational evaluation. The Tu-144 didn't begin passenger service until 1 November 1977, and then apparently it was a cramped and uncomfortably noisy ride. Operating costs were unsurprisingly high and apparently the aircraft's reliability left something to be desired, which would not be surprising given its "bleeding edge" nature and particularly the haste in which it was developed.
The next year, on 23 May 1978, the first Tu-144D caught fire, had to perform an emergency landing, and was destroyed with some fatalities. The program never recovered. The Tu-144 only performed a total of 102 passenger-carrying flights. Some flight research was performed on two of the aircraft up to 1990, when the Tu-144 was finally grounded.
That was not quite the end of the story. As discussed in the next section, even though the Concorde and Tu-144 were clearly not money-making propositions, interest in building improved SSTs lingered on through the 1980s and 1990s. The US National Aeronautics & Space Administration (NASA) conducted studies on such aircraft, and in June 1993 officials of the Tupolev organization met with NASA officials at the Paris Air Show to discuss pulling one of the Tu-144s out of mothballs to be used as an experimental platform for improved SST design. The meeting had been arranged by British intermediaries.
In October 1993, the Russians and Americans announced that they would conduct a joint advanced SST research effort. The program was formalized in an agreement signed by American Vice-President Al Gore and Russian Prime Minister Viktor Chernomyrdin at Vancouver, Canada, in June 1994. This agreement also formalized NASA shuttle flights to the Russian Mir space station.
The final production Tu-144D was selected for the tests, since it had only 83 flight hours when it was mothballed. Tupolev performed a major refurbishment on it, providing new uprated engines; strengthening the wing to handle the new engines; updating the fuel, hydraulic, electrical, and avionics systems; and adding about 500 sensors feeding a French-designed digital data-acquisition system. The modified Tu-144D was redesignated the "Tu-144LL", where "LL" stood for "Letnoya Laboritoya (Flying Laboratory)", a common Russian suffix for testbeds.
The new engines were Kuznetsov NK-321 turbofans, used on the huge Tupolev Tu-160 "Blackjack" bomber, replacing the Tu-144's Kolesov RD-36 engines. The NK-321 provided about 20% more power than the RD-36-51 and still better fuel economy. Each NK-321 had a max dry thrust of 137.3 kN (14,000 kgp / 31,000 lbf) and an afterburning thrust of 245.2 kN (25,000 kgp / 55,000 lbf). The details of the NK-321s were secret, and the Western partners in the venture were not allowed to inspect them.
A sequence of about 26 test flights was conducted in Russia with officials from the NASA Langley center at the Zhukovsky Flight Test Center from 1996 into 1999. Two NASA pilots, including NASA space shuttle pilot C. Gordon Fullerton, flew the machine during the course of the trials. As also discussed in the next section, the whole exercise came to nothing, but it was at least nice to get the machine back in the air one last time.
* Although the US had given up on the Boeing 2707-300 in 1971, NASA continued to conduct paper studies on SSTs, and in 1985 US President Ronald Reagan announced that the US was going to develop a high-speed transport named the "Orient Express". The announcement was a bit confusing because it blended an attempt to develop a hypersonic spaceplane, which emerged as the dead-end "National Aerospace Plane (NASP)" effort, with NASA studies for an improved commercial SST.
By the early 1990s, NASA's SST studies had emerged as the "High Speed Research (HSR)" effort, a collaboration with US aircraft industries to develop a "High Speed Civil Transport (HSCT)" that would carry up to 300 passengers at speeds from Mach 2 to 3 over a distance of 10,500 kilometers (6,500 miles), with a ticket price only 20% more than that of a conventional subsonic airliner. The fact that an SST could move more people in a shorter period of time was seen as a possible economic advantage. The NASA studies focused heavily on finding solutions to the concerns over high-altitude air pollution, airport vicinity noise levels, and sonic boom that had killed the 2707-300.
Other nations also conducted SST studies, with Japan flying large rocket-boosted scale models in the Australian outback, and there was an interest in international collaborative development efforts. The biggest non-environmental obstacle was simple development cost. While it might have been possible to develop an SST with reasonable operating costs -- though obviously not as low as those of a subsonic fanjet airliner -- given the high development costs it was hard to see how such a machine could be offered at a competitive price and achieve the sales volumes needed to make it worthwhile to build.
Some aerospace firms took a different approach on the matter, proposing small "supersonic business jets (SSBJs)". The idea was that there is a market of people who regard time as money and who would be willing to pay a high premium to shave a few hours for a trip across the ocean. Development costs of such a machine would be relatively modest, and the business model of serving a wealthy elite, along with delivering small volumes of urgent parcels in the cargo hold, seemed realistic. Firms such as Dassault in France, Gulfstream in the US, and Sukhoi in Russia came up with concepts in the early 1990s, but the idea didn't go anywhere at the time.
* Although the NASA HSR program did put the Tu-144LL back in the air, the study was finally axed in 1999. NASA, in good bureaucratic form, kept the program's cancellation very quiet, in contrast to the grand press releases that had accompanied the effort. That was understandable since NASA has to be wary of politicians out to grab headlines by publicly attacking government boondoggles, but in a sense the agency had nothing to hide: NASA studied the matter front to back, and one official stated off the record that in the end nobody could figure out how to make money on the HSCT. From an engineering point of view, a conclusive negative answer is as useful as a conclusive positive answer -- but few politicians have an engineering background and understand such things.
Some aircraft manufacturers didn't give up on SST research after the fall of the HSCT program. One of the major obstacles to selling an SST was the fact that sonic booms prevented it from being operated at high speed over land, limiting its appeal, and of course an SST that didn't produce a sonic boom would overcome that obstacle. Studies showed that sonic boom decreased with aircraft length and with reduction in aircraft size. There was absolutely no way the big HSCT, which was on a scale comparable to that of the Boeing 2707-300, could fly without generating a sonic boom, and so current industry notional configurations envision an SSBJ or small supersonic airliner.
Gulfstream released a notional configuration of a "Quiet Supersonic Jet (QSJ)" that would seat 24 passengers, have a gross takeoff weight of 68,000 kilograms (150,000 pounds), a length of 49 meters (160 feet), and swing wings. Gulfstream officials projected a market of from 180 to 400 machines over ten years, and added that the company had made a good profit building machines in production runs as small as 200 aircraft. Other manufacturers have envisioned small SSTs with up to 50 seats.
* In 2005 Aerion Corporation, a startup in Reno, Nevada, announced concepts for an SSBJ designed to carry 8 to 12 passengers, with a maximum range of 7,400 kilometers (4,000 NMI) at Mach 1.5, a length of 45.18 meters (149 feet 2 inches), a span of 19.56 meters (64 feet 2 inches), and a maximum takeoff weight of 45,350 kilograms (100,000 pounds). The machine is technologically conservative in most respects, with no flashy features such as swing wings or drooping nose. Current configurations envision a dartlike aircraft, with wedge-style wings fitted with long leading-edge strakes, a steeply swept tailfin with a center-mounted wedge-style tailplane, and twin engines mounted on stub pylons on the rear of the wings. A fly-by-wire system will provide controllability over a wide range of flight conditions.
The wings are ultra-thin, to be made of carbon composite materials, and feature full-span trailing-edge flaps to allow takeoffs on typical runways. The currently planned engines are Pratt & Whitney JT8D-219 turbofans, each derated to 80.1 kN (8,165 kgp / 18,000 lbf). The JT8Ds are non-afterburning and use a fixed supersonic inlet configuration. The thrust-to-weight ratio at normal operating weights is expected to be about 40%, about the same as a Northrop F-5 fighter in afterburner. The Aerion SSBJ will be able to operate efficiently at high subsonic or low supersonic speeds over populated areas, where sonic boom would be unacceptable.
The company believes there is a market for 250 to 300 SSBJs, and opened up the books for orders at the Dubai air show in 2007. The company claims to have dozens of orders on the books, but no prototype has flown yet and there is no indication of when one will be.
* Even with the final grounding of the Concorde, the idea of the SST continues to flicker on. In 2011, the European Aeronautic Defence and Space Company (EADS) released a concept for a "Zero-Emissions Hypersonic Transport (ZEHST)" that could carry up to 100 passengers at Mach 4 using turbofan / ramjet / rocket propulsion. It was nothing more than an interesting blue-sky concept with no prospect of entering development any time soon. Most agree that the SST is a sexy idea; few are confident that it can be made to pay.
* In hindsight, the SST mania that produced the Concorde sounded persuasive at the time, but it suffered from a certain lack of realism. Although the Concorde was a lovely, magnificent machine and a technological marvel even when it was retired, it was also a testimony to a certain naivete that characterized the 1950s and 1960s, when people thought that technology could accomplish anything and set out on unbelievably grand projects. Some of these projects they incredibly pulled off, but some of them turned out very differently than expected. It's still hard not to admire their dash.
There's also a certain perverse humor to the whole thing. The French and the British actually built the Concorde, while the Americans, in typical grand style, cooked up a plan to build a machine that was twice as big and faster -- and never got it off the ground. The irony was that Americans made the right decision when they killed the 2707-300. The further irony was that they did it for environmental reasons that, whether they were right or wrong, were irrelevant given the fact they would have lost their shirts on it.
Development and purchase costs were almost guaranteed to have been greater than those of a subsonic airliner, the SST being much more like a combat aircraft; maintenance costs would in all likelihood have been higher as well; an SST would have only been useful for long-haul transoceanic operations and would have been absurd as a bulk cargo carrier, meaning production volumes would have been relatively low; and by the example of the Concorde, which had about three times the fuel burn per passenger-mile of a Boeing 747, there's no doubt that the costs of fuel would have made a 300-seat SST hopelessly uncompetitive to operate for a mass market. It would have been very interesting to have fielded a 12-seat supersonic business jet, a much less challenging proposition from both the technical and commercial points of view, in the 1970s, but people simply could not think small.
* Incidentally, interest in SSTs from the late 1950s through much of the 1960s was so great that most companies that came up with large supersonic combat aircraft also cooked up concepts for SST derivatives. General Dynamics considered a "stretched" derivative of the company's B-58 Hustler bomber designated the "Model 58-9", and the MiG organization of the USSR even came up with an SSBJ derivative of the MiG-25 "Foxbat" interceptor. Of course, none of these notions ever amounted to much more than "back of envelope" designs.
* Sources include:
The information on the Soviet effort to penetrate the Concorde program was obtained from "Supersonic Spies", an episode of the US Public Broadcasting System's NOVA TV program, released in early 1998. NASA's website also provided some useful details on the Tu-144LL test program and the Tu-144 in general, as did the surprisingly good Russian Monino aviation museum website.
* Revision history:
v1.0.0 / 01 aug 03 / gvg
v1.0.1 / 01 nov 03 / gvg / Cleanup, comments on final Concorde flight.
v1.0.2 / 01 dec 03 / gvg / Comments on QSP.
v1.0.3 / 01 jan 04 / gvg / A few minor tweaks on the Concorde.
v1.0.4 / 01 feb 04 / gvg / Very last flight of Concorde.
v1.0.5 / 01 dec 05 / gvg / Cosmetic changes, SSBJ efforts.
v1.0.7 / 01 nov 07 / gvg / Review & polish.
v1.0.8 / 01 jan 09 / gvg / Review & polish.
v1.0.9 / 01 nov 09 / gvg / Corrected Paris accident details.
v1.1.0 / 01 oct 11 / gvg / Review & polish. | http://www.airvectors.net/avsst.html | 13
19 | Schematic diagram of a high-bypass turbofan engine
A turbofan is a type of aircraft gas turbine engine that provides thrust using a combination of a ducted fan and a jet exhaust nozzle. Part of the airstream from the ducted fan passes through the core, providing oxygen to burn fuel to create power. However, the rest of the air flow bypasses the engine core and mixes with the faster stream from the core, significantly reducing exhaust noise. The rather slower bypass airflow produces thrust more efficiently than the high-speed air from the core, and this reduces the specific fuel consumption.
A few designs work slightly differently and have the fan blades as a radial extension of an aft-mounted low-pressure turbine unit.
Turbofans have a net exhaust speed that is much lower than that of a turbojet. This makes them much more efficient at subsonic speeds than turbojets, and somewhat more efficient at supersonic speeds up to roughly Mach 1.6, though they have also been found to be efficient when used with continuous afterburner at Mach 3 and above. However, the lower exhaust speed also reduces thrust at high flight speeds.
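As a rough illustration of why a lower exhaust velocity helps at subsonic speeds, the sketch below evaluates the classic Froude approximation for propulsive efficiency, eta_p = 2*Va/(Va + Vj); the velocities are assumed round numbers, not figures for any particular engine.

    # Sketch: Froude propulsive efficiency, eta_p = 2*Va / (Va + Vj).
    # A jet velocity Vj closer to the flight velocity Va gives better efficiency.
    def propulsive_efficiency(v_flight, v_jet):
        return 2.0 * v_flight / (v_flight + v_jet)

    v_flight = 250.0  # m/s, roughly subsonic cruise (assumed)
    for label, v_jet in [("turbojet", 600.0), ("high-bypass turbofan", 350.0)]:
        eta = propulsive_efficiency(v_flight, v_jet)
        print(f"{label:22s} Vj = {v_jet:4.0f} m/s -> eta_p = {eta:.2f}")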
All of the jet engines used in currently manufactured commercial jet aircraft are turbofans. They are used commercially mainly because they are highly efficient and relatively quiet in operation. Turbofans are also used in many military jet aircraft, such as the F-15 Eagle.
Unlike a reciprocating engine, a turbojet undertakes a continuous-flow combustion process.
In a single-spool (or single-shaft) turbojet, which is the most basic form and the earliest type of turbojet to be developed, air enters an intake before being compressed to a higher pressure by a rotating (fan-like) compressor. The compressed air passes on to a combustor, where it is mixed with a fuel (e.g. kerosene) and ignited. The hot combustion gases then enter a windmill-like turbine, where power is extracted to drive the compressor. Although the expansion process in the turbine reduces the gas pressure (and temperature) somewhat, the remaining energy and pressure is employed to provide a high-velocity jet by passing the gas through a propelling nozzle. This process produces a net thrust opposite in direction to that of the jet flow.
After World War II, 2-spool (or 2-shaft) turbojets were developed to make it easier to throttle back compression systems with a high design overall pressure ratio (i.e., combustor inlet pressure/intake delivery pressure). Adopting the 2-spool arrangement enables the compression system to be split in two, with a Low Pressure (LP) Compressor supercharging a High Pressure (HP) Compressor. Each compressor is mounted on a separate (co-axial) shaft, driven by its own turbine (i.e HP Turbine and LP Turbine). Otherwise a 2-spool turbojet is much like a single-spool engine.
Modern turbofans evolved from the 2-spool axial-flow turbojet engine, essentially by increasing the relative size of the Low Pressure (LP) Compressor to the point where some (if not most) of the air exiting the unit actually bypasses the core (or gas-generator) stream, which passes through the main combustor. This bypass air either expands through a separate propelling nozzle, or is mixed with the hot gases leaving the Low Pressure (LP) Turbine, before expanding through a Mixed Stream Propelling Nozzle. Owing to a lower jet velocity, a modern civil turbofan is quieter than the equivalent turbojet. Turbofans also have a better thermal efficiency, which is explained later in the article. In a turbofan, the LP Compressor is often called a fan. Civil-aviation turbofans usually have a single fan stage, whereas most military-aviation turbofans (e.g. combat and trainer aircraft applications) have multi-stage fans. It should be noted, however, that modern military transport turbofan engines are similar to those that propel civil jetliners.
Turboprop engines are gas-turbine engines that deliver almost all of their power to a shaft to drive a propeller. Turboprops remain popular on very small or slow aircraft, such as small commuter airliners, for their fuel efficiency at lower speeds, as well as on medium military transports and patrol planes, such as the C-130 Hercules and P-3 Orion, for their high takeoff performance and mission endurance benefits respectively.
If the turboprop is better at moderate flight speeds and the turbojet is better at very high speeds, it might be imagined that at some speed range in the middle a mixture of the two is best. Such an engine is the turbofan (originally termed bypass turbojet by the inventors at Rolls Royce). Another name sometimes used is ducted fan, though that term is also used for propellers and fans used in vertical-flight applications.
The difference between a turbofan and a propeller, besides direct thrust, is that the intake duct of the former slows the air before it arrives at the fan face. As both propeller and fan blades must operate at subsonic inlet velocities to be efficient, ducted fans allow efficient operation at higher vehicle speeds.
Depending on specific thrust (i.e. net thrust/intake airflow), ducted fans operate best from about 400 to 2000 km/h (250 to 1300 mph), which is why turbofans are the most common type of engine for aviation use today in airliners as well as subsonic/supersonic military fighter and trainer aircraft. It should be noted, however, that turbofans use extensive ducting to force incoming air to subsonic velocities (thus reducing shock waves throughout the engine).
The noise of any type of jet engine is strongly related to the velocity of the exhaust gases, typically being proportional to the eighth power of the jet velocity. High-bypass-ratio (i.e., low-specific-thrust) turbofans are relatively quiet compared to turbojets and low-bypass-ratio (i.e., high-specific-thrust) turbofans. A low-specific-thrust engine has a low jet velocity by definition, as the following approximate equation for net thrust implies:

    Fn ≈ m_dot * (Vjfe - Va)

where m_dot is the intake mass flow, Vjfe is the fully expanded jet velocity (in the exhaust plume) and Va is the aircraft flight velocity.
Rearranging the above equation, specific thrust is given by:

    Fn / m_dot ≈ Vjfe - Va
So for zero flight velocity, specific thrust is directly proportional to jet velocity. Relatively speaking, low-specific-thrust engines are large in diameter to accommodate the high airflow required for a given thrust.
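To make these relationships concrete, here is a small numeric sketch that evaluates the approximate net-thrust equation at zero flight velocity for two engines sized to give the same thrust, then applies the eighth-power rule to compare their jet noise; the airflow and velocity figures are illustrative assumptions.

    # Sketch: Fn ~ m_dot*(Vjfe - Va); specific thrust = Fn/m_dot;
    # relative jet noise ~ (jet velocity)**8. Numbers are illustrative only.
    def net_thrust(m_dot, v_jet, v_flight=0.0):
        return m_dot * (v_jet - v_flight)

    # (name, intake airflow kg/s, fully expanded jet velocity m/s) -- assumed
    engines = [("high specific thrust (turbojet-like)", 100.0, 600.0),
               ("low specific thrust (high bypass)",    200.0, 300.0)]

    for name, m_dot, v_jet in engines:
        fn = net_thrust(m_dot, v_jet)
        rel_noise = (v_jet / 300.0) ** 8
        print(f"{name:38s} Fn = {fn/1000:5.0f} kN, "
              f"specific thrust = {fn/m_dot:4.0f} m/s, relative noise ~ {rel_noise:4.0f}x")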
Jet aircraft are often considered loud, but a conventional piston engine or a turboprop engine delivering the same thrust would be much louder.
Early turbojet engines were very fuel-inefficient, as their overall pressure ratio and turbine inlet temperature were severely limited by the technology available at the time. The very first running turbofan was the German Daimler-Benz DB 670 (also known as 109-007), which was operated on its testbed on April 1, 1943. The engine was later abandoned as the war went on and its problems could not be solved. The British wartime Metrovick F.2 axial-flow jet was given a fan to create the first British turbofan.
Improved materials, and the introduction of twin compressors such as in the Pratt & Whitney JT3C engine, increased the overall pressure ratio and thus the thermodynamic efficiency of engines, but they also led to a poor propulsive efficiency, as pure turbojets have a high specific thrust/high velocity exhaust better suited to supersonic flight.
The original low-bypass turbofan engines were designed to improve propulsive efficiency by reducing the exhaust velocity to a value closer to that of the aircraft. The Rolls-Royce Conway, the first production turbofan, had a bypass ratio of 0.3, similar to the modern General Electric F404 fighter engine. Civilian turbofan engines of the 1960s, such as the Pratt & Whitney JT8D and the Rolls-Royce Spey had bypass ratios closer to 1, but were not dissimilar to their military equivalents.
The unusual General Electric CF700 turbofan engine was developed as an aft-fan engine with a 2.0 bypass ratio. It was derived from the General Electric J85/CJ610 turbojet (2,850 lbf or 12,650 N) that powered the T-38 Talon and the Learjet, in order to power the larger Rockwell Sabreliner 75/80 model aircraft, as well as the Dassault Falcon 20, with about a 50% increase in thrust (4,200 lbf or 18,700 N). The CF700 was the first small turbofan in the world to be certified by the Federal Aviation Administration (FAA). There are now over 400 CF700 aircraft in operation around the world, with an experience base of over 10 million service hours. The CF700 turbofan engine was also used to train Moon-bound astronauts in Project Apollo as the powerplant for the Lunar Landing Research Vehicle.
A high specific thrust/low bypass ratio turbofan normally has a multi-stage fan, developing a relatively high pressure ratio and, thus, yielding a high (mixed or cold) exhaust velocity. The core airflow needs to be large enough to give sufficient core power to drive the fan. A smaller core flow/higher bypass ratio cycle can be achieved by raising the (HP) turbine rotor inlet temperature.
Imagine a retrofit situation where a new low bypass ratio, mixed exhaust, turbofan is replacing an old turbojet, in a particular military application. Say the new engine is to have the same airflow and net thrust (i.e. same specific thrust) as the one it is replacing. A bypass flow can only be introduced if the turbine inlet temperature is allowed to increase, to compensate for a correspondingly smaller core flow. Improvements in turbine cooling/material technology would facilitate the use of a higher turbine inlet temperature, despite increases in cooling air temperature, resulting from a probable increase in overall pressure ratio.
Efficiently done, the resulting turbofan would probably operate at a higher nozzle pressure ratio than the turbojet, but with a lower exhaust temperature to retain net thrust. Since the temperature rise across the whole engine (intake to nozzle) would be lower, the (dry power) fuel flow would also be reduced, resulting in a better specific fuel consumption (SFC).
A few low-bypass ratio military turbofans (e.g. the F404) have Variable Inlet Guide Vanes, with piano-style hinges, to direct air onto the first rotor stage. This improves the fan surge margin (see compressor map) in the mid-flow range. The swing-wing F-111 achieved a very high range / payload capability by pioneering the use of an afterburning turbofan, the Pratt & Whitney TF30, and that engine was also the heart of the famous F-14 Tomcat air superiority fighter, which used the same engines in a smaller, more agile airframe to achieve efficient cruise and Mach 2 speed.
Since the 1970s, most jet fighter engines have been low/medium bypass turbofans with a mixed exhaust, afterburner and variable area final nozzle. An afterburner is a combustor located downstream of the turbine blades and directly upstream of the nozzle, which burns fuel from afterburner-specific fuel injectors. When lit, prodigious amounts of fuel are burnt in the afterburner, raising the temperature of the exhaust gases by a significant degree, resulting in a higher exhaust velocity/engine specific thrust. The variable geometry nozzle must open to a larger throat area to accommodate the extra volume flow when the afterburner is lit. Afterburning is often designed to give a significant thrust boost for takeoff, transonic acceleration and combat maneuvers, but is very fuel intensive. Consequently, afterburning can only be used for short portions of a mission. However, the Mach 3 SR-71 was designed for continuous operation and to be efficient with the afterburner lit.
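A very rough sketch of why dumping fuel into the jet pipe boosts thrust: for a choked nozzle the jet velocity scales roughly with the square root of the exhaust total temperature, so raising the temperature from an assumed ~900 K dry to ~2100 K with reheat gives roughly a 50% jump in jet velocity and hence static thrust. This ignores nozzle pressure ratio and added fuel mass flow, so treat it as an order-of-magnitude illustration only.

    # Very rough sketch: jet velocity ~ sqrt(exhaust total temperature),
    # so afterburning from ~900 K to ~2100 K boosts thrust substantially.
    # Ignores nozzle pressure ratio and added fuel mass flow (assumptions).
    import math

    t_dry = 900.0      # K, assumed turbine-exit / dry exhaust temperature
    t_reheat = 2100.0  # K, assumed afterburner exit temperature

    boost = math.sqrt(t_reheat / t_dry)
    print(f"Approximate jet-velocity (and static thrust) boost: x{boost:.2f}")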
Unlike the main combustor, where the downstream turbine blades must not be damaged by high temperatures, an afterburner can operate at the ideal maximum (stoichiometric) temperature of about 2,100 K (3,780 °R, 3,320 °F). At a fixed total applied fuel:air ratio, the total fuel flow for a given fan airflow will be the same, regardless of the dry specific thrust of the engine. However, a high specific thrust turbofan will, by definition, have a higher nozzle pressure ratio, resulting in a higher afterburning net thrust and, therefore, a lower afterburning specific fuel consumption (SFC). However, high specific thrust engines have a high dry SFC. The situation is reversed for a medium specific thrust afterburning turbofan: i.e. poor afterburning SFC/good dry SFC. The former engine is suitable for a combat aircraft which must remain in afterburning combat for a fairly long period, but only has to fight fairly close to the airfield (e.g. cross-border skirmishes). The latter engine is better for an aircraft that has to fly some distance, or loiter for a long time, before going into combat. However, the pilot can only afford to stay in afterburning for a short period before his/her fuel reserves become dangerously low.
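The link between nozzle pressure ratio and afterburning thrust can be illustrated with an ideal, fully expanded nozzle calculation. The gas properties, reheat temperature and pressure ratios below are assumptions chosen only for illustration, not data for any particular engine:

```python
from math import sqrt

# Ideal, fully expanded nozzle: exhaust velocity from total temperature Tt
# and nozzle pressure ratio NPR (illustrative only).
GAMMA, CP = 1.30, 1244.0   # assumed hot-gas properties (gamma, J/(kg*K))

def exhaust_velocity(tt_k, npr):
    return sqrt(2.0 * CP * tt_k * (1.0 - npr ** (-(GAMMA - 1.0) / GAMMA)))

TT_AFTERBURNER = 2100.0    # ~stoichiometric reheat temperature, K

for label, npr in [("high specific thrust (high NPR)", 3.5),
                   ("medium specific thrust (lower NPR)", 2.2)]:
    v = exhaust_velocity(TT_AFTERBURNER, npr)
    print(f"{label}: Vj ~ {v:.0f} m/s")

# At the same reheat temperature (and hence similar fuel:air ratio), the
# higher-NPR engine gets a higher jet velocity, i.e. more afterburning thrust
# for a given airflow and fuel flow.
```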
Modern low-bypass military turbofans include the Pratt & Whitney F119, the Eurojet EJ200 and the General Electric F110 and F414, all of which feature a mixed exhaust, afterburner and variable area propelling nozzle. Low-bypass turbofans without afterburners include the unmixed, vectored-thrust Rolls-Royce Pegasus and the Rolls-Royce/Turbomeca Adour as fitted to the BAE Hawk (the Adour is afterburning in the SEPECAT Jaguar).
The low specific thrust/high bypass ratio turbofans used in today's civil jetliners (and some military transport aircraft) evolved from the high specific thrust/low bypass ratio turbofans used in such aircraft back in the 1960s.
Low specific thrust is achieved by replacing the multi-stage fan with a single stage unit. Unlike some military engines, modern civil turbofans do not have any stationary inlet guide vanes in front of the fan rotor. The fan is scaled to achieve the desired net thrust.
The core (or gas generator) of the engine must generate sufficient core power to at least drive the fan at its design flow and pressure ratio. Through improvements in turbine cooling/material technology, a higher (HP) turbine rotor inlet temperature can be used, thus facilitating a smaller (and lighter) core and (potentially) improving the core thermal efficiency. Reducing the core mass flow tends to increase the load on the LP turbine, so this unit may require additional stages to reduce the average stage loading and to maintain LP turbine efficiency. Reducing core flow also increases bypass ratio (5:1, or more, is now common).
Further improvements in core thermal efficiency can be achieved by raising the overall pressure ratio of the core. Improved blade aerodynamics reduces the number of extra compressor stages required. With multiple compressors (i.e. LPC, IPC, HPC) dramatic increases in overall pressure ratio have become possible. Variable geometry (i.e. stators) enable high pressure ratio compressors to work surge-free at all throttle settings.
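The benefit of a higher overall pressure ratio can be sketched with the ideal Brayton-cycle relation for thermal efficiency. Real engines fall well short of these numbers because of component losses, but the trend with overall pressure ratio is the point:

```python
# Ideal (loss-free) Brayton-cycle thermal efficiency as a function of
# overall pressure ratio: eta = 1 - OPR**(-(gamma-1)/gamma).
GAMMA = 1.4

def ideal_thermal_efficiency(opr):
    return 1.0 - opr ** (-(GAMMA - 1.0) / GAMMA)

for opr in (10, 20, 30, 40, 50):
    print(f"OPR {opr:2d}: ideal thermal efficiency ~ {ideal_thermal_efficiency(opr):.2f}")
```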
The first high-bypass turbofan engine was the General Electric TF39, designed in the mid-1960s to power the Lockheed C-5 Galaxy military transport aircraft. The civil General Electric CF6 engine used a derived design. Other high-bypass turbofans are the Pratt & Whitney JT9D, the three-shaft Rolls-Royce RB211 and the CFM International CFM56. More recent large high-bypass turbofans include the Pratt & Whitney PW4000, the three-shaft Rolls-Royce Trent, the General Electric GE90/GEnx and the GP7000, produced jointly by GE and P&W.
High-bypass turbofan engines are generally quieter than the earlier low bypass ratio civil engines. This is not so much due to the higher bypass ratio, as to the use of a low pressure ratio, single stage, fan, which significantly reduces specific thrust and, thereby, jet velocity. The combination of a higher overall pressure ratio and turbine inlet temperature improves thermal efficiency. This, together with a lower specific thrust (better propulsive efficiency), leads to a lower specific fuel consumption.
For reasons of fuel economy, and also of reduced noise, almost all of today's jet airliners are powered by high-bypass turbofans. Although modern combat aircraft tend to use low bypass ratio turbofans, military transport aircraft (e.g. the C-17) mainly use high bypass ratio turbofans (or turboprops) for fuel efficiency.
Because of the implied low mean jet velocity, a high bypass ratio/low specific thrust turbofan has a high thrust lapse rate (with rising flight speed). Consequently the engine must be over-sized to give sufficient thrust during climb/cruise at high flight speeds (e.g. Mach 0.83). Because the engine is sized for the high-speed case, its static (i.e. Mach 0) thrust is relatively high. This enables heavily laden, wide-body aircraft to accelerate quickly during take-off and consequently lift off within a reasonable runway length.
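A simple momentum calculation, with invented numbers, shows why the low jet velocity of a high-bypass engine produces a steep thrust lapse:

```python
# Thrust lapse with flight speed, F = m_dot * (Vj - V0), for two engines sized
# to give the same static thrust.  Airflow is treated as constant with flight
# speed for simplicity, and all numbers are illustrative assumptions.
def thrust(m_dot, vj, v0):
    return m_dot * (vj - v0)

engines = {
    "low bypass / high specific thrust": {"m_dot": 100.0, "vj": 600.0},
    "high bypass / low specific thrust": {"m_dot": 200.0, "vj": 300.0},
}
for name, e in engines.items():
    static = thrust(e["m_dot"], e["vj"], 0.0)
    cruise = thrust(e["m_dot"], e["vj"], 250.0)   # roughly Mach 0.83 at altitude
    print(f"{name}: static {static/1000:.0f} kN, cruise {cruise/1000:.0f} kN "
          f"({100*cruise/static:.0f}% of static)")
```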
The turbofans on twin engined airliners are further over-sized to cope with losing one engine during take-off, which reduces the aircraft's net thrust by 50%. Modern twin engined airliners normally climb very steeply immediately after take-off. If one engine is lost, the climb-out is much shallower, but sufficient to clear obstacles in the flightpath.
The Soviet Union's engine technology was less advanced than the West's and its first wide-body aircraft, the Ilyushin Il-86, was powered by low-bypass engines. The Yakovlev Yak-42, a medium-range, rear-engined aircraft seating up to 120 passengers introduced in 1980 was the first Soviet aircraft to use high-bypass engines.
Turbofan engines come in a variety of engine configurations. For a given engine cycle (i.e. same airflow, bypass ratio, fan pressure ratio, overall pressure ratio and HP turbine rotor inlet temperature), the choice of turbofan configuration has little impact upon the design point performance (e.g. net thrust, SFC), as long as overall component performance is maintained. Off-design performance and stability is, however, affected by engine configuration.
As the design overall pressure ratio of an engine cycle increases, it becomes more difficult to throttle the compression system, without encountering an instability known as compressor surge. This occurs when some of the compressor aerofoils stall (like the wings of an aircraft) causing a violent change in the direction of the airflow. However, compressor stall can be avoided, at throttled conditions, by progressively:
1) opening interstage/intercompressor blow-off valves (inefficient)
2) closing variable stators within the compressor
Most modern American civil turbofans employ a relatively high pressure ratio High Pressure (HP) Compressor, with many rows of variable stators to control surge margin at part-throttle. In the three-spool RB211/Trent the core compression system is split into two, with the IP compressor, which supercharges the HP compressor, being on a different coaxial shaft and driven by a separate (IP) turbine. As the HP Compressor has a modest pressure ratio it can be throttled-back surge-free, without employing variable geometry. However, because a shallow IP compressor working line is inevitable, the IPC requires at least one stage of variable geometry.
Although far from common, the Single Shaft Turbofan is probably the simplest configuration, comprising a fan and high pressure compressor driven by a single turbine unit, all on the same shaft. The SNECMA M53, which powers Mirage fighter aircraft, is an example of a Single Shaft Turbofan. Despite the simplicity of the turbomachinery configuration, the M53 requires a variable area mixer to facilitate part-throttle operation.
One of the earliest turbofans was a derivative of the General Electric J79 turbojet, known as the CJ805-23, which featured an integrated aft fan/low pressure (LP) turbine unit located in the turbojet exhaust jetpipe. Hot gas from the turbojet turbine exhaust expanded through the LP turbine, the fan blades being a radial extension of the turbine blades. This Aft Fan configuration was later exploited in the General Electric GE-36 UDF (propfan) demonstrator of the 1980s. One of the problems with the Aft Fan configuration is hot gas leakage from the LP turbine to the fan.
Many turbofans have the Basic Two Spool configuration where both the fan and LP turbine (i.e. LP spool) are mounted on a second (LP) shaft, running concentrically with the HP spool (i.e. HP compressor driven by HP turbine). The BR710 is typical of this configuration. At the smaller thrust sizes, instead of all-axial blading, the HP compressor configuration may be axial-centrifugal (e.g. General Electric CFE738), double-centrifugal or even diagonal/centrifugal (e.g. Pratt & Whitney Canada PW600).
Higher overall pressure ratios can be achieved either by raising the HP compressor pressure ratio or by adding an Intermediate Pressure (IP) compressor between the fan and the HP compressor to supercharge (boost) the latter unit, helping to raise the overall pressure ratio of the engine cycle to the very high levels employed today (typically greater than 40:1). All of the large American turbofans (e.g. General Electric CF6, GE90 and GEnx plus Pratt & Whitney JT9D and PW4000) feature an IP compressor mounted on the LP shaft and driven, like the fan, by the LP turbine, the mechanical speed of which is dictated by the tip speed and diameter of the fan. The high bypass ratios (i.e. fan duct flow/core flow) used in modern civil turbofans tend to reduce the relative diameter of the attached IP compressor, causing its mean tip speed to decrease. Consequently more IPC stages are required to develop the necessary IPC pressure rise.
Rolls-Royce chose a Three Spool configuration for their large civil turbofans (i.e. the RB211 and Trent families), where the Intermediate Pressure (IP) compressor is mounted on a separate (IP) shaft, running concentrically with the LP and HP shafts, and is driven by a separate IP turbine. Consequently, the IP compressor can rotate faster than the fan, increasing its mean tip speed and thereby reducing the number of IP stages required for a given IPC pressure rise. Because the RB211/Trent designs have a higher IPC pressure rise than the American engines, the HPC pressure rise is lower, resulting in a shorter, lighter engine. However, three-spool engines are harder to both build and maintain.
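A rough way to see why a faster-spinning IP compressor needs fewer stages is the stage-loading argument: each axial stage delivers an enthalpy rise of roughly ψU², so halving the blade speed roughly quadruples the stage count for the same temperature rise. The sketch below uses an assumed loading coefficient and invented diameters, speeds and temperature rise:

```python
from math import ceil, pi

# Very simplified compressor staging estimate: each axial stage delivers an
# enthalpy rise of roughly psi * U**2, where U is mean blade speed and psi is
# an assumed stage loading coefficient.  Illustrative numbers only.
PSI = 0.35          # assumed stage loading coefficient
CP_AIR = 1005.0     # J/(kg*K)

def stages_needed(delta_t_k, mean_diameter_m, shaft_rpm):
    u = pi * mean_diameter_m * shaft_rpm / 60.0   # mean blade speed, m/s
    per_stage = PSI * u * u                       # enthalpy rise per stage, J/kg
    return ceil(CP_AIR * delta_t_k / per_stage), u

# Same IP-compressor temperature rise, driven either by the (slow) LP shaft or
# by a separate, faster IP shaft, as in a three-spool layout:
for label, rpm in [("IPC on LP shaft", 3500.0), ("IPC on its own IP shaft", 7000.0)]:
    n, u = stages_needed(delta_t_k=120.0, mean_diameter_m=0.9, shaft_rpm=rpm)
    print(f"{label}: mean blade speed ~ {u:.0f} m/s, ~ {n} stages")
```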
As bypass ratio increases, the mean radius ratio of the fan and LP turbine increases. Consequently, if the fan is to rotate at its optimum blade speed the LP turbine blading will spin slowly, so additional LPT stages will be required, to extract sufficient energy to drive the fan. Introducing a (planetary) reduction gearbox, with a suitable gear ratio, between the LP shaft and the fan, enables both the fan and LP turbine to operate at their optimum speeds. Typical of this configuration are the long-established Honeywell TFE731, the Honeywell ALF 502/507, and the recent Pratt & Whitney PW1000G.
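A small sketch of the geared-fan idea follows; the gear ratio, fan diameter and shaft speed are hypothetical values chosen only to show how the gearbox decouples fan tip speed from LP turbine speed, not figures for any actual engine:

```python
from math import pi

# Geared-fan sketch: a reduction gearbox lets the LP turbine spin fast while
# keeping the fan tip speed down.  All values are hypothetical.
def tip_speed(diameter_m, rpm):
    return pi * diameter_m * rpm / 60.0

lp_shaft_rpm = 9000.0   # LP turbine / input shaft speed (assumed)
gear_ratio = 3.0        # reduction ratio, turbine : fan (assumed)
fan_diameter = 2.0      # m (assumed)

fan_rpm = lp_shaft_rpm / gear_ratio
print(f"fan speed {fan_rpm:.0f} rpm, fan tip speed {tip_speed(fan_diameter, fan_rpm):.0f} m/s")
print(f"(direct drive would give {tip_speed(fan_diameter, lp_shaft_rpm):.0f} m/s at the fan tip)")
```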
Most of the configurations discussed above are used in civil turbofans, while modern military turbofans (e.g. SNECMA M88) are usually Basic Two Spool.
Most civil turbofans use a high efficiency, 2-stage HP turbine to drive the HP compressor. The CFM56 uses an alternative approach: a single stage, high-work unit. While this approach is probably less efficient, there are savings on cooling air, weight and cost. In the RB211 and Trent series, Rolls-Royce split the two stages into two discrete units; one on the HP shaft driving the HP compressor; the other on the IP shaft driving the IP (Intermediate Pressure) Compressor. Modern military turbofans tend to use single stage HP turbines.
Modern civil turbofans have multi-stage LP turbines (e.g. 3, 4, 5, 6, 7). The number of stages required depends on the engine cycle bypass ratio and how much supercharging (i.e. IP compression) is on the LP shaft, behind the fan. A geared fan may reduce the number of required LPT stages in some applications. Because of the much lower bypass ratios employed, military turbofans only require one or two LP turbine stages.
Consider a mixed turbofan with a fixed bypass ratio and airflow. Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow there is an increase in (HP) turbine rotor inlet temperature. Although the higher temperature rise across the compression system implies a larger temperature drop over the turbine system, the mixed nozzle temperature is unaffected, because the same amount of heat is being added to the system. There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio, causing an increase in the hot mixer entry pressure. Consequently, net thrust increases, whilst specific fuel consumption (fuel flow/net thrust) decreases. A similar trend occurs with unmixed turbofans.
So turbofans can be made more fuel efficient by raising overall pressure ratio and turbine rotor inlet temperature in unison. However, better turbine materials and/or improved vane/blade cooling are required to cope with increases in both turbine rotor inlet temperature and compressor delivery temperature. Increasing the latter may require better compressor materials.
Overall pressure ratio can be increased by raising the fan (or LP compressor) pressure ratio and/or the HP compressor pressure ratio. If the latter is held constant, the increase in (HP) compressor delivery temperature (from raising overall pressure ratio) implies an increase in HP mechanical speed. However, stressing considerations might limit this parameter, implying, despite an increase in overall pressure ratio, a reduction in HP compressor pressure ratio.
According to simple theory, if the ratio of turbine rotor inlet temperature to (HP) compressor delivery temperature is maintained, the HP turbine throat area can be retained. However, this assumes that cycle improvements are obtained while retaining the datum (HP) compressor exit flow function (non-dimensional flow). In practice, changes to the non-dimensional speed of the (HP) compressor and to cooling bleed extraction would probably make this assumption invalid, making some adjustment to the HP turbine throat area unavoidable. This means the HP turbine nozzle guide vanes would have to be different from the original. In all probability, the downstream LP turbine nozzle guide vanes would have to be changed anyway.
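The flow-function argument can be made explicit. For a choked HP turbine nozzle, W√Tt/(A·Pt) is essentially fixed, so any cycle change that alters the flow, temperature or pressure at the throat forces a change in throat area. The percentage changes below are arbitrary illustrations:

```python
from math import sqrt

# The HP turbine nozzle guide vanes run choked, so the non-dimensional flow
# W*sqrt(Tt)/(A*Pt) at the throat is essentially constant.  This sketch asks
# how much the throat area A must change if the cycle alters the flow,
# temperature and pressure there (all ratios are illustrative).
def required_area_ratio(w_ratio, tt_ratio, pt_ratio):
    """A_new/A_old needed to keep W*sqrt(Tt)/(A*Pt) unchanged."""
    return w_ratio * sqrt(tt_ratio) / pt_ratio

# Example: 2% less core flow, 5% hotter, 10% higher pressure at the throat.
ratio = required_area_ratio(w_ratio=0.98, tt_ratio=1.05, pt_ratio=1.10)
print(f"HP turbine throat area must change by {100*(ratio-1):+.1f}%")
```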
Thrust growth is obtained by increasing core power. There are two basic routes available:
a) hot route: increase HP turbine rotor inlet temperature
b) cold route: increase core mass flow
Both routes require an increase in the combustor fuel flow and, therefore, the heat energy added to the core stream.
The hot route may require changes in turbine blade/vane materials and/or better blade/vane cooling. The cold route can be achieved by modifying the compression system, for example by adding ("zero") stages to the LP/IP compression or to the HP compressor, or by improving the compression process without adding stages; all of these increase both overall pressure ratio and core airflow.
Alternatively, the core size can be increased, to raise core airflow, without changing overall pressure ratio. This route is expensive, since a new (upflowed) turbine system (and possibly a larger IP compressor) is also required.
Changes must also be made to the fan to absorb the extra core power. On a civil engine, jet noise considerations mean that any significant increase in take-off thrust must be accompanied by a corresponding increase in fan mass flow (to maintain a take-off specific thrust of about 30 lbf per lb/s of airflow), usually by increasing fan diameter. On military engines, the fan pressure ratio would probably be increased to improve specific thrust, jet noise not normally being an important factor.
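Taking the quoted take-off specific thrust of roughly 30 lbf per lb/s at face value gives a quick way to estimate the fan airflow needed for a given thrust (and hence, indirectly, the fan size); the thrust values below are arbitrary examples:

```python
# Back-of-the-envelope fan airflow sizing from the ~30 lbf per lb/s take-off
# specific thrust quoted in the text.  Purely illustrative.
SPECIFIC_THRUST_TO = 30.0   # lbf per (lb/s) of fan airflow

def required_airflow_lb_s(takeoff_thrust_lbf):
    return takeoff_thrust_lbf / SPECIFIC_THRUST_TO

for thrust in (30000.0, 60000.0, 90000.0):   # lbf, example values
    flow = required_airflow_lb_s(thrust)
    print(f"{thrust:>7.0f} lbf take-off thrust -> ~{flow:>5.0f} lb/s "
          f"({flow*0.4536:.0f} kg/s) of fan airflow")
```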
The turbine blades in a turbofan engine are subject to high heat and stress, and require special fabrication. New construction methods and advances in material science have allowed blades, which were originally polycrystalline (regular metal), to be made from directionally solidified castings with aligned metallic crystals and, more recently, from mono-crystalline (i.e. single crystal) material, which can operate at higher temperatures with less distortion.
Nickel-based superalloys are used for HP turbine blades in almost all of the modern jet engines. The temperature capabilities of turbine blades have increased mainly through four approaches: the manufacturing (casting) process, cooling path design, thermal barrier coating (TBC), and alloy development.
Although turbine blade (and vane) materials have improved over the years, much of the increase in (HP) turbine inlet temperatures is due to improvements in blade/vane cooling technology. Relatively cool air is bled from the compression system, bypassing the combustion process, and enters the hollow blade or vane. After picking up heat from the blade/vane, the cooling air is dumped into the main gas stream. If the local gas temperatures are low enough, downstream blades/vanes are uncooled and solid.
Strictly speaking, cycle-wise the HP Turbine Rotor Inlet Temperature (after the temperature drop across the HPT stator) is more important than the (HP) turbine inlet temperature. Although some modern military and civil engines have peak RITs of the order of 3300 °R (2840 °F) or 1833 K (1560 °C), such temperatures are only experienced for a short time (during take-off) on civil engines.
The turbofan engine market is dominated by General Electric, Rolls-Royce plc and Pratt & Whitney, in order of market share. GE and SNECMA of France have a joint venture, CFM International, which, as the third largest manufacturer in terms of market share, fits between Rolls-Royce and Pratt & Whitney. Rolls-Royce and Pratt & Whitney also have a joint venture, International Aero Engines, specializing in engines for the Airbus A320 family, whilst Pratt & Whitney and General Electric have a further joint venture, Engine Alliance, marketing a range of engines for aircraft such as the Airbus A380. Williams International is the world leader in smaller business jet turbofans.
GE Aviation, part of the General Electric conglomerate, currently has the largest share of the turbofan engine market. Its engine models include the CF6 (available on the Boeing 767, Boeing 747, Airbus A330 and more), the GE90 (Boeing 777 only) and the GEnx (developed for the Boeing 747-8 and Boeing 787 and proposed for the Airbus A350, currently in development). On the military side, GE engines power many U.S. military aircraft, including the F110, which powers 80% of the US Air Force's F-16 Fighting Falcons, and the F404 and F414, which power the Navy's F/A-18 Hornet and Super Hornet. Rolls-Royce and General Electric are jointly developing the F136 engine to power the Joint Strike Fighter.
CFM International is a joint venture between GE Aircraft Engines and SNECMA of France. They have created the very successful CFM56 series, used on Boeing 737, Airbus A340, and Airbus A320 family aircraft.
Rolls-Royce plc is the second largest manufacturer of turbofans and is most noted for its RB211 and Trent series, as well as its joint venture engines for the Airbus A320 and Boeing MD-90 families (the IAE V2500 with Pratt & Whitney and others), the Panavia Tornado (Turbo-Union RB199) and the Boeing 717 (BR715 of the BR700 family). Rolls-Royce, as owner of the Allison Engine Company, also has engines powering the C-130 Hercules and several Embraer regional jets. Rolls-Royce Trent 970s were the first engines to power the new Airbus A380. The Rolls-Royce/SNECMA Olympus engines that powered the now-retired Concorde were turbojets rather than turbofans. The famous thrust-vectoring Pegasus engine is the primary powerplant of the Harrier "Jump Jet" and its derivatives.
Pratt & Whitney is third behind GE and Rolls-Royce in market share. The JT9D has the distinction of being chosen by Boeing to power the original Boeing 747 "Jumbo jet". The PW4000 series is the successor to the JT9D, and powers some Airbus A310, Airbus A300, Boeing 747, Boeing 767, Boeing 777, Airbus A330 and MD-11 aircraft. The PW4000 is certified for 180-minute ETOPS when used in twinjets. The first family has a 94-inch (2.4 m) fan diameter and is designed to power the Boeing 767, Boeing 747, MD-11, and the Airbus A300. The second family is the 100-inch (2.5 m) fan engine developed specifically for the Airbus A330 twinjet, and the third family has a 112-inch (2.8 m) fan designed to power the Boeing 777. The Pratt & Whitney F119 and its derivative, the F135, power the United States Air Force's F-22 Raptor and the international F-35 Lightning II, respectively. Rolls-Royce is responsible for the lift fan which will provide the F-35B variants with a STOVL capability. The F100 engine was first used on the F-15 Eagle and F-16 Fighting Falcon. Newer Eagles and Falcons also come with the GE F110 as an option, and the two are in competition.
Aviadvigatel (Russian: Авиационный Двигатель) is the Russian aircraft engine company that succeeded the Soviet Soloviev Design Bureau. It has one engine on the market, the Aviadvigatel PS-90, which is used on the Ilyushin Il-96 (-300, -400 and -400T), the Tupolev Tu-204 and Tu-214 series, and the Ilyushin Il-76MD-90. The company later changed its name to Perm Engine Company.
Ivchenko-Progress is the Ukrainian aircraft engine company that succeeded the Soviet Ivchenko Design Bureau. Its engine models include the Progress D-436, available on the Antonov An-72/74, Yakovlev Yak-42, Beriev Be-200, Antonov An-148 and Tupolev Tu-334, and the Progress D-18T, which powers two of the world's largest airplanes, the Antonov An-124 and Antonov An-225.
In the 1970s Rolls-Royce/SNECMA tested a M45SD-02 turbofan fitted with variable pitch fan blades to improve handling at ultra low fan pressure ratios and to provide thrust reverse down to zero aircraft speed. The engine was aimed at ultra quiet STOL aircraft operating from city centre airports.
In a bid for increased efficiency with speed, a development of the turbofan and turboprop known as the propfan engine was created with an unducted fan. The fan blades are situated outside the duct, so that it appears like a turboprop with wide scimitar-like blades. Both General Electric and Pratt & Whitney/Allison demonstrated propfan engines in the 1980s. Excessive cabin noise and relatively cheap jet fuel prevented the engines from being put into service.
The Unicode standard includes a turbofan character, U+274B, in the Dingbats range. Its official name is "HEAVY EIGHT TEARDROP-SPOKED PROPELLER ASTERISK", with the informative alias "turbofan". | http://www.thefullwiki.org/Turbofan | 13
15 | Ratio And Proportion Activities DOC
Proportion Activity (One Foot Tale) Class: Algebra I. Summary: In this lesson, students apply proportions to examine real life questions. ... Write a ratio using your height in inches: 12” : _____ (actual height)
A ratio is a comparison of two quantities that tells the scale between them. Ratios can be expressed as quotients, fractions, decimals, percents, or in the form a:b. Here are some examples: The ratio of girls to boys on the swim team is 2:3, or 2/3.
Number / Rate Ratio Proportion / Video Interactive / Print Activity. Name: _____ (adding or subtracting, multiplying or dividing). Title: Rate Ratio Proportion PRINT ACTIVITY (Alberta Learning).
This can be done playing concentration with the words and definitions or similar activities. (D) Ratio, Proportion, Scale: What is the connection? Using the FWL (Fifty Words or Less) strategy (attachment 8) the students will explain the connection between ratio, proportion, and scale.
Activities (P/E ratio) Definition. The P/E ratio of a stock is the price of a company's stock divided by the company's earnings per share. ... Tutorial (RATIO AND PROPORTION).
Group activities. Question & Answer during the session. Learner engagement during session. Worksheet Linked Functional Skills: ... Ratio, Proportion and Scale (Sample Lesson Plan) Functional Skills: Mathematics Level 2. Scale Card 2.
Ratio and Proportion Chapter 10 in the Impact Text. Home Activities.
Ratio (Proportion), Percent (Share), and Rate. 1) Ratio and Proportion. ... When you examine “GDP Composition by Economic Activities in Major Countries (2003)” on page 18, how could you describe the main characteristics of US GDP composition? A:
To know ratio and proportion. References to the Framework: Year 5 - Numbers and the number system, Ratio and Proportion, p26 – To solve simple problems involving ratio and proportion. Activities: Organisation. Whole class for mental starter and teacher input;
Unit 10 Ratio, proportion, ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Find pairs of numbers with a sum of 100; ...
Ratio for one serving (i.e. if the recipe uses 1 cup of sugar, and the recipe serves 8, the ratio for one serving equals 1/8 c. sugar). Proportion used to increase the recipe to 30 servings: 1/8 = x/30. Show the work to solve the proportion.
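The recipe proportion in the snippet above works out as follows; this short script simply solves 1/8 = x/30 using exact fractions:

```python
from fractions import Fraction

# Scaling the recipe ratio above: 1 cup of sugar serves 8, so the per-serving
# ratio is 1/8 cup.  Solving 1/8 = x/30 gives the amount for 30 servings.
per_serving = Fraction(1, 8)   # cups of sugar per serving
servings = 30
x = per_serving * servings
print(f"Sugar for {servings} servings: {x} cups ({float(x)} cups)")   # 15/4 = 3.75
```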
Ratios and Proportion Online activities 3.25. ... a subset : set ratio of 4:9 can be expressed equivalently as 4/9 = 0.444… ≈ 44.4%. Balance the blobs 5.0 understand ratio as both set : set comparison (for example, number of boys : ...
RATIO/PROPORTION WORD PROBLEMS. A ratio/rate is the comparison of two quantities. Examples: The ratio of 12 miles to 18 miles = 12 mi / 18 mi = 2 mi / 3 mi. The rate of $22.50 for 3 hours = $22.50 / 3 hrs. NOTE: 1 ...
If we're interested in the proportion of all M&Ms© that are orange, what are the parameter, p, and the statistic, p̂? What is your value of p̂? p = proportion of all M&Ms that are orange; p̂ = proportion of M&Ms in my sample of size n that are orange = x/n.
1. Write the proportion: 8/3 = 192/n. 2. Write the cross products: 8 * n = 192 * 3. 3. Multiply: 8n = 576. 4. Undo ... the male to female ratio is 6:6. If there are 160 players in the league, how many are female? 22.
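The cross-product steps listed above can be wrapped in a few lines of Python; the second call applies the same idea to the players question (a 6:6 ratio means 6 females per 12 players):

```python
from fractions import Fraction

# Solving a proportion a/b = c/n for n by cross products, as in the steps
# above: a*n = b*c, so n = b*c/a.
def solve_proportion(a, b, c):
    """Return n such that a/b = c/n."""
    return Fraction(b * c, a)

print(solve_proportion(8, 3, 192))     # 72, from 8/3 = 192/n
print(solve_proportion(12, 6, 160))    # 80 females in a league of 160 with a 6:6 ratio
```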
RATE/RATIO/PROPORTION /% UNIT. Day 1: Monday. Objective: Students will be able to find the unit rate in various situations. Warm-Up: (Review from last week) Put on board or overhead. Express each decimal as a fraction in simplest form: 1) .60 2) 1.25 3) .35.
Students complete Independent Practice activities that serve as a summative assessment, since instructional feedback is not provided at this level. ... Ratio and Proportion Level 1- Level 6 (Individually) http://www.thinkingblocks.com/TB_Ratio/tb_ratio1.html .
Fractions, decimals and percentages, ratio and proportion. Year 6 Autumn term Unit ... 6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions
Understand the ratio concept and use ratio reasoning to solve related problems. ... John A. Van de Walle’s Developing Concepts of Ratio and Proportion gives the instructor activities for the development of proportional reasoning in the student.
The activities which follow include a good introduction to graphing as well as a great application of ratio and proportion. Directions: What’s the Story, What’s the Math .
Proportion. Ratio. Similarity. Generic. Activities: Visit Little Studio Lincoln Room. View Standing Lincoln Exhibit. Examine Resin cast of Volk’s life mask of Lincoln. ... Additional Activities: Visit Atrium Gallery. View Bust Cast from Standing Lincoln Statue in 1910.
Activities: Divide the class into small groups. Have the students create Fibonacci Rectangles and the Shell Spirals in GeoGebra. ... which is the Golden ratio, which has many applications in the human body, architecture, and nature.
Activities: The following exercises meet the Gateway Standards for Algebra I – 3.0 (Patterns, Functions and Algebraic Thinking) ... The exterior of the Parthenon likewise fits into the golden proportion so that the ratio of the entablature ...
Concept Development Activities. 7-1 Ratio and proportion: Restless rectangles. This activity requires students to compare and contrast rectangles of the same shape but different sizes and to examine their lengths and widths in order to discover the properties of similar rectangles.
Definitions of ratio, proportion, extremes, means and cross products Write and simplify ratios The difference between a ratio and a proportion ... Learning Activities ...
Once completed, students should calculate the ratio of the length to the width by dividing. In both cases, students should calculate a ratio of L :W approximately equal to 1.6 if rounded to the nearest tenth.
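A short script makes the connection between the rectangle and Fibonacci activities above and the golden ratio explicit, by printing the ratios of successive Fibonacci numbers, which approach φ ≈ 1.618 (close to the 1.6 the students measure):

```python
# Ratios of successive Fibonacci numbers converge to the golden ratio phi.
def fibonacci_ratios(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
        yield b / a

for i, r in enumerate(fibonacci_ratios(10), start=2):
    print(f"F({i+1})/F({i}) = {r:.4f}")
# ...converging toward (1 + 5 ** 0.5) / 2 ~ 1.6180
```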
Ratio, Proportion, Data Handling and Problem Solving. Five Daily Lessons. Unit ... Year Group 5/6 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities / Focus Questions Find pairs of numbers ...
Apply the concept of ratio, proportion, and similarity* in problem-solving situations*. 4.5.a. ... Planning Daily Lesson Activities that Incorporate the MYP: (AOIs, LP, rigor, holistic learning, communication, internationalism, etc)
Activities 1) Use guided practice to refresh the concept of ratio and proportion, and the process for solving proportions for a missing value. 2) ...
... Gulliver's Travels (Swift, grades 7-12): ratio, proportion, measurement; webpage. Holes (Sachar, grades 6-8): ratio, proportion, data collection, percent. ... Activities, Stories, Puzzles, and Games (Gonzales, Mitchell & Stone): mathematical activities, puzzles, stories & games from history. Moja ...
Apply knowledge of ratio and proportion to solve relationships between similar geometric figures. ... Digital Cameras Activities– Heather Sparks. Literature: If you Hopped Like a Frog and Lesson. Materials: Jim and the Bean Stalk Book.
They calculate the ratio of circumference to diameter for each object in an attempt ... The interactive Paper Pool game provides an opportunity for students to develop their understanding of ratio, proportion, ... The three activities in this investigation center on situations involving rational ...
Lesson Title: Proportion Activities (One Foot Tale, ... Application of Ratio and Proportion. Vocabulary Focus: Proportion. Materials: A literature book such as Gulliver's Travels or If You Hopped Like a Frog, catalogues, measuring tools. Assess
A ratio is a comparison of two numbers. How much sugar do you put in your favorite cookie recipe? How much flour? ... What is the ratio of browns to rainbows? Proportion. A proportion is two equal ratios. Look at the first example on page 1 again.
Algebra: Ratio & proportion, formulas (Statistics & Prob.: ... questions, and student activities associated with the delivery of the lesson. Nothing should be left to the imagination. Other teachers should be able to reproduce this exact lesson using this lesson plan.
Math Forum: Clearinghouse of ratio and proportion activities for 6th grade. http://mathforum.org/mathtools/cell/m6,8.9,ALL,ALL/ Middle School Portal: Here you will find games, problems, ...
What is a ratio and how do you use it to solve problems? ... Read and write a proportion. Determining how to solve proportions by cross multiplying. ... Activities. Day 1. Jumping Jacks: the test of endurance:
• Is the approach to ratio, proportion and percentages compatible with work in mathematics? ... Athletic activities use measurement of height, distance and time, and data-logging devices to quantify, explore, and improve performance.
This material can also be used in everyday problem solving that stems from activities such as baking. Goals and Standards. ... I also expect students to be familiar with the word ratio, ... Write the proportion you find from number 1 in 4 different ways. (Use properties 2-4)
(Write and solve a proportion using the scale as one ratio.) ... CDGOALS\BK 6-8\Chp3\AA\Activities\Making a Scale Drawing (n = 3½ x 4; n = 14 ft). 2 cm / 25 km = n cm / 80 km. Title: GRADE SIX-CONTENT STANDARD #4.
Unit 5 Fractions, decimals, percentages, ratio and proportion Term: Summer Year Group: 4/5 Oral and Mental Main Teaching Plenary Objectives and Vocabulary Teaching Activities Objectives and Vocabulary Teaching Activities Teaching Activities/ Focus Questions Y4 ...
... Algebra, 5701, Trade and Industrial, Measurement, Circle, Area, Estimation, Ratio, Proportion, Scale. June, 2005 Career and Technical Education Sample Lesson Plan Format. Title: Constructing a Holiday Wreath. ... Activities: Sell stock. Purchase supplies. Identify the audience in writing ...
ACTIVITIES: ICT: RESOURCES: MATHSWATCH: Clip 101 Estimating Answers. Clip 160 Upper & Lower Bounds Difficult Questions. B-A. ... Solve simple ratio and proportion problems such as finding the ratio of teachers to students in a school.
Ratio Method of Comparison Significance ... cash plus cash equivalents plus cash flow from operating activities Average Collection Period. ... Total Assets Shows proportion of all assets that are financed with debt Long Term Debt to Total Capitalization Long Term Debt.
Ratio, proportion, fraction, equivalent, lowest terms, simplify, percentage, ... ACTIVITIES ICT RESOURCES How to get more pupils from L3 to L5 in mathematics part 2: Learning from misconceptions: Fractions and Decimals Resource sheet A5.
Formulate how and when a ratio is used; write appropriate labels; apply knowledge of ratios to the project (i.e. Holocaust Ratio Project); identify basic rates; differentiate between rates and ratios. ... Proportion. Differentiated Learning Activities.
KS3 Framework reference Targeted activities for the introduction or plenary part of lesson Activity Ref: Simplify or transform linear expressions by collecting like terms; multiply a single term over a bracket. ... Ratio & Proportion Date: 2 HRS
Ratio and proportion; e. Scale factor; f. Dilations; g. Real-life examples of similarity and congruency; h. Angle measures; j. ... Activities exploring similarity and congruence in three-dimensional figures and analyze the relationship of the area, ...
Ratio and proportion. Topic/Sub-topic: Proportions of the human body. Foundational objective(s): ... contribute positively in group learning activities, and treat with respect themselves, others, and the learning materials used (PSVS)
UEN Lesson “Ratio, Rate, and Proportion”, Activities 1 and 2, from http://mypages.iit.edu/~smart/dvorber/lesson3.htm. Sample Formative Assessment Tasks: Skill-based task: Identify (given examples) the difference between a ratio and a rate. Problem Task. | http://freepdfdb.com/doc/ratio-and-proportion-activities | 13
21 |
The immune system protects the body against infection and disease. It is a complex and integrated system of cells, tissues, and organs that have specialized roles in defending against foreign substances and pathogenic microorganisms, including bacteria, viruses, and fungi. The immune system also functions to guard against the development of cancer. For these actions, the immune system must recognize foreign invaders as well as abnormal cells and distinguish them from self (1). However, the immune system is a double-edged sword in that host tissues can be damaged in the process of combating and destroying invading pathogens. A key component of the immediate immune response is inflammation, which can cause damage to host tissues, although the damage is usually not significant (2). Inflammation is discussed in a separate article; this article focuses on nutrition and immunity.
Cells of the immune system originate in the bone marrow and circulate to peripheral tissues through the blood and lymph. Organs of the immune system include the thymus, spleen, and lymph nodes (3). T-lymphocytes develop in the thymus, which is located in the chest directly above the heart. The spleen, which is located in the upper abdomen, functions to coordinate secretion of antibodies into the blood and also removes old and damaged red blood cells from the circulation (4). Lymph nodes serve as local sentinel stations in tissues throughout the body, trapping antigens and infectious agents and promoting organized immune cell activation.
The immune system is broadly divided into two major components: innate immunity and adaptive immunity. Innate immunity involves immediate, nonspecific responses to foreign invaders, while adaptive immunity requires more time to develop its complex, specific responses (1).
Innate immunity is the first line of defense against foreign substances and pathogenic microorganisms. It is an immediate, nonspecific defense that does not involve immunologic memory of pathogens. Because of the lack of specificity, the actions of the innate immune system can result in damage to the body’s tissues (5). A lack of immunologic memory means that the same response is mounted regardless of how often a specific antigen is encountered (6).
The innate immune system is comprised of various anatomical barriers to infection, including physical barriers (e.g., the skin), chemical barriers (e.g., acidity of stomach secretions), and biological barriers (e.g., normal microflora of the gastrointestinal tract) (1). In addition to anatomical barriers, the innate immune system is comprised of soluble factors and phagocytic cells that form the first line of defense against pathogens. Soluble factors include the complement system, acute phase reactant proteins, and messenger proteins called cytokines (6). The complement system, a biochemical network of more than 30 proteins in plasma and on cellular surfaces, is a key component of innate immunity. The complement system elicits responses that kill invading pathogens by direct lysis (cell rupture) or by promoting phagocytosis. Complement proteins also regulate inflammatory responses, which are an important part of innate immunity (7-9). Acute phase reactant proteins are a class of plasma proteins that are important in inflammation. Cytokines secreted by immune cells in the early stages of inflammation stimulate the synthesis of acute phase reactant proteins in the liver (10). Cytokines are chemical messengers that have important roles in regulating the immune response; some cytokines directly fight pathogens. For example, interferons have antiviral activity (6). These soluble factors are important in recruiting phagocytic cells to local areas of infection. Monocytes, macrophages, and neutrophils are key immune cells that engulf and digest invading microorganisms in the process called phagocytosis. These cells express pattern recognition receptors that identify pathogen-associated molecular patterns (PAMPs) that are unique to pathogenic microorganisms but conserved across several families of pathogens (see figure). For more information about the innate immune response, see the article on Inflammation.
Adaptive immunity (also called acquired immunity), a second line of defense against pathogens, takes several days or weeks to fully develop. However, adaptive immunity is much more complex than innate immunity because it involves antigen-specific responses and immunologic “memory.” Exposure to a specific antigen on an invading pathogen stimulates production of immune cells that target the pathogen for destruction (1). Immunologic “memory” means that immune responses upon a second exposure to the same pathogen are faster and stronger because antigens are “remembered.” Primary mediators of the adaptive immune response are B lymphocytes (B cells) and T lymphocytes (T cells). B cells produce antibodies, which are specialized proteins that recognize and bind to foreign proteins or pathogens in order to neutralize them or mark them for destruction by macrophages. The response mediated by antibodies is called humoral immunity. In contrast, cell-mediated immunity is carried out by T cells, lymphocytes that develop in the thymus. Different subgroups of T cells have different roles in adaptive immunity. For instance, cytotoxic T cells (killer T cells) directly attack and kill infected cells, while helper T cells enhance the responses and thus aid in the function of other lymphocytes (5, 6). Regulatory T cells, sometimes called suppressor T cells, suppress immune responses (12). In addition to its vital role in innate immunity, the complement system modulates adaptive immune responses and is one example of the interplay between the innate and adaptive immune systems (7, 13). Components of both innate and adaptive immunity interact and work together to protect the body from infection and disease.
Nutritional status can modulate the actions of the immune system; therefore, the sciences of nutrition and immunology are tightly linked. In fact, malnutrition is the most common cause of immunodeficiency in the world (14), and chronic malnutrition is a major risk factor for global morbidity and mortality (15). More than 800 million people are estimated to be undernourished, most in the developing world (16), but undernutrition is also a problem in industrialized nations, especially in hospitalized individuals and the elderly (17). Poor overall nutrition can lead to inadequate intake of energy and macronutrients, as well as deficiencies in certain micronutrients that are required for proper immune function. Such nutrient deficiencies can result in immunosuppression and dysregulation of immune responses. In particular, deficiencies in certain nutrients can impair phagocytic function in innate immunity and adversely affect several aspects of adaptive immunity, including cytokine production as well as antibody- and cell-mediated immunities (18, 19). Overnutrition, a form of malnutrition where nutrients, specifically macronutrients, are provided in excess of dietary requirements, also negatively impacts immune system functions (see Overnutrition and Obesity below).
Impaired immune responses induced by malnutrition can increase one’s susceptibility to infection and illness. Infection and illness can, in turn, exacerbate states of malnutrition, for example, by reducing nutrient intake through diminished appetite, impairing nutrient absorption, increasing nutrient losses, or altering the body’s metabolism such that nutrient requirements are increased (19). Thus, states of malnutrition and infection can aggravate each other and lead to a vicious cycle (14).
Protein-energy malnutrition (PEM; also sometimes called protein-calorie malnutrition) is a common nutritional problem that principally affects young children and the elderly (20). Clinical conditions of severe PEM are termed marasmus, kwashiorkor, or a hybrid of these two syndromes. Marasmus is a wasting disorder that is characterized by depletion of fat stores and muscle wasting. It results from a deficiency in both protein and calories (i.e., all nutrients). Individuals afflicted with marasmus appear emaciated and are grossly underweight and do not present with edema (21). In contrast, a hallmark of kwashiorkor is the presence of edema. Kwashiorkor is primarily caused by a deficiency in dietary protein, while overall caloric intake may be normal (21, 22). Both forms are more common in developing nations, but certain types of PEM are also present in various subgroups in industrialized nations, such as the elderly and individuals who are hospitalized (17). In the developed world, PEM more commonly occurs secondary to a chronic disease that interferes with nutrient metabolism, such as inflammatory bowel disease, chronic renal failure, or cancer (22).
Regardless of the specific cause, PEM significantly increases susceptibility to infection by adversely affecting aspects of both innate immunity and adaptive immunity (15). With respect to innate immunity, PEM has been associated with reduced production of certain cytokines and several complement proteins, as well as impaired phagocyte function (20, 23, 24). Such malnutrition disorders can also compromise the integrity of mucosal barriers, increasing vulnerability to infections of the respiratory, gastrointestinal, and urinary tracts (21). With respect to adaptive immunity, PEM primarily affects cell-mediated aspects instead of components of humoral immunity. In particular, PEM leads to atrophy of the thymus, the organ that produces T cells, which reduces the number of circulating T cells and decreases the effectiveness of the memory response to antigens (21, 24). PEM also compromises functions of other lymphoid tissues, including the spleen and lymph nodes (20). While humoral immunity is affected to a lesser extent, antibody affinity and response is generally decreased in PEM (24). It is important to note that PEM usually occurs in combination with deficiencies in essential micronutrients, especially vitamin A, vitamin B6, folate, vitamin E, zinc, iron, copper, and selenium (21).
Experimental studies have shown that several types of dietary lipids (fatty acids) can modulate the immune response (25). Fatty acids that have this role include the long-chain polyunsaturated fatty acids (PUFAs) of the omega-3 and omega-6 classes. PUFAs are fatty acids with more than one double bond between carbons. In all omega-3 fatty acids, the first double bond is located between the third and fourth carbon atom counting from the methyl end of the fatty acid (n-3). Similarly, the first double bond in all omega-6 fatty acids is located between the sixth and seventh carbon atom from the methyl end of the fatty acid (n-6) (26). Humans lack the ability to place a double bond at the n-3 or n-6 positions of a fatty acid; therefore, fatty acids of both classes are considered essential nutrients and must be derived from the diet (26). More information is available in the article on Essential fatty acids. Alpha-linolenic acid (ALA) is a nutritionally essential n-3 fatty acid, and linoleic acid (LA) is a nutritionally essential n-6 fatty acid; dietary intake recommendations for essential fatty acids are for ALA and LA. Other fatty acids in the n-3 and n-6 classes can be endogenously synthesized from ALA or LA (see the figure in a separate article on essential fatty acids). For instance the long-chain n-6 PUFA, arachidonic acid, can be synthesized from LA, and the long-chain n-3 PUFAs, eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), can be synthesized from ALA (26). However, synthesis of EPA and, especially, DHA may be insufficient under certain conditions, such as during pregnancy and lactation (27, 28). EPA and DHA, like other PUFAs, modulate cellular function, including immune and inflammatory responses (29).
Long-chain PUFAs are incorporated into membrane phospholipids of immune cells, where they modulate cell signaling of immune and inflammatory responses, such as phagocytosis and T-cell signaling. They also modulate the production of eicosanoids and other lipid mediators (29, 30). Eicosanoids are 20-carbon PUFA derivatives that play key roles in inflammatory and immune responses. During an inflammatory response, long-chain PUFAs (e.g., arachidonic acid [AA] of the n-6 series and EPA of the n-3 series) in immune cell membranes can be metabolized by enzymes to form eicosanoids (e.g., prostaglandins, leukotrienes, and thromboxanes), which have varying effects on inflammation (29). Eicosanoids derived from AA can also regulate B- and T-cell functions. Resolvins are lipid mediators derived from EPA and DHA that appear to have anti-inflammatory properties (30). To a certain extent, the relative production of these lipid mediators can be altered by dietary and supplemental intake of lipids. In those who consume a typical Western diet, the amount of AA in immune cell membranes is much greater than the amount of EPA, which results in the formation of more eicosanoids derived from AA than EPA. However, increasing n-3 fatty acid intake dose-dependently increases the EPA content of immune cell membranes. The resulting effect would be increased production of eicosanoids derived from EPA and decreased production of eicosanoids derived from AA, leading to an overall anti-inflammatory effect (30, 31). While eicosanoids derived from EPA are less biologically active than AA-derived eicosanoids (32), supplementation with EPA and other n-3 PUFAs may nevertheless have utility in treating various inflammatory diseases. This is a currently active area of investigation; see the article on Essential fatty acids. While n-3 PUFA supplementation may benefit individuals with inflammatory or autoimmune diseases, high n-3 PUFA intakes could possibly impair host-defense mechanisms and increase vulnerability to infectious disease (for more information, see the article on Essential fatty acids) (25, 33).
In addition to PUFAs, isomers of LA called conjugated linoleic acid (CLA) have been shown to modulate immune function, mainly in animal and in vitro studies (34). CLA is found naturally in meat and milk of ruminant animals, but it is also available as a dietary supplement that contains two isomers, cis-9,trans-11 CLA and trans-10,cis-12 CLA. One study in 28 men and women found that CLA supplementation (3 g/day of a 50:50 mixture of the two main CLA isomers) was associated with an increase in plasma levels of IgA and IgM (35), two classes of antibodies. CLA supplementation was also associated with a decrease in levels of two pro-inflammatory cytokines and an increase in levels of an anti-inflammatory cytokine (35). Similar effects on the immune response have been observed in some animal studies (36, 37); however, a few other human studies have not found beneficial effects of CLA on various measures of immune status and function (38-40). More research is needed to understand the effects of CLA on the human immune response.
Further, lipids in general have a number of other roles in immunity besides being the precursors of eicosanoids and similar immune mediators. For instance, lipids are metabolized by immune cells to generate energy and are also important structural and functional components of cell membranes. Moreover, lipids can regulate gene expression through stimulation of membrane receptors or through modification of transcription factor activity. Further, lipids can covalently modify proteins, thereby affecting their function (30).
Deficiencies in select micronutrients (vitamins and nutritionally-essential minerals) can adversely affect aspects of both innate and adaptive immunity, increasing vulnerability to infection and disease. Micronutrient inadequacies are quite common in the general U.S. population, but especially in the poor, the elderly, and those who are obese (see Overnutrition and Obesity below) (41, 42). According to data from the U.S. National Health and Nutrition Examination Survey (NHANES), 93% of the U.S. population do not meet the estimated average requirement (EAR) for vitamin E, 56% for magnesium, 44% for vitamin A, 31% for vitamin C, 14% for vitamin B6, and 12% for zinc (43). Moreover, vitamin D deficiency is a major problem in the U.S. and elsewhere; it has been estimated that 1 billion people in the world have either vitamin D deficiency or insufficiency (44). Because micronutrients play crucial roles in the development and expression of immune responses, selected micronutrient deficiencies can cause immunosuppression and thus increased susceptibility to infection and disease. The roles of several micronutrients in immune function are addressed below.
Vitamin A and its metabolites play critical roles in both innate and adaptive immunity. In innate immunity, the skin and mucosal cells of the eye and respiratory, gastrointestinal, and genitourinary tracts function as a barrier against infections. Vitamin A helps to maintain the structural and functional integrity of these mucosal cells. Vitamin A is also important to the normal function of several types of immune cells important in the innate response, including natural killer (NK) cells, macrophages, and neutrophils. Moreover, vitamin A is needed for proper function of cells that mediate adaptive immunity, such as T and B cells; thus, vitamin A is necessary for the generation of antibody responses to specific antigens (45).
Most of the immune effects of vitamin A are carried out by vitamin A derivatives, namely isomers of retinoic acid. Isomers of retinoic acid are steroid hormones that bind to retinoid receptors that belong to two different classes: retinoic acid receptors (RARs) and retinoid X receptors (RXRs). In the classical pathway, RAR must first heterodimerize with RXR and then bind to small sequences of DNA called retinoic acid response elements (RAREs) to initiate a cascade of molecular interactions that modulate the transcription of specific genes (46). More than 500 genes are directly or indirectly regulated by retinoic acid (47). Several of these genes control cellular proliferation and differentiation; thus, vitamin A has obvious importance in immunity.
Vitamin A deficiency is a major public health problem worldwide, especially in developing nations, where availability of foods containing preformed vitamin A is limited (for information on sources of vitamin A, see the separate article on Vitamin A). Experimental studies in animal models, along with epidemiological studies, have shown that vitamin A deficiency leads to immunodeficiency and increases the risk of infectious diseases (45). In fact, deficiency in this micronutrient is a leading cause of morbidity and mortality among infants, children, and women in developing nations. Vitamin A-deficient individuals are vulnerable to certain infections, such as measles, malaria, and diarrheal diseases (45). Subclinical vitamin A deficiency might increase risk of infection as well (48). Infections can, in turn, lead to vitamin A deficiency in a number of different ways, for example, by reducing food intake, impairing vitamin absorption, increasing vitamin excretion, interfering with vitamin utilization, or increasing metabolic requirements of vitamin A (49).
Many of the specific effects of vitamin A deficiency on the immune system have been elucidated using animal models. Vitamin A deficiency impairs components of innate immunity. As mentioned above, vitamin A is essential in maintaining the mucosal barriers of the innate immune system. Thus, vitamin A deficiency compromises the integrity of this first line of defense, thereby increasing susceptibility to some types of infection, such as eye, respiratory, gastrointestinal, and genitourinary infections (50-56). Vitamin A deficiency results in reductions in both the number and killing activity of NK cells, as well as the function of neutrophils and other cells that phagocytose pathogens like macrophages. Specific measures of functional activity affected appear to include chemotaxis, phagocytosis, and immune cell ability to generate oxidants that kill invading pathogens (45). In addition, cytokine signaling may be altered in vitamin A deficiency, which would affect inflammatory responses of innate immunity.
Additionally, vitamin A deficiency impairs various aspects of adaptive immunity, including humoral and cell-mediated immunity. In particular, vitamin A deficiency negatively affects the growth and differentiation of B cells, which are dependent on retinol and its metabolites (57, 58). Vitamin A deficiency also affects B cell function; for example, animal experiments have shown that vitamin A deficiency impairs antibody responses (59-61). With respect to cell-mediated immunity, retinol is important in the activation of T cells (62), and vitamin A deficiency may affect cell-mediated immunity by decreasing the number or distribution of T cells, altering cytokine production, or by decreasing the expression of cell-surface receptors that mediate T-cell signaling (45).
Vitamin A supplementation enhances immunity and has been shown to reduce the infection-related morbidity and mortality associated with vitamin A deficiency. A meta-analysis of 12 controlled trials found that vitamin A supplementation in children decreased the risk of all-cause mortality by 30%; this analysis also found that vitamin A supplementation in hospitalized children with measles was associated with a 61% reduced risk of mortality (63). Vitamin A supplementation has been shown to decrease the severity of diarrheal diseases in several studies (64) and has also been shown to decrease the severity, but not the incidence, of other infections, such as measles, malaria, and HIV (45). Moreover, vitamin A supplementation can improve or reverse many of the abovementioned, untoward effects on immune function, such as lowered antibody production and an exacerbated inflammatory response (65). However, vitamin A supplementation is not beneficial in those with lower respiratory infections, such as pneumonia, and supplementation may actually aggravate the condition (45, 66, 67). Because of potential adverse effects, vitamin A supplements should be reserved for undernourished populations and those with evidence of vitamin A deficiency (64). For information on vitamin A toxicity, see the separate article on Vitamin A.
Like vitamin A, the active form of vitamin D, 1,25-dihydroxyvitamin D3, functions as a steroid hormone to regulate expression of target genes. Many of the biological effects of 1,25-dihydroxyvitamin D3 are mediated through a nuclear transcription factor known as the vitamin D receptor (VDR) (68). Upon entering the nucleus of a cell, 1,25-dihydroxyvitamin D3 associates with the VDR and promotes its association with the retinoid X receptor (RXR). In the presence of 1,25-dihydroxyvitamin D3, the VDR/RXR complex binds small sequences of DNA known as vitamin D response elements (VDREs) and initiates a cascade of molecular interactions that modulate the transcription of specific genes. More than 200 genes in tissues throughout the body are known to be regulated either directly or indirectly by 1,25-dihydroxyvitamin D3 (44).
In addition to its effects on mineral homeostasis and bone metabolism, 1,25-dihydroxyvitamin D3 is now recognized to be a potent modulator of the immune system. The VDR is expressed in several types of immune cells, including monocytes, macrophages, dendritic cells, and activated T cells (69-72). Macrophages also produce the 25-hydroxyvitamin D3-1-hydroxylase enzyme, allowing for local conversion of vitamin D to its active form (73). Studies have demonstrated that 1,25-dihydroxyvitamin D3 modulates both innate and adaptive immune responses.
Antimicrobial peptides (AMPs) and proteins are critical components of the innate immune system because they directly kill pathogens, especially bacteria, and thereby enhance immunity (74). AMPs also modulate immune functions through cell-signaling effects (75). The active form of vitamin D regulates an important antimicrobial protein called cathelicidin (76-78). Vitamin D has also been shown to stimulate other components of innate immunity, including immune cell proliferation and cytokine production (79). Through these roles, vitamin D helps protect against infections caused by pathogens.
Vitamin D has mainly inhibitory effects on adaptive immunity. In particular, 1,25-dihydroxyvitamin D3 suppresses antibody production by B cells and also inhibits proliferation of T cells in vitro (80-82). Moreover, 1,25-dihydroxyvitamin D3 has been shown to modulate the functional phenotype of helper T cells as well as dendritic cells (75). T cells that express the cell-surface protein CD4 are divided into two subsets depending on the particular cytokines that they produce: T helper (Th)1 cells are primarily involved in activating macrophages and inflammatory responses and Th2 cells are primarily involved in stimulating antibody production by B cells (12). Some studies have shown that 1,25-dihydroxyvitamin D3 inhibits the development and function of Th1 cells (83, 84) but enhances the development and function of Th2 cells (85, 86) and regulatory T cells (87, 88). Because these latter cell types are important regulators in autoimmune disease and graft rejections, vitamin D is suggested to have utility in preventing and treating such conditions (89). Studies employing various animal models of autoimmune diseases and transplantation have reported beneficial effects of 1,25-dihydroxyvitamin D3 (reviewed in 84).
Indeed, vitamin D deficiency has been implicated in the development of certain autoimmune diseases, such as insulin-dependent diabetes mellitus (IDDM; type 1 diabetes mellitus), multiple sclerosis (MS), and rheumatoid arthritis (RA). Autoimmune diseases occur when the body mounts an immune response against its own tissues instead of a foreign pathogen. The targets of the inappropriate immune response are the insulin-producing beta-cells of the pancreas in IDDM, the myelin-producing cells of the central nervous system in MS, and the collagen-producing cells of the joints in RA (90). Some epidemiological studies have found the prevalence of various autoimmune conditions increases as latitude increases (91). This suggests that lower exposure to ultraviolet-B radiation (the type of radiation needed to induce vitamin D synthesis in skin) and the associated decrease in endogenous vitamin D synthesis may play a role in the pathology of autoimmune diseases. Additionally, results of several case-control and prospective cohort studies have associated higher vitamin D intake or serum levels with decreased incidence, progression, or symptoms of IDDM (92), MS (93-96), and RA (97). For more information, see the separate article on Vitamin D. It is not yet known whether vitamin D supplementation will reduce the risk of certain autoimmune disorders. Interestingly, a recent systematic review and meta-analysis of observational studies found that vitamin D supplementation during early childhood was associated with a 29% lower risk of developing IDDM (98). More research is needed to determine the role of vitamin D in various autoimmune conditions.
Vitamin C is a highly effective antioxidant that protects the body’s cells against reactive oxygen species (ROS) that are generated by immune cells to kill pathogens. Primarily through this role, the vitamin affects several components of innate and adaptive immunity; for example, vitamin C has been shown to stimulate both the production (99-103) and function (104, 105) of leukocytes (white blood cells), especially neutrophils, lymphocytes, and phagocytes. Specific measures of functions stimulated by vitamin C include cellular motility (104), chemotaxis (104, 105), and phagocytosis (105). Neutrophils, which attack foreign bacteria and viruses, seem to be the primary cell type stimulated by vitamin C, but lymphocytes and other phagocytes are also affected (106). Additionally, several studies have shown that supplemental vitamin C increases serum levels of antibodies (107, 108) and C1q complement proteins (109-111) in guinea pigs, which—like humans—cannot synthesize vitamin C and hence depend on dietary vitamin C. However, some studies have reported no beneficial changes in leukocyte production or function with vitamin C treatment (112-115). Vitamin C may also protect the integrity of immune cells. Neutrophils, mononuclear phagocytes, and lymphocytes accumulate vitamin C to high concentrations, which can protect these cell types from oxidative damage (103, 116, 117). In response to invading microorganisms, phagocytic leukocytes release non-specific toxins, such as superoxide radicals, hypochlorous acid (“bleach”), and peroxynitrite; these ROS kill pathogens and, in the process, can damage the leukocytes themselves (118). Vitamin C, through its antioxidant functions, has been shown to protect leukocytes from such effects of autooxidation (119). Phagocytic leukocytes also produce and release cytokines, including interferons, which have antiviral activity (120). Vitamin C has been shown to increase interferon levels in vitro (121). Further, vitamin C regenerates the antioxidant vitamin E from its oxidized form (122).
It is widely thought by the general public that vitamin C boosts the function of the immune system, and accordingly, may protect against viral infections and perhaps other diseases. While some studies suggest the biological plausibility of vitamin C as an immune enhancer, human studies published to date are conflicting. Controlled clinical trials of appropriate statistical power would be necessary to determine if supplemental vitamin C boosts the immune system. For a review of vitamin C and the common cold, see the separate article on Vitamin C.
Vitamin E is a lipid-soluble antioxidant that protects the integrity of cell membranes from damage caused by free radicals (123). In particular, the alpha-tocopherol form of vitamin E protects against peroxidation of polyunsaturated fatty acids, which can potentially cause cellular damage and subsequently lead to improper immune responses (124). Several studies in animal models as well as humans indicate that vitamin E deficiency impairs both humoral and cell-mediated aspects of adaptive immunity, including B and T cell function (reviewed in 124). Moreover, vitamin E supplementation in excess of current intake recommendations has been shown to enhance immunity and decrease susceptibility to certain infections, especially in elderly individuals.
Aging is associated with immune senescence (125). For example, T-cell function declines with increasing age, evidenced by decreased T-cell proliferation and decreased T-cell production of the cytokine, interleukin-2 (126). Studies in mice have found that vitamin E ameliorates these two age-related, immune effects (127, 128). Similar effects have been observed in some human studies (129). A few clinical trials of alpha-tocopherol supplementation in elderly subjects have demonstrated improvements in immunity. For example, elderly adults given 200 mg/day of synthetic alpha-tocopherol (equivalent to 100 mg of RRR-alpha-tocopherol or 150 IU of RRR-tocopherol; RRR-alpha-tocopherol is also referred to as "natural" or d-alpha-tocopherol) for several months displayed increased formation of antibodies in response to hepatitis B vaccine and tetanus vaccine (130). However, it is not known if such enhancements in the immune response of older adults actually translate to increased resistance to infections like the flu (influenza virus) (131). A randomized, placebo-controlled trial in elderly nursing home residents reported that daily supplementation with 200 IU of synthetic alpha-tocopherol (equivalent to 90 mg of RRR-alpha-tocopherol) for one year significantly lowered the risk of contracting upper respiratory tract infections, especially the common cold, but had no effect on lower respiratory tract (lung) infections (132). Yet, other trials have not reported an overall beneficial effect of vitamin E supplements on respiratory tract infections in older adults (133-136). More research is needed to determine whether supplemental vitamin E may protect the elderly against the common cold or other infections.
Vitamin B6 is required in the endogenous synthesis and metabolism of amino acids—the building blocks of proteins like cytokines and antibodies. Animal and human studies have demonstrated that vitamin B6 deficiency impairs aspects adaptive immunity, including both humoral and cell-mediated immunity. Specifically, deficiency in this micronutrient has been shown to affect lymphocyte proliferation, differentiation, and maturation as well as cytokine and antibody production (137-139). Correcting the vitamin deficiency restores the affected immune functions (139).
The B vitamin, folate, is required as a coenzyme to mediate the transfer of one-carbon units. Folate coenzymes act as acceptors and donors of one-carbon units in a variety of reactions critical to the endogenous synthesis and metabolism of nucleic acids (DNA and RNA) and amino acids (140, 141). Thus, folate has obvious importance in immunity. Folate deficiency results in impaired immune responses, primarily affecting cell-mediated immunity. However, antibody responses of humoral immunity may also be impaired in folate deficiency (142).
In humans, vitamin B12 functions as a coenzyme for two enzymatic reactions. One of the vitamin B12-dependent enzymes is involved in the synthesis of the amino acid, methionine, from homocysteine. Methionine in turn is required for the synthesis of S-adenosylmethionine, a methyl group donor used in many biological methylation reactions, including the methylation of a number of sites within DNA and RNA. The other vitamin B12-dependent enzyme, L-methylmalonyl-CoA mutase, converts L-methylmalonyl-CoA to succinyl-CoA, a compound that is important in the production of energy from fats and proteins as well as in the synthesis of hemoglobin, the oxygen carrying pigment in red blood cells (143). Patients with diagnosed vitamin B12 deficiency have been reported to have suppressed natural killer cell activity and decreased numbers of circulating lymphocytes (144, 145). One study found that these immunomodulatory effects were corrected by treating the vitamin deficiency (144).
Zinc is critical for normal development and function of cells that mediate both innate and adaptive immunity (146). The cellular functions of zinc can be divided into three categories: 1) catalytic, 2) structural, and 3) regulatory (see Function in the separate article on zinc) (147). Because zinc is not stored in the body, regular dietary intake of the mineral is important in maintaining the integrity of the immune system. Thus, inadequate intake can lead to zinc deficiency and compromised immune responses (148). With respect to innate immunity, zinc deficiency impairs the complement system, cytotoxicity of natural killer cells, phagocytic activity of neutrophils and macrophages, and immune cell ability to generate oxidants that kill invading pathogens (149-151). Zinc deficiency also compromises adaptive immune function, including lymphocyte number and function (152). Even marginal zinc deficiency, which is more common than severe zinc deficiency, can suppress aspects of immunity (148). Zinc-deficient individuals are known to experience increased susceptibility to a variety of infectious agents (see the separate article on Zinc).
Adequate selenium intake is essential for the host to mount a proper immune response because it is required for the function of several selenium-dependent enzymes known as selenoproteins (see the separate article on Selenium). For example, the glutathione peroxidases (GPx) are selenoproteins that function as important redox regulators and cellular antioxidants, which reduce potentially damaging reactive oxygen species, such as hydrogen peroxide and lipid hydroperoxides, to harmless products like water and alcohols by coupling their reduction with the oxidation of glutathione (see the diagram in the article on selenium) (153). These roles have implications for immune function and cancer prevention.
Selenium deficiency impairs aspects of innate as well as adaptive immunity (154, 155), adversely affecting both humoral immunity (i.e., antibody production) and cell-mediated immunity (156). Selenium deficiency appears to enhance the virulence or progression of some viral infections (see separate article on Selenium). Moreover, selenium supplementation in individuals who are not overtly selenium deficient appears to stimulate the immune response. In two small studies, healthy (157, 158) and immunosuppressed individuals (159) supplemented with 200 micrograms (mcg)/day of selenium as sodium selenite for eight weeks showed an enhanced immune cell response to foreign antigens compared with those taking a placebo. A considerable amount of basic research also indicates that selenium plays a role in regulating the expression of cytokines that orchestrate the immune response (160).
Iron is an essential component of hundreds of proteins and enzymes that are involved in oxygen transport and storage, electron transport and energy generation, antioxidant and beneficial pro-oxidant functions, and DNA synthesis (see Function in the article on iron) (161-163). Iron is required by the host in order to mount effective immune responses to invading pathogens, and iron deficiency impairs immune responses (164). Sufficient iron is critical to several immune functions, including the differentiation and proliferation of T lymphocytes and generation of reactive oxygen species (ROS) that kill pathogens. However, iron is also required by most infectious agents for replication and survival. During an acute inflammatory response, serum iron levels decrease while levels of ferritin (the iron storage protein) increase, suggesting that sequestering iron from pathogens is an important host response to infection (162, 165). Moreover, conditions of iron overload (e.g., hereditary hemochromatosis) can have detrimental consequences to immune function, such as impairments in phagocytic function, cytokine production, complement system activation, and T and B lymphocyte function (164). Further, data from the first National Health and Nutrition Examination Survey (NHANES), a U.S. national survey, indicate that elevated iron levels may be a risk factor for cancer and death, especially in men (167). For men and women combined, there were significant trends for increasing risk of cancer and mortality with increasing transferrin saturation, with risks being higher in those with transferrin saturation >40% compared to ≤30% (167).
Despite the critical functions of iron in the immune system, the nature of the relationship between iron deficiency and susceptibility to infection, especially with respect to malaria, remains controversial. High-dose iron supplementation of children residing in the tropics has been associated with increased risk of clinical malaria and other infections, such as pneumonia. Studies in cell cultures and animals suggest that the survival of infectious agents that spend part of their life cycle within host cells, such as plasmodia (malaria) and mycobacteria (tuberculosis), may be enhanced by iron therapy. Controlled clinical studies are needed to determine the appropriate use of iron supplementation in regions where malaria is common, as well as in the presence of infectious diseases, such as HIV, tuberculosis, and typhoid (168).
Copper is a critical functional component of a number of essential enzymes known as cuproenzymes (see the separate article on Copper). The mineral plays an important role in the development and maintenance of immune system function, but the exact mechanism of its action is not yet known. Copper deficiency results in neutropenia, an abnormally low number of neutrophils (169), which may increase one’s susceptibility to infection. Adverse effects of insufficient copper on immune function appear most pronounced in infants. Infants with Menkes disease, a genetic disorder that results in severe copper deficiency, suffer from frequent and severe infections (170, 171). In a study of 11 malnourished infants with evidence of copper deficiency, the ability of certain white blood cells to engulf pathogens increased significantly after one month of copper supplementation (172).
Immune effects have also been observed in adults with low intake of dietary copper. In one study, 11 men on a low-copper diet (0.66 mg copper/day for 24 days and 0.38 mg/day for another 40 days) showed a reduced proliferation response when white blood cells, called mononuclear cells, were isolated from blood and presented with an immune challenge in cell culture (173). While it is known that severe copper deficiency has adverse effects on immune function, the effects of marginal copper deficiency in humans are not yet clear (174). However, long-term high intakes of copper can result in adverse effects on immune function (175).
Probiotics are usually defined as live microorganisms that, when administered in sufficient amounts, benefit the overall health of the host (176). Common examples belong to the Lactobacilli and Bifidobacteria species; these probiotics are consumed in yogurt and other fermented foods. Ingested probiotics that survive digestion can transiently inhabit the lower part of the gastrointestinal tract (177). Here, they can modulate immune functions by interacting with various receptors on intestinal epithelial cells and other gut-associated immune cells, including dendritic cells and M-cells (178). Immune modulation requires regular consumption because probiotics have not been shown to permanently alter intestinal microflora (179). Probiotics have been shown to benefit both innate and adaptive immune responses of the host (180). For example, probiotics can strengthen the gut epithelial barrier—an important innate defense—through a number of ways, such as by inhibiting apoptosis and promoting the survival of intestinal epithelial cells (181). Probiotics can also stimulate the production of antibodies and T lymphocytes, which are critical in the adaptive immune response (180). Several immune effects of probiotics are mediated through altering cell-signaling cascades that modify cytokine and other protein expression (181). However, probiotics exert diverse effects on the immune system that are dependent not only on the specific strain but also on the dose, route, and frequency of delivery (182). Probiotics may have utility in the prevention of inflammatory bowel disorders, diarrheal diseases, allergic diseases, gastrointestinal and other types of infections, and certain cancers. However, more clinical research is needed in order to elucidate the health effects of probiotics (180).
Overnutrition is a form of malnutrition where nutrients are supplied in excess of the body’s needs. Overnutrition can create an imbalance between energy intake and energy expenditure and lead to excessive energy storage, resulting in obesity (15). Obesity is a major public health problem worldwide, especially in industrialized nations. Obese individuals are at increased risk of morbidity from a number of chronic diseases, including hypertension and cardiovascular diseases, type 2 diabetes, liver and gallbladder disease, osteoarthritis, sleep apnea, and certain cancers (183). Obesity has also been linked to increased risk of mortality (184).
Overnutrition and obesity have been shown to alter immunocompetence. Obesity is associated with macrophage infiltration of adipose tissue; macrophage accumulation in adipose tissue is directly proportional to the degree of obesity (185). Studies in mouse models of genetic and high-fat diet-induced obesity have documented a marked up-regulation in expression of inflammation and macrophage-specific genes in white adipose tissue (186). In fact, obesity is characterized by chronic, low-grade inflammation, and inflammation is thought to be an important contributor in the pathogenesis of insulin resistance—a condition that is strongly linked to obesity. Adipose tissue secretes fatty acids and other molecules, including various hormones and cytokines (called adipocytokines or adipokines), that trigger inflammatory processes (185). Leptin is one such hormone and adipokine that plays a key role in the regulation of food intake, body weight, and energy homeostasis (187, 188). Leptin is secreted from adipose tissue and circulates in direct proportion to the amount of fat stores. Normally, higher levels of circulating leptin suppress appetite and thereby lead to a reduction in food intake (189). Leptin has a number of other functions as well, such as modulation of inflammatory responses and aspects of humoral and cell-mediated responses of the adaptive immune system (187, 190). Specific effects of leptin, elucidated in animal and in vitro studies, include the promotion of phagocytic function of immune cells; stimulation of pro-inflammatory cytokine production; and regulation of neutrophil, natural killer (NK) cell, and dendritic cell functions (reviewed in 190). Leptin also affects aspects of cell-mediated immunity; for example, leptin promotes T helper (Th)1 immune responses and thus may have implications in the development of autoimmune disease (191). Th1 cells are primarily involved in activating macrophages and inflammatory responses (12). Obese individuals have been reported to have higher plasma leptin concentrations compared to lean individuals. However, in the obese, the elevated leptin signal is not associated with the normal responses of reduced food intake and increased energy expenditure, suggesting obesity is associated with a state of leptin resistance. Leptin resistance has been documented in mouse models of obesity, but more research is needed to better understand leptin resistance in human obesity (189).
Obese individuals may exhibit increased susceptibility to various infections. Some epidemiological studies have shown that obese patients have a higher incidence of postoperative and other nosocomial infections compared with patients of normal weight (192, 193; reviewed in 194). Obesity has been linked to poor wound healing and increased occurrence of skin infections (195-197). A higher body mass index (BMI) may also be associated with increased susceptibility to respiratory, gastrointestinal, liver, and biliary infections (reviewed in 194). In obesity, the increased vulnerability, severity, or complications of certain infections may be related to a number of factors, such as select micronutrient deficiencies. For example, one study in obese children and adolescents associated impairments in cell-mediated immunity with deficiencies in zinc and iron (198). Deficiencies or inadequacies of other micronutrients, including the B vitamins and vitamins A, C, D, and E, have also been associated with obesity (41). Overall, immune responses appear to be compromised in obesity, but more research is needed to clarify the relationship between obesity and infection-related morbidity and mortality.
Written in August 2010 by:
Victoria J. Drake, Ph.D.
Linus Pauling Institute
Oregon State University
Reviewed in August 2010 by:
Adrian F. Gombart, Ph.D.
Department of Biochemistry and Biophysics
Principal Investigator, Linus Pauling Institute
Oregon State University
Reviewed in August 2010 by:
Malcolm B. Lowry, Ph.D.
Department of Microbiology
Oregon State University
This article was underwritten, in part, by a grant from
Bayer Consumer Care AG, Basel, Switzerland.
Last updated 9/2/10 Copyright 2010-2013 Linus Pauling Institute
The Linus Pauling Institute Micronutrient Information Center provides scientific information on the health aspects of dietary factors and supplements, foods, and beverages for the general public. The information is made available with the understanding that the author and publisher are not providing medical, psychological, or nutritional counseling services on this site. The information should not be used in place of a consultation with a competent health care or nutrition professional.
The information on dietary factors and supplements, foods, and beverages contained on this Web site does not cover all possible uses, actions, precautions, side effects, and interactions. It is not intended as nutritional or medical advice for individual problems. Liability for individual actions or omissions based upon the contents of this site is expressly disclaimed.
Thank you for subscribing to the Linus Pauling Institute's Research Newsletter.
You should receive your first issue within a month. We appreciate your interest in our work. | http://lpi.oregonstate.edu/infocenter/immunity.html | 13 |
34 | Exploring chemical bonding
Grade level(s):Grade 9, Grade 10, Grade 11, Grade 12
Atoms bond with each other to achieve a more energetically stable form. Atoms are more stable when their outer electron shell is complete. Atoms bond with each other by either sharing electrons (covalent bond) or transferring electrons (ionic bond).
Atom, molecule, ion, compound, electrons, neutrons, protons, electron shell/level, valence electrons, chemical bonding, ionic bond, covalent bond, electron transfer, metals, non-metals, noble gases, Octet rule
What you need:
You can borrow the following items from the SEP resource center:
- Kit 302: Exploring Chemical Bonds Kit (if you do not have access to the SEP resource center, refer to the "Getting ready" section of this lesson to put your own kit together)
- Periodic tables of the elements (one for each pair of students)
Students work in pairs or groups of three
Students will need enough desk space to lay out the content of the envelopes, so lab benches or tables will work better than individual student desks.
Students will engage in an exploration demonstrating the Octet rule and chemical bonding using paper models of elements forming covalent and ionic compounds.
Students should have explored and grasped the following concepts:
- Matter is everything that has mass and takes up space (See lesson "What is matter?" on SEPlessons.org)
- Matter is made up of tiny particles, called atoms. Atoms contain electrons, neutrons and protons (subatomic particles).
- The periodic table is a way of organizing the elements according to their characteristics and chemical behavior.
- The periodic table can be used to determine the number of protons, electrons and neutrons of atoms of different element.
- The number of protons defines an element.
- The number of valence electrons (electrons in the outer shell) determines the chemical properties of the element.
Students will be able to....
- predict whether two atoms will form a covalent or an ionic bond based on their valence electrons and their position in the periodic table
- model electron transfer between atoms to form ionic bonds and electron sharing between atoms to form covalent bonds
A molecule or compound is made when two or more atoms form a chemical bond, linking them together. The two types of bonds, addressed in this activity, are ionic bonds and covalent bonds. Atoms tend to bond in such a way that they each have a full valence (outer) shell. Molecules or ions tend to be most stable when the outermost electron shells of their constituent atoms contain the maximum number of electrons (for most electron shells that number is 8 = Octet rule). Ionic bonds are made between ions of a metal and a non-metal atom, whereas covalent bonds are made between non-metal atoms.
In an ionic bond, the atoms first transfer electrons between each other, change into ions that then are bound together by the attraction between the oppositely-charged ions. For example, sodium and chloride form an ionic bond, to make NaCl, or table salt. Chlorine (Cl) has seven valence electrons in its outer orbit, but to be in a more stable condition, it needs eight electrons in its outer orbit. On the other hand, Sodium has one valence electron and it would need eight electrons to fill up its outer electron level. A more energetically efficient way to achieve a full outer electron shell for Sodium is to "shed" the single electron in its outer shell instead. Sodium "donates" its single valence electron to Chlorine so that both have 8 electrons in their outer shell. The attraction between the resulting ions, Na+ and Cl-, forms the ionic bond.
In a covalent bond, the atoms are bound by shared electrons. A good example of a covalent bond is that which occurs between two hydrogen atoms. Atoms of hydrogen (H) have one valence electron in their outer (and only) electron shell. Since the capacity of this shell is two electrons, each hydrogen atom will "want" to pick up a second electron. In an effort to pick up a second electron, hydrogen atoms will react with nearby hydrogen (H) atoms to form the compound H2. Both atoms now share their 2 common electrons and achieve the stability of a full valence shell.
If the electron is shared equally between the atoms forming a covalent bond, like in the case of H2 , then the bond is said to be non-polar. Electrons are not always shared equally between two bonding atoms: one atom might exert more of a force on the electron than the other. This "pull" is termed electronegativity and measures the attraction for electrons a particular atom has. Atoms with high electronegativities — such as fluorine, oxygen, and nitrogen — exert a greater pull on electrons than atoms with lower electronegativities. In a bonding situation this can lead to unequal sharing of electrons between atoms, as electrons will spend more time closer to the atom with the higher electronegativity. When an electron is more attracted to one atom than to another, forming a polar covalent bond. A great example for a polar covalent bond is water:
Ionic and covalent compounds
Due to the strong attractive forces between the ions, ionic compounds are solids with a high melting and boiling point. When dissolved in water, the ions separate, resulting in a solution that conducts electricity. Covalent compounds have a much lower melting and boiling point than ionic compounds and can be solids, liquids or gases. Some covalent compounds are water soluble, some are not. But non conduct electricity when dissolved in water (unlike ionic compounds).
Over the years the model of an atom has changed. For an interesting review check out this link: http://www.clickandlearn.org/Gr9_Sci/atoms/modelsoftheatom.html
For simplicity we will be using the Bohr model throughout this lesson. It is not the latest model, but sufficiently explains the concepts of atomic structure as well as molecular bonding addressed in this lesson.
Bohr's model of the atom describes the electrons as orbiting in discrete, precisely defined circular orbits, similar to planets orbiting the sun. Electrons can only occupy certain allowed orbitals. For an electron to occupy an allowed orbit, a certain amount of energy must be available. Only a specified maximum number of electrons can occupy an orbital. Under normal circumstances, electrons occupy the lowest energy level orbitals closest to the nucleus. By absorbing additional energy, electrons can be promoted to higher orbitals, and release that energy when they return back to lower energy levels. The first energy level can hold a maximum of 2 electrons, the second and all subsequent energy levels can hold a maximum of 8 electrons (Note: This is highly simplified and partially inaccurate, however this simplification allows the students to gain a basic understanding of atomic structure and bonding before they will learn details in their high school chemistry class once they master these basic ideas.)
If you don't have access to the SEP resource center to check out the "Chemical bonding kit" follow the instructions below to put your own kit together.
- Print out the attached "Chemical_Bonds_Template" on card-stock. One set per pair of students. Laminate the printouts for repeated use.
- Cut out the "atoms" and punch a hole in the middle of each of the small circles on all of the rings, using a single hole punch. For the inner holes for the larger atoms you will need a hole punch with a long reach (at least 2"). Craft or office supplies stores carry them.
- Put brass fasteners in the holes of each atom according to the number of electrons it possesses. Elements of Group I will have one electron (one brass fastener) in the outer ring, elements of Group II will have two electrons (fasteners) in the outer rings and so on. Inner rings should be completly filled with fasteners.
NOTE: In order to safe time and fasteners, you can also NOT hole punch the inner rings and only use fasteners to show the electrons on the outermost electron level. However, make sure to tell your students that the inner electron levels are all full even though they do not hold fasteners.
- Label and assemble 3 envelopes for each pair:
- Envelope 1:
Argon (Ar), Neon (Ne), Helium (He)
- Envelope 2:
Magnesium (Mg) and Sulfur (S)
Sodium (Na) and Chlorine (Cl)
Lithium (li) and Fluorine (F)
Beryllium (Be) and Oxygen (O)
- Envelope 3:
2 Chlorine (Cl) atoms
2 Hydrogen (H) atoms
2 Fluorine (F) atoms
Chlorine (Cl) and Hydrogen (H)
- Envelope 1:
- Add a card in each evelope, showing which atoms combine to make compounds (see attached "Compounds_Info_Envelopes.pdf")
- Copy the attached "Task card" for your students
- Prepare "charge" labels by using round color coding labels (colored round sticky dots) and label half a sheet of labels with "-" and the other half with "+". You will need one sheet for each pair of students.
Lesson Implementation / Outline
- Tell your students that so far you have explored the question "What is matter made of?" and have worked with the periodic table and gained some familiarity with the elements. Some elements are found in nature in their elemental form but most elements usually don't occur in the elementary form on earth, but combine naturally with each other to create more energy-stable materials that we experience all around us like air, water, rocks and all living things. In this next activity they will explore HOW and also WHY elements combine with each other.
- Explain to students that they will work in pairs and receive three labeled envelopes, containing a group of atoms and an information card showing which atoms combine to form compounds. Their task it is to look at each group and see if they see any similarities or patterns. Hold up an example of one of the atoms in the envelope. Explain that the element symbol is in the middle, the circles represent the electron levels and the brass fasteners show the electrons.
- Instruct students to open one envelope at a time (1-3), working together to find commonalities among the atoms or compounds inside and to write those down before they move on to the next envelope. Suggest that placing atoms that form compounds side by side is helpful. They can use the periodic table to look up the elements that they have in front of you. After about 10-15 minutes they will regroup with the rest of the class and see what they have discovered.
- After students received verbal instructions, hand out the "Taskcard", the 3 envelopes and a periodic table to each group of students.
- While students work, roam the room and listen to students discussions. Answer questions as needed, but don't give away what they are supposed to discover.
- After 10-15 minutes (or when students had a chance to look at all 3 envelopes), ask the class what they have discovered. Use equal share- out strategies to make sure several pairs will have a chance to report out one discovery. Collect responses on overhead or board.
Envelope 1: He, Ne, Ar
All have a full outer electron level; are not combined with other atoms to form compounds; are all elements of group 8 = noble gases; all are non-metals.
Envelope 2: NaCl, LiF, MgO, BeS (ionic bonds)
All elements from group 1,2,7,8. Valence electrons of the two atoms in each compound add up to 8. Each compound contains one metal and one non-metal.
Envelope 3: Cl2, F2, H2, HCl (covalent bonds)
Always two non-metals combining; often 2 of the same elements bonding together.
- Ask students why they think neither of the noble gas elements bonded with another element. Tell students that atoms "strive" to be in their most stable form possible and that it turns out that they are most stable when their outer electron level is full. Since the nobel gases already have a complete outer electron level, they are already in their most stable form and therefor non-reactive. Other atoms bond together in order to achieve a more energetically stable form/ a full outer energy level. The first energy level is the smallest one and can only hold a maximum of two electrons. The second and third and the following electron levels can each hold a maximum of 8 electrons.
- Have students look at their Sodium Chloride from envelope #1. Ask: How many electrons does Chlorine have in its outer energy level?
How many more would it need to be energetically stable?
Sodium has one electron in its outer energy level. How many more electrons would it need to fill up that energy level?
What could these two atoms do in order to both have a full outer energy level? [Take some student ideas]
Explain that instead of trying to get 7 additional electrons, it is energetically better for Sodium just to get rid of this single electron on the outside. If Sodium and Chlorine come together, Sodium transfers one electron to Chlorine. Now Chlorine has a complete outer energy level. For Sodium, now that it gave away its outer electron, that outer energy level is no longer existing and what is now the outermost energy level is complete and therefore Sodium also achieved a more stable form.
Have students simulate the electron transfer by taking the single electron (fastener) from Sodium and adding it to the Chlorine atom.
Ask students what charge the Chlorine and Sodium are now. Chlorine gained one negatively charged electron, so it is now negatively charged and Sodium lost one negatively charged electron. It now has more positive charges than negative charges and is overall positively charged. Remind students that we call atoms that are charged, ions. This electron transfer resulted in a Chlorine ion and a Sodium ion. Just like the opposite poles of a magnet, positive and negative charges attract. This attraction is what then bonds the two atoms together.
Pass out the color "+" and "-" stickers to students and have them stick the correct one on their newly created Chlorine and Florine ion.
Explain that this kind of bond is called an "ionic bond" because it is between two ions that are the result of an electron transfer between atoms.
Have students go through the process with their partner, using a different element pair (for example, Mg - O). Have students report out how these two atoms form the ionic bond and what charges the created ions have.
Reinforce that ionic bonds are formed between a metal and a non-metal, that elements from Group 1 (with one valence electron) bond with elements from Group 7 (with 7 valence electrons) in a one to one ratio and elements from Group 2 with elements from Group 6 in a one to one ratio. Elements of group 2 can combine with elements of group 7 in a one to two ratio (for example MgF2).
- Have students look at a pair from envelope #3, for example the 2 Fluorine atoms. Now that they know that atoms "want" to achieve a more stable form by filling up their outer electron level, ask how these 2 Fluorine atoms could possibly achieve that.
Each of the Fluorine atoms needs one additional electron to complete their outer energy level. If one takes one electron from the other it has 8 electrons in its outer shell and is full, but the second Fluorine atom now only has 6 electrons. That's not an option that works for both atoms. But if they each share one electron with the other one, they BOTH have 8 electrons in the outer level and achieved their more stable form. Demonstrate the sharing of electrons with the models. Overlap the two atom models so that two holes of each model line up and put the brass fasteners through, connecting the two atoms. Explain that the shared electrons spend part of the time circling one nucleus and part of the time circling the other one.
Explain that this kind of bond between two atoms is called a "covalent bond", the prefix "co-" meaning mutually/together/jointly as the atoms are sharing their electron pairs. Have students simulate the formation of covalent bonds between the other atoms in envelope three.
Ask students to pull off the charge label dots, rearrange the brass fasteners so that each atom has the correct amounts of "electrons" in its outer energy level and make sure to put the atoms in the correct envelopes together with the compound information card. Use the inventory sheets to help students put the atoms and envelopes together correctly.
After students explored the compounds given to them in the envelope, you can challenge them to predict what other compounds they can make with the available atoms and what bond they would form. Some students might discover that they can also make covalent molecules with more than 2 atoms, for example H2O. If nobody makes that suggestion, ask students to use their two Hydrogen atoms and one Oxygen atom to make water. Ask what kind of bond those atoms would form and have them model it. Here you also can get into the concept of polar and non-polar covalent bonds.
Have students reflect on what they understood about chemical bonding and what questions they still have. This can be done in writing as a journal reflection, as a Think-Pair-Share or as a whole class share out on the board.
Extensions and Reflections
This lesson could be followed by a lesson on nominclature of ionic compounds and chemical formulas using the same materials provided in the kit.
|ChemBonds taskcard.pdf||43.79 KB| | http://seplessons.org/node/2241 | 13 |
1 These tables present experimental statistics showing the Gross Value of Irrigated Agricultural Production (GVIAP). Annual data are presented for the reference periods from 2000–01 to 2008–09 for Australia, States and Territories, for the Murray-Darling Basin for selected years (2000–01, 2005–06, 2006–07, 2007–08 and 2008–09) and for Natural Resource Management (NRM) regions from 2005–06 to 2008–09, for key agricultural commodity groups.
2 The tables also present the total gross value of agricultural commodities (GVAP) and the Volume of Water Applied (in megalitres) to irrigated crops and pastures.
WHAT IS GVIAP?
3 GVIAP refers to the gross value of agricultural commodities that are produced with the assistance of irrigation. The gross value of commodities produced is the value placed on recorded production at the wholesale prices realised in the marketplace. Note that this definition of GVIAP does not refer to the value that irrigation adds to production, or the "net effect" that irrigation has on production (i.e. the value of a particular commodity that has been irrigated "minus" the value of that commodity had it not been irrigated) - rather, it simply describes the gross value of agricultural commodities produced with the assistance of irrigation.
4 ABS estimates of GVIAP attribute all of the gross value of production from irrigated land to irrigated agricultural production. For this reason, extreme care must be taken when attempting to use GVIAP figures to compare different commodities - that is, the gross value of irrigated production should not be used as a proxy for determining the highest value water uses. Rather, it is a more effective tool for measuring changes over time or comparing regional differences in irrigated agricultural production.
5 Estimating the value that irrigation adds to agricultural production is difficult. This is because water used to grow crops and irrigate pastures comes from a variety of sources. In particular, rainwater is usually a component of the water used in irrigated agriculture, and the timing and location of rainfall affects the amount of irrigation water required. Other factors such as evaporation and soil moisture also affect irrigation water requirements. These factors contribute to regional and temporal variations in the use of water for irrigation. In addition, water is not the only input to agricultural production from irrigated land - fertiliser, land, labour, machinery and other inputs are also used. To separate the contribution that these factors make to total production is not currently possible.
Gross value of agricultural production
6 These estimates are based on data from Value of Agricultural Commodities Produced (cat. no. 7503.0), which are derived from ABS agricultural censuses and surveys. During the processing phase of the collections, data checking was undertaken to ensure key priority outputs were produced to high quality standards. As a result, some estimates will have been checked more comprehensively than others.
7 It is not feasible to check every item reported by every business, and therefore some anomalies may arise, particularly for small area estimates (e.g. NRM regions). To present these items geographically, agricultural businesses are allocated to a custom region based on where the business reports the location of its 'main agricultural property'. Anomalies can occur if location details for agricultural businesses are not reported precisely enough to accurately code their geographic location. In addition, some businesses operate more than one property, and some large farms may operate across custom region and NRM boundaries, but are coded to a single location. As a result, in some cases a particular activity may not actually occur in the area specified, and the Area of Holding and other estimates of agricultural activity may exceed, or fail to account for, all activities within that area. For these reasons, the quality of estimates may be lower for some NRM regions and other small area geographies.
8 Gross value of agricultural production (GVAP) is the value placed on recorded production of agricultural commodities at the wholesale prices realised in the market place. It is also referred to as the Value of Agricultural Commodities Produced (VACP).
9 In 2005–06, the ABS moved to a business register sourced from the Australian Taxation Office's Australian Business Register (ABR). Previously the ABS had maintained its own register of agricultural establishments.
10 The ABR-based register consists of all businesses on the ABR classified to an 'agricultural' industry, as well as businesses which have indicated they undertake agricultural activities. All businesses with a turnover of $50,000 or more are required to register on the ABR. Many agricultural businesses with a turnover of less than $50,000 have also chosen to register on the ABR.
11 Moving to the ABR-based register required changes to many of the methods used for compiling agriculture commodity and water statistics. These changes included: using new methods for determining whether agricultural businesses were 'in-scope' of the collection; compiling the data in different ways; and improving estimation and imputation techniques.
12 The ABR-based frame was used for the first time to conduct the 2005–06 Agricultural Census. This means that Value of Agricultural Commodities Produced (VACP) data are not directly comparable with historical time series for most commodities. For detailed information about these estimates please refer to the Explanatory Notes in Value of Agricultural Commodities Produced (cat. no. 7503.0).
13 Statistics on area and production of crops relate in the main to crops sown during the reference year ended 30 June. Statistics of perennial crops and livestock relate to the position as at 30 June and the production during the year ended on that date, or of fruit set by that date. Statistics for vegetables, apples, pears and for grapes, which in some states are harvested after 30 June, are collected by supplementary collections. For 2005–06 to 2007–08, the statistics for vegetables, apples, pears and for grapes included in this product are those collected in the 2005–06 Agriculture Census at 30 June 2006, the 2006–07 Agricultural Survey at 30 June 2007 and the Agricultural Resource Management Survey 2007–08 at 30 June 2008, not those collected by the supplementary collections. For this reason the GVAP (VACP) estimates may differ from the published estimates in the products Agricultural Commodities: Small Area Data, Australia, 2005–06 (cat. no. 7125.0) and Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0).
14 Further, the GVAP (Gross Value of Agricultural Production, also referred to as VACP) and GVIAP estimates for 2005–06 and 2006–07 shown in this product have been revised where necessary, for example, when a new price has become available for a commodity after previous publications.
15 The VACP Market Prices survey collected separate prices for undercover and outdoor production for the first time in 2005–06. This enabled the ABS to better reflect the value of undercover and outdoor production for nurseries and cut flowers. The value of the commodity group “nurseries, cut flowers and cultivated turf” was significantly greater from 2005–06, reflecting an increase in production and an improved valuation of undercover production for nurseries and cut flowers.
Volume of water applied
16 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation.
17 This information is sourced from the ABS Agriculture Census for 2000–01 and 2005–06 and from the ABS Agricultural Survey for all other years, except for 2002–03 when ABS conducted the Water Survey, Agriculture. As explained above in paragraphs 9–12, there was a change to the register of businesses used for these collections, which may have some impact on the estimates. For further information refer to the Explanatory Notes for Water Use on Australian Farms (cat. no. 4618.0).
18 Volume of water applied is expressed in megalitres. A megalitre is one million litres, or one thousand kilolitres.
AGRICULTURAL COMMODITY GROUPS
19 GVIAP is calculated for each irrigated 'commodity group' produced by agricultural businesses. That is, GVIAP is generally not calculated for individual commodities, but rather for groups of 'like' commodities according to the irrigated commodity groupings on the ABS Agricultural Census/Survey form. The irrigated commodity groups vary slightly on the survey form from year to year. The commodity groups presented in this publication are:
- cereals for grain and seed
- total hay production
- cereals for hay
- pastures cut for hay or silage (including lucerne for hay)
- pastures for seed production
- sugar cane
- other broadacre crops (see Appendix 1 for detail)
- fruit trees, nut trees, plantation or berry fruits (excluding grapes)
- vegetables for human consumption and seed
- nurseries, cut flowers and cultivated turf
- dairy production
- production from meat cattle
- production from sheep and other livestock (excluding cattle)
20 Note that the ABS Agricultural Census/Survey collects area and production data for a wide range of individual commodities within the irrigated commodity groups displayed in the list above. Appendix 1 provides more detail on which commodities make up these groupings.
21 There were differences in data items (for production, area grown and area irrigated) collected on the Agricultural Census/Surveys in different years. This affects the availability of some commodities for some years. Appendix 2 outlines some of the specific differences and how they have been treated in compiling the estimates for this publication, thereby enabling the production of GVIAP estimates for each of the commodity groups displayed in the list above for every year from 2000–01 to 2008–09.
22 Note that in all GVAP tables, “Total GVAP” includes production from pigs, poultry, eggs, honey (2001 only) and beeswax (2001 only), for completeness. These commodities are not included in GVIAP estimates at all because irrigation is not applicable to them.
METHOD USED TO CALCULATE GVIAP
23 The statistics presented here calculate GVIAP at the unit (farm) level, using three simple rules:
a. If the area of the commodity group irrigated = the total area of the commodity group grown/sown, then GVIAP = GVAP for that commodity group;
b. If the area of the commodity group irrigated is greater than zero but less than the total area of the commodity group grown/sown, then a “yield formula” is applied, with a “yield difference factor”, to calculate GVIAP for the irrigated area of the commodity group;
c. If the area of the commodity group irrigated = 0, then GVIAP = 0 for that commodity group.
24 These three rules apply to most commodities; however, there are some exceptions, as outlined below in paragraph 29. It is important to note that the majority of cases follow rules a and c; that is, the commodity group on a particular farm is either 100% irrigated or not irrigated at all. For example, in 2004–05, 90% of total GVAP came from commodity groups that were totally irrigated or not irrigated at all. Therefore, only 10% of GVAP had to be "split" into either "irrigated" or "non-irrigated" using the "yield formula" (described below). The yield formula is explained in full in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006).
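A minimal sketch of how the three rules in paragraph 23 could be applied at the farm level is shown below. The function and parameter names are illustrative assumptions rather than part of the ABS methodology, and the yield formula used for rule b is sketched separately after paragraph 25.

```python
def farm_commodity_gviap(gvap, area_grown, area_irrigated, partial_split):
    """Per-farm GVIAP for one commodity group under the three rules (illustrative).

    gvap           : gross value of the commodity group on this farm ($)
    area_grown     : total area of the commodity group grown/sown (ha)
    area_irrigated : area of the commodity group irrigated (ha)
    partial_split  : callable implementing the yield formula for rule b
    """
    if area_irrigated <= 0:
        return 0.0                  # rule c: no irrigation, so no GVIAP
    if area_irrigated >= area_grown:
        return gvap                 # rule a: fully irrigated, all value counts
    return partial_split(gvap, area_grown, area_irrigated)  # rule b: partly irrigated
```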
25 Outlined here is the yield formula referred to in paragraph 23, where:
Ai = area of the commodity under irrigation (ha)
Yi = estimated irrigated production for the commodity (t or kg)
P = unit price of production for the commodity ($ per t or kg)
Q = total quantity of the commodity produced (t or kg)
Ad = area of the commodity that is not irrigated (ha)
Ydiff = yield difference factor, i.e. estimated ratio of irrigated to non-irrigated yield for the commodity produced
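From these definitions, and on the stated assumption that irrigated land yields Ydiff times as much per hectare as non-irrigated land, total production Q splits between the irrigated and non-irrigated areas in the ratio Ydiff × Ai to Ad, giving estimated irrigated production Yi = Q × (Ydiff × Ai) / (Ydiff × Ai + Ad) and GVIAP = P × Yi. The sketch below implements this reconstruction; it is an interpretation of the definitions above, not the published ABS code.

```python
def yield_formula_gviap(q_total, price, area_irrigated, area_dry, ydiff):
    """Reconstructed yield formula for partly irrigated commodity groups.

    q_total        : Q  - total quantity of the commodity produced (t or kg)
    price          : P  - unit price of production ($ per t or kg)
    area_irrigated : Ai - area of the commodity under irrigation (ha)
    area_dry       : Ad - area of the commodity that is not irrigated (ha)
    ydiff          : Ydiff - ratio of irrigated to non-irrigated yield
    """
    weighted_irrigated_area = ydiff * area_irrigated
    # Irrigated production Yi: total production split in proportion
    # Ydiff*Ai : Ad between the irrigated and non-irrigated areas.
    yi = q_total * weighted_irrigated_area / (weighted_irrigated_area + area_dry)
    return price * yi

# Example: 1,000 t of a crop at $250/t, 60 ha irrigated, 40 ha dryland, Ydiff = 2
# -> irrigated share = 120 / (120 + 40) = 0.75, so GVIAP = 750 t x $250 = $187,500
print(yield_formula_gviap(1000, 250, 60, 40, 2))
```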
Yield difference factors
26 Yield difference factors are the estimated ratio of irrigated to non-irrigated yield for a given commodity group. They are calculated for a particular commodity group by taking the yield (production per hectare sown/grown) of all farms that fully irrigated the commodity group and dividing this "irrigated" yield by the yield of all farms that did not irrigate the commodity group. The yield difference factors used here were determined by analysing data from 2000–01 to 2004–05 and are reported for each commodity group in Appendix 1 of the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). It is anticipated that the yield difference factors will be reviewed following release of data from the 2010–11 Agriculture Census.
27 In this report "yield" is defined as the production of the commodity (in tonnes, kilograms or as a dollar value) per area grown/sown (in hectares).
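As an illustration of the calculation described in paragraphs 26 and 27, a yield difference factor could be derived by comparing the aggregate yield of farms that fully irrigated a commodity group with the aggregate yield of farms that did not irrigate it at all. The function below is a hypothetical sketch, not the ABS estimation code.

```python
def yield_difference_factor(irrigated_farms, dryland_farms):
    """Ratio of irrigated to non-irrigated yield (production per hectare grown).

    Each argument is a non-empty list of (production, area_grown) tuples:
    irrigated_farms for farms that fully irrigated the commodity group,
    dryland_farms for farms that did not irrigate it at all.
    """
    irrigated_yield = sum(p for p, _ in irrigated_farms) / sum(a for _, a in irrigated_farms)
    dryland_yield = sum(p for p, _ in dryland_farms) / sum(a for _, a in dryland_farms)
    return irrigated_yield / dryland_yield
```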
Commodity groups for which the yield formula is used
28 The GVIAP for the following commodities have been calculated using the yield formula, with varying yield differences:
Cereals for grain/seed - yield formula with yield difference of 2
Cereals for hay - yield formula with yield difference of 1.5
Pastures for hay - yield formula with yield difference of 2
Pastures for seed - yield formula with yield difference of 2
Sugar cane - yield formula with yield difference of 1.3 (except for 2008–09 - see paragraphs 29 and 31 below)
Other broadacre crops - yield formula with yield difference of 2
Fruit and nuts - yield formula with yield difference of 2
Grapes - yield formula with yield difference of 1.2 (except for 2008–09 - see paragraphs 29 and 31 below)
Vegetables for human consumption and seed - yield formula with yield difference of 1
Nurseries, cut flowers and cultivated turf - yield formula with yield difference of 1
Note: a yield difference of 1 implies no difference in yield between irrigated and non-irrigated production.
29 However not all agricultural commodity groups can be satisfactorily calculated using this formula, so the GVIAP for a number of commodity groups has been calculated using other methods:
Rice - assume all rice production is irrigated.
Cotton - production formula (see paragraph 31).
Grapes - production formula (2008–09 only - see paragraph 31).
Sugar - production formula (2008–09 only - see paragraph 31).
Dairy production - assume that if there is any irrigation of grazing land on a farm that is involved in any dairy production, then all dairy production on that farm is classified as irrigated.
Meat cattle, sheep and other livestock - take the average of two other methods:
1. calculate the ratio of the area of irrigated grazing land to the total area of grazing land and multiply this ratio by the total production for the commodity group (this is referred to as the “area formula”);
2. if the farm has any irrigation of grazing land then assume that all livestock production on the farm is irrigated.
30 For more information on the “area formula” for calculating GVIAP please refer to the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006).
31 In 2008–09, cotton, grapes and sugar were the only commodities for which the production formula was used to estimate GVIAP. This formula is based on the ratio of irrigated production (kg or tonnes) to total production (kg or tonnes) and is outlined in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production (cat. no. 4610.0.55.006). The production formula is used for these three commodities because in 2008–09 they were the only commodities for which actual irrigated production was collected on the ABS agricultural censuses and surveys. Note that prior to 2008–09, cotton was the only commodity for which irrigated production data was collected, except in 2007–08, when there were no commodities for which this data was collected.
Qi = irrigated production of cotton (kg)
Qd = non-irrigated production of cotton (kg)
P = unit price of production for cotton ($ per kg)
Qt = total quantity of cotton produced (kg) = Qi + Qd
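Using the variables just defined, a minimal sketch of the production-formula approach is given below; the precise form used by the ABS is in cat. no. 4610.0.55.006, so the expression used here (valuing the irrigated share of production) is an assumption based on the variable definitions.

    # Sketch of the production formula for cotton (and, in 2008-09, grapes and
    # sugar): GVIAP is taken as the share of total gross value corresponding
    # to the irrigated share of physical production.  Assumed form only.
    def gviap_production_formula(price, q_irrigated, q_dry):
        q_total = q_irrigated + q_dry             # Qt = Qi + Qd
        if q_total == 0:
            return 0.0
        gvap = price * q_total
        return gvap * q_irrigated / q_total       # equivalently, price * Qi

    print(gviap_production_formula(2.0, 3000.0, 1000.0))   # -> 6000.0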
32 Most of the irrigated commodity groups included in these tables are irrigated simply by the application of water directly on to the commodity itself, or the soil in which it is grown. The exception relates to livestock, which obviously includes dairy. For example, the GVIAP of "dairy" simply refers to all dairy production from dairy cattle that grazed on irrigated pastures or crops. Estimates of GVIAP for dairy must be used with caution, because in this case the irrigation is not simply applied directly to the commodity, rather it is applied to a pasture or crop which is then eaten by the animal from which the commodity is derived (milk). Therefore, for dairy production, the true net contribution of irrigation (i.e. the value added by irrigation, or the difference between irrigated and non-irrigated production) will be much lower than the total irrigation-assisted production (the GVIAP estimate).
33 The difference between (a) the net contribution of irrigation to production and (b) the GVIAP estimate, is probably greater for livestock grazing on irrigated crops/pastures than for commodity groups where irrigation is applied directly to the crops or pastures.
34 Similarly, estimates of GVIAP for all other livestock (meat cattle, sheep and other livestock) must be treated with caution, because as for dairy production, the issues around irrigation not being directly applied to the commodity also apply to these commodity groups.
35 The estimates presented in this product are underpinned by estimates of the Value of Agricultural Commodities Produced (VACP), published annually in the ABS publication Value of Agricultural Commodities Produced (cat. no. 7503.0). VACP estimates (referred to as GVAP in this product) are calculated by multiplying the wholesale price by the quantity of agricultural commodities produced. The price used in this calculation is the average unit value of a given commodity realised in the marketplace. Price information for livestock slaughterings and wool is obtained from ABS collections. Price information for other commodities is obtained from non-ABS sources, including marketing authorities and industry sources. It is important to note that prices are state-based average unit values.
36 Sources of price data and the costs of marketing these commodities vary considerably between states and commodities. Where a statutory authority handles marketing of the whole or a portion of a product, data are usually obtained from this source. Information is also obtained from marketing reports, wholesalers, brokers and auctioneers. For all commodities, values are in respect of production during the year (or season) irrespective of when payments were made. For that portion of production not marketed (e.g. hay grown on farm for own use, milk used in farm household, etc.), estimates are made from the best available information and, in general, are valued on a local value basis.
37 It should be noted that the estimates for GVIAP are presented in current prices; that is, estimates are valued at the commodity prices of the period to which the observation relates. Therefore changes between the years shown in these tables reflect the effects of price change.
MURRAY-DARLING BASIN (MDB)
38 The gross value of irrigated agricultural production for the MDB is presented for 2000–01 and 2005–06 through to 2008–09. The 2000–01 and 2005–06 data are available because they are sourced from the Agricultural Census which supports finer regional estimates, while the 2006–07, 2007–08 and 2008–09 data are able to be produced because of the improved register of agricultural businesses (described in paragraphs 9–12).
39 The data for the Murray-Darling Basin (MDB) presented in this publication for 2000–01 were derived from a concordance of Statistical Local Area (SLA) regions falling mostly within the MDB. The data for the MDB for 2006–07, 2007–08 and 2008–09 were derived from a concordance of National Resource Management (NRM) regions falling mostly within the MDB. The MDB data for 2005–06 were derived from geo-coded data. As a result, there will be small differences in MDB data across years and this should be taken into consideration when comparisons are made between years.
COMPARABILITY WITH PREVIOUSLY PUBLISHED ESTIMATES
40 Because of this new methodology, the experimental estimates presented here are not directly comparable with other estimates of GVIAP released by ABS in Water Account, Australia, 2000–01 (cat. no. 4610), Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0), Water Account, Australia, 2004–05 (cat. no. 4610) and Water and the Murray-Darling Basin, A Statistical Profile 2000–01 to 2005–06 (cat. no. 4610.0.55.007). However, the GVIAP estimates published in the Water Account Australia 2008–09 are the same as those published in this publication.
41 As described above, 'Volume of water applied' refers to the volume of water applied to crops and pastures through irrigation. The estimates of 'Volume of water applied' presented in this publication are sourced directly from ABS Agricultural Censuses and Surveys and are the same as those presented in Water Use On Australian Farms (cat.no. 4618.0). Note that these volumes are different to the estimates of agricultural water consumption published in the 2008–09 Water Account Australia (cat. no. 4610.0) as the Water Account Australia estimates focus on total agricultural consumption (i.e. irrigation plus other agricultural water uses) and are compiled using multiple data sources (not just ABS Agricultural Censuses and Surveys).
42 The differences between the methods used to calculate the GVIAP estimates previously released and the method used to produce the estimates presented in this product, are explained in detail in the information paper Methods of estimating the Gross Value of Irrigated Agricultural Production, 2008 (cat. no. 4610.0.55.006).
43 In particular some commodity groups will show significant differences with what was previously published. These commodity groups include dairy production, meat production and sheep and other livestock production.
44 The main reason for these differences is that previous methods of calculating GVIAP estimates for these commodity groups were based on businesses being classified to a particular industry class (according to the industry classification ANZSIC), however the new method is based on activity. For example, for dairy production, previous methods of calculating GVIAP only considered dairy production from dairy farms which were categorised as such according to ANZSIC. The new method defines dairy production, in terms of GVIAP, as “all dairy production on farms on which any grazing land (pastures or crops used for grazing) has been irrigated”. Therefore, if there is any irrigation of grazing land on a farm that is involved in any dairy production (regardless of the ANZSIC classification of that farm), then all dairy production on that particular farm is classified as irrigated.
45 Where figures for individual states or territories have been suppressed for reasons of confidentiality, they have been included in relevant totals.
RELIABILITY OF THE ESTIMATES
46 The experimental estimates in this product are derived from estimates collected in surveys and censuses, and are subject to sampling and non-sampling error.
47 The estimates for gross value of irrigated agricultural production are based on information obtained from respondents to the ABS Agricultural Censuses and Surveys. These estimates are therefore subject to sampling variability (even in the case of the censuses, because the response rate is less than 100%); that is, they may differ from the figures that would have been produced if all agricultural businesses had been included in the Agricultural Survey or responded in the Agricultural Census.
48 One measure of the likely difference is given by the standard error (SE) which indicates the extent to which an estimate might have varied by chance because only a sample was taken or received. There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all establishments had been reported for, and about nineteen chances in twenty that the difference will be less than two SEs.
49 In this publication, sampling variability of the estimates is measured by the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it refers. Most national estimates have RSEs less than 10%. For some States and Territories, and for many Natural Resource Management regions with limited production of certain commodities, RSEs are greater than 10%. Estimates that have an estimated relative standard error higher than 10% are flagged with a comment in the publication tables. If a data cell has an RSE of between 10% and 25%, the estimate should be used with caution as it is subject to sampling variability too high for some purposes. For data cells with an RSE between 25% and 50%, the estimate should be used with caution as it is subject to sampling variability too high for most practical purposes. Data cells with an RSE greater than 50% indicate that the sampling variability causes the estimates to be considered too unreliable for general use.
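As a small worked illustration of the RSE bands described above (a sketch, not ABS code):

    # Relative standard error: RSE = 100 * SE / estimate, with the warning
    # bands described above.
    def rse_flag(estimate, standard_error):
        rse = 100.0 * standard_error / estimate
        if rse <= 10:
            return rse, "acceptable"
        if rse <= 25:
            return rse, "use with caution"
        if rse <= 50:
            return rse, "sampling variability too high for most purposes"
        return rse, "too unreliable for general use"

    print(rse_flag(2400.0, 300.0))   # -> (12.5, 'use with caution')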
50 Errors other than those due to sampling may occur because of deficiencies in the list of units from which the sample was selected, non-response, and errors in reporting by providers. Inaccuracies of this kind are referred to as non-sampling error, which may occur in any collection, whether it be a census or a sample. Every effort has been made to reduce non-sampling error to a minimum in the collections by careful design and testing of questionnaires, operating procedures and systems used to compile the statistics.
51 Where figures have been rounded, discrepancies may occur between sums of the component items and totals.
52 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated: without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.
FUTURE DATA RELEASES
53 It is anticipated that ABS will release these estimates on an annual basis.
Agricultural Commodities, Australia (cat. no. 7121.0)
Agricultural Commodities: Small Area Data, Australia (cat.no. 7125.0)
Characteristics of Australia’s Irrigated Farms, 2000–01 to 2003–04 (cat. no. 4623.0)
Methods of estimating the Gross Value of Irrigated Agricultural Production (Information Paper) (cat. no. 4610.0.55.006).
Value of Agricultural Commodities Produced, Australia (cat. no. 7503.0)
Water Account Australia (cat. no. 4610.0)
Water and the Murray-Darling Basin, A Statistical Profile, 2000–01 to 2005–06 (cat. no. 4610.0.55.007)
Water Use on Australian Farms, Australia (cat. no. 4618.0)
Most central banks exist primarily to keep inflation under control by increasing or decreasing the money supply. Using monetary policy, these banks typically adjust the money supply by purchasing or selling government debt with electronic money. Other, unconventional techniques such as quantitative easing are used when these traditional methods fail.
Quantitative easing involves the purchase of financial assets from private sector entities, which enables banks to effectively inject cash into the financial system. The desired effects include lower yields, greater liquidity and increased lending. And ultimately, these effects tend to increase inflation targets and support economic growth.
How Quantitative Easing Works
The controversy surrounding quantitative easing has led to many misunderstandings. While quantitative easing generally involves central banks purchasing financial assets with newly minted currency, that's like saying driving a car generally involves pressing the gas pedal. In reality, there are many different types of quantitative easing designed for different purposes.
For example, some of the U.S. Federal Reserve's actions have been designed to shift bank balance sheets from long-term to short-term assets. The goal of this action was to help increase lending and liquidity, but the process did not result in any new money being printed. In contrast, many other programs have been designed to directly inject liquidity through outright purchases.
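As a rough illustration of the mechanics of an outright purchase (a simplified sketch, not a model of any particular central bank's accounts): when the central bank buys a bond from a commercial bank, the bond moves onto the central bank's balance sheet and the commercial bank is credited with newly created reserves. The numbers and account names below are illustrative assumptions only.

    # Simplified double-entry sketch of an outright asset purchase under QE.
    central_bank = {"securities": 0.0, "reserves_owed_to_banks": 0.0}
    commercial_bank = {"securities": 500.0, "reserves_at_central_bank": 50.0}

    def outright_purchase(amount):
        """Central bank buys 'amount' of securities, paying with new reserves."""
        commercial_bank["securities"] -= amount
        commercial_bank["reserves_at_central_bank"] += amount   # new liquidity
        central_bank["securities"] += amount
        central_bank["reserves_owed_to_banks"] += amount        # new central-bank money

    outright_purchase(100.0)
    print(central_bank)     # {'securities': 100.0, 'reserves_owed_to_banks': 100.0}
    print(commercial_bank)  # {'securities': 400.0, 'reserves_at_central_bank': 150.0}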
Here are some other examples of non-traditional QE programs:
- Term Deposits - A central bank can pay commercial banks interest to deposit money at its facilities and then use that money to purchase financial assets, remaining cash-neutral overall because it is simply moving cash from one party to another.
- Reserve Rates - A central bank can raise reserve requirements, which would force banks to hold a higher portion of their funds at its deposit facility, while it injects an offsetting amount of liquidity by purchasing assets.
Effects of Quantitative Easing
Quantitative easing causes security prices to rise, lowering yields and making those securities less attractive as an investment. As a result, investors tend to seek better returns by purchasing corporate debt and equities, which lowers those businesses' financial costs, boosts investment and leads to a better economic outlook over the long-term.
At least that's the theory. In practice, the effects of quantitative easing have been mixed at best when employed in various countries around the world. For instance, banks tend to hoard cash during times of crisis, which can negate the intended effects of quantitative easing. Other programs have seen only a temporary boost that quickly disappears.
Japan's use of quantitative easing in the early 2000s was seen as largely unsuccessful, with banks refusing to lend money despite the extra liquidity. However, the International Monetary Fund (IMF) has suggested that such actions were more successful recently, increasing market confidence and helping the G-7 economies bottom out of recession during Q2 2009.
QE1, QE2 and QE3 in the U.S.
The United States began a well-publicized period of quantitative easing in 2008 at the beginning of the financial crisis. After risk-free short-term nominal interest rates neared zero, central banks were unable to use traditional monetary policy to affect the markets. The securities covered by these actions ranged from Treasury bonds to mortgage-backed securities.
A second round of quantitative easing was initiated in November of 2010 with the purchase of $600 billion in Treasury securities by the end of the second quarter of 2011. Since then, the U.S. Federal Reserve has indicated that it stands ready to intervene further, if necessary, which has kept the markets prepared for a third round, called QE3. | http://internationalinvest.about.com/od/gettingstarted/a/What-Is-Quantitative-Easing.htm | 13 |
(2002-06-23)   The Basics:
What is a derivative?
Well, let me give you the traditional approach first.
This will be complemented by an abstract glimpse of the bigger picture,
which is more closely related to the way people actually use
derivatives, once they are familiar with them.
For a given real-valued function f of a real variable,
consider the slope (m) of its graph at some point.
That is to say, some straight line
of equation y = mx+b (for some irrelevant constant b)
is tangent to the graph of f at that point.
In some definite sense, mx+b is the best linear approximation to f(x) when
x is close to the point under consideration...
The tangent line at point x may be defined as the limit of
a secant line intersecting a curve at point x and point x+h,
when h tends to 0.
When the curve is the graph of f, the slope of such a secant is equal to
[ f(x+h)-f(x) ] / h,
and the derivative (m) at point x is therefore the limit of that quantity, as h tends to 0.
The above limit may or may not exist, so the derivative of f at point x may or
may not be defined. We'll skip that discussion.
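The limit defining the derivative can be illustrated numerically: the secant slope [ f(x+h) − f(x) ] / h approaches the derivative as h shrinks. A minimal sketch (the function and point are arbitrary choices):

    # Numerical illustration of the limit defining the derivative:
    # the secant slope [f(x+h) - f(x)] / h tends to f'(x) as h tends to 0.
    def secant_slope(f, x, h):
        return (f(x + h) - f(x)) / h

    f = lambda x: x**3          # f'(x) = 3x^2, so f'(2) = 12
    for h in (0.1, 0.01, 0.001):
        print(h, secant_slope(f, 2.0, h))
    # h = 0.1   -> 12.61
    # h = 0.01  -> 12.0601
    # h = 0.001 -> 12.006001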
The popular trivia question concerning the choice of the letter "m" to denote the slope of a straight line (in most US textbooks) is discussed elsewhere.
Way beyond this introductory scope, we would remark that
the quantity we called h is of a vectorial nature
(think of a function of several variables),
so the derivative at point x is in fact a tensor
whose components are called partial derivatives.
Also beyond the scope of this article are functions of a complex variable,
in which case the above quantity h is simply a complex number,
and the above division by h remains thus purely numerical (albeit complex).
However, a complex number h (a point on the plane) may approach zero in a variety of ways
that are unknown in the realm of real numbers (points on the line).
This happens to severely restrict the class of functions
for which the above limit exists.
Actually, the only functions of a complex variable which have a derivative
are the so-called analytic functions
[essentially: the convergent sums of power series].
The above is the usual way the concept of derivative is introduced.
This traditional presentation may be quite a hurdle to overcome,
when given to someone who may not yet be thoroughly familar with
functions and/or limits.
Having defined the derivative of f at point x,
we define the derivative function
g = f ' = D( f )
of the function f,
as the function g whose value g(x) at point x is the
derivative of f at point x.
We could then prove, one by one, the algebraic rules listed
in the first lines of the following table.
These simple rules allow most derivatives to
be easily computed from the derivatives of just a few elementary functions,
like those tabulated below
(the above theoretical definition is thus rarely used in practice):
u and v are functions of x, whereas a, b and n are constants.

  Function  f                                    Derivative  D( f )  =  f '
  Linearity:      a u + b v                      a u' + b v'
  Product:        u × v                          u' × v + u × v'
  Quotient:       u / v                          [ u' × v − u × v' ] / v²
  Chain rule:     u(v)                           v' × u'(v)
  Inversion:      v = u^(-1)                     1 / u'(v)
                  x^n                            n x^(n-1)
                  ln |x|                         1/x = x^(-1)
  Exponentials:   e^x                            e^x
                  a^x                            ln(a) a^x
                  sin x                          cos x
                  cos x                          − sin x
                  tg x                           1 + (tg x)²
                  ln | cos x |                   − tg x
                  sh x                           ch x
                  ch x                           sh x
                  th x                           1 − (th x)²
                  ln ( ch x )                    th x
                  arcsin x                       1 / √(1 − x²)
                  arccos x = π/2 − arcsin x      −1 / √(1 − x²)
                  arctg x                        1 / (1 + x²)
                  argsh x                        1 / √(1 + x²)
                  argch x   (for |x| > 1)        1 / √(x² − 1)
                  argth x   (for |x| < 1)        1 / (1 − x²)
                  gd x = 2 arctg e^x − π/2       1 / ch x
                  gd^(-1) x = ln tg (x/2 + π/4)  1 / cos x
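A few entries of this table can be checked mechanically with a computer algebra system; a small sketch using the sympy library:

    # Spot-check some table entries with symbolic differentiation.
    import sympy as sp

    x = sp.symbols('x')
    a = sp.symbols('a', positive=True)
    checks = [
        (a**x,               sp.log(a)*a**x),        # a^x  -> ln(a) a^x
        (sp.log(sp.cosh(x)), sp.tanh(x)),            # ln(ch x) -> th x
        (sp.atan(x),         1/(1 + x**2)),
        (sp.asin(x),         1/sp.sqrt(1 - x**2)),
    ]
    for f, expected in checks:
        assert sp.simplify(sp.diff(f, x) - expected) == 0
    print("table entries verified")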
One abstract approach to the derivative concept
would be to bypass (at first) the relevance to slopes,
and study the properties of some derivative operator D,
in a linear space of abstract functions
endowed with an internal product (´),
where D is only known to satisfy the following two axioms
(which we may call linearity and Leibniz' law,
as in the above table):
D( a u + b v )   =   a D(u)  +  b D(v)
D( u × v )   =   D(u) × v  +  u × D(v)
For example, the product rule imposes that D(1) is zero [in the argument of D,
we do not distinguish between
a function and its value at point x, so that "1" denotes the function whose value
is the number 1 at any point x].
The linearity then imposes that D(a) is zero, for any constant a.
Repeated applications of the product rule give the derivative of x raised to the
power of any integer, so we obtain (by linearity) the correct derivative
for any polynomial.
(The two rules may also be used to prove the chain rule for polynomials.)
A function that has a derivative at point x (defined as a limit)
also has arbitrarily close polynomial approximations about x.
We could use this fact to show that both definitions of the D operator coincide,
whenever both are valid
(if we only assume D to be continuous, in a sense which we won't make more precise here).
This abstract approach is mostly for educational purposes at the elementary level.
For theoretical purposes (at the research level)
the abstract viewpoint which has proven to be the most fruitful is totally different:
In the Theory of Distributions,
a pointwise product like the
above (×) is not even defined, whereas everything revolves
around the so-called convolution product
(*), which has the following strange property concerning
the operator D:
D( u * v )
= D(u) * v
= u * D(v)
To differentiate a convolution product (u*v),
differentiate either factor!
What's the "Fundamental Theorem of Calculus" ?
Once known as Barrow's rule, it states that,
if f is the derivative of F, then:
F(b) − F(a)   =   ∫ab f (x) dx        [ the integral of f from a to b ]
In this, if f and F are real-valued functions of a real variable,
the right-hand side represents the area between the curve
y = f (x) and the x-axis (y = 0),
counting positively what's above the axis and negatively
[negative area!] what's below it.
Any function F whose derivative is equal to f
is called a primitive of f
(all such primitives simply differ by an arbitrary additive constant,
often called constant of integration).
A primitive function is often called an indefinite integral
(as opposed to a definite integral which is a mere number,
not a function, usually obtained as the difference of the values of the
primitive at two different points).
The usual indefinite notation is:
∫ f (x) dx
At a more abstract level, we may also call "Fundamental Theorem of Calculus" the
generalization of the above expressed in the language of differential forms,
which is also known as Stokes' Theorem.
Fundamental Theorem of Calculus
(Theorem of the Day #2)
by Robin Whitty
Example involving complex exponentials
What is the indefinite integral of cos(2x) e 3x ?
That function is the real part of a
complex function of a real variable:
(cos 2x + i sin 2x) e^(3x)   =   e^(2ix) e^(3x)   =   e^((3+2i) x)
Since the derivative of exp(a x) / a is
exp(a x) we obtain, conversely:
∫ e^((3+2i) x) dx   =   e^((3+2i) x) / (3+2i)   =   e^(3x) (cos 2x + i sin 2x) (3−2i) / 13
The relation we were after is obtained as the
real part of the above:
∫ cos(2x) e^(3x) dx   =   (3 cos 2x + 2 sin 2x) e^(3x) / 13
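This result is easy to confirm with symbolic software (a quick check using sympy):

    # Verify the antiderivative of cos(2x) exp(3x) found via complex exponentials.
    import sympy as sp

    x = sp.symbols('x', real=True)
    candidate = (3*sp.cos(2*x) + 2*sp.sin(2*x)) * sp.exp(3*x) / 13
    assert sp.simplify(sp.diff(candidate, x) - sp.cos(2*x)*sp.exp(3*x)) == 0
    print("d/dx of the candidate equals cos(2x) exp(3x)")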
Integration by parts
A useful technique to reduce the computation of one integral to another.
This method was first published in 1715 by
Brook Taylor (1685-1731).
The product rule states that the derivative
(uv)' of a product of two
functions is u'v+uv'.
When the integral of some function f is sought,
integration by parts is a minor art form which attempts to use this
backwards, by writing f as a product u'v of two functions, one of which
(u') has a known integral (u). In which case:
∫ f dx   =   ∫ u'v dx   =   u v  −  ∫ u v' dx
This reduces the computation of the integral of f to that of
uv'. The tricky part, of course, is to guess what choice of
u would make the latter simpler...
The choice u' = 1 (i.e.,
u = x and v = f ) is occasionally useful. Example:
∫ ln(x) dx   =   x ln(x)  −  ∫ (x/x) dx   =   x ln(x)  −  x
Another classical example pertains to Laplace transforms
( p > 0 )
and/or Heaviside's operational calculus, where all integrals are
understood to be definite integrals
from 0 to +∞ (with a subexponential function f ):
∫ f '(t) exp(-pt) dt   =   − f (0)  +  p ∫ f (t) exp(-pt) dt
Integration by parts
What is the perimeter of a parabolic curve,
given the base length and height of [the] parabola?
Choose the coordinate axes so that your parabola has equation y = x2/2p
for some constant parameter p.
The length element ds along the parabola is such that
(ds)² = (dx)² + (dy)², or
ds/dx  =  √(1 + (dy/dx)²)  =  √(1 + x²/p²).
The length s of the arc of parabola from the apex (0,0) to the point
(x, y = x2/2p) is simply the following integral of this
(in which we may eliminate x or p,
using 2py = x2 ).
s   =   (x/2) √(1 + x²/p²)  +  (p/2) ln( √(1 + x²/p²) + x/p )
    =   y √(1 + p/(2y))  +  (p/2) ln( √(1 + 2y/p) + √(2y/p) )
    =   (x/2) √(1 + (2y/x)²)  +  (x²/(4y)) ln( √(1 + (2y/x)²) + 2y/x )
For a symmetrical arc extending on both sides of the parabola's axis, the
length is 2s (twice the above).
If needed, the whole "perimeter" is 2s+2x.
What's the top height of a (parabolic) bridge?
If a curved bridge is a foot longer than its mile-long horizontal span...
Let's express all distances in feet (a mile is 5280 ft).
Using the notations of the previous article, 2x = 5280,
2s = 5281, u = x/p = 2y/x = y/1320
s / x   =   5281 / 5280   =   ½ √(1 + u²)  +  (1/(2u)) ln( √(1 + u²) + u )
For small values of u, the right-hand side is roughly  1 + u²/6.
Solving for u the equation thus simplified, we obtain  u ≈ √(6/5280) ≈ 0.03371.
The height y is thus roughly equal to that quantity multiplied by 1320 ft, or about 44.4972 ft.
This approximation is valid for any type of smooth enough curve.
It can be refined for the parabolic case using successive approximations to solve
for u the above equation. This yields u = 0.0337128658566...
which exceeds the above by about 85.2 ppm (ppm = parts per million) for
a final result of about 44.5010 ft. The
previous solution would have satisfied any engineer before the computer era.
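The successive-approximation step can also be reproduced numerically; the short sketch below solves the exact equation s/x = 5281/5280 for u by bisection and recovers the figures quoted above.

    # Solve  (1/2)*sqrt(1+u^2) + (1/(2u))*ln(sqrt(1+u^2)+u) = 5281/5280  for u,
    # then the parabolic bridge height is y = 1320*u feet.
    from math import sqrt, log

    def s_over_x(u):
        r = sqrt(1 + u*u)
        return 0.5*r + log(r + u)/(2*u)

    lo, hi = 1e-6, 1.0
    for _ in range(60):                      # bisection
        mid = 0.5*(lo + hi)
        if s_over_x(mid) < 5281/5280:
            lo = mid
        else:
            hi = mid
    u = 0.5*(lo + hi)
    print(u, 1320*u)    # about 0.0337129  and  44.50 ft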
(2008-03-27; e-mail) Length of a sagging horizontal cable:
How long is a cable which spans 28 m horizontally and sags 300 mm?
Answer : Surprisingly, just about 28.00857 m...
Under its own weight, a uniform cable without any rigidity
(a "chain") would actually assume the shape of a
In a coordinate system with a vertical y-axis and centered on its apex,
the catenary has the following cartesian equation:
y/a   =   ch (x/a) − 1   =   ½ ( e^(x/a) − 2 + e^(−x/a) )   =   2 sh²(x/2a)
Measured from the apex at x = y = 0,
the arclength s along the cable is:
s = a sh (x/a)
Those formulas are not easy
to work with, unless the parameter a is given.
For example, in the case at hand (a 28 m span with a 0.3 m sag)
all we know is:
x = 14
y = 0.3
So, we must solve for a (numerically) the transcendental equation:
0.3 / a   =   2 sh²(7/a)
This yields  a = 326.716654425...  and therefore  2s  =  2a sh(14/a)  ≈  28.00857 m.
Thus, an 8.57 mm slack produces a 30 cm sag
for a 28 m span.
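The numerical step can be reproduced with a few lines of code (a sketch using simple bisection):

    # Solve  0.3/a = 2*sinh(7/a)^2  for the catenary parameter a, then compute
    # the total arc length 2s = 2*a*sinh(14/a) for the 28 m span.
    from math import sinh

    def gap(a):
        return 0.3/a - 2*sinh(7/a)**2       # zero at the desired a

    lo, hi = 100.0, 1000.0                   # gap(lo) < 0 < gap(hi)
    for _ in range(80):
        mid = 0.5*(lo + hi)
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    a = 0.5*(lo + hi)
    print(a, 2*a*sinh(14/a))   # about 326.7167  and  28.00857 m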
In similar cases, the parameter a is also large
(it's equal to the radius of curvature at the curve's apex).
So, we may find a good approximation by equating the sh function in the relevant transcendental equation to its (small) argument:
y/a   =   2 sh²(x/2a)   ≈   x²/(2a²)        so that        a  ≈  x²/(2y)
whereby   s  =  a sh(x/a)   ≈   x ( 1 + x²/(6a²) )   ≈   x ( 1 + 2y²/(3x²) )
This gives   2s   ≈   2x ( 1 + (8/3)(y/2x)² )   =   28.0085714...  in the above case.
This is indeed a good approximation to the aforementioned exact result.
Parabolic Approximation :
If we plug the values x = 14 and y = 0.3
in the above formula for the exact length of a
parabolic arc, we obtain:
2s = 28.0085690686...
Circular Approximation :
A thin circular arc of width 2x and of height y has a length
2 R arcsin ( x / R )   =   28.00857064...        where   R  =  (x² + y²) / 2y
In fact, all smooth enough approximations to a flat enough
catenary will have a comparable precision, because this is what results from equating
a curve to its osculating circle at the lowest point.
The approximative expression we derived above in the case of the catenary
is indeed quite general:
2 x [ 1  +  (8/3) (y/2x)² ]
Find the ratio, over one revolution, of the distance moved by a wheel rolling
on a flat surface to the distance traced out by a point on its circumference.
As a wheel of unit radius rolls (on the x-axis),
the trajectory of a point on its circumference is a cycloid,
whose parametric equation is not difficult to establish:
x = t - sin(t)
y = 1 - cos(t)
In this, the parameter t is the abscissa [x-coordinate] of the center of the wheel.
In the first revolution of the wheel (one arch of the cycloid),
t goes from 0 to 2π.
The length of one full arch of a cycloid ("cycloidal arch")
was first worked out in the 17th century by
Evangelista Torricelli (1608-1647), just before the advent of the calculus.
Let's do it again with modern tools:
Calling s the curvilinear abscissa (the length along the curve), we have:
(dx)² + (dy)²   =   [ (1 − cos t)² + (sin t)² ] (dt)²
(ds/dt)²   =   2 − 2 cos(t)   =   4 sin²(t/2)
so, if 0 ≤ t ≤ 2π :       ds/dt  =  2 sin(t/2)  ≥  0
The length of the whole arch is the integral of this when t goes
from 0 to 2π
and it is therefore equal to 8,
[since the indefinite integral is -4 cos(t/2)].
On the other hand, the length of the trajectory of the wheel's center
(a straight line) is clearly 2π
(the circumference of the wheel).
In other words, the trajectory of a point on the circumference
is 4/π times as long as the trajectory of the center,
for any whole number of revolutions (that's about 27.324% longer, if you prefer).
The ratio you asked for is the reciprocal of that,
namely π/4 (which is about 0.7853981633974...),
the ratio of the circumference of the wheel to the length of the cycloidal arch.
However, the result is best memorized as:
"The length of a cycloidal arch is 4 times the diameter of the wheel."
(from Schenectady, NY. 2003-04-07; e-mail)
What is the [indefinite] integral of  (tan x)^(1/3) dx ?
An obvious change of variable is to introduce  y = tan x   [ dy = (1 + y²) dx ],
so the integrand becomes   y^(1/3) dy / (1 + y²).
This suggests a better change of variable, namely:
z  =  y^(2/3)  =  (tan x)^(2/3)    [ dz = (2/3) y^(-1/3) dy ],
which yields  z dz = (2/3) y^(1/3) dy,
and makes the integrand equal to the following rational function of z,
which may be integrated using standard methods
(featuring a decomposition into 3 easy-to-integrate terms):
(3/2) z dz / (1 + z³)   =   ¼ (2z − 1) dz / (1 − z + z²)  +  (3/4) dz / (1 − z + z²)  −  ½ dz / (1 + z)
As  (1 − z + z²)  is equal to the positive quantity  ¼ [ (2z − 1)² + 3 ] ,  we obtain:
∫ (tan x)^(1/3) dx   =   ¼ ln(1 − z + z²)  −  ½ ln(1 + z)  +  (√3/2) arctg( (2z − 1) / √3 )  +  C
where z stands for  | tan x |^(2/3)
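The decomposition and the arctangent term can be checked by differentiating with respect to z (a sketch using sympy); for x in (0, π/2), substituting z = (tan x)^(2/3) then recovers the original integrand.

    # Check that the z-antiderivative differentiates back to (3/2) z / (1 + z^3).
    import sympy as sp

    z = sp.symbols('z', positive=True)
    F = (sp.Rational(1, 4)*sp.log(1 - z + z**2)
         - sp.Rational(1, 2)*sp.log(1 + z)
         + (sp.sqrt(3)/2)*sp.atan((2*z - 1)/sp.sqrt(3)))
    assert sp.simplify(sp.diff(F, z) - sp.Rational(3, 2)*z/(1 + z**3)) == 0
    print("antiderivative verified in the variable z")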
(D. B. of Grand Junction, CO.
A particle moves from right to left along the parabola
y = Ö(-x)
in such a way that its x coordinate decreases at the rate of 8 m/s.
When x = -4, how fast is the change in the
angle of inclination of the line joining the particle to the origin?
We assume all distances are in meters.
When the particle is at a negative abscissa x,
the (negative) slope of the line in question is
y/x = √(−x)/x  and the corresponding (negative) angle is thus:
a  =  arctg( √(−x) / x )
[In this, "arctg" is the "Arctangent" function, which is also spelled "atan"
in US textbooks.]
Therefore, a varies with x at a (negative) rate:
da/dx  =  −1 / ( 2 √(−x) (1−x) )   (rad/m)
If x varies with time as stated, we have dx/dt = -8 m/s, so
the angle a varies with time at a (positive) rate:
da/dt  =  4 / ( √(−x) (1−x) )   (rad/s)
When x is −4 m, the rate da/dt is therefore  4 / (√4 × 5) rad/s  =  0.4 rad/s.
The angle a,
which is always negative, is thus increasing at a rate of 0.4 rad/s when
the particle is 4 meters to the left of the origin (rad/s = radian per second).
What's the area bounded by the following curves?
- y = f(x) = x3 - 9x
- y = g(x) = x + 3
The curves intersect when f(x) = g(x),
which translates into x3 - 10x - 3 = 0.
This cubic equation factors nicely into
(x + 3) (x2 - 3x - 1) = 0 ,
so we're faced with only a quadratic equation...
To find if there's a "trivial" integer which is a root of a polynomial with integer
coefficients [whose leading coefficient is ±1],
observe that such a root would have to divide the constant term.
In the above case, we only had 4 possibilities to try, namely -3, -1, +1, +3.
The abscissas A < B < C of the three intersections are therefore:
A = -3 ,
B = ½ (3 − √13)
C = ½ (3 + √13)
Answering an Ambiguous Question :
The best thing to do for a "figure 8", like the one at hand,
is to compute the (positive) areas of each of the two lobes.
The understanding is that you may add or subtract these,
according to your chosen orientation of the boundary:
- The area of the lobe from A to B (where f(x) is above g(x))
is the integral of f(x)-g(x) = x3 - 10x - 3
[whose primitive is x4/4 - 5x2 - 3x] from A to B,
namely (39√13 − 11)/8, or about 16.202...
- The area of the lobe from B to C (where f(x) is below g(x)) is the integral of
g(x)-f(x) from B to C,
namely (39√13)/4, or about 35.154...
The area we're after is thus either
the sum (±51.356...) or
the difference (±18.952...) of these two,
depending on an ambiguous boundary orientation...
If you don't switch curves at point B,
the algebraic area may also be obtained
as the integral of g(x)-f(x) from A to C
(up to a change of sign).
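These values are easy to confirm symbolically (a sketch using sympy):

    # Lobe areas between y = x^3 - 9x and y = x + 3 (abscissas A, B, C as above).
    import sympy as sp

    x = sp.symbols('x')
    d = (x**3 - 9*x) - (x + 3)                 # f(x) - g(x) = x^3 - 10x - 3
    B = sp.Rational(3, 2) - sp.sqrt(13)/2
    C = sp.Rational(3, 2) + sp.sqrt(13)/2
    lobe1 = sp.integrate(d, (x, -3, B))        # f above g on [A, B]: about 16.202
    lobe2 = sp.integrate(-d, (x, B, C))        # g above f on [B, C]: about 35.154
    print(sp.simplify(lobe1), sp.simplify(lobe2), float(lobe1 + lobe2))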
Signed Planar Areas Consistently Defined
A net planar area is best defined as the
apparent area of a 3D loop.
The area surrounded by a closed planar curve may
be defined in general terms, even when the curve does cross itself
The usual algebraic definition of areas depends on the orientation
(clockwise or counterclockwise)
given to the closed boundary of a simple planar surface.
The area is positive if the boundary runs counterclockwise around the surface,
and negative otherwise (the positive direction of planar angles is always counterclockwise).
In the case of a simple closed curve [without any multiple points]
this is often overlooked, since we normally consider only
whichever orientation of the curve makes the area of its interior positive...
The clear fact that there is such an "interior" bounded by any given closed planar curve
is known as "Jordan's Theorem".
It's a classical example of an "obvious" fact with a rather delicate proof.
However, when the boundary has multiple points (like the center of a "figure 8"),
there may be more than two oriented boundaries for it,
since we may have a choice at a double point:
Either the boundary crosses itself or it does not (in the latter case,
we make a sharp turn,
unless there's an unusual configuration about the intersection).
Not all sets of such choices lead to a complete tracing of the whole loop.
At left is the easy-to-prove "coloring rule" for a true self-crossing
of the boundary, concerning the number of times the ordinary area
is to be counted in the "algebraic area" dicussed here.
It's nice to consider a given oriented closed boundary
as a projection of a three-dimensional loop whose apparent area
is defined as a path integral.
½ ∮ ( x dy − y dx )    =    − ∮ y dx
of Hickory, NC. 2001-04-13/email)
[How do you generalize the method] of variation of parameters when solving
differential equations (DE) of 3rd and higher order?
For example: x''' - 3x'' + 4x = exp(2t)
In memory of … , who taught me this and much more, many years ago.
As shown below,
a high-order linear DE can be reduced
to a system of first-order linear differential equations in several variables.
Such a system is of the form:
X' = dX/dt = AX + B
X is a column vector of n unknown functions of t.
The square matrix A may depend explicitly on t.
B is a vector
of n explicit functions of t, called forcing terms.
The associated homogeneous system
is obtained by letting B = 0.
For a nonconstant A, it may be quite difficult to find n independent solutions
of this homogeneous system (an art form in itself)
but, once you have them, a solution of the forced system
may be obtained by generalizing to n variables the method
(called "variation of parameters") commonly used for a
single variable. Let's do this using only n-dimensional notations:
The fundamental object is the square matrix W formed with the n columns corresponding
to the n independent solutions of the homogeneous system.
Clearly, W itself verifies the homogeneous equation:
W' = AW
It's an interesting exercise in the manipulation of
determinants to prove that det(W)' = tr(A) det(W)
(HINT: Differentiating just the i-th line of W gives a matrix
whose determinant is the product of det(W) by the i-th component
in the diagonal of the matrix A).
Since det(W), the so-called "Wronskian", is thus
solution of a first-order linear DE, it's proportional to the exponential of
some function and is therefore either nonzero everywhere or zero everywhere.
(Also, the Wronskians for different sets of homogeneous solutions must be proportional.)
Homogeneous solutions that are linearly independent at some point are therefore
independent everywhere and W(t) has an inverse for any t.
We may thus look for the solution X to the nonhomogeneous
system in the form X = WY :
AX + B = X' = W'Y + WY' =
AWY + WY'
= AX + WY'
Therefore, B = WY'
So, Y is simply obtained by integrating W-1 B
and the general solution of the forced system may be expressed as follows,
with a constant vector K (whose n components are
the n "constants of integration").
This looks very much like the corresponding formula for a single variable :
X(t)   =   W(t) [ K  +  ∫ t W-1(u) B(u) du ]
Linear Differential Equation of Order n :
A linear differential equation of order n has the following form
(where ak and b are explicit functions of t):
x(n)  +  an-1 x(n-1)  +  ...  +  a3 x(3)  +  a2 x"  +  a1 x'  +  a0 x   =   b
This reduces to the above system
X' = AX + B with the following notations :
[ Here, X is the column vector with components  x, x', x", ... , x(n-1) ;  B is the column vector whose components are all zero, except the last one, which is b ;  and A is the corresponding companion matrix. ]
The first n-1 components in the equation X' = AX+B
merely define each component of X as the derivative of the previous one,
whereas the last component expresses
the original high-order differential equation.
Now, the general discussion above applies
fully with a W matrix whose first line consists of n independent solutions of the
homogeneous equation (each subsequent line is simply the derivative of its predecessor).
Here comes the Green function...
We need not work out every component of W-1
since we're only interested in the first component of X...
The above boxed formula tells us that we only need the first component
of W(t)W-1(u)B(u) which may be written G(t,u)b(u),
by calling G(t,u) the first component of
W(t)W-1(u)Z, where Z is a vector whose component are all zero,
except the last one which is one.
G(t,u) is called
the Green function associated to the given homogeneous equation. It has a simple
expression (given below) in terms of a ratio of determinants computed for independent
solutions of the homogeneous equation.
(Such an expression makes it easy to prove that
the Green function is indeed associated to the equation itself and not to a particular
set of independent solutions, as it is clearly invariant if you replace any solution by
some linear combination in which it appears with a nonzero coefficient.)
For a third-order equation with homogeneous solutions A(t), B(t) and C(t), the expression of the Green function (which generalizes to any order) is simply a ratio of two determinants: the numerator is the 3-by-3 determinant whose rows are  A(u), B(u), C(u),  then  A'(u), B'(u), C'(u),  and finally  A(t), B(t), C(t);  the denominator is the same determinant with  A"(u), B"(u), C"(u)  as the last row (i.e., the Wronskian of A, B, C at point u).
It's also a good idea to define G(t,u) to be zero when u>t,
since such values of G(t,u) are not used in the integral
∫ t G(t,u) b(u) du.
This convention allows us to drop the upper limit of the integral,
so we may write a special solution of the inhomogeneous equation
as the definite integral
(from −∞ to +∞, whenever it converges):
∫ G(t,u) b(u) du.
If this integral does not converge (the issue may only arise when u goes to
−∞), we may still use this formal expression by considering
that the forcing term b(u) is zero at any time t earlier than whatever happens to be the
earliest time we wish to consider.
(This is one unsatisfying way to reestablish some kind of
fixed arbitrary lower bound for the integral of interest when the only natural one,
namely −∞, is not acceptable.)
In the case of the equation x''' - 3x" + 4x = exp(2t), three independent solutions are
A(t) = exp(-t),
B(t) = exp(2t), and
C(t) = t exp(2t). This makes the denominator in the above (the "Wronskian")
equal to 9 exp(3u), whereas the numerator is  exp(4u−t) + (3t − 3u − 1) exp(2t+u),  so that
G(t,u)  =  [ exp(u−t) + (3(t−u) − 1) exp(2(t−u)) ] / 9.
With those values, the integral of G(t,u) exp(2u) du, when u goes from 0 to t,
turns out to be equal to
f(t) = [ (9t2-6t+2)exp(2t) - 2 exp(-t) ]/54, which is therefore a
special solution of your equation. The general solution may be expressed as:
x(t) = (a + bt + t2/6) exp(2t) + c exp(-t)
[ a, b and c are constant ]
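The general solution is easy to verify symbolically (a sketch using sympy):

    # Verify that x(t) = (a + b*t + t^2/6) exp(2t) + c exp(-t)
    # solves x''' - 3x'' + 4x = exp(2t) for all constants a, b, c.
    import sympy as sp

    t, a, b, c = sp.symbols('t a b c')
    x = (a + b*t + t**2/6)*sp.exp(2*t) + c*sp.exp(-t)
    lhs = sp.diff(x, t, 3) - 3*sp.diff(x, t, 2) + 4*x
    assert sp.simplify(lhs - sp.exp(2*t)) == 0
    print("x''' - 3 x'' + 4 x = exp(2t)  holds for all a, b, c")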
Clearly, this result could have been obtained without this heavy artillery:
Once you've solved the homogeneous equation and
realized that the forcing term is a solution of it,
it is very natural to look for an inhomogeneous solution of the form
z exp(2t) and find that z"=1/3 works.
That's far less tedious than computing and using the associated Green's function.
However, efficiency in this special
case is not what the question was all about...
Convolutions and the Theory of Distributions
An introduction to the epoch-making approach of Laurent Schwartz.
The above may be dealt with using the elegant idea of
convolution products among distributions.
The notorious Theory of Distributions occurred to the late
Schwartz (1915-2002) "one night in 1944".
For this, he received the first Fields Medal ever awarded to a Frenchman, in 1950.
(Schwartz taught me functional analysis in the Fall of 1977.)
A linear differential equation with constant coefficients
(an important special case) may be expressed as a convolution
a * x = b.
The convolution operator * is bilinear,
associative and commutative.
Its identity element is the
Delta distribution δ
(dubbed Dirac's "function").
Loosely speaking, the Delta distribution δ
would correspond to a "function" whose integral is 1,
but whose value at every point except zero is zero.
The integral of an ordinary function which is zero almost everywhere
would necessarily be zero.
Therefore, the δ distribution cannot possibly
be an ordinary function: Convolutions must be put in the proper context of the
Theory of Distributions.
A strong case can be made that the convolution product is the notion that gives rise
to the very concept of distribution.
Distributions had been used loosely by physicists for a long time, when
Schwartz finally found a very simple mathematical definition for them:
Considering a (very restricted) space D of so-called test functions,
a distribution is simply a linear function which associates a scalar
to every test function.
Although other possibilities have been studied (which give rise to less
general distributions) D is normally the so-called Schwartz space
of infinitely derivable functions of compact support.
These are perfectly smooth functions vanishing outside of a bounded domain,
like the function of x which is
exp(−1 / (1 − x²))
in [-1,+1] and 0 elsewhere.
What could be denoted f(g) is written  ⟨ f , g ⟩.
This hint of an ultimate symmetry between the rôles of f and g
is fulfilled by the following relation, which holds whenever the integral exists
for ordinary functions f and g.
( f * g ) (t)   =   ∫ f(t−u) g(u) du
This relation may be used to establish commutativity
(switch the variable to v = t-u, going from
+∞ to −∞ when u goes from −∞ to +∞).
The associativity of the convolution product is obtained
by figuring out a double integral.
Convolutions have many stunning properties.
In particular, the Fourier transform of the convolution product of two functions is
the ordinary product of their Fourier transforms.
Another key property is that the derivative of a convolution product may be obtained
by differentiating either one of its factors:   ( u * v ) '  =  u' * v  =  u * v'.
This means the derivatives of a function f can be expressed as convolutions, using
the derivatives of the δ distribution
(strange but useful beasts):
f  =  δ * f          f '  =  δ' * f          f "  =  δ" * f          etc.
If the n-th order linear differential equation
discussed above has constant coefficients,
we may write it as f*x = b
by introducing the distribution
f   =   δ(n)  +  an-1 δ(n-1)  +  ...  +  a3 δ(3)  +  a2 δ"  +  a1 δ'  +  a0 δ
Clearly, if we have a function g such that  f * g  =  δ,
we will obtain a special solution of the inhomogeneous equation as  x  =  g * b.
If you translate the convolution product into an integral, what you obtain is thus
the general expression involving a
Green function G(t,u)=g(t-u),
where g(v) is zero for negative values of v.
The case where coefficients are constant is therefore much simpler than
the general case:
Where you had a two-variable integrator, you now have a single-variable one.
Not only that, but the homogeneous solutions are well-known
(if z is an eigenvalue
of multiplicity n+1 for the matrix involved, the product of exp(zt) by any polynomial of
degree n, or less, is a solution).
In the important special case where all the eigenvalues are
distinct, the determinants involved in the expression of
G(t,u) = g(t−u) are essentially Vandermonde determinants or Vandermonde cofactors
(a Vandermonde determinant is a determinant where each column consists of the successive
powers of a particular number).
The expression is thus fairly easy to work out and may be put into the following simple form,
involving the characteristic polynomial P for the equation
(it's also the characteristic
polynomial of the matrix we called A in the above).
For any eigenvalue z, the derivative P'(z)
is the product of all the differences between
that eigenvalue and each of the others (which is what Vandermonde expressions entail):
g(v)   =   exp(z1v) / P'(z1)  +  exp(z2v) / P'(z2)  +  ...  +  exp(znv) / P'(zn)
With this, x = g*b is indeed
a special solution of our original equation f*x = b
(Brent Watts of Hickory, NC.)
How do you use Laplace transforms to solve this differential system?
Initial conditions, for t=0 : w=0, w'=1, y=0, y'=0, z= -1, z'=1.
- w" + y + z = -1
- w + y" - z = 0
- -w' -y' + z"=0
The (unilateral) Laplace transform g(p) of a function f(t) is given by:
g(p)  =  ∫0∞ f(t) exp(-pt) dt
This is defined, for a positive p, whenever the integral makes sense.
For example, the Laplace transform of a constant k is the function g such that
g(p) = k/p.
Integrating by parts  ∫ f '(t) exp(-pt) dt  gives a simple relation,
which may be iterated, between the respective Laplace transforms
L(f ') and L(f) of f ' and f :
L(f ')[p] =
-f(0) + p L(f)[p]
L(f")[p] = -f '(0) + p L(f ')[p] =
-f '(0) - p f(0) + p2 L(f)[p]
This is the basis of the so-called Operational Calculus, invented by
Oliver Heaviside (1850-1925), which translates many practical systems of differential
equations into algebraic ones.
(Originally, Heaviside was interested in the transient solutions to the simple differential
equations arising in electrical circuits).
In this particular case, we may use capital letters to denote Laplace transforms of
lowercase functions (W=L(w), Y=L(y), Z=L(z)...)
and your differential system translates into:
- (p²W − 1 − 0·p) + Y + Z = −1/p
- W + (p²Y − 0 − 0·p) − Z = 0
- −(pW − 0) − (pY − 0) + (p²Z − 1 + p) = 0
Solve for W,Y and Z and express the results as simple sums
(that's usually the tedious part,
but this example is clearly designed to be simpler than usual):
- p²W + Y + Z = 1 − 1/p
- W + p²Y − Z = 0
- −pW − pY + p²Z = 1 − p
The last step is to go from these Laplace transforms back to the original
(lowercase) functions of t, with a reverse lookup using a table of
Laplace transforms, similar to the (short) one provided below.
- W = 1/(p² + 1)
- Y = p/(p² + 1) − 1/p
- Z = 1/(p² + 1) − p/(p² + 1)
- w = sin(t)
- y = cos(t) - 1
- z = sin(t) - cos(t)
With other initial conditions, solutions may involve various linear combinations
of no fewer than 5 different types of functions
(namely: sin(t), cos(t), exp(-t), t and the constant 1),
which would make a better showcase for Operational Calculus than this
particularly simple example...
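The reverse lookup can be double-checked by substituting the solutions back into the system (a sketch using sympy):

    # Check that w = sin t, y = cos t - 1, z = sin t - cos t satisfy the system
    # and the stated initial conditions.
    import sympy as sp

    t = sp.symbols('t')
    w, y, z = sp.sin(t), sp.cos(t) - 1, sp.sin(t) - sp.cos(t)
    eqs = [sp.diff(w, t, 2) + y + z + 1,                        # w'' + y + z = -1
           w + sp.diff(y, t, 2) - z,                            # w  + y'' - z = 0
           -sp.diff(w, t) - sp.diff(y, t) + sp.diff(z, t, 2)]   # -w' - y' + z'' = 0
    assert all(sp.simplify(e) == 0 for e in eqs)
    values = [f.subs(t, 0) for f in (w, sp.diff(w, t), y, sp.diff(y, t), z, sp.diff(z, t))]
    assert values == [0, 1, 0, 0, -1, 1]                        # initial conditions
    print("solution verified")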
Below is a small table of Laplace transforms. This table enables a reverse lookup
which is more than sufficient to solve the above for any set of initial conditions:
   f (t)                        g(p)  =  ∫0∞ f(t) exp(-pt) dt
   1  =  t^0                    1/p
   t^n                          n! / p^(n+1)
   exp(at)                      1 / (p − a)
   sin(kt)                      k / (p² + k²)
   cos(kt)                      p / (p² + k²)
   exp(at) sin(kt)              k / ([p−a]² + k²)
   exp(at) cos(kt)              [p−a] / ([p−a]² + k²)
   δ  [Dirac Delta]             1
   f '(t)                       p g(p) − f (0)
   f "(t)                       p² g(p) − p f (0) − f '(0)
Brent Watts of Hickory, NC.
1) What is an example of a function for which the integral from −∞ to +∞ of |f(x)| dx exists, but that of f(x) dx does not?
2) [What is an example of a function f ] for which the opposite is true? The integral from −∞ to +∞ exists for f(x) dx but not for |f(x)| dx.
1) Consider any nonmeasurable
set E within the interval [0,1]
(the existence of such a set is guaranteed by Zermelo's
Axiom of Choice)
and define f(x) to be:
- +1 if x is in E
- −1 if x is in [0,1] but not in E
- 0 if x is outside [0,1]
The function f is not Lebesgue-integrable, but its absolute value clearly is (|f(x)| is equal to 1 on [0,1] and 0 elsewhere).
That was for Lebesgue integration. For Riemann integration, you may construct a simpler
example by letting the above E be the set of rationals between 0 and 1.
2) On the other hand, the function sin(x)/x is a simple example of a function
which is Riemann-integrable over the whole real line
(Riemann integration can be defined over an infinite interval,
although it's not usually done in basic textbooks),
whereas the absolute value |sin(x)/x| is not.
Neither function is Lebesgue-integrable over the whole real line,
although both are over any finite interval.
Show that: f (D)[eax y] = eax f (D+a)[y] ,
where D is the operator d/dx.
The notation has to be explained to readers not familiar with it:
If f (x) is the converging sum of all terms  an x^n  (for some scalar sequence an ),
f is called an analytic function
[about zero] and it can be defined
for some nonnumerical things that can be added,
scaled or "exponentiated"...
The possibility of exponentiation to the power of a nonnegative
integer reasonably requires the definition of some kind of
with a neutral element
(in order to define the zeroth power)
but that multiplication need not be commutative or even associative.
The lesser requirement of alternativity suffices
(as is observed in the case of the octonions).
Here we shall focus on the multiplication of square matrices of finite
sizes which corresponds to the composition of linear functions in
a vector space of finitely many dimensions.
If M is a finite square matrix representing some linear operator
(which we shall denote by the same symbol M for convenience)
f (M) is defined as a power series of M.
If there's a vector basis in which the operator M is diagonal,
f (M) is diagonal
in that same basis, with f (z) appearing on the diagonal of f (M)
wherever z appears in the diagonal of M.
Now, the differential operator D is a linear operator like any other,
whether it operates on a space of finitely many dimensions
(for example, polynomials of degree 57 or less) or infinitely many dimensions
(polynomials, formal series...).
f (D) may thus be defined the same way.
It's a formal definition which may or may not have a numerical counterpart,
as the formal series involved may or may not converge.
The same thing applies to any other differential operator,
and this is how f (D) and f (D+a)
are to be interpreted.
To prove that a linear relation holds when f appears homogeneously
(as is the case here),
it is enough to prove that it holds for any n
when f (x)=xn :
- The relation is trivial for n=0
(the zeroth power
of any operator is the identity operator) as the relation translates
into exp(ax)y = exp(ax)y.
- The case n=1 is:
D[exp(ax)y] = a exp(ax)y + exp(ax)D[y] = exp(ax)(D+a)[y].
- The case n=2 is obtained by differentiating the case n=1 exactly like the case n+1 is
obtained by differentiating case n, namely:
Dn+1[exp(ax)y] = D[exp(ax)(D+a)n(y)]
= a exp(ax)(D+a)n[y] + exp(ax) D[(D+a)n(y)]
= exp(ax) (D+a)[(D+a)n(y)] = exp(ax) (D+a)n+1[y].
This completes a proof by induction for any f (x) = xn,
which establishes the relation for any analytic function f,
through summation of such elementary results. | http://www.numericana.com/answer/calculus.htm | 13 |
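For a concrete sanity check, the identity can be verified for a truncated f, say f(x) = x², with symbolic software (a sketch using sympy):

    # Check  f(D)[exp(ax) y] = exp(ax) f(D+a)[y]  for f(x) = x^2, i.e.
    # D^2[exp(ax) y] = exp(ax) (D+a)^2 [y] = exp(ax) (y'' + 2a y' + a^2 y).
    import sympy as sp

    x, a = sp.symbols('x a')
    y = sp.Function('y')(x)
    lhs = sp.diff(sp.exp(a*x)*y, x, 2)
    rhs = sp.exp(a*x)*(sp.diff(y, x, 2) + 2*a*sp.diff(y, x) + a**2*y)
    assert sp.simplify(lhs - rhs) == 0
    print("identity verified for f(x) = x^2")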
An earthquake (also known as a quake, tremor or temblor) is the result of a sudden release of energy in the Earth's crust that creates seismic waves. The seismicity, seismism or seismic activity of an area refers to the frequency, type and size of earthquakes experienced over a period of time.
Earthquakes are measured using observations from seismometers. The moment magnitude is the most common scale on which earthquakes larger than approximately 5 are reported for the entire globe. The more numerous earthquakes smaller than magnitude 5 reported by national seismological observatories are measured mostly on the local magnitude scale, also referred to as the Richter scale. These two scales are numerically similar over their range of validity. Earthquakes of magnitude 3 or lower are mostly imperceptible or weak, while those of magnitude 7 and over can potentially cause serious damage over larger areas, depending on their depth. The largest earthquakes in historic times have been of magnitude slightly over 9, although there is no limit to the possible magnitude. As of October 2012, the most recent earthquake of magnitude 9.0 or larger was the 9.0-magnitude earthquake in Japan in 2011, the largest Japanese earthquake since records began. Intensity of shaking is measured on the modified Mercalli scale. The shallower an earthquake, the more damage it causes to structures, all else being equal.
At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground. When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami. Earthquakes can also trigger landslides, and occasionally volcanic activity.
In its most general sense, the word earthquake is used to describe any seismic event — whether natural or caused by humans — that generates seismic waves. Earthquakes are caused mostly by rupture of geological faults, but also by other events such as volcanic activity, landslides, mine blasts, and nuclear tests. An earthquake's point of initial rupture is called its focus or hypocenter. The epicenter is the point at ground level directly above the hypocenter.
Naturally occurring earthquakes
Tectonic earthquakes occur anywhere in the earth where there is sufficient stored elastic strain energy to drive fracture propagation along a fault plane. The sides of a fault move past each other smoothly and aseismically only if there are no irregularities or asperities along the fault surface that increase the frictional resistance. Most fault surfaces do have such asperities and this leads to a form of stick-slip behaviour. Once the fault has locked, continued relative motion between the plates leads to increasing stress and therefore, stored strain energy in the volume around the fault surface. This continues until the stress has risen sufficiently to break through the asperity, suddenly allowing sliding over the locked portion of the fault, releasing the stored energy. This energy is released as a combination of radiated elastic strain seismic waves, frictional heating of the fault surface, and cracking of the rock, thus causing an earthquake. This process of gradual build-up of strain and stress punctuated by occasional sudden earthquake failure is referred to as the elastic-rebound theory. It is estimated that only 10 percent or less of an earthquake's total energy is radiated as seismic energy. Most of the earthquake's energy is used to power the earthquake fracture growth or is converted into heat generated by friction. Therefore, earthquakes lower the Earth's available elastic potential energy and raise its temperature, though these changes are negligible compared to the conductive and convective flow of heat out from the Earth's deep interior.
Earthquake fault types
There are three main types of fault, all of which may cause an earthquake: normal, reverse (thrust) and strike-slip. Normal and reverse faulting are examples of dip-slip, where the displacement along the fault is in the direction of dip and movement on them involves a vertical component. Normal faults occur mainly in areas where the crust is being extended such as a divergent boundary. Reverse faults occur in areas where the crust is being shortened such as at a convergent boundary. Strike-slip faults are steep structures where the two sides of the fault slip horizontally past each other; transform boundaries are a particular type of strike-slip fault. Many earthquakes are caused by movement on faults that have components of both dip-slip and strike-slip; this is known as oblique slip.
Reverse faults, particularly those along convergent plate boundaries, are associated with the most powerful earthquakes, including almost all of those of magnitude 8 or more. Strike-slip faults, particularly continental transforms, can produce major earthquakes up to about magnitude 8. Earthquakes associated with normal faults are generally less than magnitude 7.
This is so because the energy released in an earthquake, and thus its magnitude, is proportional to the area of the fault that ruptures and the stress drop. Therefore, the longer the length and the wider the width of the faulted area, the larger the resulting magnitude. The topmost, brittle part of the Earth's crust, and the cool slabs of the tectonic plates that are descending into the hot mantle, are the only parts of our planet that can store elastic energy and release it in fault ruptures. Rocks hotter than about 300 degrees Celsius flow in response to stress; they do not rupture in earthquakes. The maximum observed lengths of ruptures and mapped faults, which may break in one go, are approximately 1,000 km. Examples are the earthquakes in Chile (1960), Alaska (1957) and Sumatra (2004), all in subduction zones. The longest earthquake ruptures on strike-slip faults, like the San Andreas Fault (1857, 1906), the North Anatolian Fault in Turkey (1939) and the Denali Fault in Alaska (2002), are about half to one third as long as the lengths along subducting plate margins, and those along normal faults are even shorter.
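To make the dependence on rupture area and slip concrete, here is a minimal sketch, not taken from the article, using the standard seismic-moment definition M0 = rigidity x rupture area x average slip and the commonly used conversion Mw = (2/3)(log10 M0 - 9.1), with M0 in newton-metres. The 30 GPa rigidity and the example rupture dimensions are assumed typical values, not measured ones.

```python
# A rough illustration, not from the article: moment magnitude from rupture
# area and slip. Assumes M0 = mu * A * D (seismic moment) and the standard
# conversion Mw = (2/3) * (log10(M0) - 9.1), with M0 in newton-metres.
import math

def moment_magnitude(rupture_area_km2, avg_slip_m, rigidity_pa=3.0e10):
    """Estimate Mw from rupture area (km^2), average slip (m) and rigidity (Pa)."""
    area_m2 = rupture_area_km2 * 1.0e6                     # km^2 -> m^2
    seismic_moment = rigidity_pa * area_m2 * avg_slip_m    # N*m
    return (2.0 / 3.0) * (math.log10(seismic_moment) - 9.1)

# An assumed 1000 km x 100 km subduction rupture with 15 m of average slip
# comes out around Mw 9.0, which is why only wide, shallow-dipping faults
# reach the very largest magnitudes.
print(round(moment_magnitude(1000 * 100, 15.0), 1))
```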
The most important parameter controlling the maximum earthquake magnitude on a fault is however not the maximum available length, but the available width because the latter varies by a factor of 20. Along converging plate margins, the dip angle of the rupture plane is very shallow, typically about 10 degrees. Thus the width of the plane within the top brittle crust of the Earth can become 50 to 100 km (Japan, 2011; Alaska, 1964), making the most powerful earthquakes possible.
Strike-slip faults tend to be oriented near vertically, resulting in an approximate width of 10 km within the brittle crust, thus earthquakes with magnitudes much larger than 8 are not possible. Maximum magnitudes along many normal faults are even more limited because many of them are located along spreading centers, as in Iceland, where the thickness of the brittle layer is only about 6 km.
In addition, there exists a hierarchy of stress level in the three fault types. Thrust faults are generated by the highest, strike slip by intermediate, and normal faults by the lowest stress levels. This can easily be understood by considering the direction of the greatest principal stress, the direction of the force that 'pushes' the rock mass during the faulting. In the case of normal faults, the rock mass is pushed down in a vertical direction, thus the pushing force (greatest principal stress) equals the weight of the rock mass itself. In the case of thrusting, the rock mass 'escapes' in the direction of the least principal stress, namely upward, lifting the rock mass up, thus the overburden equals the least principal stress. Strike-slip faulting is intermediate between the other two types described above. This difference in stress regime in the three faulting environments can contribute to differences in stress drop during faulting, which contributes to differences in the radiated energy, regardless of fault dimensions.
Earthquakes away from plate boundaries
Where plate boundaries occur within continental lithosphere, deformation is spread out over a much larger area than the plate boundary itself. In the case of the San Andreas fault continental transform, many earthquakes occur away from the plate boundary and are related to strains developed within the broader zone of deformation caused by major irregularities in the fault trace (e.g., the "Big bend" region). The Northridge earthquake was associated with movement on a blind thrust within such a zone. Another example is the strongly oblique convergent plate boundary between the Arabian and Eurasian plates where it runs through the northwestern part of the Zagros mountains. The deformation associated with this plate boundary is partitioned into nearly pure thrust sense movements perpendicular to the boundary over a wide zone to the southwest and nearly pure strike-slip motion along the Main Recent Fault close to the actual plate boundary itself. This is demonstrated by earthquake focal mechanisms.
All tectonic plates have internal stress fields caused by their interactions with neighbouring plates and sedimentary loading or unloading (e.g. deglaciation). These stresses may be sufficient to cause failure along existing fault planes, giving rise to intraplate earthquakes.
Shallow-focus and deep-focus earthquakes
The majority of tectonic earthquakes originate in the Ring of Fire, at depths not exceeding tens of kilometers. Earthquakes occurring at a depth of less than 70 km are classified as 'shallow-focus' earthquakes, while those with a focal depth between 70 and 300 km are commonly termed 'mid-focus' or 'intermediate-depth' earthquakes. In subduction zones, where older and colder oceanic crust descends beneath another tectonic plate, deep-focus earthquakes may occur at much greater depths (ranging from 300 up to 700 kilometers). These seismically active areas of subduction are known as Wadati-Benioff zones. Deep-focus earthquakes occur at a depth where the subducted lithosphere should no longer be brittle, due to the high temperature and pressure. A possible mechanism for the generation of deep-focus earthquakes is faulting caused by olivine undergoing a phase transition into a spinel structure.
Earthquakes and volcanic activity
Earthquakes often occur in volcanic regions and are caused there both by tectonic faults and by the movement of magma in volcanoes. Such earthquakes can serve as an early warning of volcanic eruptions, as during the Mount St. Helens eruption of 1980. Earthquake swarms can serve as markers for the location of magma flowing through a volcano. These swarms can be recorded by seismometers and tiltmeters (devices that measure ground slope) and used as sensors to predict imminent eruptions.
A tectonic earthquake begins with an initial rupture at a point on the fault surface, a process known as nucleation. The scale of the nucleation zone is uncertain: some evidence, such as the rupture dimensions of the smallest earthquakes, suggests that it is smaller than 100 m, while other evidence, such as a slow component revealed by low-frequency spectra of some earthquakes, suggests that it is larger. The possibility that nucleation involves some sort of preparation process is supported by the observation that about 40% of earthquakes are preceded by foreshocks. Once the rupture has initiated, it begins to propagate along the fault surface. The mechanics of this process are poorly understood, partly because it is difficult to recreate the high sliding velocities in a laboratory. Also, the effects of strong ground motion make it very difficult to record information close to a nucleation zone.
Rupture propagation is generally modeled using a fracture mechanics approach, likening the rupture to a propagating mixed mode shear crack. The rupture velocity is a function of the fracture energy in the volume around the crack tip, increasing with decreasing fracture energy. The velocity of rupture propagation is orders of magnitude faster than the displacement velocity across the fault. Earthquake ruptures typically propagate at velocities that are in the range 70–90% of the S-wave velocity and this is independent of earthquake size. A small subset of earthquake ruptures appear to have propagated at speeds greater than the S-wave velocity. These supershear earthquakes have all been observed during large strike-slip events. The unusually wide zone of coseismic damage caused by the 2001 Kunlun earthquake has been attributed to the effects of the sonic boom developed in such earthquakes. Some earthquake ruptures travel at unusually low velocities and are referred to as slow earthquakes. A particularly dangerous form of slow earthquake is the tsunami earthquake, observed where the relatively low felt intensities, caused by the slow propagation speed of some great earthquakes, fail to alert the population of the neighbouring coast, as in the 1896 Meiji-Sanriku earthquake.
Most earthquakes form part of a sequence, related to each other in terms of location and time. Most earthquake clusters consist of small tremors that cause little to no damage, but there is a theory that earthquakes can recur in a regular pattern.
An aftershock is an earthquake that occurs after a previous earthquake, the main shock. An aftershock is in the same region as the main shock but is always of a smaller magnitude. If an aftershock is larger than the main shock, the aftershock is redesignated as the main shock and the original main shock is redesignated as a foreshock. Aftershocks are formed as the crust around the displaced fault plane adjusts to the effects of the main shock.
Earthquake swarms are sequences of earthquakes striking in a specific area within a short period of time. They differ from earthquakes followed by a series of aftershocks in that no single earthquake in the sequence is obviously the main shock, so none has a notably higher magnitude than the others. An example of an earthquake swarm is the 2004 activity at Yellowstone National Park. In August 2012, a swarm of earthquakes shook Southern California's Imperial Valley, showing the most recorded activity in the area since the 1970s.
Sometimes a series of earthquakes occur in a sort of earthquake storm, where the earthquakes strike a fault in clusters, each triggered by the shaking or stress redistribution of the previous earthquakes. Similar to aftershocks but on adjacent segments of fault, these storms occur over the course of years, and with some of the later earthquakes as damaging as the early ones. Such a pattern was observed in the sequence of about a dozen earthquakes that struck the North Anatolian Fault in Turkey in the 20th century and has been inferred for older anomalous clusters of large earthquakes in the Middle East.
Size and frequency of occurrence
It is estimated that around 500,000 earthquakes occur each year that are detectable with current instrumentation. About 100,000 of these can be felt. Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the U.S., as well as in Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, India and Japan, but earthquakes can occur almost anywhere, including New York City, London, and Australia. Larger earthquakes occur less frequently, the relationship being exponential; for example, roughly ten times as many earthquakes larger than magnitude 4 occur in a particular time period as earthquakes larger than magnitude 5. In the (low seismicity) United Kingdom, for example, it has been calculated that the average recurrences are: an earthquake of 3.7–4.6 every year, an earthquake of 4.7–5.5 every 10 years, and an earthquake of 5.6 or larger every 100 years. This is an example of the Gutenberg–Richter law.
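As a hedged illustration of this roughly tenfold scaling, the Gutenberg-Richter relation can be written as log10(N) = a - bM. The a and b values below are assumptions chosen only to show the shape of the law, not parameters fitted to any real catalogue.

```python
# Toy Gutenberg-Richter relation: log10(N) = a - b * M. The a and b values are
# illustrative assumptions (b is typically close to 1), not fitted parameters.
def expected_yearly_count(magnitude, a=8.0, b=1.0):
    """Expected number of earthquakes per year at or above the given magnitude."""
    return 10 ** (a - b * magnitude)

for m in (4, 5, 6, 7):
    print(f"M >= {m}: about {expected_yearly_count(m):,.0f} per year")
# Each one-unit increase in magnitude divides the expected count by about ten.
```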
The number of seismic stations has increased from about 350 in 1931 to many thousands today. As a result, many more earthquakes are reported than in the past, but this is because of the vast improvement in instrumentation rather than an increase in the number of earthquakes. The United States Geological Survey estimates that, since 1900, there have been an average of 18 major earthquakes (magnitude 7.0–7.9) and one great earthquake (magnitude 8.0 or greater) per year, and that this average has been relatively stable. In recent years, the number of major earthquakes per year has decreased, though this is probably a statistical fluctuation rather than a systematic trend. More detailed statistics on the size and frequency of earthquakes are available from the United States Geological Survey (USGS). A recent increase in the number of major earthquakes has also been noted, which could be explained by a cyclical pattern of periods of intense tectonic activity interspersed with longer periods of low-intensity activity. However, accurate recordings of earthquakes only began in the early 1900s, so it is too early to state categorically that this is the case.
Most of the world's earthquakes (90%, and 81% of the largest) take place in the 40,000 km long, horseshoe-shaped zone called the circum-Pacific seismic belt, known as the Pacific Ring of Fire, which for the most part bounds the Pacific Plate. Massive earthquakes tend to occur along other plate boundaries, too, such as along the Himalayan Mountains.
With the rapid growth of mega-cities such as Mexico City, Tokyo and Tehran, in areas of high seismic risk, some seismologists are warning that a single quake may claim the lives of up to 3 million people.
While most earthquakes are caused by movement of the Earth's tectonic plates, human activity can also produce earthquakes. Four main activities contribute to this phenomenon: impounding large amounts of water behind a dam (and possibly constructing extremely heavy buildings), drilling and injecting liquid into wells, coal mining, and oil drilling. Perhaps the best known example is the May 2008 Sichuan earthquake in China's Sichuan Province; this tremor resulted in 69,227 fatalities and is the 19th deadliest earthquake of all time. The Zipingpu Dam is believed to have altered the pressure on a fault 1,650 feet (503 m) away; this pressure probably increased the power of the earthquake and accelerated the rate of movement on the fault. The greatest earthquake in Australia's history is also claimed to have been induced by human activity, through coal mining. The city of Newcastle was built over a large area of coal mining, and the earthquake has been reported to have been triggered by a fault that reactivated due to the millions of tonnes of rock removed in the mining process.
Measuring and locating earthquakes
Earthquakes can be recorded by seismometers up to great distances, because seismic waves travel through the whole Earth's interior. The absolute magnitude of a quake is conventionally reported by numbers on the Moment magnitude scale (formerly Richter scale, magnitude 7 causing serious damage over large areas), whereas the felt magnitude is reported using the modified Mercalli intensity scale (intensity II–XII).
Every tremor produces different types of seismic waves, which travel through rock with different velocities:
- Longitudinal P-waves (shock- or pressure waves)
- Transverse S-waves (both body waves)
- Surface waves (Rayleigh and Love waves)
Propagation velocity of the seismic waves ranges from approx. 3 km/s up to 13 km/s, depending on the density and elasticity of the medium. In the Earth's interior the shock- or P waves travel much faster than the S waves (approx. relation 1.7 : 1). The differences in travel time from the epicentre to the observatory are a measure of the distance and can be used to image both sources of quakes and structures within the Earth. Also the depth of the hypocenter can be computed roughly.
In solid rock P-waves travel at about 6 to 7 km per second; the velocity increases within the deep mantle to ~13 km/s. The velocity of S-waves ranges from 2–3 km/s in light sediments and 4–5 km/s in the Earth's crust up to 7 km/s in the deep mantle. As a consequence, the first waves of a distant earthquake arrive at an observatory via the Earth's mantle.
On average, the distance in kilometers to the earthquake is the number of seconds between the P- and S-wave arrivals multiplied by 8. Slight deviations are caused by inhomogeneities of subsurface structure. By such analyses of seismograms the Earth's core was located in 1913 by Beno Gutenberg.
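A small sketch of that rule of thumb follows. The P- and S-wave velocities used are assumed typical crustal values (they are not specified in the text), and they are where the factor of roughly 8 comes from.

```python
# Sketch of the S-minus-P rule of thumb. With assumed typical crustal values
# of vp ~ 6.5 km/s and vs ~ 3.75 km/s, each second of S-P delay corresponds
# to roughly 8-9 km of distance.
def distance_from_sp_delay_km(sp_delay_s, vp_km_s=6.5, vs_km_s=3.75):
    """Estimate epicentral distance from the S-P arrival-time difference."""
    km_per_second_of_delay = 1.0 / (1.0 / vs_km_s - 1.0 / vp_km_s)
    return sp_delay_s * km_per_second_of_delay

print(round(distance_from_sp_delay_km(10.0)))  # a 10 s delay -> roughly 90 km
```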
Earthquakes are not only categorized by their magnitude but also by the place where they occur. The world is divided into 754 Flinn-Engdahl regions (F-E regions), which are based on political and geographical boundaries as well as seismic activity. More active zones are divided into smaller F-E regions whereas less active zones belong to larger F-E regions.
Standard reporting of an earthquake includes its magnitude, date and time of occurrence, geographic coordinates of its epicenter, depth of the hypocenter, geographical region, distances to population centers, location uncertainty, a number of parameters that are included in USGS earthquake reports (number of stations reporting, number of observations, etc.), and a unique event ID.
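As a purely illustrative data structure, the report fields listed above could be collected as follows. The field names and example values are my own and do not reproduce any particular agency's schema.

```python
# Hypothetical container for the report fields listed above; names and example
# values are illustrative only, not an official USGS schema.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EarthquakeReport:
    event_id: str
    origin_time: datetime
    magnitude: float
    latitude: float
    longitude: float
    depth_km: float
    region: str
    location_uncertainty_km: float
    stations_reporting: int

example = EarthquakeReport(
    event_id="example0001",
    origin_time=datetime(2011, 3, 11, 5, 46, tzinfo=timezone.utc),  # approximate
    magnitude=9.0,
    latitude=38.3, longitude=142.4, depth_km=29.0,                  # approximate
    region="Near the east coast of Honshu, Japan",
    location_uncertainty_km=5.0,      # made-up illustrative value
    stations_reporting=400,           # made-up illustrative value
)
print(example.event_id, example.magnitude)
```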
Effects of earthquakes
The effects of earthquakes include, but are not limited to, the following:
Shaking and ground rupture
Shaking and ground rupture are the main effects created by earthquakes, principally resulting in more or less severe damage to buildings and other rigid structures. The severity of the local effects depends on the complex combination of the earthquake magnitude, the distance from the epicenter, and the local geological and geomorphological conditions, which may amplify or reduce wave propagation. The ground-shaking is measured by ground acceleration.
Specific local geological, geomorphological, and geostructural features can induce high levels of shaking at the ground surface even from low-intensity earthquakes. This effect is called site or local amplification. It is principally due to the transfer of seismic motion from hard deep soils to soft superficial soils, and to the focusing of seismic energy caused by the typical geometrical setting of such deposits.
Ground rupture is a visible breaking and displacement of the Earth's surface along the trace of the fault, which may be of the order of several metres in the case of major earthquakes. Ground rupture is a major risk for large engineering structures such as dams, bridges and nuclear power stations and requires careful mapping of existing faults to identify any which are likely to break the ground surface within the life of the structure.
Landslides and avalanches
Earthquakes, along with severe storms, volcanic activity, coastal wave attack, and wildfires, can produce slope instability leading to landslides, a major geological hazard. Landslide danger may persist while emergency personnel are attempting rescue.
Earthquakes can cause fires by damaging electrical power or gas lines. In the event of water mains rupturing and a loss of pressure, it may also become difficult to stop the spread of a fire once it has started. For example, more deaths in the 1906 San Francisco earthquake were caused by fire than by the earthquake itself.
Soil liquefaction occurs when, because of the shaking, water-saturated granular material (such as sand) temporarily loses its strength and transforms from a solid to a liquid. Soil liquefaction may cause rigid structures, like buildings and bridges, to tilt or sink into the liquefied deposits. For example, in the 1964 Alaska earthquake, soil liquefaction caused many buildings to sink into the ground, eventually collapsing upon themselves.
Tsunamis are long-wavelength, long-period sea waves produced by the sudden movement of large volumes of water. In the open ocean the distance between wave crests can surpass 100 kilometers (62 mi), and the wave periods can vary from five minutes to one hour. Such tsunamis travel 600–800 kilometers per hour (373–497 miles per hour), depending on water depth. Large waves produced by an earthquake or a submarine landslide can overrun nearby coastal areas in a matter of minutes. Tsunamis can also travel thousands of kilometers across open ocean and wreak destruction on far shores hours after the earthquake that generated them.
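The quoted 600–800 km/h figure can be reproduced with the standard shallow-water approximation, speed = sqrt(g x depth). The article does not state this formula, and the depths below are assumed typical open-ocean values.

```python
# Shallow-water approximation for tsunami speed: v = sqrt(g * depth). This
# formula is not stated in the article, but it reproduces the 600-800 km/h
# figure for assumed open-ocean depths of roughly 4-5 km.
import math

def tsunami_speed_kmh(depth_m, g=9.81):
    """Shallow-water wave speed for a given ocean depth, converted to km/h."""
    return math.sqrt(g * depth_m) * 3.6  # m/s -> km/h

for depth in (2000, 4000, 5000):
    print(f"depth {depth} m: about {tsunami_speed_kmh(depth):.0f} km/h")
```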
Ordinarily, subduction earthquakes under magnitude 7.5 on the Richter scale do not cause tsunamis, although some instances of this have been recorded. Most destructive tsunamis are caused by earthquakes of magnitude 7.5 or more.
A flood is an overflow of any amount of water that reaches land. Floods usually occur when the volume of water within a body of water, such as a river or lake, exceeds the total capacity of the formation, and as a result some of the water flows or sits outside of the normal perimeter of the body. However, floods may also be secondary effects of earthquakes, if dams are damaged. Earthquakes may cause landslips that dam rivers; these landslide dams can later collapse and cause floods.
The terrain below the Sarez Lake in Tajikistan is in danger of catastrophic flood if the landslide dam formed by the earthquake, known as the Usoi Dam, were to fail during a future earthquake. Impact projections suggest the flood could affect roughly 5 million people.
An earthquake may cause injury and loss of life, road and bridge damage, general property damage (which may or may not be covered by earthquake insurance), and collapse or destabilization (potentially leading to future collapse) of buildings. The aftermath may bring disease, lack of basic necessities, and higher insurance premiums.
One of the most devastating earthquakes in recorded history occurred on 23 January 1556 in the Shaanxi province, China, killing more than 830,000 people (see 1556 Shaanxi earthquake). Most of the population in the area at the time lived in yaodongs, artificial caves in loess cliffs, many of which collapsed during the catastrophe with great loss of life. The 1976 Tangshan earthquake, with a death toll estimated at between 240,000 and 655,000, is believed to be the largest earthquake of the 20th century by death toll.
The 1960 Chilean Earthquake is the largest earthquake that has been measured on a seismograph, reaching 9.5 magnitude on 22 May 1960. Its epicenter was near Cañete, Chile. The energy released was approximately twice that of the next most powerful earthquake, the Good Friday Earthquake, which was centered in Prince William Sound, Alaska. The ten largest recorded earthquakes have all been megathrust earthquakes; however, of these ten, only the 2004 Indian Ocean earthquake is simultaneously one of the deadliest earthquakes in history.
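The "approximately twice" comparison can be checked against the standard magnitude-energy scaling, under which radiated energy grows by a factor of about 10^1.5 per magnitude unit. That relation is not given in the article; it yields roughly 2.8 for magnitude 9.5 versus 9.2, in the same general ballpark as the rounder figure quoted above.

```python
# Standard magnitude-energy scaling (not stated in the article): radiated
# energy grows by a factor of 10^(1.5 * dM) for a magnitude difference dM.
def energy_ratio(mw_larger, mw_smaller):
    """Approximate ratio of radiated energies for two moment magnitudes."""
    return 10 ** (1.5 * (mw_larger - mw_smaller))

print(round(energy_ratio(9.5, 9.2), 1))  # ~2.8x for Chile 1960 vs. Alaska 1964
print(round(energy_ratio(9.5, 8.5), 1))  # one full magnitude unit is ~32x
```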
Earthquakes that caused the greatest loss of life, while powerful, were deadly because of their proximity to either heavily populated areas or the ocean, where earthquakes often create tsunamis that can devastate communities thousands of kilometers away. Regions most at risk for great loss of life include those where earthquakes are relatively rare but powerful, and poor regions with lax, unenforced, or nonexistent seismic building codes.
Many methods have been developed for predicting the time and place in which earthquakes will occur. Despite considerable research efforts by seismologists, scientifically reproducible predictions cannot yet be made to a specific day or month. However, for well-understood faults the probability that a segment may rupture during the next few decades can be estimated.
Earthquake warning systems have been developed that can provide regional notification of an earthquake in progress, but before the ground surface has begun to move, potentially allowing people within the system's range to seek shelter before the earthquake's impact is felt.
The objective of earthquake engineering is to foresee the impact of earthquakes on buildings and other structures and to design such structures to minimize the risk of damage. Existing structures can be modified by seismic retrofitting to improve their resistance to earthquakes. Earthquake insurance can provide building owners with financial protection against losses resulting from earthquakes.
Emergency management strategies can be employed by a government or organization to mitigate risks and prepare for consequences.
Ways to Survive an Earthquake
- Be prepared before, during and after an earthquake. Earthquakes do not last for a long time, generally a few seconds to a minute; the 1989 San Francisco earthquake lasted only 15 seconds.
- Securing water heaters, major appliances and tall, heavy furniture to prevent them from toppling are prudent steps. So, too, are storing hazardous or flammable liquids, heavy objects and breakables on low shelves or in secure cabinets.
- If you're indoors, stay there. Get under, and hold onto, a desk or table, or stand against an interior wall. Stay clear of exterior walls, glass, heavy furniture, fireplaces and appliances. The kitchen is a particularly dangerous spot. If you're in an office building, stay away from windows and outside walls and do not use the elevator. Stay low and cover your head and neck with your hands and arms. Bracing yourself against a wall or heavy furniture usually works when weaker earthquakes strike.
- Cover your head and neck with your hands and arms. If you have any respiratory disease, cover your nose and mouth with a t-shirt or bandana until all the debris and dust has settled; inhaling dirty air is not good for your lungs.
- DO NOT stand in a doorway. An enduring earthquake image of California is a collapsed adobe home with the door frame as the only standing part, and from this came the belief that a doorway is the safest place to be during an earthquake. That is true only if you live in an old, unreinforced adobe house or certain older wood-frame houses. In modern houses, doorways are no stronger than any other part of the house, and a doorway does not protect you from the most likely source of injury: falling or flying objects. You also may not be able to brace yourself in the doorway during strong shaking, and you are more likely to be hurt by the door swinging wildly. You are safer under a table.
- Inspect your house for anything that might be in a dangerous condition. Glass fragments, the smell of gas, or damaged electrical appliances are examples of hazards.
- Do not move immediately. If it is safe to do so, stay where you are for a minute or two, until you are sure the shaking has stopped, then evacuate the building slowly and carefully.
- PRACTICE THE RIGHT THING TO DO; IT COULD SAVE YOUR LIFE. You will be more likely to react quickly when shaking begins if you have actually practiced how to protect yourself on a regular basis. Drop, Cover, and Hold is a good drill to practice regularly.
- If you're outside, get into the open. Stay clear of buildings, power lines or anything else that could fall on you. Broken glass can injure your feet even if it looks smooth and still, which is why you should wear sturdy shoes at such times.
- Be aware that items may fall out of cupboards or closets when the door is opened, and also that chimneys can be weakened and fall with a touch. Check for cracks and damage to the roof and foundation of your home.
- Things you'll need: a blanket; sturdy shoes; a dust mask to help filter contaminated air; plastic sheeting and duct tape to shelter in place; basic hygiene supplies such as soap; and feminine supplies and other personal hygiene items.
From the lifetime of the Greek philosopher Anaxagoras in the 5th century BCE to the 14th century CE, earthquakes were usually attributed to "air (vapors) in the cavities of the Earth." Thales of Miletus, who lived from 625 to 547 BCE, was the only documented person who believed that earthquakes were caused by tension between the earth and water. Other theories existed, including the Greek philosopher Anaximenes' (585–526 BCE) belief that short episodes of dryness and wetness caused seismic activity. The Greek philosopher Democritus (460–371 BCE) blamed water in general for earthquakes. Pliny the Elder called earthquakes "underground thunderstorms."
Earthquakes in culture
Mythology and religion
In Norse mythology, earthquakes were explained as the violent struggling of the god Loki. When Loki, god of mischief and strife, murdered Baldr, god of beauty and light, he was punished by being bound in a cave with a poisonous serpent placed above his head dripping venom. Loki's wife Sigyn stood by him with a bowl to catch the poison, but whenever she had to empty the bowl the poison dripped on Loki's face, forcing him to jerk his head away and thrash against his bonds, which caused the earth to tremble.
In Greek mythology, Poseidon was the cause and god of earthquakes. When he was in a bad mood, he struck the ground with a trident, causing earthquakes and other calamities. He also used earthquakes to punish and inflict fear upon people as revenge.
In Japanese mythology, Namazu (鯰) is a giant catfish who causes earthquakes. Namazu lives in the mud beneath the earth, and is guarded by the god Kashima who restrains the fish with a stone. When Kashima lets his guard fall, Namazu thrashes about, causing violent earthquakes.
In modern popular culture, the portrayal of earthquakes is shaped by the memory of great cities laid waste, such as Kobe in 1995 or San Francisco in 1906. Fictional earthquakes tend to strike suddenly and without warning. For this reason, stories about earthquakes generally begin with the disaster and focus on its immediate aftermath, as in Short Walk to Daylight (1972), The Ragged Edge (1968) or Aftershock: Earthquake in New York (1998). A notable example is Heinrich von Kleist's classic novella, The Earthquake in Chile, which describes the destruction of Santiago in 1647. Haruki Murakami's short fiction collection after the quake depicts the consequences of the Kobe earthquake of 1995.
The most popular single earthquake in fiction is the hypothetical "Big One" expected of California's San Andreas Fault someday, as depicted in the novels Richter 10 (1996) and Goodbye California (1977) among other works. Jacob M. Appel's widely anthologized short story, A Comparative Seismology, features a con artist who convinces an elderly woman that an apocalyptic earthquake is imminent.
Contemporary depictions of earthquakes in film vary in the manner in which they reflect human psychological reactions to the actual trauma that can be caused to directly afflicted families and their loved ones. Disaster mental health response research emphasizes the need to be aware of the different roles played by the loss of family and key community members, the loss of home and familiar surroundings, and the loss of essential supplies and services needed to maintain survival. Particularly for children, the clear availability of caregiving adults who are able to protect, nourish, and clothe them in the aftermath of the earthquake, and to help them make sense of what has befallen them, has been shown to be even more important to their emotional and physical health than the simple giving of provisions. As was observed after other disasters involving destruction and loss of life and their media depictions, such as the 2001 World Trade Center attacks or Hurricane Katrina, and as was also observed after the 2010 Haiti earthquake, it is important not to pathologize the reactions to loss and to the displacement or disruption of governmental administration and services, but rather to validate these reactions and to support constructive problem-solving and reflection on how the conditions of those affected might be improved.
- "Earthquake FAQ". Crustal.ucsb.edu. Retrieved 2011-07-24.
- Spence, William; S. A. Sipkin, G. L. Choy (1989). "Measuring the Size of an Earthquake". United States Geological Survey. Retrieved 2006-11-03.
- Wyss, M. (1979). "Estimating expectable maximum magnitude of earthquakes from fault dimensions". Geology 7 (7): 336–340. Bibcode:1979Geo.....7..336W. doi:10.1130/0091-7613(1979)7<336:EMEMOE>2.0.CO;2.
- Sibson R. H. (1982) "Fault Zone Models, Heat Flow, and the Depth Distribution of Earthquakes in the Continental Crust of the United States", Bulletin of the Seismological Society of America, Vol 72, No. 1, pp. 151–163
- Sibson, R. H. (2002) "Geology of the crustal earthquake source" International handbook of earthquake and engineering seismology, Volume 1, Part 1, page 455, eds. W H K Lee, H Kanamori, P C Jennings, and C. Kisslinger, Academic Press, ISBN / ASIN: 0124406521
- "Global Centroid Moment Tensor Catalog". Globalcmt.org. Retrieved 2011-07-24.
- "Instrumental California Earthquake Catalog". WGCEP. Retrieved 2011-07-24.
- Hjaltadóttir S., 2010, "Use of relatively located microearthquakes to map fault patterns and estimate the thickness of the brittle crust in Southwest Iceland"
- "Reports and publications | Seismicity | Icelandic Meteorological office". En.vedur.is. Retrieved 2011-07-24.
- Schorlemmer, D.; Wiemer, S.; Wyss, M. (2005). "Variations in earthquake-size distribution across different stress regimes". Nature 437 (7058): 539–542. Bibcode:2005Natur.437..539S. doi:10.1038/nature04094. PMID 16177788.
- Talebian, M; Jackson, J (2004). "A reappraisal of earthquake focal mechanisms and active shortening in the Zagros mountains of Iran". Geophysical Journal International 156 (3): 506–526. Bibcode:2004GeoJI.156..506T. doi:10.1111/j.1365-246X.2004.02092.x.
- Nettles, M.; Ekström, G. (May 2010). "Glacial Earthquakes in Greenland and Antarctica". Annual Review of Earth and Planetary Sciences 38 (1): 467–491. Bibcode:2010AREPS..38..467N. doi:10.1146/annurev-earth-040809-152414. Avinash Kumar
- Noson, Qamar, and Thorsen (1988). Washington State Earthquake Hazards: Washington State Department of Natural Resources. Washington Division of Geology and Earth Resources Information Circular 85.
- "M7.5 Northern Peru Earthquake of 26 September 2005" (PDF). National Earthquake Information Center. 17 October 2005. Retrieved 2008-08-01.
- Greene II, H. W.; Burnley, P. C. (October 26, 1989). "A new self-organizing mechanism for deep-focus earthquakes". Nature 341 (6244): 733–737. Bibcode:1989Natur.341..733G. doi:10.1038/341733a0.
- Foxworthy and Hill (1982). Volcanic Eruptions of 1980 at Mount St. Helens, The First 100 Days: USGS Professional Paper 1249.
- Watson, John; Watson, Kathie (January 7, 1998). "Volcanoes and Earthquakes". United States Geological Survey. Retrieved May 9, 2009.
- National Research Council (U.S.). Committee on the Science of Earthquakes (2003). "5. Earthquake Physics and Fault-System Science". Living on an Active Earth: Perspectives on Earthquake Science. Washington D.C.: National Academies Press. p. 418. ISBN 978-0-309-06562-7. Retrieved 8 July 2010.
- Thomas, Amanda M.; Nadeau, Robert M.; Bürgmann, Roland (December 24, 2009). "Tremor-tide correlations and near-lithostatic pore pressure on the deep San Andreas fault". Nature 462 (7276): 1048–51. Bibcode:2009Natur.462.1048T. doi:10.1038/nature08654. PMID 20033046.
- "Gezeitenkräfte: Sonne und Mond lassen Kalifornien erzittern" SPIEGEL online, 29.12.2009
- Tamrazyan, Gurgen P. (1967). "Tide-forming forces and earthquakes". Icarus 7 (1–3): 59–65. Bibcode:1967Icar....7...59T. doi:10.1016/0019-1035(67)90047-4.
- Tamrazyan, Gurgen P. (1968). "Principal regularities in the distribution of major earthquakes relative to solar and lunar tides and other cosmic forces". Icarus 9 (1–3): 574–92. Bibcode:1968Icar....9..574T. doi:10.1016/0019-1035(68)90050-X.
- "What are Aftershocks, Foreshocks, and Earthquake Clusters?".
- "Repeating Earthquakes". United States Geological Survey. January 29, 2009. Retrieved May 11, 2009.
- "Earthquake Swarms at Yellowstone". United States Geological Survey. Retrieved 2008-09-15.
- Duke, Alan. "Quake 'swarm' shakes Southern California". CNN. Retrieved 27 August 2012.
- Amos Nur; Cline, Eric H. (2000). "Poseidon's Horses: Plate Tectonics and Earthquake Storms in the Late Bronze Age Aegean and Eastern Mediterranean". Journal of Archaeological Science 27 (1): 43–63. doi:10.1006/jasc.1999.0431. ISSN 0305-4403.
- "Earthquake Storms". Horizon. 1 April 2003. Retrieved 2007-05-02.
- "Earthquake Facts". United States Geological Survey. Retrieved 2010-04-25.
- Pressler, Margaret Webb (14 April 2010). "More earthquakes than usual? Not really.". KidsPost (Washington Post: Washington Post). pp. C10.
- "Earthquake Hazards Program". United States Geological Survey. Retrieved 2006-08-14.
- "Seismicity and earthquake hazard in the UK". Quakes.bgs.ac.uk. Retrieved 2010-08-23.
- "Italy's earthquake history." BBC News. October 31, 2002.
- "Common Myths about Earthquakes". United States Geological Survey. Retrieved 2006-08-14.
- "Earthquake Facts and Statistics: Are earthquakes increasing?". United States Geological Survey. Retrieved 2006-08-14.
- The 10 biggest earthquakes in history, Australian Geographic, March 14, 2011.
- "Historic Earthquakes and Earthquake Statistics: Where do earthquakes occur?". United States Geological Survey. Retrieved 2006-08-14.
- "Visual Glossary — Ring of Fire". United States Geological Survey. Retrieved 2006-08-14.
- Jackson, James, "Fatal attraction: living with earthquakes, the growth of villages into megacities, and earthquake vulnerability in the modern world," Philosophical Transactions of the Royal Society, doi:10.1098/rsta.2006.1805 Phil. Trans. R. Soc. A 15 August 2006 vol. 364 no. 1845 1911–1925.
- "Global urban seismic risk." Cooperative Institute for Research in Environmental Science.
- Madrigal, Alexis (4 June 2008). "Top 5 Ways to Cause a Man-Made Earthquake". Wired News (CondéNet). Retrieved 2008-06-05.
- "How Humans Can Trigger Earthquakes". National Geographic. February 10, 2009. Retrieved April 24, 2009.
- Brendan Trembath (January 9, 2007). "Researcher claims mining triggered 1989 Newcastle earthquake". Australian Broadcasting Corporation. Retrieved April 24, 2009.
- "Speed of Sound through the Earth". Hypertextbook.com. Retrieved 2010-08-23.
- Geographic.org. "Magnitude 8.0 - SANTA CRUZ ISLANDS Earthquake Details". Gobal Earthquake Epicenters with Maps. Retrieved 2013-03-13.
- "On Shaky Ground, Association of Bay Area Governments, San Francisco, reports 1995,1998 (updated 2003)". Abag.ca.gov. Retrieved 2010-08-23.
- "Guidelines for evaluating the hazard of surface fault rupture, California Geological Survey". California Department of Conservation. 2002.
- "Natural Hazards — Landslides". United States Geological Survey. Retrieved 2008-09-15.
- "The Great 1906 San Francisco earthquake of 1906". United States Geological Survey. Retrieved 2008-09-15.
- "Historic Earthquakes — 1946 Anchorage Earthquake". United States Geological Survey. Retrieved 2008-09-15.
- Noson, Qamar, and Thorsen (1988). Washington Division of Geology and Earth Resources Information Circular 85. Washington State Earthquake Hazards.
- MSN Encarta Dictionary. Flood. Retrieved on 2006-12-28. Archived 2009-10-31.
- "Notes on Historical Earthquakes". British Geological Survey. Retrieved 2008-09-15.
- "Fresh alert over Tajik flood threat". BBC News. 2003-08-03. Retrieved 2008-09-15.
- USGS: Magnitude 8 and Greater Earthquakes Since 1900
- "Earthquakes with 50,000 or More Deaths". U.S. Geological Survey
- Spignesi, Stephen J. (2005). Catastrophe!: The 100 Greatest Disasters of All Time. ISBN 0-8065-2558-4
- Kanamori Hiroo. "The Energy Release in Great Earthquakes". Journal of Geophysical Research. Retrieved 2010-10-10.
- USGS. "How Much Bigger?". United States Geological Survey. Retrieved 2010-10-10.
- Earthquake Prediction. Ruth Ludwin, U.S. Geological Survey.
- Working Group on California Earthquake Probabilities in the San Francisco Bay Region, 2003 to 2032, 2003, http://earthquake.usgs.gov/regional/nca/wg02/index.php.
- "Earthquakes". Encyclopedia of World Environmental History 1. Encyclopedia of World Environmental History. 2003. pp. 358–364.
- Sturluson, Snorri (1220). Prose Edda. ISBN 1-156-78621-5.
- Sellers, Paige (1997-03-03). "Poseidon". Encyclopedia Mythica. Retrieved 2008-09-02.
- Van Riper, A. Bowdoin (2002). Science in popular culture: a reference guide. Westport: Greenwood Press. p. 60. ISBN 0-313-31822-0.
- JM Appel. A Comparative Seismology. Weber Studies (first publication), Volume 18, Number 2.
- Goenjian, Najarian; Pynoos, Steinberg; Manoukian, Tavosian; Fairbanks, AM; Manoukian, G; Tavosian, A; Fairbanks, LA (1994). "Posttraumatic stress disorder in elderly and younger adults after the 1988 earthquake in Armenia". Am J Psychiatry 151 (6): 895–901. PMID 8185000.
- Wang, Gao; Shinfuku, Zhang; Zhao, Shen; Zhang, H; Zhao, C; Shen, Y (2000). "Longitudinal Study of Earthquake-Related PTSD in a Randomly Selected Community Sample in North China". Am J Psychiatry 157 (8): 1260–1266. doi:10.1176/appi.ajp.157.8.1260. PMID 10910788.
- Goenjian, Steinberg; Najarian, Fairbanks; Tashjian, Pynoos (2000). "Prospective Study of Posttraumatic Stress, Anxiety, and Depressive Reactions After Earthquake and Political Violence". Am J Psychiatry 157 (6): 911–895. doi:10.1176/appi.ajp.157.6.911.
- Coates SW, Schechter D (2004). Preschoolers' traumatic stress post-9/11: relational and developmental perspectives. Disaster Psychiatry Issue. Psychiatric Clinics of North America, 27(3), 473–489.
- Schechter, DS; Coates, SW; First, E (2002). "Observations of acute reactions of young children and their families to the World Trade Center attacks". Journal of ZERO-TO-THREE: National Center for Infants, Toddlers, and Families 22 (3): 9–13.
- Deborah R. Coen. The Earthquake Observers: Disaster Science From Lisbon to Richter (University of Chicago Press; 2012) 348 pages; explores both scientific and popular coverage
- Donald Hyndman, David Hyndman (2009). "Chapter 3: Earthquakes and their causes". Natural Hazards and Disasters (2nd ed.). Brooks/Cole: Cengage Learning. ISBN 0-495-31667-9.
Richard K. Moore
This document continues to evolve, based on continuing research. The latest version is always maintained at this URL:
Global temperatures in perspective
Let's look at the historical temperature record, beginning with the long-term view. For long-term temperatures, ice cores provide the most reliable data. Let's look first at the very long-term record, using ice cores from Vostok, in the Antarctic. Temperatures are shown relative to 1900, which is shown as zero.
Here we see a very regular pattern of long-term temperature cycles. Most of the time the Earth is in an ice age, and about every 125,000 years there is a brief period of warm temperatures, called an interglacial period. Our current interglacial period has lasted a bit longer than most, indicating that the next ice age is somewhat overdue.
These long-term cycles are probably related to changes in the eccentricity of the Earth's orbit, which follows a cycle of about 100,000 years. We also see other cycles of more closely-spaced peaks, and these are probably related to other cycles in the Earth's orbit. There is an obliquity cycle of about 41,000 years and a precession cycle of about 20,000 years, and all of these cycles interfere with one another in complex ways. Here's a tutorial from NASA that discusses the Earth's orbital variations:
Next let's zoom in on the current interglacial period, as seen in Vostok and Greenland, again using ice-core data.
Here we see that the Antarctic emerged from the last ice age about 1,000 years earlier than the Arctic. While the Antarctic has oscillated up and down throughout the interglacial period, the Arctic has been on a steady decline towards the next ice age for the past 3,000 years.
As of 1900, in comparison to the whole interglacial period, the temperature was 2°C below the maximum in Vostok, and 3°C below the maximum in Greenland. Thus, as of 1900, temperatures were rather cool for the period in both hemispheres, and in Greenland temperatures were close to a minimum.
During this recent interglacial period, temperatures in both Vostok and Greenland have oscillated through a range of about 4°C, although the patterns of oscillation are quite different in each case. In order to see just how different the patterns are, let's look at Greenland and Vostok together for the interglacial period. Vostok is shown with a dashed line.
The patterns are very different indeed. While Greenland has been almost always above the 1900 base line, Vostok has been almost always below. And in the period 1500-1900, while Greenland temperatures were relatively stable, within a range of 0.5°C, Vostok went through a radical oscillation of 3°C, from an extreme high to an extreme low.
These dramatic differences between the two arctic regions might be related to the Earth's orbital variations (see NASA tutorial). On the other hand, we may be seeing a regulatory mechanism, based on the fact that the Southern Hemisphere is dominated by oceans, while most of the land mass is in the Northern Hemisphere. Perhaps incoming heat, though retained by the northern continents, leads to evaporation from the oceans and increased snowfall in the Antarctic. Whatever the reasons, the differences between the two arctic regions are striking.
Let's now look at the average of Greenland and Vostok temperatures over the interglacial period:
[Chart: 8,500 BC to 1900]
Here we see that the average temperature has followed a more stable pattern, with more constrained oscillations, than either of the hemispheres. The graph shows a relatively smooth arc, rising from the last ice age, and descending steadily over the past 4,000 years toward the next ice age. Here's the average again, together with Vostok and Greenland:
[Chart: 8,500 BC to 1900]
Notice how the average is nearly always nestled between the Arctic and Antarctic temperatures, with the Arctic above and the Antarctic below. It does seem that the Antarctic is acting as a regulatory mechanism, keeping the average temperature always moderate, even when the Arctic is experiencing high temperatures. I don't offer this as a theory, but simply as an observation of a possibility.
We can see that the average temperature tells us very little about what is happening in either arctic region. We cannot tell from the average that Arctic temperatures were 3°C higher in 1500 BC, and that glacier melting might have been a danger then. And the average does not tell us that the Antarctic has almost always been cool, with very little danger of ice-cap melting at any time. In general, the average is a very poor indicator of conditions in either arctic region.
If we want to understand warming-related issues, such as tundra-melting and glacier-melting, we must consider the two polar regions separately. If glaciers melt, they do so either because of high Arctic temperatures, or high Antarctic temperatures. Whether or not glaciers are likely to melt cannot be determined by global averages.
Next let's take a closer look at Vostok and Greenland since 500 BC:
[Chart: 500 BC to 1900]
Again we see how the Antarctic temperatures balance the Arctic, showing almost a mirror image over much of this period. From 1500 to 1800, while the Arctic was experiencing the Little Ice Age, it seems almost as if the Antarctic was getting frantic, going into radical oscillations in an effort to keep the average up near the base line.
Beginning about 1800 we have an unusual situation, where both arctic regions begin warming rapidly at the same time, as each follows its own distinct pattern. This of course means that the average will also be rising. Keep in mind that everything we've been looking at so far has been before human-caused CO2 emissions were at all significant.
Thus, just as human-caused emissions began to increase, around 1900, average temperatures were already rising sharply, from natural causes. There has been a strong correlation between rising average temperature and CO2 levels since 1900, arising from a coincidental alignment of three distinct trends. Whether or not rising CO2 levels have accelerated the natural increase in average temperature remains to be seen.
We'll return to this question of CO2 causation, but first let's look at some other records from the Northern Hemisphere, to find out how typical the Greenland record is of its hemisphere. This first record is from Spain, based on the mercury content in a peat bog, as published in Science, 1999, vol. 284. Note that this graph is backwards, with present day on the left.
[Chart: present day back to 2,000 BC]
This next record is from the Central Alps, based on stalagmite isotopes, as published in Earth and Planetary Science Letters, 2005, vol. 235.
[Chart: 0 AD to present day]
And for comparison, here's the Greenland record for the most recent 4,000 years:
[Chart: 2,000 BC to 1900]
While the three records are clearly different, they do share certain important characteristics. In each case we see a staggered rise followed by a staggered decline: a long-term up-and-down cycle over the period. In each case we see that during the past few thousand years, temperatures have been 3°C higher than 1900 temperatures. And in each case we see a steady descent towards the overdue next ice age. The Antarctic, on the other hand, shares none of these characteristics.
In the Northern Hemisphere, based on the shared characteristics we have observed, temperatures would need to rise at least 3°C above 1900 levels before we would need to worry about things like the extinction of polar bears, the melting of the Greenland ice sheet, or runaway methane release. We know this because none of these things have happened in the past 4,000 years, and temperatures have been 3°C higher during that period.
However, such a 3°C rise seems very unlikely to happen, given that all three of our Northern Hemisphere samples show a gradual but definite decline toward the overdue next ice age. Let's now zoom in on the temperature record since 1900, and see what kind of rise has actually occurred. Let's turn to Jim Hansen's latest article, published on realclimate.org, 2009 temperatures by Jim Hansen. The article includes the following two graphs.
Jim Hansen is of course one of the primary spokespersons for the human-caused-CO2-dangerous-warming theory, and there is some reason to believe these graphs show an exaggerated picture as regards warming. Here is one article relevant to that point, and it is typical of other reports I've seen:
Son of Climategate! Scientist says feds manipulated data
Nonetheless, let's accept these graphs as a valid representation of recent average temperature changes, so as to be as fair as possible to the warming alarmists. We'll be using the red line, which is from GISS, and which does not use the various extrapolations that are included in the green line. We'll return to this topic later, but for now suffice it to say that these extrapolations make little sense from a scientific perspective.
The red line shows a temperature rise of 0.7°C from 1900 to the 1998 maximum, a leveling off beginning in 2001, and then a brief but sharp decline starting in 2005. Let's enter that data into our charting program, using values for each 5-year period that represent the center of the oscillations for that period. Here's what we get for 1900-2008:
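How the "center of the oscillations" was computed is not spelled out. The sketch below assumes the midpoint of the highest and lowest annual anomaly in each 5-year window, and the input series is a made-up placeholder rather than the GISS data.

```python
# Assumed reading of "center of the oscillations": (min + max) / 2 of the annual
# anomalies in each 5-year window. The input series here is a placeholder, not
# the actual GISS record.
annual_anomaly = {1900 + i: 0.01 * i for i in range(20)}  # fake data, degC

def five_year_centers(series):
    centers = {}
    years = sorted(series)
    for start in range(years[0], years[-1] + 1, 5):
        window = [series[y] for y in years if start <= y < start + 5]
        if window:
            centers[start] = (min(window) + max(window)) / 2
    return centers

print(five_year_centers(annual_anomaly))
```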
In order to estimate how these average changes would be reflected in each of the polar regions, let's look at Greenland and Vostok together, from 1000 AD to 1900.
[Chart: 1000 AD to 1900; Vostok shown with dashed line]
Here we can see that in 1900 the Antarctic was warming much faster than the Arctic. As usual, the Antarctic was exhibiting the more extreme oscillations. In the most recent warming shown, from 1850 to 1900, the Arctic increased by only 0.5°C while the Antarctic increased by 0.75°C. As regards the average of these two increases, the Antarctic contributed 60%, while the Arctic contributed 40%. If we assume these trends continue, and changes in the global average are reflected in the polar regions, then we get the following estimate for temperature changes in the two polar regions:
[Chart: estimated polar temperature changes, based on apportioning GISS changes 60% to Vostok and 40% to Greenland; Vostok shown with dashed line]
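My reading of the apportioning, sketched below, is that the change in the two-station average is doubled (since the average is half the sum) and then split 60/40; whether this matches the author's exact arithmetic is an assumption on my part, and the 0.7°C input is simply the GISS rise quoted earlier.

```python
# Assumed implementation of the 60/40 apportioning described above: the change
# in the Greenland-Vostok average is doubled (avg = (G + V) / 2) and split 60%
# to Vostok, 40% to Greenland. Whether this matches the author's arithmetic
# exactly is an assumption.
VOSTOK_SHARE, GREENLAND_SHARE = 0.6, 0.4   # from the 0.75 degC vs 0.5 degC rises

def apportion_average_change(avg_change_c):
    """Split a change in the two-station average into Vostok and Greenland parts."""
    total_change = 2.0 * avg_change_c
    return {"Greenland": GREENLAND_SHARE * total_change,
            "Vostok": VOSTOK_SHARE * total_change}

# The 0.7 degC GISS rise quoted earlier would then map to roughly
# +0.56 degC for Greenland and +0.84 degC for Vostok.
print(apportion_average_change(0.7))
```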
This is only approximate, of course, but it is probably closer to the truth than apportioning the changes equally to the two polar regions. Let's now look again at Greenland and Vostok together, for the past 4,000 years, with these apportioned GISS changes appended.
[Chart: 2,000 BC to 2008, extended by GISS data; Vostok shown with faint line]
We see here that both polar regions have remained below their maximum for this period. The Arctic has been nearly 2.5°C warmer, and the Antarctic about 0.5°C warmer. Perhaps CO2 is accelerating Antarctic warming, or perhaps Antarctica is simply continuing its erratic oscillations. In the Arctic however, temperatures are definitely following their long-term pattern, with no apparent influence from increased CO2 levels.
The recent warming period has given us a new peak in the Greenland record, one in a series of declining peaks. If you hold a ruler up to the screen, you'll see that the four peaks shown, occurring about every 1,000 years, fall in a straight line. If the natural pattern continues, then the recent warming has reached its maximum in the Northern Hemisphere, and we will soon experience about two centuries of rapid cooling, as we continue our descent to the overdue next ice age. The downturn shown in the GISS data beginning in 2005 fits perfectly with this pattern.
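The "ruler" observation can be expressed as a simple linear fit of peak temperature against peak date. The peak values below are rough eyeball readings of the Greenland curve on my part, used only to show the method, not the actual GISP2 numbers.

```python
# Least-squares line through the last four Greenland peaks. The peak dates and
# temperatures are rough eyeball readings (my assumption), not the GISP2 data.
peak_years = [-1000, 0, 1000, 2000]    # approximate peak dates
peak_temps = [1.5, 1.0, 0.5, 0.0]      # degC relative to 1900, illustrative

n = len(peak_years)
mean_x = sum(peak_years) / n
mean_y = sum(peak_temps) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(peak_years, peak_temps))
         / sum((x - mean_x) ** 2 for x in peak_years))
print(f"peak-to-peak decline: {slope * 1000:.2f} degC per 1000 years")
```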
Next let's look at the Greenland-Vostok average temperature for the past 4,000 years, extended by the GISS data.
[Chart: 2,000 BC to 2008, extended by GISS data]
Here we see a polar-region subset of the famous hockey stick at the right end of the graph, and we can see how misleading that is as regards the likelihood of dangerous warming. From the average polar temperature, we get the illusion that temperatures are warmer now at the poles than they've been at any time since year 0. But as our previous graph shows, the Arctic has been about 1.5°C warmer during that period, and the Antarctic has been about 0.5°C warmer. And even the average has been nearly 0.5°C warmer, if we look back to 2,000 BC. So in fact we have not been experiencing alarmingly high temperatures recently in either hemisphere.
Dr. Hansen tells us the recent downturn, starting in 2005, is very temporary, and that temperatures will soon start rising again. Perhaps he is right. However, as we shall see, his arguments for this prediction are seriously flawed. What we know for sure is that a downward trend has begun. How far that trend will continue is not yet known.
So everything depends on the next few years. If temperatures turn sharply upwards again, then the IPCC may be right, and human-caused CO2 emissions may have taken control of climate. However, if temperatures continue downward, then climate has been following natural patterns all along in the Northern Hemisphere. The record-setting cold spells and snows in many parts of the Northern Hemisphere this winter seem to be a fairly clear signal that the trend is continuing downwards.
If so, then there has been no evidence of any noticeable influence on northern climate from human-caused CO2, and we are now facing an era of rapid cooling. Within two centuries we could expect temperatures in the Northern Hemisphere to be considerably lower than they were in the recent Little Ice Age.
We don't know for sure which way temperatures will go, rapidly up or rapidly down. But I can make this statement:
As of this moment, based on the long-term temperature patterns in the Northern Hemisphere, there is no evidence that human-caused CO2 has had any effect on climate. The rise since 1800, as well as the downward dip starting in 2005, are entirely in line with the natural long-term pattern. If temperatures turn sharply upwards in the next few years, that will be the first-ever evidence for human-caused warming in the Northern Hemisphere.
The illusion of dangerous warming arises from a failure to recognize that global averages are a very poor indicator of actual conditions in either hemisphere.
If the downward trend continues in the Northern Hemisphere, as the long-term pattern suggests, we are likely to experience about two centuries of rapid cooling in the Northern Hemisphere, as we continue our descent toward the overdue next ice age.
As regards the recent downturn, here are two other records, both of which show an even more dramatic downturn than the one shown in the GISS data:
[Figure: UAH Monthly Means of Lower Troposphere (LT5-2), 2004-2008 (Dr. John Christy)]
[Figure: RSS MSU Monthly Anomaly, 70S to 82.5N (essentially global), 2004-2008]
Why haven't unusually high levels of CO2 significantly affected temperatures in the Northern Hemisphere?
One place to look for answers to this question is in the long-term patterns that we see in the temperature record of the past few thousand years, such as the peaks separated by about 1,000 years in the Greenland data, and other more closely-spaced patterns that are also visible. Some forces are causing those patterns, and whatever those forces are, they have nothing to do with human-caused CO2 emissions.
Perhaps the forces have to do with cycles in solar radiation and solar magnetism, or cosmic radiation, or something we haven't yet identified. Until we understand what those forces are, how they interfere with one another, and how they affect climate, we can't build useful climate models, except on very short time scales.
We can also look for answers in the regulatory mechanisms that exist within the Earth's own climate system. If an increment of warming happens on the surface, for example, then there is more evaporation from the oceans, which cools the ocean and leads to increased precipitation. While an increment of warming may melt glaciers, it may also cause increased snowfall in the Arctic regions. To what extent do these balance one another? Do such mechanisms explain why Antarctic temperatures seem to always be balancing the Arctic, as we have seen in the data?
It is important to keep in mind that CO2 concentrations in the atmosphere are tiny compared to water-vapor concentrations. A small reduction in cloud formation can more than compensate for a large increase in CO2 concentration, as regards the total greenhouse effect. If there is a precipitation response to CO2 warming, that could be very significant, and we would need to understand it quantitatively, by observing it, not by making assumptions and putting them in our models.
Vegetation also acts as a regulatory system. Plants and trees gobble up CO2; that is where their substance comes from. Greater CO2 concentration leads to faster growth, taking more CO2 out of the atmosphere. Until we understand quantitatively how these various regulatory systems function and interact, we can't even build useful models on a short time scale.
In fact a lot of research is going on, investigating both lines of inquiry: extraterrestrial forces as well as terrestrial regulation mechanisms. However, in the current public-opinion and media climate, any research not related to CO2 causation is dismissed as the activity of contrarians, deniers, and oil-company hacks. Just as the Bishop refused to look through Galileo's telescope, so today we have a whole society that refuses to look at many of the climate studies that are available.
From observation of the patterns in climate history, the evidence indicates that regulatory mechanisms of some kind are operating. It's not so much the lack of a CO2 effect that provides evidence, but rather the constrained, oscillatory pattern in the average polar temperatures over the whole interglacial period. Whenever you see constrained oscillations in a system, that is evidence of a regulatory mechanism, some kind of thermostat, at work.
Direct evidence for climate-regulation mechanisms
I'd like to draw attention to one example of a scientist who has been looking at one aspect of the Earth's regulatory system. Roy Spencer has been conducting research using the satellite systems that are in place for climate studies. Here are his relevant qualifications:
Roy W. Spencer is a principal research scientist for the University of Alabama in Huntsville and the U.S. Science Team Leader for the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA's Aqua satellite. He has served as senior scientist for climate studies at NASA's Marshall Space Flight Center in Huntsville, Alabama. He describes his research in a presentation available on YouTube:
In the talk he gives a lot of details, which are quite interesting, but one does need to concentrate and listen carefully to keep up with the pace and depth of the presentation. He certainly sounds like someone who knows what he's talking about. Permit me to summarize the main points of his research:
When greenhouse gases cause surface warming, a response occurs, a feedback response, in the form of changes in cloud and precipitation patterns. The CRU-related climate models all assume the feedback response is a positive one: any increment of greenhouse warming will be amplified by knock-on effects in the weather system. This assumption then leads to the predictions of runaway global warming.
Spencer set out to see what the feedback response actually is, by observing what happens in the cloud-precipitation system when surface warming is occurring. What he found, by targeting satellite sensors appropriately, is that the feedback response is negative rather than positive. In particular, he found that the formation of storm-related cirrus clouds is inhibited when surface temperatures are high. Cirrus clouds themselves have a powerful greenhouse effect, and this reduction in cirrus cloud formation compensates for the increase in the CO2 greenhouse effect.
This is the kind of research we need to look at if we want to build useful climate models. Certainly Spencer's results need to be confirmed by other researchers before we accept them as fact, but to simply dismiss his work out of hand is very bad for the progress of climate science. Consider what the popular website SourceWatch says about Spencer.
We don't find there any reference to rebuttals of his research, but we are told that Spencer is a global warming skeptic who writes columns for a free-market website funded by Exxon. They also mention that he spoke at a conference organized by the Heartland Institute, which promotes lots of reactionary, free-market principles. They are trying to discredit Spencer's work on irrelevant grounds, what the Greeks referred to as an ad hominem argument. Sort of like, "If he beats his wife, his science must be faulty."
And it's true, in a sense: Spencer does seem to have a pro-industry philosophy that shows little concern for sustainability. That might even be part of his motivation for undertaking his recent research, hoping to give ammunition to pro-industry lobbyists. But that doesn't prove his research is flawed or that his conclusions are invalid. His work should be challenged scientifically, by carrying out independent studies of the feedback process. If the challenges are restricted to irrelevant attacks, that becomes almost an admission that his results, which are threatening to the climate establishment, cannot be refuted. He does not hide his data, or his code, or his sentiments. The same cannot be said for the warming-alarmist camp.
What are we to make of Jim Hansen's prediction that rapid warming will soon resume?
Once again, I refer you to Dr. Hansen's recent article, 2009 temperatures by Jim Hansen. Jim explains his prediction methodology in this paragraph, emphasis added:
The global record warm year, in the period of near-global instrumental measurements (since the late 1800s), was 2005. Sometimes it is asserted that 1998 was the warmest year. The origin of this confusion is discussed below. There is a high degree of interannual (year-to-year) and decadal variability in both global and hemispheric temperatures. Underlying this variability, however, is a long-term warming trend that has become strong and persistent over the past three decades. The long-term trends are more apparent when temperature is averaged over several years. The 60-month (5-year) and 132-month (11-year) running mean temperatures are shown in Figure 2 for the globe and the hemispheres. The 5-year mean is sufficient to reduce the effect of the El Niño-La Niña cycles of tropical climate. The 11-year mean minimizes the effect of solar variability; the brightness of the sun varies by a measurable amount over the sunspot cycle, which is typically of 10-12 year duration.
As I've emphasized above, Jim is assuming there is a strong and persistent warming trend, which he of course attributes to human-caused CO2 emissions. And then that assumption becomes the justification for the 5- and 11-year running averages. Those running averages then give us phantom temperatures that don't match actual observations. In particular, if a downward decline is beginning, the running averages will tend to hide the decline, as we see in these alarmist graphs from the article with their exaggerated hockey stick.
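For readers unfamiliar with running means, here is a minimal sketch of how a trailing five-year running mean is computed. The anomaly values are invented for illustration, and this is not GISS or Hansen's code; it simply shows that the most recent years enter the average only gradually.

```python
# Minimal sketch: how an n-year trailing running mean smooths a series.
# The anomaly values below are invented for illustration only.

def running_mean(values, window):
    """Average of the current value and the (window - 1) values before it;
    None until enough data exist."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = values[i + 1 - window:i + 1]
            out.append(sum(chunk) / window)
    return out

# Hypothetical annual anomalies (degrees C): a rise followed by a small dip.
years = list(range(1998, 2009))
anomaly = [0.45, 0.40, 0.42, 0.48, 0.51, 0.55, 0.54, 0.62, 0.58, 0.56, 0.44]

for y, a, m in zip(years, anomaly, running_mean(anomaly, 5)):
    print(y, a, "-" if m is None else round(m, 3))
```

Because each smoothed value averages in four earlier years, a dip in the final year shows up only as a small decline in the running mean, which is the point at issue here.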
It seems we are looking at a classic case of scientists becoming over-attached to their model. In the beginning there was a theory of human-caused global warming, arising from the accidental convergence of three independent trends, combined with the knowledge that CO2 is a greenhouse gas. That theory has now become an assumption among its proponents, and actual observations are being dismissed as confusion because they don't agree with the model. One is reminded again of the Bishop who refused to look through Galileo's telescope, so as not to be confused about the fact that the Earth is the center of the universe.
The climate models have definitely strayed into the land of imaginary epicycles. The assumption of CO2 causation, plus the preoccupation with an abstract global average, creates a warming illusion that has no connection with reality in either hemisphere. This mathematical abstraction, the global average, is characteristic of nowhere. It creates the illusion of a warming crisis, when in fact no evidence for such a crisis exists. In the context of IPCC warnings about glaciers melting, runaway warming, etc., the global-average hockey stick serves as deceptive and effective propaganda, but not as science.
As with the Ptolemaic model, there is a much simpler explanation for our recent era of warming, at least in the Northern Hemisphere: long-term temperature patterns are continuing, from natural causes, and natural regulatory mechanisms have compensated for the greenhouse effect of human-caused CO2 emissions. There is no strong reason to believe that CO2 has been affecting the Southern Hemisphere either, given the natural record of rapid and extreme oscillations which often go opposite to northern trends.
This simpler explanation is based on actual observations, and requires no abstract mathematical epicycles or averages, but it removes CO2 from the center of the climate debate. And just as politically powerful factions in Galileo's day wanted the Earth to remain the center of the universe, powerful factions today want CO2 to remain at the center of climate debate, and global warming to be seen as a threat.
What is the real agenda of the politically powerful factions who are promoting global-warming alarmism?
One thing we always need to keep in mind is that the people at the top of the power pyramid in our society have access to the very best scientific information. They control dozens, probably hundreds, of high-level think tanks, able to hire the best minds, and carrying out all kinds of research we don't hear about. They have access to all the secret military and CIA research, and a great deal of influence over what research is carried out in think tanks, the military, and in universities.
Just because they might be promoting faulty science for its propaganda value, that doesn't mean they believe it themselves. They undoubtedly know that global cooling is the most likely climate prognosis, and the actions they are promoting are completely in line with such an understanding.
Cap-and-trade, for example, won't reduce carbon emissions. Rather it is a mechanism that allows emissions to continue, while pretending they are declining by means of a phony market model. You know what a phony market model looks like. It looks like Reagan and Thatcher telling us that lower taxes will lead to higher government revenues due to increased business activity. It looks like globalization, telling us that opening up free markets will raise all boats and make us all prosperous. It looks like Wall Street, telling us that mortgage derivatives are a good deal, and we should buy them. And it looks like Wall Street telling us the bailouts will restore the economy, and that the recession is over. In short, it's a con. It's a fake theory about what the consequences of a policy will be, when the real consequences are known from the beginning.
Cap-and-trade has nothing to do with climate. It is part of a scheme to micromanage the allocation of global resources, and to maximize profits from the use of those resources. Think about it. Our powerful factions decide who gets the initial free cap-and-trade credits. They run the exchange market itself, and can manipulate the market, create derivative products, sell futures, etc. They can cause deflation or inflation of carbon credits, just as they can cause deflation or inflation of currencies. They decide which corporations get advance insider tips, so they can maximize their emissions while minimizing their offset costs. They decide who gets loans to buy offsets, and at what interest rate. They decide what fraction of petroleum will go to the global North and the global South. They have their man in the regulatory agencies that certify the validity of offset projects, such as replacing rainforests with tree plantations, thus decreasing carbon sequestration. And they make money every which way as they carry out this micromanagement.
In the face of global cooling, this profiteering and micromanagement of energy resources becomes particularly significant. Just when more energy is needed to heat our homes, we'll find that the price has gone way up. Oil companies are actually strong supporters of the global-warming bandwagon, which is very ironic, given that they are funding some of the useful contrary research that is going on. Perhaps the oil barons are counting on the fact that we are suspicious of them, and assume we will discount the research they are funding, as most people are in fact doing. And the recent onset of global cooling explains all the urgency to implement the carbon-management regime: they need to get it in place before everyone realizes that warming alarmism is a scam.
And then there are the carbon taxes. Just as with income taxes, you and I will pay our full share for our daily commute and for heating our homes, while the big corporate CO2 emitters will have all kinds of loopholes, and offshore havens, set up for them. Just as Federal Reserve theory hasn't left us with a prosperous Main Street, despite its promises, so theories of carbon trading and taxation won't give us a happy transition to a sustainable world.
Instead of building the energy-efficient transport systems we need, for example, they'll sell us biofuels and electric cars, while most of society's overall energy will continue to come from fossil fuels, and the economy continues to deteriorate. The North will continue to operate unsustainably, and the South will pay the price in the form of mass die-offs, which are already ticking along at the rate of six million children a year from malnutrition and disease.
While collapse, suffering, and die-offs of marginal populations will be unpleasant for us, it will give our powerful factions a blank canvas on which to construct their new world order, whatever that might be. And we'll be desperate to go along with any scheme that looks like it might put food back on our tables and warm up our houses.
This document continues to evolve, based on continuing research. The latest version is always maintained at this URL:
http://rkmdocs.blogspot.com/2010/01/climate-science-observations-vs-models.html
The author can be reached here: firstname.lastname@example.org
State Government Structure
The U.S. government is federal in form. The states and national government share powers, which are wholly derived from the Constitution.
From the Constitution, the national government derives its delegated powers: those expressly granted to it and those implied by them.
Article I, Section 10 of the Constitution of the United States puts limits on the powers of the states. States cannot form alliances with foreign governments, declare war, coin money, or impose duties on imports or exports.
The Tenth Amendment declares, “The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people.” In other words, states have all powers not granted to the federal government by the Constitution.
These powers have taken many different forms. States must take responsibility for areas such as:
- ownership of property
- education of inhabitants
- implementation of welfare and other benefits programs and distribution of aid
- protecting people from local threats
- maintaining a justice system
- setting up local governments such as counties and municipalities
- maintaining state highways and setting up the means of administering local roads
- regulation of industry
- raising funds to support their activities
In many areas, states have a large role but also share administrative responsibility with local and federal governments. Highways, for example, are divided amongst the three different levels. Most states classify roads into primary, secondary, and local levels. This system determines whether the state, county, or local governments, respectively, must pay for and maintain roads. Many states have departments of transportation, which oversee and administer interstate transportation. U.S. highways and the interstate system are administered by the national government through the U.S. Department of Transportation.
States must also administer mandates set by the federal government. Generally these mandates contain rules which the states wouldn’t normally carry out. For example, the federal government may require states to reduce air pollution, provide services for the handicapped, or require that public transportation must meet certain safety standards. The federal government is prohibited by law from setting unfunded mandates. In other words, the federal government must provide funding for programs it mandates.
The federal government pays for its mandates through grants-in-aid. The government distributes categorical grants to be used for specific programs. In 1995, federal grant money totaled $229 billion. Block grants give the states access to large sums of money with few specific limitations. The state must only meet the federal goals and standards. The national government can give the states either formula grants or project grants (most commonly issued).
Mandates can also pass from the state to local levels. For example, the state can set certain education standards that the local school districts must abide by. Or, states could set rules calling for specific administration of local landfills.
Each state has its own constitution which it uses as the basis for laws. All state constitutions must abide by the framework set up under the national Constitution.
Therefore, in basic structure, state constitutions closely resemble the U.S. Constitution. They contain a preamble, a bill of rights, articles that describe separation of powers between the executive, legislative and judicial branches, and a framework for setting up local governments.
Length and Specificity
State constitutions also tend to be significantly lengthier than the U.S. Constitution. State constitutions can contain as many as 174,000 words (Alabama), and have as many as 513 amendments attached (also Alabama). Much of this length is devoted to issues or areas of interest that are outdated. Oklahoma's constitution, for example, contains provisions that describe the correct temperature to test kerosene and oil. California has sections that describe everything that may be deemed tax-exempt, including specific organizations and fruit and nut trees less than four years of age.
All state constitutions provide for a means of amendment. The process is usually initiated when the legislature proposes the amendment by a majority or supermajority vote, after which the people approve the amendment through a majority vote. Amendments can also be proposed by a constitutional convention or, in some states, through an initiative petition.
All states have a bicameral or two-house legislature, except Nebraska, which has a unicameral, or single, house. Legislative salaries range from nothing (Kentucky and Montana) to $57,500 (New York) per year. In states where there is no official salary, legislators are often paid on a per diem basis (e.g., Rhode Island legislators earn $5 per day).
The Upper House
- Called the Senate.
- Membership can range from 20 (Alaska) to 67 (Minnesota).
- Terms usually last four years.
The Lower House
- Called the House of Representatives, General Assembly, or House of Delegates (Virginia).
- Membership can range from 40 (Alaska and Nevada) to 400 (New Hampshire).
- Terms usually last two years.
Like the national legislature, each house in a state legislature has a presiding officer. The Lieutenant Governor presides over the Senate, but the majority leader assumes most of the leadership roles. The lower house elects a Speaker who serves as its leader. Leaders of each house are responsible for recognizing speakers in debate, referring bills to committee, and presiding over deliberations.
States grant legislatures a variety of functions:
- Enact laws
- Represent the needs of their constituents
- Share budget-making responsibilities with Governor
- Confirm nominations of state officials
- House begins impeachment proceedings; Senate conducts the trial if there is an impeachment.
- Oversight – review of the executive branch. (e.g., sunset legislation)
Legislators don’t wield the only legislative power in state government. In many states, the people can perform legislative functions directly. The ways by which these methods can be implemented vary, but they usually require a certain number of signatures on a petition. After that, the issue is put on the ballot for a general vote.
A. Initiative – A way citizens can bypass the legislature and pass laws or amend the state constitution through a direct vote.
B. Referendum – A way citizens can approve of statutes or constitutional changes proposed by the legislature through a direct vote.
C. Recall – A way citizens can remove elected officials from office. It is allowed in 14 states and is hardly ever used.
The Governor is a state’s chief executive. A governor can serve either a two- or four-year term. Thirty-seven states have term limits on the governor.
The Governor is chiefly responsible for making appointments to state agencies and offices. These powers include:
- The ability to appoint people to specific posts in the executive branch.
- The ability to appoint to fill a vacancy caused by the death or resignation of an elected official
- Chief Executive – draws up budget, also has clemency and military powers
- Like the U.S. President, a governor has the right to veto bills passed by the legislature.
- Vetoes can be overridden by a two-thirds or three-fourths majority in the legislature.
- In many states, the governor has the power of a line-item veto.
- In some states, the governor has the power of an amendatory or conditional veto.
Other Elected Positions Within the Executive Branch
The president and vice-president are the only elected executive positions within the federal government. State governments, however, often have other executive positions elected separately from the governor. Some examples include:
- Lieutenant Governor – Succeeds the governor in office and presides over the Senate.
- Secretary of State – Takes care of public records and documents; also may have many other responsibilities.
- Attorney General – Responsible for representing the state in all court cases.
- Auditor – Makes sure that public money has been spent legally.
- Treasurer – Invests and pays out state funds.
- Superintendent of Public Instruction – Heads state department of education.
Like the Federal Government, state governments need money to function. State systems, however, rely on different mechanisms to raise revenue. A breakdown of the state revenue system:
A. Insurance Trust Revenue relates to the money that the state takes in for administering programs such as retirement, unemployment compensation, and other social insurance systems.
B. Services and Fees include items such as tolls, liquor sales, lottery ticket sales, income from college tuition, and hospital charges.
C. State Taxes come in many different forms:
- Most states have a sales tax. The sales tax is assessed on most consumer goods in the state and ranges from 4% to 7%.
- Most states also have a state income tax, similar to the one used by the federal government. People can pay up to 16% of taxable income in state income taxes. Most states have a progressive income tax. About 37% of state tax revenue is obtained through the personal income tax. Corporate income tax is also assessed on corporate income, a sum that accounts for 7% of state tax revenue.
- States levy taxes on motor fuels such as gasoline, diesel, and gasohol. Most of the funds go towards financing roads and transportation within the state.
- Sin taxes apply to alcoholic beverages and tobacco products. These taxes are named as such because they were originally intended to decrease consumption of these “undesirable” goods.
- Most states also have inheritance taxes, where a person pays a percentage of what he or she inherits from a deceased person.
D. State-run liquor stores are in operation in 17 states. Some states also make money through administration of utilities.
By 1989, 29 states had adopted some sort of gambling, most in the form of instant-winner or “drawing” lotteries. About 1 percent of state revenue comes from gambling. Lotteries can be very profitable for the state. Generally 50% of the proceeds go to winners, 10% to administration costs, and 40% to the state’s general fund. Profits from lotteries have been used towards funding education, economic development, and environmental programs. Net income from state lotteries totaled $11.1 billion in 1995.
Like the Federal government, state governments also have debts. In 1994, total state government debt had reached $410 billion. The per capita state government debt across the country is about $1,500. Debts range from about $700 million in Wyoming to over $65 billion in New York.
One of the largest issue areas left to the discretion of the states is education. The United States‘ public education system is administered mostly on the state and local levels. Elementary and Secondary schools receive funding from all the different levels of government: about 8% from the Federal Government, 50% from the State government, and 42% from local governments. State and local governments put more money toward education than any other cost. There are approximately 15,000 school districts around the country, each governed by its own school board. The people of the district vote the members of the school board into office. Generally about 15-30% of the local electorate participates in a typical school board election. Some roles of a school board:
- Administer general district policy
- Make sure the district is in tune with local interests
- Hire or fire the superintendent
The Superintendent is the head administrator within a district. His or her responsibilities include:
- Drafting the budget
- Overseeing the principals of schools within the district
- General administration within the district
- Communication with the chief state school official (CSSO).
The chief state school official is appointed by the governor and, along with other state education officials, has many responsibilities:
- distribute state funds
- establish teacher certification requirements
- define length of the school day
- define nutritional content of school lunches
- mandate certain curricula for schools and set the school calendar
Amendatory or conditional veto – the power to send a bill back to the legislature with suggested changes.
Federal – a system in which the states and national government share responsibilities. When people talk about the federal government, they generally mean the national government, although the term often refers to the division of powers between the state and national governments.
Presiding officer – one person who oversees the activities of a legislative house. A presiding officer can have either a major or minor leadership role in his or her house.
Progressive tax – a tax where people with higher incomes pay a higher percentage of taxable income in state taxes.
Sunset legislation – legislation that remains in effect only for a set length of time, after which it expires unless renewed. It is used for two main reasons:
- It can be used to persuade legislators who do not strongly support a particular measure. Because the legislation lasts only a set length of time, the “on the fence” legislators are more likely to vote for it because of its “temporary” nature.
- Some issues change rapidly (e.g., technology-related issues), and therefore legislation pertaining to these issues must be updated periodically.
Term limit – a limit on the number of consecutive terms an elected official can serve.
Unfunded mandate – when the federal government sets regulations for the states to follow and does not provide the states with funds to carry them out.
Burns, James, et al. State and Local Politics: Government by the People. Englewood Cliffs, NJ: Prentice-Hall, Inc. 1984.
Peterson, Steven and Rasmussen, Thomas. State and Local Politics. New York: McGraw-Hill, Inc. 1990.
Ross, Michael. State and Local Politics and Policy: Change and Reform. Englewood Cliffs, NJ: Prentice-Hall, Inc. 1987.
Saffell, David. State Politics. Reading, MA: Addison-Wesley Publishing Company. 1984.
U.S. Bureau of the Census. Statistical Abstract of the United States: 1996 (116th edition.) Washington, DC. 1996.
Compiled by James Berry
This article is republished with the consent of Project Vote Smart. Access their website at www.vote-smart.org. Call their Voter’s Research Hotline toll free 1-888-868-3762.
1. What are the two main functions that prices perform in market economies? How do they address the three main questions: what gets produced, how are they produced, and who gets the products? How do prices transmit information about changing consumer wants and resource availability?
In market economies, prices answer the three main questions that any economic system must address. Prices do this by performing two main functions: rationing the goods and services that are produced and allocating the resources used to produce them. The question of who gets the goods and services that are produced is answered by the rationing function that prices perform. Products are rationed according to willingness and ability to pay the market prices for products. The questions of what gets produced and how they are produced are answered by the allocative function that prices perform. Prices transmit information between consumers and producers. Changes in consumers' desires and changing resource scarcity are signaled by the changing prices of goods and resources.
2. How do prices ration goods? Why must goods be rationed? What are other means of rationing besides price? Are these other methods fairer than using price?
All scarce goods must be rationed somehow. Because goods are not freely available to everyone who wants them, some people will get certain goods and others will not. Rationing invariably discriminates against someone. Rationing by price discriminates against people with a low ability or willingness to pay the market price. Sometimes, other rationing mechanisms are employed, such as queuing.
3. If the price of a good is kept below the market price through the use of a government-imposed price control, how can the total cost end up exceeding the supposedly higher market price?
When employing other ways of rationing goods, we should keep in mind that every rationing mechanism discriminates against someone and can result in wasted resources. For example, queuing often leads to long lines and wasted time and discriminates against people on the basis of the opportunity cost of their time. The total cost will often exceed what would have been paid in a free market.
4. How can supply and demand be used as a tool for analysis?
The basic logic of supply and demand is a powerful tool for analysis. For example, supply and demand analysis shows that an oil import tax will reduce the quantity of oil demanded, increase domestic production, and generate revenues for the government.
5. How is market efficiency related to demand and supply?
Supply and demand curves can be used to illustrate the idea of market efficiency, an important aspect of normative economics.
6. What is Consumer Surplus?
Consumer surplus is the difference between the maximum amount a person is willing to pay for a good and the current market price.
7. What is Producer Surplus?
Producer Surplus is the difference between the current market price and the full cost of production at each output level.
8. When are producer and consumer surpluses maximized?
Producer and consumer surpluses are maximized at free market equilibrium in competitive markets.
9. What happens to consumer surplus if goods are over- or under-produced?
There is a loss in both consumer and producer surplus, and this is referred to as a deadweight loss.
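As a rough numerical illustration of questions 6 through 9, here is a sketch using hypothetical linear curves (demand P = 100 - Q and supply P = 20 + Q; these numbers are not from the text). It computes the two surpluses at equilibrium and the deadweight loss when output is held below the equilibrium quantity.

```python
# Sketch: consumer surplus, producer surplus, and deadweight loss
# for hypothetical linear curves: demand P = 100 - Q, supply P = 20 + Q.

def demand_price(q):
    return 100 - q

def supply_price(q):
    return 20 + q

# Competitive equilibrium: 100 - Q = 20 + Q  ->  Q* = 40, P* = 60
q_star = 40
p_star = demand_price(q_star)

# Surpluses at equilibrium are the areas of two triangles.
cs = 0.5 * q_star * (demand_price(0) - p_star)   # 0.5 * 40 * 40 = 800
ps = 0.5 * q_star * (p_star - supply_price(0))   # 0.5 * 40 * 40 = 800

# Now suppose output is restricted to Q = 30 (underproduction).
q_r = 30
# The surplus lost is the triangle between the curves from Q = 30 to Q = 40.
deadweight = 0.5 * (q_star - q_r) * (demand_price(q_r) - supply_price(q_r))

print("Equilibrium quantity and price:", q_star, p_star)   # 40 60
print("Consumer surplus:", cs)                              # 800.0
print("Producer surplus:", ps)                              # 800.0
print("Deadweight loss at Q = 30:", deadweight)             # 100.0
```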
10. What is elasticity?
Elasticity is a general measure of responsiveness. If one variable A changes in response to changes in another variable B, the elasticity of A with respect to B is equal to the percentage change in A divided by the percentage change in B.
11. How is the slope of the demand curve related to responsiveness?
The slope of a demand curve is an inadequate measure of responsiveness, because its value depends on the units of measurement used. For this reason, elasticities are calculated using percentages.
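A small sketch of this point, with invented numbers: rescaling the units in which quantity is measured changes the slope of the demand relationship but leaves the elasticity unchanged. The midpoint (arc) formula used here is one common convention; the text does not commit to a particular formula.

```python
# Sketch: slope depends on units of measurement, elasticity does not.
# Hypothetical data: price falls from $10 to $8, quantity demanded
# rises from 100 units to 120 units.

def slope(q1, q2, p1, p2):
    return (q2 - q1) / (p2 - p1)

def midpoint_elasticity(q1, q2, p1, p2):
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

p1, p2 = 10.0, 8.0

# Quantity measured in single units:
q1, q2 = 100.0, 120.0
print(slope(q1, q2, p1, p2))                 # -10.0
print(midpoint_elasticity(q1, q2, p1, p2))   # about -0.82

# Same demand, quantity measured in dozens:
q1d, q2d = q1 / 12, q2 / 12
print(slope(q1d, q2d, p1, p2))               # about -0.83 (slope changes)
print(midpoint_elasticity(q1d, q2d, p1, p2)) # about -0.82 (elasticity unchanged)
```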
12. What is price elasticity of demand and what are its extremes?
The price elasticity of demand lets us know the percentage change we could expect in the quantity demanded of a good for a 1% change in price. Perfectly inelastic demand does not respond to price changes, its numerical value is zero. Perfectly elastic demand for a product drops to zero when there is a very small price increase. Unitary elastic demand describes a relationship in which the percentage change in the quantity of a product demanded is the same as the percentage change in price; its numerical value is -1. Elastic demand is demand in which the percentage change in the quantity of a product demanded is larger than the percentage change in price. Inelastic demand is demand in which the percentage change in the quantity of a product demanded is smaller than the percentage change in price.
13. What happens to total revenue if demand is elastic and price increases?
A price increase will cause total revenue to fall, as the quantity demanded will fall by a proportionately larger amount than the price rose.
14. What happens to total revenue if demand is elastic and price decreases?
A price decrease will cause total revenue to rise, as the quantity demanded will rise by a proportionately larger amount than the price fell.
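A quick numerical check of the total-revenue rule, assuming a hypothetical elastic demand in which a 10 percent price change produces a 20 percent quantity change in the opposite direction:

```python
# Sketch: total revenue under elastic demand (hypothetical numbers).
# Start at price $10, quantity 100, so revenue = $1,000.

p0, q0 = 10.0, 100.0
print(p0 * q0)            # 1000.0  (initial total revenue)

# Price rises 10%; with elastic demand, quantity falls 20%.
p_up, q_up = p0 * 1.10, q0 * 0.80
print(p_up * q_up)        # 880.0   (revenue falls)

# Price falls 10%; with elastic demand, quantity rises 20%.
p_dn, q_dn = p0 * 0.90, q0 * 1.20
print(p_dn * q_dn)        # 1080.0  (revenue rises)
```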
15. What does the elasticity of demand depend upon?
The elasticity of demand depends upon the availability of substitutes, the importance of the item in individual budgets, and the time frame in question.
16. What are other important elasticity measures?
Other important elasticity measures are: (1) income elasticity, which measures the responsiveness of the quantity demanded with respect to changes in income; (2) cross-price elasticity of demand, which measures the responsiveness of the quantity demanded of one good with respect to changes in the price of another good; (3) elasticity of supply, which measures the responsiveness of the quantity supplied of a good with respect to changes in the price of that good; and (4) elasticity of labor supply, which measures the responsiveness of the quantity of labor supplied with respect to changes in the price of labor (the wage rate).
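As a sketch of the first two measures, with invented numbers and simple (non-midpoint) percentage changes:

```python
# Sketch: income elasticity and cross-price elasticity (hypothetical data).

def pct_change(old, new):
    return (new - old) / old

# Income elasticity: income rises from $40,000 to $44,000 (+10%),
# quantity demanded of the good rises from 50 to 53 (+6%).
income_elasticity = pct_change(50, 53) / pct_change(40_000, 44_000)
print(round(income_elasticity, 2))       # 0.6 -> a normal, income-inelastic good

# Cross-price elasticity: price of good B rises from $2.00 to $2.50 (+25%),
# quantity demanded of good A rises from 80 to 90 (+12.5%).
cross_price_elasticity = pct_change(80, 90) / pct_change(2.00, 2.50)
print(round(cross_price_elasticity, 2))  # 0.5 -> A and B are substitutes
```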
A variety of measures of national income and output are used in economics to estimate total economic activity in a country or region, including gross domestic product (GDP), gross national product (GNP), net national income (NNI), and adjusted national income (NNI* adjusted for natural resource depletion). All are specially concerned with counting the total amount of goods and services produced within some "boundary". The boundary is usually defined by geography or citizenship, and may also restrict the goods and services that are counted. For instance, some measures count only goods and services that are exchanged for money, excluding bartered goods, while other measures may attempt to include bartered goods by imputing monetary values to them.
Arriving at a figure for the total production of goods and services in a large region like a country entails a large amount of data-collection and calculation. Although some attempts were made to estimate national incomes as long ago as the 17th century, the systematic keeping of national accounts, of which these figures are a part, only began in the 1930s, in the United States and some European countries. The impetus for that major statistical effort was the Great Depression and the rise of Keynesian economics, which prescribed a greater role for the government in managing an economy, and made it necessary for governments to obtain accurate information so that their interventions into the economy could proceed as well-informed as possible.
In order to count a good or service, it is necessary to assign value to it. The value that the measures of national income and output assign to a good or service is its market value – the price it fetches when bought or sold. The actual usefulness of a product (its use-value) is not measured, to the extent that it differs from its market value.
Three strategies have been used to obtain the market values of all the goods and services produced: the product (or output) method, the expenditure method, and the income method. The product method looks at the economy on an industry-by-industry basis. The total output of the economy is the sum of the outputs of every industry. However, since an output of one industry may be used by another industry and become part of the output of that second industry, to avoid counting the item twice we use not the value output by each industry, but the value-added; that is, the difference between the value of what it puts out and what it takes in. The total value produced by the economy is the sum of the values-added by every industry.
The expenditure method is based on the idea that all products are bought by somebody or some organisation. Therefore we sum up the total amount of money people and organisations spend in buying things. This amount must equal the value of everything produced. Usually expenditures by private individuals, expenditures by businesses, and expenditures by government are calculated separately and then summed to give the total expenditure. Also, a correction term must be introduced to account for imports and exports outside the boundary.
The income method works by summing the incomes of all producers within the boundary. Since what they are paid is just the market value of their product, their total income must be the total value of the product. Wages, proprietors' incomes, and corporate profits are the major subdivisions of income.
The output approach focuses on finding the total output of a nation by directly finding the total value of all goods and services a nation produces.
Because of the complication of the multiple stages in the production of a good or service, only the final value of a good or service is included in the total output. This avoids an issue often called 'double counting', wherein the total value of a good is included several times in national output, by counting it repeatedly in several stages of production. In the example of meat production, the value of the good from the farm may be $10, then $30 from the butchers, and then $60 from the supermarket. The value that should be included in final national output should be $60, not the sum of all those numbers, $90. The values added at each stage of production over the previous stage are respectively $10, $20, and $30. Their sum gives an alternative way of calculating the value of final output.
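Restating the meat example as a short sketch: summing the value added at each stage gives the same total as counting only the final sale, which is why the product method avoids double counting.

```python
# Sketch of the meat example above: the sum of value added at each stage
# equals the final sale value, avoiding double counting.

stage_values = [10, 30, 60]   # farm, butcher, supermarket sale prices

value_added = []
previous = 0
for v in stage_values:
    value_added.append(v - previous)
    previous = v

print(value_added)        # [10, 20, 30]
print(sum(value_added))   # 60, the same as the final sale value
print(stage_values[-1])   # 60
```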
The income approach equates the total output of a nation to the total factor income received by residents or citizens of the nation. The main types of factor income are employee compensation, rent, interest, and profit.
All remaining value added generated by firms is called the residual or profit. If a firm has stockholders, they own the residual, some of which they receive as dividends. Profit includes the income of the entrepreneur - the businessman who combines factor inputs to produce a good or service.
The expenditure approach is basically an output accounting method. It focuses on finding the total output of a nation by finding the total amount of money spent. This is acceptable, because like income, the total value of all goods is equal to the total amount of money spent on goods. The basic formula for domestic output takes all the different areas in which money is spent within the region, and then combines them to find the total output.
C = household consumption expenditures / personal consumption expenditures
I = gross private domestic investment
G = government consumption and gross investment expenditures
X = gross exports of goods and services
M = gross imports of goods and services
Note: GDP = C + I + G + (X - M). The term (X - M) is often written as XN, which stands for "net exports"
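A minimal sketch of the expenditure calculation; the component values below are invented for illustration.

```python
# Sketch: GDP by the expenditure approach, GDP = C + I + G + (X - M).
# The component values are invented for illustration.

C = 7_000   # household consumption expenditures
I = 1_800   # gross private domestic investment
G = 2_100   # government consumption and gross investment expenditures
X = 1_200   # gross exports of goods and services
M = 1_500   # gross imports of goods and services

net_exports = X - M          # often written XN
gdp = C + I + G + net_exports
print(net_exports)           # -300
print(gdp)                   # 10600
```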
The names of the measures consist of one of the words "Gross" or "Net", followed by one of the words "National" or "Domestic", followed by one of the words "Product", "Income", or "Expenditure". All of these terms can be explained separately.
Note that all three counting methods should in theory give the same final figure. However, in practice minor differences are obtained from the three methods for several reasons, including changes in inventory levels and errors in the statistics. One problem for instance is that goods in inventory have been produced (therefore included in Product), but not yet sold (therefore not yet included in Expenditure). Similar timing issues can also cause a slight discrepancy between the value of goods produced (Product) and the payments to the factors that produced the goods (Income), particularly if inputs are purchased on credit, and also because wages are collected often after a period of production.
Gross domestic product (GDP) is defined as "the value of all final goods and services produced in a country in 1 year".
Gross National Product (GNP) is defined as "the market value of all goods and services produced in one year by labour and property supplied by the residents of a country."
As an example, the table below shows some GDP, GNP, and NNI-related data for the United States:
| Measure | Amount |
| --- | --- |
| Gross national product | 11,063.3 |
| Net U.S. income receipts from rest of the world | 55.2 |
| U.S. income receipts | 329.1 |
| U.S. income payments | -273.9 |
| Gross domestic product | 11,008.1 |
| Private consumption of fixed capital | 1,135.9 |
| Government consumption of fixed capital | 218.1 |
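A quick check of the accounting identities implied by the table, using the figures as given: net income receipts equal receipts minus payments, and GNP equals GDP plus net income receipts from the rest of the world.

```python
# Check the identities implied by the table above (figures as given).

gdp = 11_008.1
income_receipts = 329.1
income_payments = 273.9   # shown as -273.9 in the table

net_receipts = income_receipts - income_payments
gnp = gdp + net_receipts

print(round(net_receipts, 1))  # 55.2, matching the table
print(round(gnp, 1))           # 11063.3, matching the table
```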
GDP per capita (per person) is often used as a measure of a person's welfare. Countries with higher GDP may be more likely to also score highly on other measures of welfare, such as life expectancy. However, there are serious limitations to the usefulness of GDP as a measure of welfare.
Because of this, other measures of welfare such as the Human Development Index (HDI), Index of Sustainable Economic Welfare (ISEW), Genuine Progress Indicator (GPI), gross national happiness (GNH), and sustainable national income (SNI) are used.
There are many difficulties when it comes to measuring national income; these can be grouped into conceptual difficulties and practical difficulties.
Australian Bureau of Statistics, Australian National Accounts: Concepts, Sources and Methods, 2000. This fairly large document has a wealth of information on the meaning of the national income and output measures and how they are obtained.
Identification. The name Guatemala, meaning "land of forests," was derived from one of the Mayan dialects spoken by the indigenous people at the time of the Spanish conquest in 1523. It is used today by outsiders, as well as by most citizens, although for many purposes the descendants of the original inhabitants still prefer to identify themselves by the names of their specific language dialects, which reflect political divisions from the sixteenth century. The pejorative terms indio and natural have been replaced in polite conversation and publication by Indígena. Persons of mixed or non-indigenous race and heritage may be called Ladino, a term that today indicates adherence to Western, as opposed to indigenous, culture patterns, and may be applied to acculturated Indians, as well as others. A small group of African–Americans, known as Garifuna, lives on the Atlantic coast, but their culture is more closely related to those found in other Caribbean nations than to the cultures of Guatemala itself.
The national culture also was influenced by the arrival of other Europeans, especially Germans, in the second half of the nineteenth century, as well as by the more recent movement of thousands of Guatemalans to and from the United States. There has been increased immigration from China, Japan, Korea, and the Middle East, although those groups, while increasingly visible, have not contributed to the national culture, nor have many of them adopted it as their own.
Within Central America the citizens of each country are affectionately known by a nickname of which they are proud, but which is sometimes used disparagingly by others, much like the term "Yankee." The term "Chapín" (plural, "Chapines"), the origin of which is unknown, denotes anyone from Guatemala. When traveling outside of Guatemala, all its citizens define themselves as Guatemalans and/or Chapines. While at home, however, there is little sense that they share a common culture. The most important split is between Ladinos and Indians. Garifuna are hardly known away from the Atlantic coast and, like most Indians, identify themselves in terms of their own language and culture.
Location and Geography. Guatemala covers an area of 42,042 square miles (108,889 square kilometers) and is bounded on the west and north by Mexico; on the east by Belize, the Caribbean Sea, Honduras and El Salvador; and on the south by the Pacific Ocean. The three principal regions are the northern lowland plains of the Petén and the adjacent Atlantic littoral; the volcanic highlands of the Sierra Madre, cutting across the country from northwest to southeast; and the Pacific lowlands, a coastal plain stretching along the entire southern boundary. The country has a total of 205 miles (330 kilometers) of coastline. Between the Motagua River and the Honduran border on the southeast there is a dry flat corridor that receives less than forty inches (one hundred centimeters) of rain per year. Although the country lies within the tropics, its climate varies considerably, depending on altitude and rainfall patterns. The northern lowlands and the Atlantic coastal area are very warm and experience rain throughout much of the year. The Pacific lowlands are drier, and because they are at or near sea level, remain warm. The highlands are temperate. The coolest weather there (locally called "winter") occurs during the rainy season from May or June to November, with daily temperatures ranging from 50 to 60 degrees Fahrenheit in the higher altitudes, and from 60 to 70 degrees in Guatemala City, which is about a mile above sea level.
The Spanish conquerors preferred the highlands, despite a difficult journey from the Atlantic coast, and that is where they placed their primary settlements. The present capital, Guatemala City, was founded in 1776 after a flood and an earthquake had destroyed two earlier sites. Although the Maya had earlier inhabited the lowlands of the Petén and the lower Motagua River, by the time the first Spaniards arrived, they lived primarily in the Pacific lowlands and western highlands. The highlands are still largely populated by their descendants. The eastern Motagua corridor was settled by Spaniards and is still inhabited primarily by Ladinos. Large plantations of coffee, sugarcane, bananas, and cardamom, all grown primarily for export, cover much of the Pacific lowlands. These are owned by large, usually nonresident, landholders and are worked by local Ladinos and Indians who journey to the coast from highland villages for the harvest.
Demography. The 1994 census showed a total of 9,462,000 people, but estimates for 1999 reached twelve million, with more than 50 percent living in urban areas. The forty-year period of social unrest, violence, and civil war (1956–1996) resulted in massive emigration to Mexico and the United States and has been estimated to have resulted in one million dead, disappeared, and emigrated. Some of the displaced have returned from United Nations refugee camps in Mexico, as have many undocumented emigrants to the United States.
The determination of ethnicity for demographic purposes depends primarily on language, yet some scholars and government officials use other criteria, such as dress patterns and life style. Thus, estimates of the size of the Indian population vary from 35 percent to more than 50 percent—the latter figure probably being more reliable. The numbers of the non-Mayan indigenous peoples such as the Garifuna and the Xinca have been dwindling. Those two groups now probably number less than five thousand as many of their young people become Ladinoized or leave for better opportunities in the United States.
Linguistic Affiliation. Spanish is the official language, but since the end of the civil war in December 1996, twenty-two indigenous languages, mostly dialects of the Mayan linguistic family, have been recognized. The most widely spoken are Ki'che', Kaqchikel, Kekchi, and Mam. A bilingual program for beginning primary students has been in place since the late 1980s, and there are plans to make it available in all Indian communities. Constitutional amendments are being considered to recognize some of those languages for official purposes.
Many Indians, especially women and those in the most remote areas of the western highlands, speak no Spanish, yet many Indian families are abandoning their own language to ensure that their children become fluent in Spanish, which is recognized as a necessity for living in the modern world, and even for travel outside one's village. Since the various indigenous languages are not all mutually intelligible, Spanish is increasingly important as a lingua franca . The Academy of Mayan Languages, completely staffed by Maya scholars, hopes its research will promote a return to Proto-Maya, the language from which all the various dialects descended, which is totally unknown today. Ladinos who grow up in an Indian area may learn the local language, but bilingualism among Ladinos is rare.
In the cities, especially the capital, there are private primary and secondary schools where foreign languages are taught and used along with Spanish, especially English, German, and French.
Symbolism. Independence Day (15 September) and 15 August, the day of the national patron saint, María, are the most important national holidays, and together reflect the European origin of the nation–state, as does the national anthem, "Guatemala Felíz" ("Happy Guatemala"). However, many of the motifs used on the flag (the quetzal bird and the ceiba tree); in public monuments and other artwork (the figure of the Indian hero Tecún Umán, the pyramids and stelae of the abandoned and ruined Mayan city of Tikal, the colorful motifs on indigenous textiles, scenes from villages surrounding Lake Atitlán); in literature (the novels of Nobel laureate Miguel Angel Asturias) and in music (the marimba, the dance called son) are associated with the Indian culture, even when some of their elements originated in Europe or in precolonial Mexico. Miss Guatemala, almost always a Ladina, wears Indian dress in her public appearances. Black beans, guacamole, tortillas, chili, and tamales, all of which were eaten before the coming of the Spaniards, are now part of the national culture, and have come to symbolize it for both residents and expatriates, regardless of ethnicity or class.
History and Ethnic Relations
Emergence of the Nation. Guatemala, along with other Central American Spanish colonies, declared its independence on 15 September 1821. Until 1839, it belonged first to Mexico and then to a federation known as the United Provinces of Central America. It was not until 1945 that a constitution guaranteeing civil and political rights for all people, including women and Indians, was adopted. However, Indians continued to be exploited and disparaged until recently, when international opinion forced Ladino elites to modify their attitudes and behavior. This shift was furthered by the selection of Rigoberta Menchú, a young Maya woman, for the Nobel Peace Prize in 1992.
Severe repression and violence during the late 1970s and 1980s was followed by a Mayan revitalization movement that has gained strength since the signing of the Peace Accords in 1996. While Mayan languages, dress, and religious practices have been reintroduced or strengthened, acculturation to the national culture has continued. Today more Indians are becoming educated at all levels, including postgraduate university training. A few have become professionals in medicine, engineering, journalism, law, and social work. Population pressure has forced many others out of agriculture and into cottage industries, factory work, merchandising, teaching, clerical work, and various white-collar positions in the towns and cities. Ironically, after the long period of violence and forced enlistment, many now volunteer for the armed forces.
Ethnic Relations. Some Ladinos see the Indian revitalization movement as a threat to their hegemony and fear that they will eventually suffer violence at Indian hands. There is little concrete evidence to support those fears. Because the national culture is composed of a blend of European and indigenous traits and is largely shared by Maya, Ladinos, and many newer immigrants, it is likely that the future will bring greater consolidation, and that social class, rather than ethnic background, will determine social interactions.
Urbanism, Architecture, and the Use of Space
The Spanish imposed a gridiron pattern on communities of all sizes, which included a central plaza, generally with a public water fountain known as a "pila," around which were situated a Catholic church, government offices, and the homes of high-ranking persons. Colonial homes included a central patio with living, dining, and sleeping rooms lined up off the surrounding corridors. A service patio with a pila and a kitchen with an open fireplace under a large chimney was located behind the general living area. Entrances were directly off the street, and gardens were limited to the interior patios.
Those town and house plans persist, except that homes of the elite now tend to be placed on the periphery of the town or city and have modified internal space arrangements, including second stories. An open internal patio is still popular, but gardens now surround the house, with the whole being enclosed behind high walls. The older, centrally located colonial houses are now occupied by offices or have been turned into rooming houses or hotels.
Indian towns retain these characteristics, but many of the smaller hamlets exhibit little patterning. The houses—mostly made of sun-dried bricks (adobe) and roofed with corrugated aluminum or ceramic tiles—may stretch out along a path or be located on small parcels of arable land. The poorest houses often have only one large room containing a hearth; perhaps a bed, table and chairs or stools; a large ceramic water jug and other ceramic storage jars; a wooden chest for clothes and valuables; and sometimes a cabinet for dishes and utensils. Other implements may be tied or perched on open rafters in baskets. The oldest resident couple occupies the bed, with children and younger adults sleeping on reed mats ( petates ) on the floor; the mats are rolled up when not in use. Running water in the home or yard is a luxury that only some villages enjoy. Electricity is widely available except in the most remote areas. Its primary use is for light, followed by refrigeration and television.
The central plazas of smaller towns and villages are used for a variety of purposes. On market days, they are filled with vendors and their wares; in the heat of the day people will rest on whatever benches may be provided; in early evening young people may congregate and parade, seeking partners of the opposite sex, flirting, and generally having a good time. In Guatemala City, the central plaza has become the preferred site for political demonstrations.
The national palace faces this central plaza; although it once was a residence for the president, today it is used only for official receptions and meetings with dignitaries. More than any other building, it is a symbol of governmental authority and power. The walls of its entryway have murals depicting scenes honoring the Spanish and Mayan heritages. Other government buildings are scattered throughout the central part of Guatemala City; some occupy former residences, others are in a newer complex characterized by modern, massive, high-rising buildings of seven or eight floors. Some of these structures are adorned on the outside with murals depicting both Mayan and European symbols.
Food and Economy
Food in Daily Life. Corn made into tortillas or tamales, black beans, rice, and wheat in the form of bread or pasta are staples eaten by nearly all Guatemalans. Depending on their degree of affluence, people also consume chicken, pork, and beef, and those living near bodies of water also eat fish and shellfish. With improvements in refrigeration and transport, seafood is becoming increasingly popular in Guatemala City. The country has long been known for vegetables and fruits, including avocados, radishes, potatoes, sweet potatoes, squash, carrots, beets, onions, and tomatoes. Lettuce, snow peas, green beans, broccoli, cauliflower, artichokes, and turnips are grown for export and are also available in local markets; they are eaten more by Ladinos than by Indians. Pineapples are among the many fruits grown.
Three meals per day are the general rule, with the largest eaten at noon. Until recently, most stores and businesses in the urban areas closed for two to three hours to allow employees time to eat at home and rest before returning to work. Transportation problems due to increased traffic, both on buses and in private vehicles, are bringing rapid change to this custom. In rural areas women take the noon meal to the men in the fields, often accompanied by their children so that the family can eat as a group. Tortillas are eaten by everyone but are especially important for the Indians, who may consume up to a dozen at a time, usually with chili, sometimes with beans and/or stews made with or flavored with meat or dried shrimp.
Breakfast for the well-to-do may be large, including fruit, cereal, eggs, bread, and coffee; the poor may drink only an atol, a thin gruel made with any one of several thickeners—oatmeal, cornstarch, cornmeal, or even ground fresh corn. Others may have only coffee with sweet bread. All drinks are heavily sweetened with refined or brown sugar. The evening meal is always lighter than that at noon.
Although there are no food taboos, many people believe that specific foods are classified as "hot" or "cold" by nature, and there may be temporary prohibitions on eating them, depending upon age, the condition of one's body, the time of day, or other factors.
Food Customs at Ceremonial Occasions. The ceremonial year is largely determined by the Roman Catholic Church, even for those who do not profess that faith. Thus, the Christmas period, including Advent and the Day of the Kings on 6 January, and Easter week are major holidays for everyone. The patron saints of each village, town, or city are honored on their respective days. The cofradia organization, imposed by the colonial Spanish Catholic Church, is less important now, but where it persists, special foods are prepared. Tamales are the most important ceremonial food. They are eaten on all special occasions, including private parties and celebrations, and on weekends, which are special because Sunday is recognized as a holy day as well as a holiday. A special vegetable and meat salad called fiambre is eaten on 1 November, the Day of the Dead, when families congregate in the cemeteries to honor, placate, and share food with deceased relatives. Codfish cooked in various forms is eaten at Easter, and Christmas is again a time for gourmet tamales and ponche, a rum-based drink containing spices and fruits. Beer and rum, including a fairly raw variety known as aguardiente, are the most popular alcoholic drinks, although urban elites prefer Scotch whisky.
Basic Economy. Guatemala's most important resource is its fertile land, although only 12 percent of the total landmass is arable. In 1990, 52 percent of the labor force was engaged in agriculture, which contributed 24 percent of the gross domestic product. Although both Ladinos and Indians farm, 68 percent of the agricultural labor force was Indian in 1989. Forty-seven percent of Indian men were self-employed as farmers, artisans, or merchants; the average income for this group was only about a third of that for Ladino men.
The country has traditionally produced many agricultural products for export, including coffee, sugar, cardamom, bananas, and cotton. In recent years flowers and vegetables have become important. However, Guatemala is not self-sufficient in basic grains such as wheat, rice, and even maize, which are imported from the United States. Many small farmers, both Indian and Ladino, have replaced traditional subsistence crops with those grown for export. Although their cash income may be enhanced, they are forced to buy more foods. These include not only the basic staples, but also locally produced "junk" foods such as potato chips and cupcakes as well as condiments such as mayonnaise.
Affluent city dwellers and returning expatriates increasingly buy imported fruits, vegetables, and specialty items, both raw and processed. Those items come from neighboring countries such as Mexico and El Salvador as well as from the United States and Europe, especially Spain, Italy, and France.
Land Tenure and Property. The concept of private property in land, houses, tools, and machinery is well established even though most Indian communities have long held some lands as communal property that is allotted as needed. Unfortunately, many rural people have not registered their property, and many swindles occur, leading to lengthy and expensive lawsuits. As long as owners occupied their land and passed it on to their children or other heirs, there were few problems, but as the population has become more mobile, the number of disputes has escalated. Disputes occur within villages and even within families as individuals move onto lands apparently abandoned while the owners are absent. Sometimes the same piece of land is sold two or more times.
Commercial Activities. Agricultural products are the goods most commonly produced for sale within the country and for export. Handicrafts have been produced and widely traded since precolonial times and are in great demand by tourists, museums, and collectors, and are increasingly exported through middlemen. The most sought after items include hand woven cotton and woolen textiles and clothing items made from them; baskets; ceramics; carved wooden furniture, containers, utensils and decorative items; beaded and silver jewelry; and hand-blown glassware. These items are made in urban and rural areas by both Ladinos and Indians in small workshops and by individuals in their own homes.
Assembly plants known as maquilas produce clothing and other items for export, using imported materials and semiskilled labor. Despite criticisms of this type of enterprise in the United States, many Guatemalans find it a welcome source of employment with relatively high wages.
Major Industries. Guatemala has many light industries, most of which involve the processing of locally grown products such as chicken, beef, pork, coffee, wheat, corn, sugar, cotton, cacao, vegetables and fruits, and spices such as cinnamon and cardamom. Beer and rum are major industries, as is the production of paper goods. A large plastics industry produces a wide variety of products for home and industrial use. Several factories produce cloth from domestic and imported cotton. Some of these products are important import substitutes, and others are exported to other Central American countries and the United States.
Division of Labor. In the Ladino sector, upper-class men and women work in business, academia, and the major professions. Older Ladino and Indian teenagers of both sexes are the primary workers in maquilas, a form of employment that increasingly is preferred to working as a domestic. Children as young as four or five years work at household tasks and in the fields in farming families. In the cities, they may sell candies or other small products on the streets or "watch" parked cars. Although by law all children must attend school between ages seven and thirteen, many do not, sometimes because there is no school nearby, because the child's services are needed at home, or because the family is too poor to provide transportation, clothing, and supplies. The situation is improving; in 1996, 88 percent of all children of primary age were enrolled in school, although only 26 percent of those of high school age were enrolled.
Classes and Castes. Social class based on wealth, education, and family prestige operates as a sorting mechanism among both Indians and Ladinos. Race is also clearly a component, but may be less important than culture and lifestyle, except in the case of the black Garifuna, who are shunned by all other groups. Individual people of Indian background may be accepted in Ladino society if they are well educated and have the resources to live in a Western style. However, Indians as a group are poorer and less educated than are non-Indians. In the 1980s, illiteracy among Indians was 79 percent, compared with 40 percent among Ladinos. In 1989, 60 percent of Indians had no formal education, compared with 26 percent of Ladinos. Indians with thirteen or more years of education earned about one-third less than did Ladinos with a comparable level of education.
Symbols of Social Stratification. Dress varies significantly by class and caste. Professional and white-collar male workers in the cities usually wear suits, dress shirts, and neckties, and women in comparable pursuits dress fashionably, including stockings and high-heeled shoes. Nonemployed upper-class women dress more casually, often in blue jeans and T-shirts or blouses. They frequent beauty salons since personal appearance is considered an important indicator of class.
Poorer Ladinos, whether urban or rural, buy secondhand clothing from the United States that is sold at low prices in the streets and marketplaces. T-shirts and sweatshirts with English slogans are ubiquitous.
Many Mayan women, regardless of wealth, education, or residence, continue to wear their distinctive clothing: a wraparound or gathered, nearly ankle-length skirt woven with tie-dyed threads that produce interesting designs, topped with a cotton or rayon blouse embroidered with flower motifs about the neck, or a more traditional huipil. The huipil is hand woven on a backstrap loom and consists of two panels sewn together on the sides, leaving openings for the arms and head. It usually is embroidered with traditional designs. Shoes or sandals are almost universal, especially in towns and cities. Earrings, necklaces, and rings are their only jewelry.
Indian men are more likely to dress in a Western style. Today's fashions dictate "cowboy" hats, boots, and shirts for them and for lower-class rural Ladinos. In the more remote highland areas, many men continue to wear the clothing of their ancestors. The revitalization movement has reinforced the use of traditional clothing as a means of asserting one's identity.
Government. As of 1993, the president and vice-president and sixteen members of the eighty-member congress are elected by the nation as a whole for non-renewable four-year terms, while the remaining sixty-four members of the unicameral legislature are popularly elected by the constituents of their locales. Despite universal suffrage, only a small percentage of citizens vote.
There are twenty-two departments under governors appointed by the president. Municipalities are autonomous, with locally elected officials, and are funded by the central government budget. In areas with a large Mayan population, there have been two sets of local government leaders, one Ladino and one Mayan, with the former taking precedence. In 1996, however, many official or "Ladino" offices were won by Maya.
Leadership and Political Officials. Political parties range from the extreme right to the left and represent varying interests. Thus, their numbers, size, and electoral success change over time. It generally is believed that most elected officials use their short periods in office to aggrandize their prestige and line their pockets. Most take office amid cheering and accolades but leave under a cloud, and many are forced to leave the country or choose to do so. While in office, they are able to bend the law and do favors for their constituents or for foreigners who wish to invest or do business in the country. Some national business gets accomplished, but only after lengthy delays, debate, and procrastination.
Social Problems and Control. Since the signing of the Peace Accords in December 1996, there has been continued social unrest and a general breakdown in the system of justice. Poverty, land pressure, unemployment, and a pervasive climate of enmity toward all "others" have left even rural communities in a state of disorganization. In many Maya communities, their traditional social organization having been disrupted or destroyed by the years of violence, the people now take the law into their own hands. Tired of petty crime, kidnappings, rapes, and murders and with no adequate governmental relief, they frequently lynch suspected criminals. In the cities, accused criminals frequently are set free for lack of evidence, since the police and judges are poorly trained, underpaid, and often corrupt. Many crimes are thought to have been committed by the army or by underground vigilante groups unhappy with the Peace Accords and efforts to end the impunity granted to those who committed atrocities against dissidents.
Military Activity. In 1997, the army numbered 38,500. In addition, there is a paramilitary national police force of 9,800, a territorial militia of about 300,000, and a small navy and air force.
Social Welfare and Change Programs
Guatemala has governmental and nongovernmental agencies that promote change in agriculture, taxes, banking, manufacturing, environmental protection, health, education, and human and civil rights.
Since 1945 the government has provided social security plans for workers, but only a small percentage of the populace has received these health and retirement benefits. There are free hospitals and clinics throughout the country, although many have inadequate equipment, medicines, and personnel. Free or inexpensive health services are offered as charities through various churches and by private individuals.
Gender Roles and Statuses
Division of Labor by Gender. Among both Maya and Ladinos, women are associated primarily with the domestic world and men work in agriculture, business, and manufacturing. However, well-educated professional women are accepted and often highly respected; many are owners and managers of businesses. More of these women are Ladinas than Mayas. Statistically, women are less educated and lower paid than their male counterparts. Their numbers exceed those of males in nursing, secretarial, and clerical jobs. The teaching force at all levels has attracted women as well as men, but men predominate.
In rural areas, Maya women and men may engage in agriculture, but the crops they grow are different. Men tend to grow basic grains such as corn and beans as well as export crops such as green beans and snow peas. Women grow vegetables and fruits for local consumption and sale, as well as herbs and spices.
Handicrafts also tend to be assigned according to gender. Pottery is most often made by Indian women and Ladino men. Similarly, Indian women are the only ones who weave on backstrap or stick looms, while both Indian and Ladino men weave on foot looms. Indian men knit woolen shoulder bags for their own use and for sale. Men of both ethnicities do woodwork and carpentry, bricklaying, and upholstering. Indian men carve images of saints, masks, slingshots, and decorative items for their own use or for sale. Men and boys fish, while women and girls as well as small boys gather wild foods and firewood. Women and children also tend sheep and goats.
Rural Ladinas do not often engage in agriculture. They concentrate on domestic work and cottage industries, especially those involving sewing, cooking, and the processing of foods such as cheese.
The Relative Status of Men and Women. Indian and poor Ladino women (as well as children) are often browbeaten and physically mistreated by men. Their only recourse is to return to their parents' home, but they frequently are rejected by their parents for various reasons. A woman from a higher-status family is less likely to suffer in this way, especially if her marriage has been arranged by her parents. While walking, a Maya woman traditionally trails her husband; if he falls drunk by the wayside, she dutifully waits to care for him until he wakes up.
Marriage, Family, and Kinship
Marriage. Marriages are sometimes arranged in Maya communities, although most couples choose each other and often elope. Membership in private clubs and attendance at private schools provide ways for middle-class and upper-class young people to meet prospective mates. Parents may disapprove of a selection, but their children are usually able to persuade them. Marriages are celebrated in a civil ceremony that may be followed by a religious rite. Monogamy is the rule, although many men have a mistress as well as a wife. Among the poorer classes, both Mayan and Ladino, unions are free and ties are brittle; many children do not know, nor are they recognized by, their fathers. Formal divorces are more common than many people believe, despite the disapproval of the Catholic Church. Until recently, a divorced woman did not have the right to retain her husband's surname, but she may sue for a share of his property to support herself and her minor children.
Domestic Unit. The nuclear family is the preferred and most common domestic unit. Among both Ladinos and Maya, a young couple may live at first in the home of the man's parents, or if that is inconvenient or overcrowded, with the parents of the woman. Wealthy Ladinos often provide elaborate houses close to their own homes as wedding presents for their sons and daughters.
Inheritance. Inheritance depends on a witnessed written or oral testament of the deceased, and since many people die without indicating their preferences, family disputes after death are very common among both Mayas and Ladinos. Land, houses, and personal belongings may be inherited by either sex, and claims may be contested in the courts and in intrafamily bickering.
Infant Care. The children of middle-class and upper-class Ladinos are cared for by their mothers, grandmothers, and young women, often from the rural areas, hired as nannies. They tend to be indulged by their caretakers. They may be breastfed for a few months but then are given bottles, which they may continue using until age four or five. To keep children from crying or complaining to their parents, nannies quickly give them whatever they demand.
Maya women in the rural areas depend upon their older children to help care for the younger ones. Babies are breastfed longer, but seldom after two years of age. They are always close to their mothers during this period, sleeping next to them and carried in shawls on their backs wherever they go. They are nursed frequently on demand wherever the mother may be. Little girls of five or six years may be seen carrying tiny babies in the same way in order to help out, but seldom are they out of sight of the mother. This practice may be seen as education for the child as well as caretaking for the infant. Indian children are socialized to take part in all the activities of the family as soon as they are physically and mentally capable.
Child Rearing and Education. Middle-class and upper-class Ladino children, especially in urban areas, are not expected to do any work until they are teenagers or beyond. They may attend a private preschool, sometimes as early as eighteen months, but formal education begins at age seven. Higher education is respected as a means of rising socially and economically. Children are educated to the highest level of which they are capable, depending on the finances of the family.
Higher Education. The national university, San Carlos, has until recently had free tuition, and is still the least expensive. As a result, it is overcrowded, but graduates many students who would not otherwise be able to attain an education. There are six other private universities, several with branches in secondary cities. They grant undergraduate and advanced degrees in the arts, humanities, and sciences, as well as medicine, dentistry, pharmacy, law, engineering, and architecture. Postgraduate work is often pursued abroad by the better and more affluent students, especially in the United States, Spain, Mexico, and some other Latin American countries.
Etiquette varies considerably according to ethnicity. In the past, Indians were expected to defer to Ladinos, and in general they showed them respect and subservience at all times. In turn, they were treated by Ladinos as children or as persons of little worth. Some of those modes of behavior carried over into their own society, especially within the cofradia organization, where deliberate rudeness is considered appropriate on the part of the highest-ranking officers. Today there is a more egalitarian attitude on both sides, and in some cases younger Maya may openly show contempt for non-indigenous people. Maya children greet adults by bowing their heads and sometimes folding their hands before them, as in prayer. Adults greet other adults verbally, asking about one's health and that of one's family. They are not physically demonstrative.
Among Ladino urban women, greetings and farewells call for handshakes, arm or shoulder patting, embraces, and even cheek kissing, almost from first acquaintance. Men embrace and cheek kiss women friends of the family, and embrace but do not kiss each other. Children are taught to kiss all adult relatives and close acquaintances of their parents hello and goodbye.
In the smaller towns and until recently in the cities, if eye contact is made with strangers on the street, a verbal "good morning" or "good afternoon" is customary.
Religious Beliefs. Roman Catholicism, which was introduced by the Spanish and modified by Maya interpretations and syncretism, was almost universal in Guatemala until the early part of the twentieth century, when Protestantism began to make significant headway among both Ladinos and Maya. Today it has been estimated that perhaps 40 percent or more adhere to a Protestant church or sect ranging from established churches with international membership to small local groups celebrating their own set of beliefs under the leadership of lay pastors.
Many Maya combine membership in a Christian fellowship with a continued set of beliefs and practices inherited from their ancient ancestors. Rituals may still be performed to ensure agricultural success, easy childbirth, recovery from illness, and protection from the elements (including eclipses) and to honor and remember the dead. The Garifuna still practice an Afro-Caribbean form of ancestor worship that helps to meld together families broken by migration, plural marriages, and a social environment hostile to people of their race and culture.
Many of the indigenous people believe in spirits of nature, especially of specific caves, mountains, and bodies of water, and their religious leaders regularly perform ceremonies connected with these sites. The Catholic Church has generally been more lenient in allowing or ignoring dual allegiances than have Protestants, who tend to insist on strict adherence to doctrine and an abandonment of all "non-Christian" beliefs and practices, including Catholicism.
Medicine and Health Care
Although excellent modern medical care is available in the capital city for those who can afford it and even for the indigent, millions of people in the rural areas lack adequate health care and health education. The medical training at San Carlos University includes a field stint for advanced students in rural areas, and often these are the only well-trained medical personnel on duty at village-level government-run health clinics.
The less well educated have a variety of folk explanations and cures for disease and mental illnesses, including herbal remedies, dietary adjustments, magical formulas, and prayers to Christian saints, local gods, and deceased relatives.
Most births in the city occur in hospitals, but some are attended at home by midwives, as is more usual in rural areas. These practitioners learn their skills from other midwives and through government-run courses.
For many minor problems, local pharmacists may diagnose, prescribe, and administer remedies, including antibiotics.
The Arts and Humanities
Support for the Arts. The Ministry of Culture provides moral and some economic support for the arts, but most artists are self-supporting. Arts and handicrafts are important to all sectors of the population; artists are respected and patronized, especially in the cities where there are numerous art galleries. Even some of the smaller towns, such as Tecpán, Comalapa and Santiago de Atitlán offer paintings by local artists for sale to both foreign and Guatemalan visitors. There are dozens, perhaps hundreds, of indigenous "primitive" painters, some of whom are known internationally. Their products form an important part of the wares offered to tourists and local collectors. Non-indigenous painters are exhibited primarily in the capital city; these include many foreign artists as well as Guatemalans.
Graphic Arts. Textiles, especially those woven by women on the indigenous backstrap loom, are of such fine quality as to have been the object of scholarly study. The Ixchel Museum of Indian Textiles, located in Guatemala City on the campus of Francisco Marroquín University, is devoted to the collection, study, and display of these textiles.
Pottery ranges from utilitarian to ritual wares and often is associated with specific communities, such as Chinautla and Rabinal, where it has been a local craft for centuries. There are several museums, both government and private, where the most exquisite ancient and modern pieces are displayed.
Performance Arts. Music has been important in Guatemala since colonial times, when the Catholic Church used it to teach Christian doctrine. Both the doctrine and the musical styles were adopted at an early date. The work of Maya who composed European-style classical music in the sixteenth and seventeenth centuries has been revived and is performed by several local performance groups, some using replicas of early instruments. William Orbaugh, a Guatemalan of Swiss ancestry, is known internationally for performances of classical and popular guitar music. Garifuna music, especially that of Caribbean origin, is popular in both Guatemala and the United States, which has a large expatriate Garifuna population. Other popular music derives from Mexico, Argentina, and especially the United States. The marimba is the favorite popular instrument, in both the city and the countryside.
There is a national symphony as well as a ballet, national chorus, and an opera company, all of which perform at the National Theater, a large imposing structure built on the site of an ancient fort near the city center.
Theater is less developed, although several private semiprofessional and amateur groups perform in both Spanish and English. The city of Antigua Guatemala is a major center for the arts, along with the cities of Guatemala and Quetzaltenango.
The State of the Physical and Social Sciences
Although the country boasts six universities, none is really comprehensive. All of the sciences are taught in one or another of them, and some research is done by professors and advanced students, especially in fields serving health and agricultural interests, such as biology, botany, and agronomy. Various government agencies also conduct research in these fields. However, most of those doing advanced research have higher degrees from foreign universities. The professional schools, such as dentistry, nutrition, and medicine, keep abreast of modern developments in their fields and offer continuing short courses to their graduates.
Anthropology and archaeology are considered very important for understanding and preserving the national cultural patrimony, and a good deal of research in these fields is done by both national and visiting scholars. One of the universities has a linguistics institute where research is done on indigenous languages. Political science, sociology, and international relations are taught at another, and a master's degree program in development, drawing on all of the social sciences, has recently been inaugurated at a third. Most of the funding available for such research comes from Europe and the United States, although some local industries provide small grants to assist specific projects.
—Nancie L. González | http://www.everyculture.com/Ge-It/Guatemala.html | 13
17 | The Chimú were the residents of Chimor, with its capital at Chan Chan, a large adobe city in the Moche Valley near present-day Trujillo. The culture arose about 900 AD. The Inca ruler Tupac Inca Yupanqui led a campaign that conquered the Chimú around 1470 AD.
This was just fifty years before the arrival of the Spanish in the region. Consequently, Spanish chroniclers were able to record accounts of Chimú culture from individuals who had lived before the Inca conquest. Archaeological evidence suggests that Chimor grew out of the remnants of Moche culture; early Chimú pottery had some resemblance to that of the Moche. Their ceramics are characteristically black, and their work in precious metals is very detailed and intricate.
The Chimú resided on the north coast of Peru: "It consists of a narrow strip of desert, 20 to 100 miles wide, between the Pacific and the western slopes of the Andes, crossed here and there by short rivers which start in the rainier mountains and provide a series of green and fertile oases." The valley plains are very flat and well-suited to irrigation, which is probably as old as agriculture here. Fishing was also very important, regarded as almost as important as agriculture.
The Chimú were known to have worshipped the moon, unlike the Inca, who worshiped the sun. The Chimu viewed the sun as a destroyer. This is likely due to the harshness of the sun in their desert environment. Offerings played an important role in religious rites. A common object for offerings, as well as one used by artisans, was the shell of the Spondylus shellfish, which live only in the warm coastal waters off present-day Ecuador. It was associated with the sea, rainfall, and fertility. Spondylus shells were also highly valued and traded by the Chimú.
The Chimú are best known for their distinctive monochromatic pottery and fine metalworking of copper, gold, silver, bronze, and tumbaga (a copper-gold alloy). The pottery is often in the shape of a creature, or has a human figure sitting or standing on a cuboid bottle. The shiny black finish of most Chimú pottery was achieved by firing the pottery at high temperatures in a closed kiln, which starved the fire of oxygen and allowed the smoke to blacken the clay.
Early Chimú (Moche Civilization)
The oldest civilization present on the north coast of Peru is Early Chimú, also known as the Moche or Mochica civilization. The beginning of this Early Chimú period is not known precisely (although it was B.C.), but it ended around 500 A.D. It was centered in the Chicama, Moche, and Viru valleys. "Many large pyramids are attributed to the Early Chimú period." (37) These pyramids were built of adobe bricks formed in rectangular molds.
"Early Chimú cemeteries are also found without pyramid associations. Burials are usually in extended positions, in prepared tombs. The rectangular, adobe-lined and covered tombs have niches in their walls in which bowls were placed." (39)
Early Chimú pottery is also characterized by realistic modeling and painted scenes.
Expansion and rule
The mature Chimú culture developed in roughly the same territory where the Mochica had existed centuries before. The Chimú was likewise a coastal culture, centered in the Moche Valley near present-day Trujillo, from which it later expanded along the desert coast.
The Chimú appeared in the year 900 A.D.: "The City of Chimor was at the great site now called Chanchan, between Trujillo and the sea, and we may assume that Taycanamo founded his kingdom there. His son, Guacri-caur, conquered the lower part of the valley and was succeeded by a son named Nancen-pinco who really laid the foundations of the Kingdom by conquering the head of the valley of Chimor and the neighboring valleys of Sana, Pacasmayo, Chicama, Viru, Chao and Santa." (39)
The estimated founding date of the Chimú Kingdom is in the first half of the 14th century. Nancen-pinco was believed to have ruled around 1370 CE and was followed by seven rulers whose names are not yet known. Minchancaman followed these rulers and was ruling around the time of the Inca conquest (between 1462 and 1470). This great expansion is believed to have occurred during the late period of Chimú civilization, called Late Chimú, but the development of the Chimú territory spanned a number of phases and more than a single generation. Nancen-pinco "may have pushed the imperial frontiers to Jequetepeque and to Santa, but conquest of the entire region was an agglutinative process initiated by earlier rulers." (17)
The Chimú expanded to include a vast area and many different ethnic groups. At its peak, the Chimú advanced to the limits of the desert coast, to the Jequetepeque Valley in the north, and to Carabayllo in the south. Their expansion southward was stopped by the military power of the great valley of Lima. Historians and archeologists contest how far south they managed to expand.
The Chimú society was a four-level hierarchical system, with a powerful elite ruling over administrative centers. The hierarchy was centered on the walled cities, called ciudadelas, at Chan Chan. The political power at Chan Chan is demonstrated by the organization of labor to construct the Chimú's canals and irrigated fields.
Chan Chan was the top of the Chimu hierarchy, with Farfán in the Jequetepeque Valley as a subordinate. This organization, which was quickly established during the conquest of the Jequetepeque Valley, suggests the Chimú established the hierarchy during the early stages of their expansion. The existing elite at peripheral locations, such as the Jequetepeque Valley and other centers of power, were incorporated into the Chimú government on lower levels of the hierarchy. These lower-order centers managed land, water, and labor, while the higher-order centers either moved the resources to Chan Chan or carried out other administrative decisions. Rural sites were used as engineering headquarters, while the canals were being built; later they operated as maintenance sites. The numerous broken bowls found at Quebrada del Oso support this theory, as the bowls were probably used to feed the large workforce that built and maintained that section of canal. The workers were probably fed and housed at state expense.
The state continued to govern these social classes after the Chimú conquered the Sicán kingdom of Lambayeque. Legends tell of the dynastic founders Naylamp of the Sicán and Taycanamo of the Chimú. The people paid tribute to the rulers with products or labor. By 1470, the Incas from Cuzco defeated the Chimú. They moved Minchancaman to Cuzco and redirected gold and silver there to adorn the Temple of the Sun.
Chan Chan could be said to have developed a bureaucracy due to the elite's controlled access to information. The economic and social system operated through the import of raw materials, which were processed into prestige goods by artisans at Chan Chan. The elite at Chan Chan made the decisions on most other matters concerning organization, monopolizing production, storage of food and products, and distribution or consumption of goods.
The majority of the citizens in each ciudadela were artisans. In the Late Chimú period, about 12,000 artisans lived and worked in Chan Chan alone. Artisans were forbidden to change their profession and were grouped in the ciudadela according to their area of specialization. Archeologists have noted a dramatic increase in Chimú craft production, and they believe that artisans may have been brought to Chan Chan from areas taken as a result of Chimú conquest. As there is evidence of both metalwork and weaving in the same domestic unit, it is likely that both men and women were artisans. They engaged in fishing, agriculture, craft work, trade, and metallurgy, and made ceramics and textiles (from cotton and from llama, alpaca, and vicuña wool). People used reed fishing canoes, hunted, and traded using bronze coins.
Split Inheritance
The Chimú capital, Chan Chan, had a series of elite residential compounds, or ciudadelas, that were not occupied simultaneously but sequentially. The reason for this is that Chimú rulers practiced split inheritance, which dictated that the heir to the throne had to build his own palace; after the death of a ruler, all the ruler's wealth was distributed to more distant relatives.
Spinning is the practice of drawing out and twisting a small bundle of fibers to produce a long, continuous thread with the use of an instrument called a spindle. The spindle is a small rod, usually tapering at both ends, used together with a whorl (tortera or piruro) fitted near its lower end as a counterweight. The spindle is set spinning while fiber is drawn from the distaff (rueca), on which the unspun fiber is held. The fibers are turned quickly between the thumb and index finger and twisted so that they interlock, creating a long thread. After threads of the desired length are obtained, they are interlaced and woven in various combinations to make fabrics.
The Chimú embellished their fabrics with brocades, embroidery, double weaves, and painted designs. Sometimes textiles were adorned with feathers and gold or silver plates. Dyes were obtained from plants containing tannin, such as mole and walnut; from minerals, such as ferruginous clay and aluminum mordants; and from animals, such as cochineal. The garments were made of the wool of four animals: the guanaco, llama, alpaca, and vicuña. The people also used varieties of native cotton, which grows naturally in seven different colors. The clothing consisted of the Chimú loincloth, sleeveless shirts with or without fringes, small ponchos, and tunics.
The majority of Chimú textiles were made from alpaca wool. Judging from the uniform spin direction, degree of twist, and colors of the threads, all of the fibers were likely prespun and imported from a single location.
Chimú ceramics were crafted for two functions: containers for daily domestic use and those made for ceremonial use for offerings at burials. Domestic pottery was developed without higher finishing, while funeral ceramics show more aesthetic refinement.
The main features of Chimú ceramics were small sculptural pieces and molded, shaped pottery manufactured for ceremonial or daily use. Ceramics were usually stained black, although there are some variations. Lighter ceramics were also produced in smaller quantities. The characteristic sheen was obtained by burnishing the surface with a stone that previously had been polished. Many animals, fruits, characters, and mystical entities have been represented pictorially on Chimú ceramics.
Metalworking picked up quickly in the Late Chimú period. Some Chimú artisans worked in metal workshops divided into sections for each specialized treatment of metals: plating, gilding, stamping, lost-wax casting, pearling, filigree, and embossing over wooden molds. These techniques produced a large variety of objects, such as cups, knives, containers, figurines, bracelets, pins, and crowns. The artisans used arsenic to harden the metals after they were cast. Large-scale smelting took place in a cluster of workshops at Cerro de los Cementerios. The process started with ore extracted from mines or a river, which was heated to very high temperatures and then cooled. The result was a group of prills (small round beads of copper, for example) embedded in a mass of slag (material not useful for metallurgy). The prills were then extracted by crushing the slag and melted together to form ingots, which were fashioned into various items.
Although copper is found naturally on the coast, it was mostly obtained from the highlands in an area about three days away. Since most of the copper was imported, most of the metal objects made were probably very small. The pieces, such as wires, needles, digging stick points, tweezers, and personal ornaments, are consistently small, utilitarian objects of copper or copper bronze. The tumi, a ceremonial knife, is one well-known Chimú work. The Chimú also made beautiful ritual costumes of gold compounds with plumed headdresses (also gold), earrings, necklaces, bracelets, and breastplates.
Subsistence and Agriculture
The Chimú developed mainly through intensive farming techniques and hydraulic works, which joined valleys to form complexes, such as the Chicama-Moche complex, a combination of two valleys in La Libertad. The Lambayeque complex linked the valleys of La Leche, Lambayeque, Reque, Saña, and Jequetepeque. The Chimú developed excellent agricultural techniques that expanded the extent of their cultivated areas. Huachaques were sunken farms where the topsoil was removed in order to work the moist, sandy soil underneath, an example of which is found at Tschudi. The Chimú used walk-in wells, similar to those of the Nazca, to draw water, and reservoirs to contain water from the rivers. This system increased the productivity of the land, which increased Chimú wealth and likely contributed to the formation of a bureaucratic system. The Chimú cultivated beans, sweet potato, papaya, and cotton with their reservoir and irrigation system. This focus on large-scale irrigation persisted until the Late Intermediate period. At this point, there was a shift to a more specialized system that focused on importing and redistributing resources from satellite communities. There appears to have been a complex network of sites that provided goods and services for Chimú subsistence. Many of these sites produced commodities that the Chimú could not. Many sites relied on marine resources, but after the advent of agriculture, there were more sites farther inland, where marine resources were harder to obtain. Keeping llamas arose as a supplemental way of obtaining meat, but by the Late Intermediate period and Late Horizon, inland sites used llamas as a main resource, although they maintained contact with coastal sites to use supplemental marine resources.
In Pacasmayo, the Moon (Si) was the greatest divinity. It was believed to be more powerful than the Sun, as it appeared by night and day, and it also controlled the weather and growth of crops. Sacrifices were made to the moon, and devotees sacrificed their own children on piles of colored cottons with offerings of fruit and chicha. They believed the sacrificed children would become deified and they were usually sacrificed around age five. "Animals and birds were also sacrificed to the Moon".
Several constellations were also viewed as important. Two of the stars of Orion's Belt were considered to be the emissaries of the Moon. The constellation Fur (the Pleiades) was used to calculate the year and was believed to watch over the crops.
There were also local shrines in each district, which varied in importance. These shrines are also found in other parts of Peru. These shrines (called huacas) had a sacred object of worship (macyaec) with an associated legend and cult.
Mars (Nor), the Sun (Jiang), and the Earth (Ghisa) were also worshiped.
In 1997, members of an archaeological team discovered approximately 200 skeletal remains on the beach at Punta Lobos, Peru. The bodies had their hands bound behind their backs, their feet were bound together, they were blindfolded, and their throats had been slashed. Archeologists suggest these fishermen may have been killed as a sign of gratitude to the sea god Ni after the Chimú conquered the fishermen's fertile seaside valley in 1350 A.D.
Tombs in the Huaca of the Moon belonged to six or seven teenagers 13 to 14 years of age. Nine tombs were reported to belong to children. If this is indicative of human sacrifice, the Chimú offered children to their gods.
Differential architecture of palaces and monumental sites distinguished the rulers from the common people. At Chan Chan, there are 10 large, walled enclosures called ciudadelas, or royal compounds, thought to be associated with the kings of Chimor (Day 1973, 1982). They were surrounded by adobe walls 9 m high, giving the ciudadela the appearance of a fortress.
The bulk of the Chimú population (around 26,000 people) lived in barrios on the outer edge of the city. They consisted of many single-family domestic spaces with a kitchen, work space, domestic animals, and storage area.
Ciudadelas frequently have U-shaped rooms that consist of three walls, a raised floor, and often a courtyard, and there were often as many as 15 in one palace. In the early Chimú period, the U-shaped areas were found in strategic places for controlling the flow of supplies from storerooms, but it is unlikely that they served as storage areas themselves. They are described as mnemonic devices for keeping track of the distribution of supplies. Over time, the frequency of the U-shaped structures increases, and their distribution changes. They become more grouped, rather than dispersed, and occur farther away from access routes to resources. The architecture of the rural sites also supports the idea of a hierarchical social order. They have similar structural components, making them mini-ciudadelas with administrative functions adapted to rural settings. Most of these sites have smaller walls, with many audiencias as the focal point of the structures. These would be used to restrict access to certain areas and are often found at strategic points.
Chan Chan itself comprises several classes of structures:
- non-elite commoner dwellings and workshops spread throughout the city
- intermediate architecture associated with Chan Chan's non-royal elites
- ten ciudadelas, thought to be palaces of the Chimú kings
- four huacas
- U-shaped structures called audiencias
- SIAR or small irregular agglutinated rooms, which probably served as the residences for the majority of the population
References
- "Chan Chan : Capital of [[Chimu culture|Kingdom Chimú]] - UNESCO". Retrieved 29 de marzo de 2012. Wikilink embedded in URL title (help)
- Kubler, George. (1962). The Art and Architecture of Ancient America, Ringwood: Penguin Books Australia Ltd., pp. 247-274
- Rowe, John H. (1948). "The Kingdom of Chimor", Acta Americana 6 (1-2): 27.
- Ember, Melvin; Peregrine, Peter Neal, eds. (2001). "Chimú". Encyclopedia of Prehistory. 7 : South America (1 ed.). Springer. ISBN 978-0306462610.
- Holstein, Otto. 1927. "Chan-chan: Capital of the great Chimu", Geographical Review 17, (1) (Jan.): 36-61.
- Bennett, Wendell C. (1937). "Chimu archeology", The Scientific Monthly 45, (1) (Jul.): 35-48.
- Moseley, Michael E. (1990). "Structure and history in the dynastic lore of Chimor", in The Northern Dynasties: Kingship and Statecraft in Chimor, eds. Maria Rostworowski and Michael E. Moseley. Washington, D.C.: Dumbarton Oaks, 1st ed., p. 548.
- Christie, J. J. & Sarro, P. J (Eds). (2006). Palaces and Power in the Americas. Austin, Texas: University of Texas Press
- Keatinge, Richard W., and Geoffrey W. Conrad. 1983. "Imperialist expansion in Peruvian prehistory: Chimu administration of a conquered territory". Journal of Field Archaeology 10 (3) (Autumn): 255-83.
- Keatinge, Richard W. 1974. "Chimu rural administrative centers in the Moche Valley, Peru". World Archaeology 6 (1, Political Systems) (Jun.): 66-82.
- Topic, J. R. (2003). "From stewards to bureaucrats: architecture and information flow at Chan Chan, Peru", Latin American Antiquity, 14, 243-274.
- Moseley, M. E. & Cordy-Collins, A. (Ed.) (1990). The Northern Dynasties: Kingships and Statecraft in Chimor. Washington, D.C.: Dumbarton Oaks.
- "Chimú - Jar". The Walters Art Museum.
- Moseley, Michael E., and Kent C. Day. 1982. Chan Chan: Andean Desert City. 1st ed. United States of America: School of American Research.
- "Mass human sacrifice unearthed in Peru". Retrieved 2009-10-09.
- Moore, Jerry D. 1992. Pattern and meaning in prehistoric Peruvian architecture: The architecture of social control in the Chimú state. Latin American Antiquity 3, (2) (Jun.): 95-113.
- Moore, Jerry D. 1996. Architecture and Power in the Ancient Andes: The Archaeology of Public Buildings. Great Britain: Cambridge University Press.
- Central and Southern Andes, 1000–1400 AD at Metropolitan Museum of Art
- Chimú gallery
- Video of possible Quingnam letter discussed above | http://en.wikipedia.org/wiki/Chimu | 13
14 | In 1400 A.D. Europeans probably knew less of the globe than they had during the Pax Romana. Outside of Europe and the Mediterranean, little was known, with rumor and imagination filling the gaps. Pictures of bizarre-looking people with umbrella feet, faces in their stomachs, and dogs' heads illustrated books about lands to the East. There was also the legendary Christian king Prester John, said to command an army of a million men and to possess a mirror that would show him any place in his realm, with whom Christians hoped to ally against the Muslims.
Europeans also had many misconceptions about the planet outside their home waters. They had no real concept of the size or shape of Africa or Asia. Because of a passage in the Bible, they thought the world was seven-eighths land and that there was a great southern continent connected to Africa, making any voyage around Africa to India impossible since the Indian Ocean would be an inland sea. They had no idea at all of the existence of the Americas, Australia, or Antarctica. They also vastly underestimated the circumference of the earth, by some 5,000 to 10,000 miles. However, such a miscalculation gave explorers like Columbus and Magellan the confidence to undertake voyages to the Far East, since the routes seemed as though they would be much shorter and easier than they turned out to be.
However, about this time, European explorers started to lead the way in global exploration, timidly hugging the coasts at first, but gradually getting bolder and striking out across the open seas. There were three main factors that led to Europeans opening up a whole new world at this time.
The rise of towns and trade along with the Crusades in the centuries preceding the age of exploration caused important changes in Europeans' mental outlook that would give them the incentive and confidence to launch voyages of exploration in three ways. First, they stimulated a desire for Far Eastern luxuries. Second, they exposed Europeans to new cultures, peoples and lands. Their interest in the outside world was further stimulated by the travels of Marco Polo in the late 1200's.
Finally, towns and the money they generated helped lead to the Renaissance, which changed Europeans' view of themselves and the world. There was an increasing emphasis on secular topics, including geography. Skepticism encouraged people to challenge older geographic notions. Humanism and individualism gave captains confidence in their own individual abilities to dare to cross the oceans with the tiny ships and primitive navigational instruments at their disposal.
Medieval religious fervor also played its part. While captains such as Columbus, da Gama, and Magellan had to rely on their own skills as leaders and navigators, they also had an implicit faith in God's will and guidance in their missions. In addition, they felt it was their duty to convert to Christianity any new peoples they met. Once again we see Renaissance Europe caught in the transition between the older medieval values and the new secular ones. Together they created a dynamic attitude that sent Europeans out on a quest to claim the planet as their own.
Europe’s geographic position also drove it to find new routes to Asia in three ways. First of all, Europe's geographic position at the extreme western end of the trade routes with the East allowed numerous middlemen each to take his cut and raise the cost of the precious silks and spices before passing them on to still another middleman. Those trade routes were long, dangerous, and quite fragile. It would take just one strong hostile power to establish itself along these routes in order to disrupt the flow of trade or raise the prices exorbitantly. For Europeans, that power was the Ottoman Empire. The fall of the Byzantine Empire and the earlier fall of the crusader states had given the Muslims a larger share of the trade headed for Europe. Thus Europe's disadvantageous geographic position provided an incentive to find another way to the Far East.
However, Europe was also in a good position for discovering new routes to Asia. It was certainly in as good a position as the Muslim emirates on the coast of North Africa for exploring the Atlantic coast of Africa. And when Spain gained control of both sides of the straits of Gibraltar, it was in a commanding position to restrict any traffic passing in and out of the Western Mediterranean. Europe was also well placed for exploration across the Atlantic Ocean.
Finally, ships and navigation technology had seen some dramatic leaps forward. The most striking of these was the compass, which had originated in China around 200 B.C. This allowed sailors to sail with much greater certainty that they were sailing in the right direction. Instruments such as the quadrant, crosstaff, and astrolabe allowed them to calculate latitude by measuring the elevation of the sun and North Star, although the rocking of ships at sea often made measurements taken with these instruments highly inaccurate. Columbus, one of the best navigators of his day, took readings in the Caribbean that corresponded to those of Wilmington, North Carolina, 1100 miles to the north! As a result of such imperfect measurements, sailing directions might be so vague as to read: "Sail south until your butter melts. Then turn west." Compounding this was the lack of a way to measure longitude (distance from east to west) until the 1700's with the invention of the chronometer.
Maps also left a lot to be desired. A medieval map of the world, showing Jerusalem in the center and Paradise to the Far East, gives an insight into the medieval worldview, but little useful geographic information. By 1400, there were fairly decent coastal maps of Europe and the Mediterranean, known as portolan charts. However, these were of no use beyond Europe, and larger scale global projections would not come along until the 1500's. As a result, explorers relied heavily on sailors' lore: reading the color of the water and skies or the type of vegetation and sea birds typical of an area. However, since each state jealously guarded geographic information so it could keep a monopoly on the luxury trade, even this information had limited circulation.
Advances in ship design involved a choice between northern Atlantic and southern Mediterranean styles. For hulls, shipwrights had a choice between the Mediterranean carvel-built design, where the planks were cut with saws and fit end to end, and the northern clinker-built design, where the planks were cut with an axe or adze and overlapped. Clinker-built hulls were sturdy and watertight, but limited in size to the length of one plank, about 100 feet. As a result, the southern carvel-built hulls were favored, although they were built in the bulkier and sturdier style of the northern ships to withstand the rough Atlantic seas. One other advance was the stern rudder, which sat behind the ship rather than to the side. Unlike the older side steering oars, which had a tendency to come out of the water as the ship rocked, making it hard to steer, the stern rudder stayed in the water.
There were two basic sail designs: the southern triangular, or lateen, (i.e., Latin) sail and the northern square sail. The lateen sail allowed closer tacking into an adverse wind, but needed a larger crew to handle it. By contrast, the northern square sail was better for tailwinds and used a smaller crew. The limited cargo space and the long voyages involved required as few mouths as possible to feed, and this favored the square design for the main sail, but usually with a smaller lateen sail astern (in the rear) to fine tune a ship's direction.
The resulting ship, the carrack, was a fusion of northern and southern styles. It was carvel built for greater size but with a bulkier northern hull design to withstand rough seas. Its main sail was a northern square sail, but it also used smaller lateen sails for tacking into the wind.
Living conditions aboard such ships, especially on long voyages, were appalling. Ships constantly leaked and were crawling with rats, lice, and other vermin. They were also filthy, with little or no sanitation. Without refrigeration, food and water spoiled quickly and horribly. Disease was rampant, especially scurvy, caused by a vitamin C deficiency. A good voyage between Portugal and India would claim the lives of twenty percent of the crewmen from scurvy alone. It should come as no surprise, then, that ships' crews were often drawn from the dregs of society and required a strong, and often brutal, hand to keep them in line.
Portugal and Spain led the way in early exploration for two main reasons. First, they were the earliest European recipients of Arab math, astronomy, and geographic knowledge based on the works of the second century A.D. geographer, Ptolemy. Second, their position on the southwest corner of Europe was excellent for exploring southward around Africa and westward toward South America.
Portugal started serious exploration in the early 1400's, hoping to find both the legendary Prester John as an ally against the Muslims and the source of the gold that the Arabs were getting by overland routes through the Sahara. At first, the Portuguese did not plan to sail around Africa, believing it connected with a great southern continent. The guiding spirit of these voyages was Prince Henry the Navigator, whose headquarters at Sagres on the southwestern tip of Portugal attracted some of the best geographers, cartographers, and pilots of the day. Henry never went on any of the expeditions, but he was their heart and soul.
The exploration of Africa offered several physical and psychological obstacles. For one thing, there were various superstitions, such as boiling seas as one approached the equator, monsters, and Cape Bojador, which many thought was the Gates of Hell. Also, since the North Star, the sailors' main navigational guide, would disappear south of the Equator, sailors were reluctant to cross that line.
Therefore, early expeditions would explore a few miles of coast and then scurry back to Sagres. This slowed progress, especially around Cape Bojador, where some fifteen voyages turned back before one expedition in 1434 finally braved its passage without being swallowed up. In the 1440's, the Portuguese found some, but not enough, gold and started engaging in the slave trade, which would disrupt African cultures for centuries. In 1445, they reached the part of the African coast that turns eastward for a while. This raised hopes they could circumnavigate Africa to reach India, a hope that remained even when they found the coast turning south again.
In 1460, Prince Henry died, and the expeditions slowed down for the next 20 years. However, French and English interest in a route around Africa spurred renewed activity on Portugal's part. By now, Portuguese captains were taking larger and bolder strides down the coast. One captain, Diego Cao, explored some 1500 miles of coastline. With each such stride, Portuguese confidence grew that Africa could be circumnavigated. Portugal even sent a spy, Pero de Covilha, on the overland route through Arab lands to the Indies in order to scout the best places for trade when Portuguese ships finally arrived.
The big breakthrough came in 1487, when Bartholomew Dias was blown by a storm around the southern tip of Africa (which he called the Cape of Storms, but the Portuguese king renamed the Cape of Good Hope). When Dias relocated the coast, it was to his west, meaning he had rounded the tip of Africa. However, his men, frightened by rumors of monsters in the waters ahead, forced him to turn back. Soon after this, the Spanish, afraid the Portuguese would claim the riches of the East for themselves, backed Columbus' voyage that discovered and claimed the Americas for Spain. This in turn spurred Portuguese efforts to find a route to Asia before Spain did. However, Portugal's king died, and the transition to a new king meant it was ten years before the Portuguese could send Vasco da Gama with four ships to sail to India. Swinging west to pick up westerly winds, da Gama rounded the Cape of Good Hope in three months, losing one ship in the process. Heading up the coast, the Portuguese encountered Arab surprise and hostility against European ships in their waters. Da Gama found an Indian pilot who led the Portuguese flotilla across the Indian Ocean to India in 1498.
The hostility of the Arab traders who dominated trade with India and the unwillingness of the Indians to trade for European goods which they saw as inferior made getting spices quite difficult. However, through some shrewd trading, da Gama managed to get one shipload of spices and then headed home in August 1498. It took over a year, until September 1499, to get back to Portugal, but he had proven that Africa could be circumnavigated and India could be reached by sea. Despite its heavy cost (two of four ships and 126 out of 170 men) Da Gama's voyage opened up new vistas of trade and knowledge to Europeans.
Subsequent Portuguese voyages to the East reached the fabled Spice Islands (Moluccas) in 1513. In that same year, the Portuguese explorer, Serrao, reached the Pacific at its western end while the Spanish explorer, Balboa, was discovering it from its eastern end. Also in 1513, the Portuguese reached China, the first Europeans to do so in 150 years. They won exclusive trade with China, which had little interest in European goods. However, China was interested in Spanish American silver, which made the long treacherous voyage across the Pacific to the Spanish Philippines. There, the Portuguese would trade Chinese silks for the silver, and then use it for more trade with China, while the Spanish would take their silks on the even longer voyage back to Europe by way of America. In 1542, the Portuguese even reached Japan and established relations there. As a result of these voyages and new opportunities, Portugal would build an empire in Asia to control the spice trade.
Spain led the other great outward thrust of exploration westward across the Atlantic Ocean. Like Portugal, the Spanish were also partially driven in their explorations by certain misconceptions. While they did realize the earth is round, they also vastly underestimated its size and thought it was seven-eighths land, making Asia seem much bigger and extend much further west. Therefore, they vastly underestimated the distance of a westward voyage to Asia.
This was especially true of a Genoese captain, Christopher Columbus, an experienced sailor who had seen most of the limits of European exploration up to that point, having sailed the waters from Iceland to the African coast. Drawing upon the idea of a smaller planet mostly made up of land, Columbus believed the shortest route to the Spice Islands was by sailing west, a voyage of only some 3,500 miles. In fact, the real distance is closer to 12,000 miles, although South America is only about 3,500 miles west of Spain, which explains why Columbus thought he had hit Asia. The problem was that most people believed such an open sea voyage was still too long for the ships of the day.
Getting support for this scheme was not easy. The Portuguese were already committed to finding a route to India around Africa, and Spain was preoccupied with driving the Moors from their last stronghold of Granada in southern Spain. However, when the Portuguese rounded the Cape of Good Hope and stood on the verge of reaching India, Spain had added incentive to find another route to Asia. Therefore, when Granada finally fell in 1492, Spain was able to commit itself to Columbus' plan.
Columbus set sail August 3, 1492 with two caravels, the Nina and Pinta, and a carrack, the Santa Maria. They experienced perfect sailing weather and winds. In fact, the weather was too good for Columbus' sailors, who worried that the perfect winds blowing out would be against them going home, while the clear weather brought no rain to replenish water supplies. Columbus even lied to his men about how far they were from home (although the figure he gave them was fairly accurate since his own calculations overestimated how far they had gone). By October 10, nerves were on edge, and Columbus promised to turn back if land were not sighted in two or three days. Fortunately, on October 12, scouts spotted the island of San Salvador, which Columbus mistook for Japan.
After failing to find the Japanese court, Columbus concluded he had overshot Japan. Further exploration brought in a little gold and a few captives. But when the Santa Maria ran aground, Columbus decided to return home. A lucky miscalculation of his coordinates caused him to sail north where he picked up the prevailing westerlies. The homeward voyage was a rough one, but Columbus reached Portugal in March 1493, where he taunted the Portuguese with the claim that he had found a new route to the Spice Islands. This created more incentive for the Portuguese to circumnavigate Africa, which they did in 1498. It also caused a dispute over who controlled what outside of Europe, which led to the pope drawing the Line of Demarcation in 1494.
Ferdinand and Isabella, although disappointed by the immediate returns of the voyage, were excited by the prospects of controlling the Asian trade. They gave Columbus the title "Admiral of the Ocean Sea, Viceroy and Governor of the Islands that he hath discovered in the Indies." Over the next decade, they sent him on three more voyages to find the Spice Islands. Each successive voyage put even more of the Caribbean and surrounding coastline on the map, but the Spice Islands were never found. Columbus never admitted that his discovery was a new continent. He died in 1506, still convinced that he had reached Asia.
However, by 1500, many people were convinced that this was a new continent, although its size and position in relation to and distance from Asia were by no means clear. The Portuguese discovery of a route to India around Africa in 1498 provided more incentive for Spanish exploration. In 1513, the Spanish explorer Balboa discovered the Pacific Ocean, having no idea of its immensity or that the Portuguese explorer Serrao was discovering it from the Asian side. Given the prevailing view of a small planet, many people thought that the Pacific Sea, as they called it, must be fairly small and that Asia must be close to America. Some even thought South America was a peninsula attached to the southern end of Asia. Either way, finding a southwest passage around the southern tip of South America would put one in the Pacific Sea and a short distance from Asia. If this were so, it would give Spain a crucial edge over Portugal, whose route around Africa to India was especially long and hard.
In 1519, Charles V of Spain gave five ships and the job of finding a southwest passage around South America to Ferdinand Magellan, a former Portuguese explorer who had been to the Spice Islands while serving Portugal. Magellan's circumnavigation of the globe was one of the great epic, and unplanned, events in history. After sailing down the South American coast, he faced a mutiny, which he ruthlessly suppressed, and then entered a bewildering tangle of islands at the southern tip of the continent known even today as the Straits of Magellan. Finding his way through these islands took him 38 days, while the same journey today takes only two.
Once they emerged from the Straits of Magellan into the Pacific "Sea", Magellan and his men figured they were a short distance from Asia, and set out across the open water and into one of the worst ordeals ever endured in nautical history. One of those on the journey, Pigafetta, left an account of the Pacific crossing:
“On Wednesday the twenty-eighth of November, one thousand five hundred and twenty, we issued forth from the said strait and entered the Pacific Sea, where we remained three months and twenty days without taking on board provisions or any other refreshments, and we ate only old biscuit turned to powder, all full of worms and stinking of the urine which the rats had made on it, having eaten the good. And we drank water impure and yellow. We ate also ox hides, which were very hard because of the sun, rain, and wind. And we left them...days in the sea, then laid them for a short time on embers, and so we ate them. And of the rats, which were sold for half an ecu apiece, some of us could not get enough.
“Besides the aforesaid troubles, this malady (scurvy) was the worst, namely that the gums of most part of our men swelled above and below so that they could not eat. And in this way they died, inasmuch as twenty-nine of us died...But besides those who died, twenty-five or thirty fell sick of divers maladies, whether of the arms or of the legs and other parts of the body (also effects of scurvy), so that there remained very few healthy men. Yet by the grace of our Lord I had no illness.
“During these three months and twenty days, we sailed in a gulf where we made a good 4000 leagues across the Pacific Sea, which was rightly so named. For during this time we had no storm, and we saw no land except two small uninhabited islands, where we found only birds and trees. Wherefore we called them the Isles of Misfortune. And if our Lord and the Virgin Mother had not aided us by giving good weather to refresh ourselves with provisions and other things we would have died in this very great sea. And I believe that nevermore will any man undertake to make such a voyage.”
By this point, the survivors were so weakened that it took up to eight men to do the job normally done by one. Finally, they reached the Philippines, which they claimed for Spain, calculating it was on the Spanish side of the Line of Demarcation. Unfortunately, Magellan became involved in a tribal dispute and was killed in battle. Taking into account his previous service to Portugal in the East, Magellan and the Malay slave who accompanied him were the first two people to circumnavigate the earth.
By now, the fleet had lost three of its five ships: one having mutinied and returned to Spain, one being lost in a storm off the coast of South America, and the other being so damaged and the crews so decimated that it was abandoned. The other two ships, the Trinidad and Victoria, finally reached the Spice Islands in November 1521 and loaded up with cloves. Now they faced the unpleasant choice of returning across the Pacific or continuing westward and risking capture in Portuguese waters. The crew of the Trinidad tried going back across the Pacific, but gave up and were captured by the Portuguese. Del Cano, the captain of the Victoria, took his ship far south to avoid Portuguese patrols in the Indian Ocean and around Africa, but also away from any chances to replenish its food and water. Therefore, the Spanish suffered horribly from the cold and hunger in the voyage around Africa.
When the Victoria finally made it home in 1522 after a three year journey, only 18 of the original 280 crewmen were with it, and they were so worn and aged from the voyage that their own families could hardly recognize them. Although the original theory about a short South-west Passage to Asia was wrong, they had proven that the earth could be circumnavigated and that it was much bigger than previously supposed. It would be half a century before anyone else would repeat this feat. And even then, it was an act of desperation by the English captain Sir Francis Drake fleeing the Spanish fleet.
Meanwhile, the Spanish were busy exploring the Americas in search of new conquests, riches, and even the Fountain of Youth. There were two particularly spectacular conquests. The first was by Hernando Cortez, who led a small army of several hundred men against the Aztec Empire in Mexico. Despite their small numbers, the Spanish could exploit several advantages: their superior weapons and discipline, the myth of Quetzalcoatl, which foretold the return of a fair-haired and bearded god in 1519 (the very year Cortez appeared), and an outbreak of smallpox, to which Native Americans had no prior exposure or resistance. Because of this and other Eurasian diseases, Native American populations would be devastated over the following centuries to possibly less than ten percent of their numbers in 1500.
The Spanish conquistador Pizarro, leading an army of fewer than 150 men, carried out an even more amazing conquest of the Inca Empire in Peru in the 1530's. Taking advantage of a dispute over the throne, Pizarro captured the Inca emperor, whose authority was so great that his capture virtually paralyzed the Incas into inaction. As a result, a highly developed empire ruling millions of people fell to a handful of Spaniards.
The conquests of Mexico and Peru more than compensated Spain for its failure to establish a trade route to the Spice Islands. The wealth of South America's gold and silver mines would provide Spain with the means to make it the great power of Europe in the 1500's. Unfortunately, Spain would squander these riches in a series of fruitless religious wars that would wreck its power by 1650.
Other Spanish expeditions were exploring South America's coasts and rivers, in particular the Amazon, Orinoco, and Rio de la Plata, along with ventures into what is now the south-west United States (to find the Seven Cities of Gold), the Mississippi River, and Florida (to find the Fountain of Youth). While these found little gold, they did provide a reasonable outline of South America and parts of North America by 1550. However, no one had yet found an easy route to Asia. Therefore, the following centuries would see further explorations which, while failing to find an easier passageway, would in the process piece together most of the global map. | http://www.flowofhistory.com/units/west/12/fc81 | 13 |
100 | International trade allows countries to buy and sell both domestic and foreign goods, as well as services and financial assets. A country's transactions are summarized in a set of accounts called the "Balance of Payments (BOP)." Students will learn how to record transactions in the BOP accounts, and why the sum of the current account and capital account must equal zero.
- Explain why the current account is the mirror image of the capital account.
- Explain why a current account deficit implies a capital account surplus.
- Explain that a current account deficit implies borrowing from the rest of the world.
- Show that the balance of payments equals zero.
- Use double-entry accounting to record transactions.
When one country trades with another, money and financial capital flow between the countries. The balance of payments account captures these flows. In this lesson, students learn how international transactions are recorded in the current and capital accounts. They learn why the balance of payments must equal zero. As an extension activity, students learn that international trade allows countries to buy more than they sell or sell more than they buy. This allows countries to run a current account deficit or surplus, respectively. A country running a current account deficit will borrow from the rest of the world, and a country running a surplus will lend. In the end, all countries are better off through trade. (NOTE: Recently, the name of the capital account was changed to financial account. Most of the introductory textbooks on the market today use "capital account" instead of "financial account". I will use capital account to align with those textbooks.)
CEE Balance of Payments - BOP Worksheet/Charts: This worksheet is presented in the Process section of this lesson.
CEE Balance of Payments - BOP
Teacher's Version (Answer Key)
U.S. International Transactions: The Bureau of Economic Analysis has a news release of the balance of payments and full text tables at the link below.
AmosWEB: A brilliant demonstration, explanation, and interactive BOP are found here. The explanation is well written and fun reading.
Balance of Payments: Recording debits and credits is as easy as knowing left from right. "Credit" literally means right side, any transaction that creates a supply of a foreign currency. "Debit" literally means left side, any transaction that creates a demand for a foreign currency.
Balance of Payments (BOP) Worksheet: This worksheet is presented in the Assessment Activity.
Teacher Answer sheet
The Economist- World News, Politics, Economics, Business & Finance: This website is used in the Extension Activity to find "Economics and Financial Indicators (interest rate data)."
1) Ask students, "What does a $-645 billion current account deficit mean?" Ask students, "Should you be concerned if the United States (U.S.) is running a large current account deficit?" To answer these questions, students will need a common vocabulary. Students already have a working knowledge of Imports, Exports, Balance of Payments. These definitions will help in completing the activity. Also, direct the students to read the appropriate section in their textbooks for capital flows.
Current Account: a summary of imports and exports of merchandise and services between countries.
Capital Account: a summary of financial flows between countries.
Balance of Payments: the sum of the current account and capital account.
3) Explain that you are going to work through 10 transactions with them before independent working begins. Explain that debits are used to record payments and credits are used to record receipts. A debit card is used when making a payment so debits are a minus (-). Credits are a plus (+). Tell students to think of Alpha and Omega as two countries that trade with each other.
3a.) Alpha buys a graphing calculator from Omega for $50. (Debit Alpha $50, Import; Credit Omega $50, Export.) (NOTE: The dollar amounts are equal, and one entry is positive while the other is negative. This is true in every transaction.)
3b.) Omega downloads a movie from Alpha for $5. (Debit Omega $5, Import; Credit Alpha $5, Export.)
3c.) Alpha immigrants send $100 back to Omega relatives. (Debit Alpha $100, Transfer to the World; Credit Omega $100, Transfer from the World. This is a unilateral transfer or a gift.)
3d.) Alpha corporations pay $20 dividends to Omega stockholders. (Debit Alpha $20, investment income paid; Credit Omega $20, investment income received.)
3e.) The Alpha government sells $75 in bonds to Omega. (Debit Omega Capital Outflow, $75; Credit Alpha, Capital Inflow, $75. Here, financial capital is leaving Omega and entering Alpha as investment.)
3f.) Rogue Omega investors buy Alpha junk bonds for $15. (Debit Omega Capital Outflow, $15; Credit Alpha, Capital Inflow, $15. Here, financial capital is leaving Omega and entering Alpha as investment.)
3g.) Alpha investors receive interest payments from Omega Government of $10. (Debit Omega, $10 Investment Income Paid; Credit Alpha $10, Investment Income Received.)
3h.) Alpha businesses borrow $65 from Omega Banks. (Debit Omega $65, capital outflow; Credit Alpha $65, capital inflow.)
Discuss how every transaction has both a debit and a credit. Point out that the current account and capital account are like mirror images. Explain that the balance of payments is always zero. A formula that should be taught is: current account plus capital account equals zero. This is how students can check their work. An insight that brought the BOP together for me was a question asked by Robert H. Frank, a Cornell economist: "If you received a Euro while in the U.S., what could you do with it?" The answer is to spend it either on European goods (an import) or on European assets (a capital outflow). This helps me understand that spending by one country is a debit and is income to another country, which is a credit. Since the debits equal the credits, the BOP must be zero. The "O" in BOP is a memory cue for students that the balance of payments must be "0."
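The same bookkeeping can be expressed as a short program. The sketch below is only an illustration and is not part of the original lesson materials: it records the eight Alpha-Omega transactions from step 3 as paired debits (-) and credits (+), then checks that each country's current account plus capital account sums to zero. The account names and the record() helper are hypothetical, chosen to mirror the worksheet's columns.

```python
# Minimal double-entry sketch of the Alpha/Omega transactions (steps 3a-3h).
# Debits are recorded as negative amounts and credits as positive, as in the lesson.
from collections import defaultdict

accounts = defaultdict(lambda: defaultdict(float))  # accounts[country][account] -> signed dollars

CURRENT = {"exports", "imports", "transfers", "investment_income"}
CAPITAL = {"capital_flows"}

def record(debit_country, credit_country, debit_account, credit_account, amount):
    """Post one transaction: a debit (-) to one country and an equal credit (+) to the other."""
    accounts[debit_country][debit_account] -= amount
    accounts[credit_country][credit_account] += amount

record("Alpha", "Omega", "imports", "exports", 50)                      # 3a. calculator
record("Omega", "Alpha", "imports", "exports", 5)                       # 3b. movie download
record("Alpha", "Omega", "transfers", "transfers", 100)                 # 3c. remittance to relatives
record("Alpha", "Omega", "investment_income", "investment_income", 20)  # 3d. dividends paid by Alpha
record("Omega", "Alpha", "capital_flows", "capital_flows", 75)          # 3e. Alpha bonds sold to Omega
record("Omega", "Alpha", "capital_flows", "capital_flows", 15)          # 3f. junk bonds
record("Omega", "Alpha", "investment_income", "investment_income", 10)  # 3g. interest received by Alpha
record("Omega", "Alpha", "capital_flows", "capital_flows", 65)          # 3h. bank loan to Alpha businesses

for country, book in accounts.items():
    current = sum(v for k, v in book.items() if k in CURRENT)
    capital = sum(v for k, v in book.items() if k in CAPITAL)
    print(f"{country}: current account {current:+.0f}, capital account {capital:+.0f}, "
          f"balance of payments {current + capital:+.0f}")
```

Running it reproduces the totals discussed in step 6: Alpha's current account is -$155 (including its -$45 trade balance), its capital account is +$155, and the balance of payments is zero for both countries.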
What would happen if the citizens of Alpha were to buy $10 of illegal drugs from Omega? How would that appear in the BOP? The answer is that the transaction would not show up. When looking at actual data, economists use a statistical discrepancy to account for bribes, drug deals, and unreported interest payments. The MIT economist Oliver Blanchard estimates that about half of the U.S. money stock is held abroad. Some countries with high inflation use U.S. currency for transactions in their own economies, and some of the money is used in illegal activities such as bribes.
4.) Ask the students to complete Part II on their own. Give positive and immediate feedback for their work to guide them.
5.) Use the worksheet or the quiz for evaluation.
6.) Use the completed worksheets to make inferences about the balance of payments. Here are some questions to ask your students to help them discover the meaning behind the debits and credits.
6a.) When Alpha ran a negative trade balance, this means that Alpha spent more than it produced. How was Alpha able to "buy" more than it produced? (Alpha borrowed from Omega. In Part I, Alpha's trade balance was $-45. Omega's was $45. What can Omega do with $45 Alpha dollars? It can buy Alpha exports or buy Alpha financial assets. Since Alpha's exports were only $5, Omega must have bought Alpha's financial assets. This is one reason why Alpha had a $155 capital inflow.)
6b.) If Alpha continues to run a negative current account balance, what choices would Alpha make? (If Alpha persistently runs a negative current account balance, Alpha will have to sell more of its financial and physical assets by issuing more bonds or IOUs. This is okay as long as Alpha has the productive capacity to repay the loans. Also, Alpha must weigh the benefits of current consumption versus future consumption when the loans must be repaid--with interest.)
6c.) The current account and capital account are mirror opposites. What are some reasons why? (Double-entry accounting ensures that for every debit there is an equal and opposite credit. Every dollar spent must come back to the source in either an export or a capital inflow.)
6d.) This worksheet did not use the "statistical discrepancy" to adjust the capital account. How can discrepancies arise? (Unilateral transfers are estimated. Illegal activities such as drug deals or the bribing of officials go unrecorded. Many people avoid tax payments by not reporting interest from bonds. Some underground activities earn income that is not reported. In 1998, the discrepancy was about $4 billion. In 1997, the difference was $98 billion.)
6e.) The terms "capital inflows" and "capital outflows" might be confusing. For learning purposes, do you think an "increase in foreign holdings of U.S. assets" would be better than capital inflows? Likewise, would an "increase in U.S. holdings of foreign assets" be better than capital outflows? (Answers can vary, however, if Omega buys a used Boeing airplane from Alpha, perhaps students will be able to see that this is a capital outflow for Omega with the alternate terminology.)
You now see that the current account and the capital account are mirror images. A debit must have a source so for every debit there is a credit. Most important is that the balance of payments must equal zero if you have recorded your debits and credits correctly. The Bureau of Economic Analysis collects the balance of payments data.
1. In 2000, net exports were $-375.7 billion, net investment income was $-14.9 billion and net transfers were $-54.1 billion. What was the balance of the current account? __________ [$-444.7 billion: (-375.7) + (-14.9) + (-54.1) = -444.7.] (See the arithmetic sketch after this quiz.)
2. In 2001, exports for the United States were $720.8 billion and imports were $1,147.4 billion. How much is net exports? ________ [$-426.6 billion = $720.8 - $1,147.4.]
3. In 1980, the United States had net exports of $-19.4 billion, investment income of $30.1 billion and transfer payments of $-8.2 billion. Did the U.S. have a current account surplus or deficit? [The U.S. had a current account surplus of $2.4 billion.]
4. Circle one: If a country is running a current account deficit, the capital account must be a (deficit/surplus). [Surplus]
5. If Alpha buys stock in a corporation in Omega, in which account, current or capital, would the transaction be recorded? [Capital. (NOTE: Since it increases the holdings of a foreign country's assets, there would be a transaction recorded for both countries.)]
6. When Alpha imports cheese from Omega, how is the transaction recorded for Alpha? Is it a debit or a credit? [Debit]
7. If Omega has a current account surplus of $100, Omega's Capital account must equal ______? [$-100]
8. How is the trade balance calculated?_________ [Exports - Imports]
9. Circle the answer that completes the statement: Some residents of Alpha send money to charities in Omega. This transaction would be recorded as a debit to transfers (to the world/from the world). [To the world]
10. TRUE or FALSE. Alpha has an underground market in illegal drugs. Because these activities are not reported on the balance of payments, the current and capital account would not balance. [FALSE]
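As a quick check of the arithmetic in questions 1-3 (and the zero-sum logic in question 7), here is a minimal sketch using the formulas from the lesson: net exports = exports - imports, and current account = net exports + net investment income + net transfers. The function names are illustrative only; the figures are the ones given in the questions, in billions of dollars.

```python
def net_exports(exports, imports):
    """Trade balance: exports minus imports, in billions of dollars."""
    return exports - imports

def current_account(net_exp, net_investment_income, net_transfers):
    """Current account balance = net exports + net investment income + net transfers."""
    return net_exp + net_investment_income + net_transfers

# Question 1 (2000): all three components are negative.
print(current_account(-375.7, -14.9, -54.1))   # -444.7

# Question 2 (2001): exports 720.8, imports 1,147.4.
print(net_exports(720.8, 1147.4))              # about -426.6

# Question 3 (1980): a positive result means a current account surplus.
# These rounded inputs give about 2.5; the quiz answer of 2.4 reflects the unrounded data.
print(current_account(-19.4, 30.1, -8.2))      # about 2.5

# Question 7: the balance of payments must be zero, so a current account
# surplus of 100 implies a capital account of -100.
```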
What does it mean to say that a country has a negative current account balance? For a country, running a deficit in the current account means that it is buying more than it is selling, so it must borrow the funds from the rest of the world. A country that runs a deficit will issue debt instruments like treasury bills to finance its current spending. A country might have to raise the domestic interest rate to attract foreign investment. Obtain a copy of the Economist or visit: www.economist.com/node/18682709?story_id=18682709 . On the last page is a list of interest rates. Ask the students to list the countries by current accounts and show the interest rates on 10-year government bonds. Then ask the students to sort the countries by the highest interest rate and see that the countries with the largest current account deficits also pay the highest interest rates.
| http://www.econedlink.org/lessons/projector.php?lid=820&type=educator | 13
16 | In this video segment adapted from AMERICAN EXPERIENCE, learn how the Trans-Alaska Pipeline was conceived and built in the 1970s. When oil was discovered in northern Alaska's Prudhoe Bay, the challenge facing engineers was how to transport it to refineries outside of Alaska. Engineers developed plans for a north–south pipeline that, unlike other pipelines, would be built aboveground due to the pervasive ice-rich soil layer called permafrost. The pipeline cut through the Alaskan landscape, causing much contention, especially among Alaska Native peoples and environmentalists.
The Trans-Alaska Pipeline was engineered to accommodate the highly varied terrain and considerable level of seismic activity experienced in the region. Because the pipeline would run directly through the heart of untouched wilderness and face scrutiny from both Alaska Native and environmental groups, the pipeline was also designed with environmental safety in mind. Perhaps not surprisingly, these groups and oil industry representatives differ in their assessment of the pipeline's performance to date.
While protecting Alaska's ecology was of utmost importance to critics of the pipeline, many of the engineering solutions appear to have proven sound so far. At the source in Prudhoe Bay, oil lies several thousand feet below ground. When it arrives at the surface, it can be as hot as 180° Fahrenheit. At this temperature, oil in the pipe would thaw a cylindrical area of frozen soil 20 to 30 feet in diameter within a decade. To address this, engineers installed heat exchangers to cool the oil to about 120° before it entered the pipeline, and further protected the ground by elevating more than half the pipeline's 800-mile length. The pipeline's structural design allows it to flex in the event of an earthquake. In November of 2002, when the Denali Fault in central Alaska experienced a magnitude 7.9 earthquake, the ground slipped 18 feet laterally and more than 3 feet vertically beneath the pipeline, yet not a single drop of oil was spilled.
With respect to protecting wildlife, Alaska Native peoples and conservationists believed the pipeline would disrupt land-animal migration routes. Engineers were therefore pressed to allow crossings in many designated areas. Reports conflict as to how well caribou have fared. While oil industry representatives note a doubling in caribou populations overall, wildlife biologists attribute this to other factors. Some Iñupiat residents living in communities adjacent to oil pipelines and maintenance roads are convinced that caribou migration patterns have changed. They attribute this to the noise, lights, pipelines, and roads in and around the oil pumping facilities and along the pipeline routes.
Oil industry executives contend that the pipeline is among the world's cleanest. Safety provisions include special valves that can shut down flow in the event of a leak within minutes of detection. Conservationists suggest that, despite these provisions, at least one spill a day occurs. Because oil passing through the pipeline contains water, which can corrode the pipes, monitoring and replacing corroded pipe will be a major issue as the pipeline ages.
The worst oil contamination incident related to the pipeline did not occur along the pipeline but in the marine environment. In 1989, the Exxon Valdez spilled 11 million gallons of crude oil into Prince William Sound. It was the worst oil spill in U.S. history, with fallout to the ecosystem, local economy, and subsistence ways of life still being felt today.
| http://www.teachersdomain.org/resource/ean08.sci.ess.earthsys.pipeline/ | 13
16 | King Cotton was a slogan used by southerners (1860–61) to support secession from the United States by arguing that cotton exports would make an independent Confederacy economically prosperous, and—more important—would force Great Britain and France to support the Confederacy in the Civil War because their industrial economies depended on cotton textiles. The slogan was successful in mobilizing support: by February 1861, the seven states whose economies were based on cotton plantations had all seceded and formed the Confederacy. However, the other eight slave states remained in the Union. To demonstrate their economic power, southerners spontaneously refused to sell or ship out their cotton in early 1861; it was not a government decision. By summer 1861, the Union blockade shut down over 95% of exports. Since the Europeans had large stockpiles of cotton, they were not injured by the boycott—the value of their stockpiles went up. To intervene meant war with the U.S. and a cutoff of food supplies, so Britain did not intervene. Consequently, the strategy proved a failure for the Confederacy—King Cotton did not help the new nation.
The South has long, hot summers and rich soils in river valleys—ideal conditions for growing cotton. The drawback of growing cotton was mainly the time spent removing the seeds after harvest. Following the invention of the cotton gin by Eli Whitney in 1793, cotton production surpassed that of tobacco in the South and became the dominant cash crop. At the time of the American Civil War, Southern plantations supplied 75% of the world's cotton.
The insatiable European demand for cotton was a result of the Industrial Revolution. In Great Britain, a series of inventions resulted in the mechanized spinning and weaving of cloth in the world’s first factories in the north of England. The ability of these factories to produce unprecedented amounts of cotton cloth revolutionized the world economy, but Great Britain needed raw cotton.
British textile manufacturers were eager to buy all the cotton that the South could produce. Cotton-bale production supports this conclusion: from 720,000 bales in 1830, to 2.85 million bales in 1850, to nearly 5 million in 1860. Cotton production renewed the need for slavery after the tobacco market declined in the late 18th century. The more cotton grown, the more slaves were needed to pick the crop. By 1860, on the eve of the American Civil War, cotton accounted for almost 60% of American exports, representing a total value of nearly $200 million a year.
Without firing a gun, without drawing a sword, should they make war on us, we could bring the whole world to our feet... What would happen if no cotton was furnished for three years?... England would topple headlong and carry the whole civilized world with her save the South. No, you dare not to make war on cotton. No power on the earth dares to make war upon it. Cotton is King.
Southerners thought their survival depended on the sympathy of Europe to offset the power of the Union. They believed that cotton was so essential to Europe that the European powers would intervene in any civil war.
British position
When war broke out, the Confederate people, acting spontaneously without government direction, held their cotton at home, watching prices soar and an economic crisis hit Britain and New England. Britain did not intervene because intervention meant war with the United States, as well as the loss of the American market, the loss of American grain supplies, risk to Canada, and risk to much of the British merchant marine, all for the slim promise of getting more cotton. Besides that, in the spring of 1861, warehouses in Europe were bulging with surplus cotton, which soared in price. So the cotton interests made their profits without a war. The Union imposed a blockade, closing all Confederate ports to normal traffic; consequently, the South was unable to move 95% of its cotton. Yet some cotton was slipped out by blockade runner, or through Mexico. Cotton diplomacy, advocated by the Confederate diplomats James M. Mason and John Slidell, completely failed because the Confederacy could not deliver its cotton, and the British economy was robust enough to absorb a depression in textiles from 1862–64.
As Union armies moved into the cotton regions of the South in 1862, the U.S. acquired all the cotton available and sent it to Northern textile mills or sold it to Europe. Cotton production in India increased by some 700%, and production also increased in Egypt.
When war broke out, the Confederates refused to allow the export of cotton to Europe. The idea was that this cotton diplomacy would force Europe to intervene. However, European states did not intervene, and following Abraham Lincoln's decision to impose a Union blockade, the South was unable to market its millions of bales of cotton. The production of cotton increased in other parts of the world, such as India and Egypt, to meet the demand. A British-owned newspaper, The Standard of Buenos Aires, in cooperation with the Manchester Cotton Supply Association succeeded in encouraging Argentinian farmers to drastically increase production of cotton in that country and export it to the United Kingdom.
Surdam (1998) asks, "Did the world demand for American-grown raw cotton fall during the 1860s, even though total demand for cotton increased?" Previous researchers have asserted that the South faced stagnating or falling demand for its cotton. Surdam's more complete model of the world market for cotton, combined with additional data, shows that the reduction in the supply of American-grown cotton induced by the Civil War distorts previous estimates of the state of demand for cotton. In the absence of the drastic disruption in the supply of American-grown cotton, the world demand for such cotton would have remained strong.
Lebergott (1983) shows the South blundered during the war because it clung too long to faith in King Cotton. Because the South's long-range goal was a world monopoly of cotton, it devoted valuable land and slave labor to growing cotton instead of urgently needed foodstuffs.
- Frank Lawrence Owsley, King Cotton Diplomacy: Foreign relations of the Confederate States of America (1931)
- Yafa (2004)
- TeachingAmericanHistory.org "Cotton is King" http://teachingamericanhistory.org/library/index.asp?document=1722
- Eli Ginzberg, "The Economics of British Neutrality during the American Civil War," Agricultural History, Vol. 10, No. 4 (Oct., 1936), pp. 147–156 in JSTOR
- Charles M. Hubbard, The Burden of Confederate Diplomacy (1998)
- Argentina Department of Agriculture (1904), Cotton Cultivation, Buenos Aires: Anderson and Company, General Printers, p. 4, OCLC 17644836
- David Donald, Why the North won the Civil War (1996) p. 97
- John Ashworth, Slavery, capitalism, and politics in the antebellum Republic (2008) vol 2 p 656
- Blumenthal, Henry. "Confederate Diplomacy: Popular Notions and International Realities." Journal of Southern History 1966 32(2): 151-171. Issn: 0022-4642 in Jstor
- Hubbard, Charles M. The Burden of Confederate Diplomacy (1998)
- Jones, Howard, Union in Peril: The Crisis over British Intervention in the Civil War (1992) online edition
- Lebergott, Stanley. "Why the South Lost: Commercial Purpose in the Confederacy, 1861-1865." Journal of American History 1983 70(1): 58-74. Issn: 0021-8723 in Jstor
- Lebergott, Stanley. "Through the Blockade: The Profitability and Extent of Cotton Smuggling, 1861-1865," The Journal of Economic History, Vol. 41, No. 4 (1981), pp. 867–888 in JSTOR
- Owsley, Frank Lawrence. King Cotton Diplomacy: Foreign relations of the Confederate States of America (1931, revised 1959) Continues to be the standard source.
- Scherer, James A. B. Cotton as a World Power: A Study in the Economic Interpretation of History (1916) online edition
- Surdam, David G. "King Cotton: Monarch or Pretender? The State of the Market for Raw Cotton on the Eve of the American Civil War." Economic History Review 1998 51(1): 113-132. in JSTOR
- Yafa, Stephen H. Big Cotton: How A Humble Fiber Created Fortunes, Wrecked Civilizations, and Put America on the Map (2004) | http://en.wikipedia.org/wiki/King_Cotton | 13 |
14 | We all enjoy the imaginations of children. Parents and teachers are often amazed at the ideas of young children. In the preschool years, children’s questions and thinking are not fettered by the rules of society, physics, or logic. Anything goes, so hang on to your hat!
Imagination is Vital to Learning
It was Albert Einstein who said, “Imagination is more important than knowledge.” That is especially true for people who live in a changing world. Yesterday’s solutions will not work for tomorrow’s problems. Children need an opportunity to develop the ability to visualize scenes and solutions that are not right in front of them. To be able to read, learn about history, geography, mathematics, and most other subjects in school, it helps a child to be able to create a mental picture of things. The “raw materials” for this skill are developed in the toddler, preschool, and early childhood years and at Kindermusik.
Create a Wealth of Experiences
To develop your child’s imagination, give him a wealth of interesting direct experiences using all of the senses. The root word of imagination is image. Provide your child with an opportunity to create many images. Try sensory experiences like playing with water and sand, cooking, or dancing to your Kindermusik CD.
“Open-ended” activities encourage the development of imagination. Those are activities where there is not one particular “right answer” or product. For example, give your child art materials to use such as crayons or paint. Instead of telling her what to draw, try saying, “I wonder what you’ll come up with this time.” Then be surprised. Show interest and delight in her work, and invite her to explain it to you.
Pretend play is a great avenue for a child’s imagination. Provide some dress-up clothes or “props” from your playset and enjoy the show. When a child uses objects for pretend, such as a paper plate for a steering wheel, she is actually creating her own symbols for “real” things she has seen.
Storytelling, as well as reading to your child, are other great ways to develop imagination. When you tell a story with no book, your child can form the pictures in her mind. Your facial expressions and intonations, as well as your words, can help her understand the story. After she has heard you read a story, she will naturally enjoy telling stories herself. Try forming the “framework” of a story and let her fill in the details. In the following example, wherever there is a blank, let the child fill in any word or words that come to her. Then you can continue the story with connecting phrases.
“Once upon a time there was a (bear). This bear, who’s name was (Fuzzy) was very very (dirty). He was so dirty that (his mother wouldn’t let him come in the house). Now that caused a problem because…”
These stories are fun because you never know where they’ll lead. You could tape record them and write them down later.
Don’t All Children Have Imagination?
All children have the potential for a rich imagination, but you can help increase that potential. Avoid unlimited exposure to television – many hours in front of the TV absorbing “canned entertainment” can create a passive child waiting to be entertained instead of one creating his own ideas.
Encourage free thought and creativity by not getting too caught up in reality. Compliment your child on her ideas instead of her looks. “You have such good ideas…I never know what you’ll think of next.” Show interest in her and you can be sure that your child will continue to express the wondrous products of her brain.
- written for Kindermusik International by Karen Miller, Early Childhood Expert, Consultant, and Author
Young children are at a wonderful stage of life in which they are learning to express themselves in many ways. As infants and toddlers, they mainly respond to what they find in front of them. Over time, with the new tool of language and the more complex thinking skills that come with it, their world of ideas broadens.
Words provide an anchor for thoughts. Vocabulary grows with new experiences. Along with providing your child with interesting experiences, you can be most helpful by acting like a “play by play announcer” providing words for your child’s perceptions. Describe what your child is doing and use rich descriptive words about size, color, shape, texture. Help your child recognize and talk about feelings and to know that there are no “bad” feelings. Anger, sadness, frustration, fear, as well as happiness, excitement and joy are all part of the human feelings menu. Kindermusik books and puppets can be fun tools to help children express themselves with words.
Music & Movement
Making music and moving to music are some of the most basic ways in which children express themselves – bouncing, rocking, and dancing along. You can encourage this by playing different kinds of music – starting with the diverse and culturally different songs found on your Kindermusik Home CD. Encourage your child to dance – and join in! Use scarves, simple props and your special Kindermusik instrument to make it more fun.
Singing is a tradition of every culture in the world and a powerful way in which people express emotions. Sing along to your home CD and discuss the feelings that come from each song.
Art & Constructive Play
Children can express how they’re feeling using paints, crayons, play dough, art media or playsets. Some creations may be rather “abstract” but valuable nonetheless. Also, as they play with blocks and construction toys, children give shape to their ideas. It’s not necessary to tell your child what to make, but rather be interested and ready to be surprised. Invite your child to tell you about his or her creation. Granted, sometimes she will have nothing at all in mind, but will simply be experimenting with the materials. Interesting stories may emerge. Offer to take dictation and write down what your child says to show your interest in her ideas.
Both boys and girls use pretend play as a primary way to express themselves. Play themes should be your child’s domain. Try not to edit her play unless you are genuinely uncomfortable with what is going on or it threatens to hurt someone or break something. Support her play by providing a variety of things to use such as music, dress-up clothes and hats, large boxes and props. Become a play partner yourself.
All these experiences give your child the message that her ideas are interesting and valuable. As an appreciative and listening parent, you are giving your child the skill and the disposition to express herself in appropriate ways – skills that will help her throughout life.
- written for Kindermusik International by Karen Miller, Early Childhood Expert, Consultant, and Author
Early childhood is a time of life that challenges both developmental psychologists and parents with its fascinating mixture of change, growth, joy, and frustration. Between the ages of 18 months and 3 years, babyhood is left behind and a verbal, relatively competent preschooler emerges. Before long, those preschoolers are moving from wiggleworms who share every idea that pops into their heads to sweet school kids learning to follow school rules, make friends, and steal your heart. During these transitions, the changes in physical, cognitive, social, emotional, and communication skills can be amazing and, at times, overwhelming!
Early childhood has gained a reputation as being a difficult period of tremendous energy and great capacity for movement and activity in the child, while at the same time it is the period when children are just beginning to acquire the rudiments of self-control and to accept the need for limits. One of the most rewarding challenges for parents is selecting activities that introduce new learning experiences without overwhelming the child’s capacity for change. This is particularly important because successful activities, like the ones presented in Kindermusik, support and strengthen the parent-child relationship, while activities that are developmentally inappropriate can stress it further.
Knowledge of some of the key emotional tasks of the early childhood years can help reduce frustration and increase the joy!
The Quest for Control
The issue of developing control (over body functions, physical activities, feelings, or the world around him) is crucial for a child. Striking a balance between providing structure and avoiding unnecessary regimentation is important. Children do best when they are invited and attracted into activities, rather than required to participate in them. Kindermusik invites children to participate in activities, but there are no performance expectations.
- Instead of trying to force your child into playing a game or reading a book, start playing or reading it by yourself. Your enthusiasm will most likely draw your child into the activity.
- Give your child lots of choices throughout the day both to allow her to have some control and develop those all important reasoning centers of the brain. Let her choose this shirt or that one. Help her pick the red car or the blue one. Slide or swings. There are plenty of opportunities for appropriate choices.
Hold Me and Let Me Go
A major task for young children is resolving the conflict between desire for love and protection and the urge to become independent. The mantra for many is “by myself”. Yet, when the going gets tough, the tough get going right back to Mom’s or Dad’s lap. Some call this the “rubber band” stage because it seems like the child is pulling outward and then snapping back. Giving your child permission to retreat to safety in your arms allows him to naturally move to greater independence.
- For “Our Time” kids: Recite “Run and Jump” (Home Activity Book, p. 33) while your child jumps into your arms. This allows your child the opportunity to practice and master the skills of running and jumping and has meaningful emotional content. Knowing that her caregiver will catch her when she jumps represents a level of emotional security in the relationship.
- For “Imagine That!” kids: take a pretend boat ride with your child on your knees or lap. Let a storm attack your boat, rocking it from side to side until you “crash”. While at first it may seem scary, making it through the tumultuous ride with you builds trust in your relationship while reassuring your child that even stormy seas can be made less scary when shared with a friend or grown-up.
- Some children require a little extra gentle push to be independent. Encourage your child to try new things on his own or to go exploring without you holding his hand. Studies show that when parents of more reserved children gently encourage them to step out and try things on their own, those children can indeed develop more outgoing temperaments. However, don’t force the issue, and do stay close by in case they decide they need you.
Control of emotions is one of the most complex challenges facing young children and their parents. Children vary greatly in the intensity of how they experience and express feelings, depending on inborn temperamental factors, but it is a rare toddler whose feelings do not become intense and overwhelming at times. By providing both limits and loving support to your child, you are helping her gradually learn ways of handling and modulating her feelings so that the tantrums of the toddler ideally give way to the emphatic verbal argument of the school-aged child.
Music, as a fundamental route for the expression of human emotion, is an excellent tool for helping children learn to identify and channel their feelings. Even very young children can identify music that makes them happy or sad. Musical expression of a wide range of emotions can help make those feelings more manageable, less overwhelming, and much more understandable.
- Use emotion words to help your child learn to identify and later label how she is feeling. One child psychologist has been known to say “An emotion named is an emotion tamed.” Describe to your child how she looks when she is sad or angry or happy, so she can connect the way her body feels with the emotion itself.
- Remember that emotions in and of themselves are not good or bad. It’s our expression of them that is deemed appropriate or inappropriate by society.
- Help your child learn appropriate ways to deal with strong emotions, such as taking a deep breath, counting, going to a quiet place to calm down, drawing, snuggling a stuffed animal, finding something else to do that will help them feel better, or, as they get older, talking about their emotions.
The more you help your child build emotional intelligence, the more successful he will be in life. Our ability to read, understand, and express emotions in healthy ways as well as our emotional security affect everything we do and color our ability to interact with others every moment of our lives. But the good news is, building emotional intelligence can be both easy and fun, and it all starts with the simple love between parent and child.
Life is so much more fun when there are 31 flavors of ice cream available. I can vividly remember going every Sunday growing up with my Dad to the local Baskin Robbins ice cream store and staring into the cases trying to decide which flavor I’d pick. While I had my favorites (mint-chocolate chip), it was always fun peeking at the possibilities and getting those little tasting spoons to try a bit of something new before making a decision. I see the same joy in my kids’ faces now when we go to a local yogurt bar for a similar ritual. The combinations are so much more delightful simply because there are so many choices. There is so much variety.
Music certainly works that way. Oftentimes what is hailed as genius provides new and interesting combinations of instruments, rhythms, pitches, or presentations. And great musicians are great in part because of their immense versatility gained by learning to play or sing in a variety of styles, colors, and ways.
Variety. It adds color, flavor, interest. And our brains like it. Studies show we are drawn to music that is in part familiar and in part new and different. And as is often the case, our brains like musical variety because it’s good for them.
Here are some of the benefits you can see:
- Vary instruments, timbres, tonalities, tempi, rhythms, etc. to help your child become more aware, alert, and sensitive—not only to music but to his total environment.
- Include a variety of settings for music both passive and active, and you are teaching him the many roles that music can play. Music can help him relax, cope with feelings, celebrate, create, and express beyond verbal capabilities.
- Expose your child to music with unfamiliar tonalities to promote the development of new neural pathways and help him master more complex music later in life.
- Share with your child new instrument sounds, music in modes other than the typical Major and Minor modes of traditional Western music,
and songs of cultures other than your own to allow her to appreciate a broader range of music throughout her life as well as expose her to cultures other than her own.
- Sing and speak in both high and low ranges as a means to initiate different responses from your child. Research has discovered that exposure
to high sounds plays an important part in maintaining alertness and energy required for learning. Lower pitches calm and relax the body. And mid-range pitches are easier for early singers to reproduce.
Want to provide your child with musical variety? Here’s the easiest homework you’ll ever have. Simply by coming to class and listening and singing along at home you are providing your child with an extremely varied musical diet. We start with simple things like a variety of timbres (instrument colors), from drums to egg shakers to sandblocks. You’ll also hear music in class and on your CDs from all over the world, including folk songs, instruments, and compositions from Europe, China, Indonesia, Japan, India, Australia, Africa, South America, and more. We even sing songs in more uncommon modes (beyond Major and Minor) like Dorian, Lydian, and Mixolydian. It’s just one more reason we love Kindermusik!
Wanna make some sweet Valentines for friends and family? Here are some they will enjoy now and treasure for years to come.
Making Valentines with Babies
Make a recording of you engaging in some vocal play with your child. You might read a book and allow your child to share in the play of copying animal sounds or car, bus, or truck sounds. You could also simply play with tongue clicks, favorite syllables (ba, ma, da), or blowing raspberries and see if your child will follow along. Create your own little conversation, pausing to allow him to add his own sounds as he chooses. Label the recording with the date and age of your child and give it along with a card as a gift to your chosen Valentine. Have fun and remember that along the way you’re encouraging important language and turn-taking skills.
Making Valentines with Toddlers
Follow the instructions for the activity above with a few adaptations. You might see if your child would “read” a favorite book such as Shiny Dinah from memory or even sing or echo sing a favorite song (you sing part of the song, and he echoes back with the same). If reading a book, try asking your toddler what comes next in the story in order to build sequencing skills.
Making Valentines with Toddlers, Preschoolers, and School-aged Children
Select several favorite songs, maybe even some songs that say “I love you.” Record your child singing them along with you. You might even think about adding some instrumental accompaniment with a simple percussion instrument, like an egg shaker or a drum. Give the recording along with a card to your Valentine.
In addition to creating a great memory and gift, you are encouraging your child’s solo singing abilities as well as creativity and problem solving. Make sure to include him in the choices of songs, the making of the card, and the choice of instrumental accompaniment (if included). School-aged children may even want to create a song of their own!
Enjoy your Valentines!
Are you looking for a way to slow down and “de-stress” your busy life? Try playing with your child! Try getting back in touch with that playful, creative child inside of you and the imaginative, engaging child in front of you.
Many parents don’t play with their children. They buy them toys to “occupy” them. They are missing one of the best ways to “bond” with their child – to strengthen and reinforce the relationship. Dr. Stanley Greenspan, a pediatric psychiatrist and author of First Feelings: Milestones in the Emotional Development of Your Baby and Child, coined the term “Floor Time” and outlines how parents can connect with their children in this emotionally powerful way.
How to Do It
- Let the child take the lead and decide what to play. You act as the “stage manager” and help gather the things you’ll need. Then ask the child what role you should play, and even what you should do. “What are we playing?” “Who am I?” “What should I do?” Let your child be the train conductor and you be the passenger.
- Do what she says. If you’re playing with blocks, copy what the child is building, or build something similar. In pretend play, go with her idea and play your assigned role.
- Add an idea. After you’ve copied her, add a small new idea of your own. See if she accepts it. If not, go with her agenda. Let her add to that idea and see how many back and forth new ideas you can come up with.
- Sustain the play. See how long you can keep it going, keeping her interested.
- Don’t edit. There are only two rules for your child: 1.) No hurting, and 2.) No breaking things. Otherwise, anything goes. See where your child takes the play theme.
Materials to Use
This type of play works best with pretend play and dolls, puppets or stuffed animals, or playing with miniatures.
What’s the Benefit?
There are many benefits when you play with your child. One is power: you are putting your child in a position of legitimate power. He can take the lead and direct what’s happening. Playing this way can help reduce other “power struggles” you may be experiencing. It is also suggested that you increase the amount of “Floor Time” play after you have had to discipline your child or impose limits. It re-establishes the positive emotional connection.
It is also a way of showing your child that you find him interesting and that you value his ideas. “You have such good ideas. I would never have thought of that.” You can learn about your child, as well. You may find out about what is on his mind, or hear some vocabulary you didn’t know he had.
Play becomes richer than when the child plays alone or with an age-mate. You are teaching your child how to be a good player and how to elaborate roles, add ideas and take suggestions from others. You are supporting your child’s imagination.
Finding the Time
One suggestion is to turn the TV off for half an hour and play, read, or listen to music. It should be when everyone is reasonably relaxed and not hungry.
Remember, this is what real “quality time” is all about. It works with any age child, even babies. You’ll have fun, you’ll laugh, you’ll relax and your child will remember these times.
- Written specifically for Kindermusik International by Karen Miller, Early Childhood Expert, Consultant and Author
No matter the age, kids love to move, or be moved. Bouncing, wiggling, running, jumping, climbing on furniture – it seems they never stop. And in fact, such energetic locomotor movements are valuable, appropriate, and fun activities for little ones. The child’s innate need to move is inextricably linked to learning. We not only learn to move as we grow, but we literally move to learn. Educational psychologist Dr. Becky Bailey is fond of saying, “The best exercise for the brain is exercise,” and science bears it out. (Just check out the recent Newsweek article on how to make yourself smarter.) So movement is important and provides outlets for your child’s energy as well as for her skill development.
Moving and controlling one’s movements are learned skills, and one of the best features of learning to regulate one’s movements as we learn to crawl, walk, run, jump, and swing is that it helps us learn inhibitory control, or the ability to stop oneself and wait. Now, I love inhibitory control because it’s an amazing developmental bonus you can often hide in a fun activity or game. Kids will be playing along, giggling and smiling and never know that they are working on learning how to stop and wait, which really means they are learning self-control or impulse control. Having inhibitory control is important for social skills like taking turns, waiting in a line, waiting your turn to speak, asking for a toy rather than just grabbing it from another kid (or pushing them over for that matter). Inhibitory control helps us stop and think through the choices rather than repeating past behaviors that got us into trouble like hitting a sister, jumping on the couch, or eating all the cookies. In fact, a lack of impulse control or inhibitory control can cause us to get into a lot of trouble in school as well as in life.
So, I mentioned earlier that I love inhibitory control because it’s so easy to sneak it into fun activities. How? Simply play “Stop and Go” games. Even babies love ‘em because they delight in anticipating when the stop and start will come, while kids of all ages find great joy in developing mastery over their own bodies as they command their feet to stop. We’re going to be doing lots of stop and go play in all our classes over the course of the semester. But here are some fun things you might try at home:
- Learn the ASL sign for STOP. One fun way to add stop and go to almost any activity is to learn the ASL sign for STOP. In class with the preschoolers you might hear us chant, “Walk, walk, walk, walk, walk, walk, and STOP” (all caps is a common way of indicating a signed American Sign Language word). Simple as it sounds, the kids love it. You can use any locomotor movement – jump, drive, swing. For babies, this can be a great way to teach the sign. Simply push baby in a baby swing and then surprise them with a quick STOP as you sign STOP.
- Play Move and Freeze. It’s musical chairs. Well, sort of. Most of you probably remember how, in musical chairs, you moved to the music and then raced to find a seat when the teacher paused it. Same idea minus the chairs. Instead of racing to find a seat when the music stops, simply freeze your body. If you want to add more silliness for preschoolers or big kids, try having them freeze in silly shapes or statues when you pause the music.
- Play Red Light Green Light. This is another game I remember fondly from my childhood. It’s a little too involved for toddlers unless they have adult assistance, but it would be great fun for preschoolers with a bit of help or bigger kids on their own. One person (works well for a grownup to take this part) is the traffic light and stands a good distance ahead of the other players with his back turned to them. The traffic light calls out “green light”, which means the other players can attempt to sneak up and tap him on the shoulder. However, when he calls “red light”, they have to freeze before he turns around and catches them. Anyone the traffic light sees moving when he turns must return to the starting line. The first player to sneak up and tap the traffic light wins.
“Raindrops on roses and whiskers on kittens, doorbells and sleighbells, and warm woolen mittens, brown paper packages tied up with string, these are a few of my favorite things.” Gets you in the holiday spirit, doesn’t it? I have such great memories of watching The Sound of Music with my family at this time of year growing up. Well, here are some Kansas City favorite things to do with the under 5 crowd during the Holiday Season, with a little something for everyone, I hope. (Recommendations come from both our family and several other families in the Kindermusik program. If you have others you’d like to share, we’d love to hear them. Just add them in the comments here or on our Facebook page.):
Dec. 2-3, 5 – 8:30 pm
Now while I can’t say I’ve been to this special walk, the Gardens themselves are lovely, and I have been to similar walks in other states and loved them. Admission $7 per person. Kids 5 and under free.
Friday, Dec. 2, 7-9 pm
Saturday, Dec. 3, 5-7 pm
I have driven past the sign on Brookside Boulevard for years, but only two years ago did our family decide to find out what “Journey to Bethlehem” is all about. I must say I was very impressed with this 45-min walk-through of the story of the birth of Christ. We felt truly immersed in the story as we turned in our “census” to the Roman rulers, “bought” things at the Bethlehem market, and then were led by a shepherd all over Bethlehem to see the exciting events of that first Christmas. As an additional treat, I recently learned that one of our longtime Kindermusik families is involved, with their two kiddos performing each year. My kids loved it. However, I recommend attending well diapered and fed as you may have to stand in line for quite a while. Admission free to all.
Sunday, Dec. 18 – come see the pageant from 9:15-10 am and stay for a Birthday Party for Jesus
The angel squadron has been gathered. News of peace and joy is to be shared with a young girl named Mary, with shepherds on the hill, and with the whole world. And this isn’t just any news. This is the VERY IMPORTANT message announcing the story of the birth of Jesus Christ!
Come watch as the children of St. Andrew’s Sunday School classes present this year’s VERY IMPORTANT Christmas pageant from 9:15-10 a.m. Sunday, Dec. 18, in the undercroft! (After attending St. Andrew’s All Hallow’s Eve Party, the super fun “Boo Bash” last year I would expect this to be a delightful time. Glenda, the Children’s Minister at St. Andrew’s is so lovely and welcoming.)
Dec. 3, 9:00 am – noon – Fun with Santa and Mrs. Claus
Enjoy a Chris Cakes style pancake breakfast when you come out to meet Old St. Nick. Afterward, make a craft to take home, join Mrs. Claus for storytelling and admire the Victorian display in the conservatory. Then bundle up for a ride on the outdoor mini-train (weather permitting). Reservations are required.
Call for reservations: 816-697-2600 x209. Members: age 4 & under $5, age 5-12 $6, age 13 & up $8. Non-members: age 4 & under $7, age 5-12 $9, age 13 & up $13.
Dec. 10-11, 5:30-7 pm - Luminary Walk
Another fabulous garden to check out is Powell Gardens, though it is a bit of a drive, so you might plan to go and stay for a while if possible. Having been to other events they have held in the past, I feel sure their Luminary Walk will be beautiful. Enjoy live holiday music, homemade cookies and hot chocolate by the fireside and a walk along a candlelit path to the peaceful Marjorie Powell Allen Chapel. Admission $7/adults, $6/seniors, $3 children 5-12, children under 5 free, free for members. While you’re at the Gardens, be sure to ride the Trolley (really our favorite part.)
Every year we make a trip during the Holiday season to Union Station to see the enormous 8,000 square foot holiday model railroad exhibit. There is also a fun train ride around a huge Christmas tree at the end of the Grand Hall and special events going on all the time. Admission to the Model Railroad Experience is free. Other charges may apply to special appearances, exhibits, and train rides. (p.s. Memberships are great and can give you some nice discounts if you have a kid who likes Science City, like we do.)
Hope these ideas will help you create a few special family traditions to share for years to come!
There will be no classes the entire week of Thanksgiving, November 21-26.
But just because we’re closed doesn’t mean you can’t take Kindermusik on the Road. We’ve got some perfect ideas for filling the long hours of a trip to Grandma’s. You’d be amazed how much time a Kindermusik CD or two and a few of your favorite activities can fill.
- Load up your CD player or iPod/MP3 Player with Kindermusik Favorites.
- Give every activity a try. You might be surprised at which ones appeal on the road. I was shocked to find that the same “Warm-Up Exercises” from the Village class that I could never get my little guys to sit still for brought peals of laughter once they were buckled into a carseat!
- Don’t be afraid to shout out a little Kindermusik in the airport. People love it, and anyone would much rather see a singing, giggling child than one who is whining or fussing. (Yes, I have been known to break into actual Kindermusik dances while waiting to board a plane. Anything to keep a crying child soothed. Plus, it worked.)
- Remember, everyone gets tired of sitting too long, and adapted fingerplays and movement activities are great for getting the blood flowing as well as raising everyone’s spirits. If you’re in Village, think about trying the “Pig Jig” chant or “Toodala”. Kids from Our Time might get a kick out of “Johnny & Katie”, “The Frog in the Bog” or “All By Myself”. Imagine That! kids might love “Three Blue Pigeons” (as a fingerplay) or even making up silly words with the song “Allee Galloo.” Even circle songs like “All the Day Long” or “Ha, Ha, This A-Way” can be done in the car or on a plane, though it sometimes means stomping into thin air.
- Last but not least, use your favorite lullabies to help soothe your little one to sleep.
With a few songs from class you can turn a long day on the road into a time to build memories and make connections you’ll share together forever.
“Becoming a careful observer of young children reminds us that what might be ordinary at first glance is actually quite extraordinary. A string of “ordinary” moments for a child becomes like a bead on a necklace, each one unique, though related to the others, combining to create a work of wonder.” – from The Art of Awareness by Deb Curtis and Margie Carter
When you look at your child, what do you see? Perhaps it’s the shiny blue eyes that mirror your own. Maybe it’s that familiar nose or the dark, wavy hair. And maybe you see a budding teacher, artist or musician. But what else do you see?
Every day your child is doing something or saying something that provides a beautiful window into her developing traits and personality. During the preschool years, your child’s wondrous individuality is truly beginning to form.
Christopher was thrilled about the classroom “trip” to the imaginary Grasshopper Park. When the children were asked what animal they saw in the park, the other children responded with bird, squirrel, dog, cat, skunk. Christopher, on the other hand, saw a dinosaur — the same dinosaur he saw on a recent family trip to Science City. And that’s how it goes in class: whether your child is 6 months or 6 years old, we want to encourage a lot of individuality, creativity, and personal expression as we work to foster a classroom that truly “follows the child.”
Following the Child is a Montessori concept expressing the idea that children learn best when they are allowed to lead and even direct the learning experience. What does that look like in the classroom?
- A baby claps his hands at the start of class leading the teacher to say “Are you ready to clap hello today, Will?”
- Teachers constantly monitor and choose to extend activities, repeat activities, or move on based on the reactions and inclinations of the children.
- Babies, toddlers, preschoolers, and big kids are all given the opportunity to explore and discover how props or instruments might be used on their own. “Look at how Sarah is rolling her sticks on the floor. Sam likes to use his sticks to tap his knees. Eli is making the letter L with his sticks.” The kids’ ideas are then incorporated into the following activity.
- Older toddlers, preschoolers, and big kids begin to add to stories and songs creating new ideas and verses.
- Movement exploration is often built from the kids’ ideas and extended from there.
And the kids love it. Not only does it mean class often moves in a direction that interests them, but the validation is satisfying. Listen to the rising confidence in your child’s voice as she expresses her opinions during class. Such expression will help her as she begins to pick out different sounds while listening to a song and then describe which sounds she likes and why. Take note of his original thoughts and ideas – and how he relates a concept to a previous learning experience. Then watch his face light up as his idea is utilized in class. Sometimes the teacher even thinks their ideas are important enough to write them down! That must mean his ideas are really special!
Kindermusik allows your child to express his thoughts, actions, and imagination in his own way. There is no right or wrong. By soliciting and incorporating a child’s ideas and feelings into each lesson, we are affirming that their thoughts and ideas are important and worth exploring. Each little success is noticed and celebrated.
And you can “follow the child” at home, too. Now’s the time to “stop, look, and listen” as your child begins to cross the street of independence and individuality. More importantly, you can cross the street with him by taking some steps to help nurture his budding originality. For example:
- Let your baby take the lead as you play peekaboo, determining how long the game goes on and whether you hide or she does. When her interest wanes, let her show you what she wants to play next.
- Listen closely and respond to your toddler’s thoughts and ideas – let her lead the way when it comes to navigating around the zoo…or let her make up the rules to the game.
- Encourage the “process” by allowing your preschooler to “try,” then offer positive reinforcement for his effort.
- Solicit your big kid’s opinions on various subjects – why does she like or dislike a certain song or type of music?
Not only does “following the child” foster greater creativity, independence, and problem-solving skills, but it also helps you stop and tune in more fully to all those moments you share together, making memories that last a lifetime.
Outline of U.S. History/Early America
“Heaven and Earth never agreed better to frame a place for man’s habitation.” – Jamestown founder John Smith, 1607
The first Americans
At the height of the Ice Age, between 34,000 and 30,000 B.C., much of the world’s water was locked up in vast continental ice sheets. As a result, the Bering Sea was hundreds of meters below its current level, and a land bridge, known as Beringia, emerged between Asia and North America. At its peak, Beringia is thought to have been some 1,500 kilometers wide. A moist and treeless tundra, it was covered with grasses and plant life, attracting the large animals that early humans hunted for their survival.
The first people to reach North America almost certainly did so without knowing they had crossed into a new continent. They would have been following game, as their ancestors had for thousands of years, along the Siberian coast and then across the land bridge.
Once in Alaska, it would take these first North Americans thousands of years more to work their way through the openings in great glaciers south to what is now the United States. Evidence of early life in North America continues to be found. Little of it, however, can be reliably dated before 12,000 B.C.; a recent discovery of a hunting lookout in northern Alaska, for example, may date from almost that time. So too may the finely crafted spear points and items found near Clovis, New Mexico.
Similar artifacts have been found at sites throughout North and South America, indicating that life was probably already well established in much of the Western Hemisphere by some time prior to 10,000 B.C.
Around that time the mammoth began to die out and the bison took its place as a principal source of food and hides for these early North Americans. Over time, as more and more species of large game vanished—whether from overhunting or natural causes—plants, berries, and seeds became an increasingly important part of the early American diet. Gradually, foraging and the first attempts at primitive agriculture appeared. Native Americans in what is now central Mexico led the way, cultivating corn, squash, and beans, perhaps as early as 8,000 B.C. Slowly, this knowledge spread northward.
By 3,000 B.C., a primitive type of corn was being grown in the river valleys of New Mexico and Arizona. Then the first signs of irrigation began to appear, and, by 300 B.C., signs of early village life.
By the first centuries A.D., the Hohokam were living in settlements near what is now Phoenix, Arizona, where they built ball courts and pyramid-like mounds reminiscent of those found in Mexico, as well as a canal and irrigation system.
Mound builders and pueblos
The first Native-American group to build mounds in what is now the United States often are called the Adenans. They began constructing earthen burial sites and fortifications around 600 B.C. Some mounds from that era are in the shape of birds or serpents; they probably served religious purposes not yet fully understood.
The Adenans appear to have been absorbed or displaced by various groups collectively known as Hopewellians. One of the most important centers of their culture was found in southern Ohio, where the remains of several thousand of these mounds still can be seen. Believed to be great traders, the Hopewellians used and exchanged tools and materials across a wide region of hundreds of kilometers.
By around 500 A.D., the Hopewellians disappeared, too, gradually giving way to a broad group of tribes generally known as the Mississippians or Temple Mound culture. One city, Cahokia, near Collinsville, Illinois, is thought to have had a population of about 20,000 at its peak in the early 12th century. At the center of the city stood a huge earthen mound, flattened at the top, that was 30 meters high and 37 hectares at the base. Eighty other mounds have been found nearby.
Cities such as Cahokia depended on a combination of hunting, foraging, trading, and agriculture for their food and supplies. Influenced by the thriving societies to the south, they evolved into complex hierarchical societies that took slaves and practiced human sacrifice.
In what is now the southwest United States, the Anasazi, ancestors of the modern Hopi Indians, began building stone and adobe pueblos around the year 900. These unique and amazing apartment-like structures were often built along cliff faces; the most famous, the “cliff palace” of Mesa Verde, Colorado, had more than 200 rooms. Another site, the Pueblo Bonito ruins along New Mexico’s Chaco River, once contained more than 800 rooms.
Perhaps the most affluent of the pre-Columbian Native Americans lived in the Pacific Northwest, where the natural abundance of fish and raw materials made food supplies plentiful and permanent villages possible as early as 1,000 B.C. The opulence of their “potlatch” gatherings remains a standard for extravagance and festivity probably unmatched in early American history.
Native-American cultures
The America that greeted the first Europeans was, thus, far from an empty wilderness. It is now thought that as many people lived in the Western Hemisphere as in Western Europe at that time—about 40 million. Estimates of the number of Native Americans living in what is now the United States at the onset of European colonization range from two to 18 million, with most historians tending toward the lower figure. What is certain is the devastating effect that European disease had on the indigenous population practically from the time of initial contact. Smallpox, in particular, ravaged whole communities and is thought to have been a much more direct cause of the precipitous decline in the Indian population in the 1600s than the numerous wars and skirmishes with European settlers.
Indian customs and culture at the time were extraordinarily diverse, as could be expected, given the expanse of the land and the many different environments to which they had adapted. Some generalizations, however, are possible. Most tribes, particularly in the wooded eastern region and the Midwest, combined aspects of hunting, gathering, and the cultivation of maize and other products for their food supplies. In many cases, the women were responsible for farming and the distribution of food, while the men hunted and participated in war.
By all accounts, Native-American society in North America was closely tied to the land. Identification with nature and the elements was integral to religious beliefs. Their life was essentially clan-oriented and communal, with children allowed more freedom and tolerance than was the European custom of the day.
Although some North American tribes developed a type of hieroglyphics to preserve certain texts, Native-American culture was primarily oral, with a high value placed on the recounting of tales and dreams. Clearly, there was a good deal of trade among various groups and strong evidence exists that neighboring tribes maintained extensive and formal relations—both friendly and hostile.
The first Europeans
The first Europeans to arrive in North America—at least the first for whom there is solid evidence—were Norse, traveling west from Greenland, where Erik the Red had founded a settlement around the year 985. In 1001 his son Leif is thought to have explored the northeast coast of what is now Canada and spent at least one winter there.
While Norse sagas suggest that Viking sailors explored the Atlantic coast of North America down as far as the Bahamas, such claims remain unproven. In 1963, however, the ruins of some Norse houses dating from that era were discovered at L’Anse-aux-Meadows in northern Newfoundland, thus supporting at least some of the saga claims.
In 1497, just five years after Christopher Columbus landed in the Caribbean looking for a western route to Asia, a Venetian sailor named John Cabot arrived in Newfoundland on a mission for the British king. Although quickly forgotten, Cabot’s journey was later to provide the basis for British claims to North America. It also opened the way to the rich fishing grounds off George’s Banks, to which European fishermen, particularly the Portuguese, were soon making regular visits.
Columbus never saw the mainland of the future United States, but the first explorations of it were launched from the Spanish possessions that he helped establish. The first of these took place in 1513 when a group of men under Juan Ponce de León landed on the Florida coast near the present city of St. Augustine.
With the conquest of Mexico in 1522, the Spanish further solidified their position in the Western Hemisphere. The ensuing discoveries added to Europe’s knowledge of what was now named America—after the Italian Amerigo Vespucci, who wrote a widely popular account of his voyages to a “New World.” By 1529 reliable maps of the Atlantic coastline from Labrador to Tierra del Fuego had been drawn up, although it would take more than another century before hope of discovering a “Northwest Passage” to Asia would be completely abandoned.
Among the most significant early Spanish explorations was that of Hernando De Soto, a veteran conquistador who had accompanied Francisco Pizarro in the conquest of Peru. Leaving Havana in 1539, De Soto’s expedition landed in Florida and ranged through the southeastern United States as far as the Mississippi River in search of riches.
Another Spaniard, Francisco Vázquez de Coronado, set out from Mexico in 1540 in search of the mythical Seven Cities of Cibola. Coronado’s travels took him to the Grand Canyon and Kansas, but failed to reveal the gold or treasure his men sought. However, his party did leave the peoples of the region a remarkable, if unintended, gift: Enough of his horses escaped to transform life on the Great Plains. Within a few generations, the Plains Indians had become masters of horsemanship, greatly expanding the range and scope of their activities.
While the Spanish were pushing up from the south, the northern portion of the present-day United States was slowly being revealed through the journeys of men such as Giovanni da Verrazano. A Florentine who sailed for the French, Verrazano made landfall in North Carolina in 1524, then sailed north along the Atlantic Coast past what is now New York harbor.
A decade later, the Frenchman Jacques Cartier set sail with the hope—like the other Europeans before him—of finding a sea passage to Asia. Cartier’s expeditions along the St. Lawrence River laid the foundation for the French claims to North America, which were to last until 1763.
Following the collapse of their first Quebec colony in the 1540s, French Huguenots attempted to settle the northern coast of Florida two decades later. The Spanish, viewing the French as a threat to their trade route along the Gulf Stream, destroyed the colony in 1565. Ironically, the leader of the Spanish forces, Pedro Menéndez, would soon establish a town not far away—St. Augustine. It was the first permanent European settlement in what would become the United States.
The great wealth that poured into Spain from the colonies in Mexico, the Caribbean, and Peru provoked great interest on the part of the other European powers. Emerging maritime nations such as England, drawn in part by Francis Drake’s successful raids on Spanish treasure ships, began to take an interest in the New World.
In 1578 Humphrey Gilbert, the author of a treatise on the search for the Northwest Passage, received a patent from Queen Elizabeth to colonize the “heathen and barbarous landes” in the New World that other European nations had not yet claimed. It would be five years before his efforts could begin. When he was lost at sea, his half‑brother, Walter Raleigh, took up the mission.
In 1585 Raleigh established the first British colony in North America, on Roanoke Island off the coast of North Carolina. It was later abandoned, and a second effort two years later also proved a failure. It would be 20 years before the British would try again. This time—at Jamestown in 1607—the colony would succeed, and North America would enter a new era.
Early settlements
The early 1600s saw the beginning of a great tide of emigration from Europe to North America. Spanning more than three centuries, this movement grew from a trickle of a few hundred English colonists to a flood of millions of newcomers. Impelled by powerful and diverse motivations, they built a new civilization on the northern part of the continent.
The first English immigrants to what is now the United States crossed the Atlantic long after thriving Spanish colonies had been established in Mexico, the West Indies, and South America. Like all early travelers to the New World, they came in small, overcrowded ships. During their six- to 12-week voyages, they lived on meager rations. Many died of disease, ships were often battered by storms, and some were lost at sea.
Most European emigrants left their homelands to escape political oppression, to seek the freedom to practice their religion, or to find opportunities denied them at home. Between 1620 and 1635, economic difficulties swept England. Many people could not find work. Even skilled artisans could earn little more than a bare living. Poor crop yields added to the distress. In addition, the Commercial Revolution had created a burgeoning textile industry, which demanded an ever-increasing supply of wool to keep the looms running. Landlords enclosed farmlands and evicted the peasants in favor of sheep cultivation. Colonial expansion became an outlet for this displaced peasant population.
The colonists’ first glimpse of the new land was a vista of dense woods. The settlers might not have survived had it not been for the help of friendly Indians, who taught them how to grow native plants—pumpkin, squash, beans, and corn. In addition, the vast, virgin forests, extending nearly 2,100 kilometers along the Eastern seaboard, proved a rich source of game and firewood. They also provided abundant raw materials used to build houses, furniture, ships, and profitable items for export.
Although the new continent was remarkably endowed by nature, trade with Europe was vital for articles the settlers could not produce. The coast served the immigrants well. The whole length of shore provided many inlets and harbors. Only two areas—North Carolina and southern New Jersey—lacked harbors for ocean-going vessels.
Majestic rivers—the Kennebec, Hudson, Delaware, Susquehanna, Potomac, and numerous others—linked lands between the coast and the Appalachian Mountains with the sea. Only one river, however, the St. Lawrence—dominated by the French in Canada—offered a water passage to the Great Lakes and the heart of the continent. Dense forests, the resistance of some Indian tribes, and the formidable barrier of the Appalachian Mountains discouraged settlement beyond the coastal plain. Only trappers and traders ventured into the wilderness. For the first hundred years the colonists built their settlements compactly along the coast.
Political considerations influenced many people to move to America. In the 1630s, arbitrary rule by England’s Charles I gave impetus to the migration. The subsequent revolt and triumph of Charles’ opponents under Oliver Cromwell in the 1640s led many cavaliers—“king’s men”—to cast their lot in Virginia. In the German-speaking regions of Europe, the oppressive policies of various petty princes—particularly with regard to religion—and the devastation caused by a long series of wars helped swell the movement to America in the late 17th and 18th centuries.
The journey entailed careful planning and management, as well as considerable expense and risk. Settlers had to be transported nearly 5,000 kilometers across the sea. They needed utensils, clothing, seed, tools, building materials, livestock, arms, and ammunition. In contrast to the colonization policies of other countries and other periods, the emigration from England was not directly sponsored by the government but by private groups of individuals whose chief motive was profit.
The first of the British colonies to take hold in North America was Jamestown. On the basis of a charter which King James I granted to the Virginia (or London) company, a group of about 100 men set out for the Chesapeake Bay in 1607. Seeking to avoid conflict with the Spanish, they chose a site about 60 kilometers up the James River from the bay.
Made up of townsmen and adventurers more interested in finding gold than farming, the group was unequipped by temperament or ability to embark upon a completely new life in the wilderness. Among them, Captain John Smith emerged as the dominant figure. Despite quarrels, starvation, and Native-American attacks, his ability to enforce discipline held the little colony together through its first year.
In 1609 Smith returned to England, and in his absence, the colony descended into anarchy. During the winter of 1609-1610, the majority of the colonists succumbed to disease. Only 60 of the original 300 settlers were still alive by May 1610. That same year, the town of Henrico (now Richmond) was established farther up the James River.
It was not long, however, before a development occurred that revolutionized Virginia’s economy. In 1612 John Rolfe began cross‑breeding imported tobacco seed from the West Indies with native plants and produced a new variety that was pleasing to European taste. The first shipment of this tobacco reached London in 1614. Within a decade it had become Virginia’s chief source of revenue.
Prosperity did not come quickly, however, and the death rate from disease and Indian attacks remained extraordinarily high. Between 1607 and 1624 approximately 14,000 people migrated to the colony, yet only 1,132 were living there in 1624. On recommendation of a royal commission, the king dissolved the Virginia Company, and made it a royal colony that year.
During the religious upheavals of the 16th century, a body of men and women called Puritans sought to reform the Established Church of England from within. Essentially, they demanded that the rituals and structures associated with Roman Catholicism be replaced by simpler Calvinist Protestant forms of faith and worship. Their reformist ideas, by destroying the unity of the state church, threatened to divide the people and to undermine royal authority.
In 1607 a small group of Separatists—a radical sect of Puritans who did not believe the Established Church could ever be reformed—departed for Leyden, Holland, where the Dutch granted them asylum. However, the Calvinist Dutch restricted them mainly to low-paid laboring jobs. Some members of the congregation grew dissatisfied with this discrimination and resolved to emigrate to the New World.
In 1620, a group of Leyden Puritans secured a land patent from the Virginia Company. Numbering 101, they set out for Virginia on the Mayflower. A storm sent them far north and they landed in New England on Cape Cod. Believing themselves outside the jurisdiction of any organized government, the men drafted a formal agreement to abide by “just and equal laws” drafted by leaders of their own choosing. This was the Mayflower Compact.
In December the Mayflower reached Plymouth harbor; the Pilgrims began to build their settlement during the winter. Nearly half the colonists died of exposure and disease, but neighboring Wampanoag Indians provided the information that would sustain them: how to grow maize. By the next fall, the Pilgrims had a plentiful crop of corn, and a growing trade based on furs and lumber.
A new wave of immigrants arrived on the shores of Massachusetts Bay in 1630 bearing a grant from King Charles I to establish a colony. Many of them were Puritans whose religious practices were increasingly prohibited in England. Their leader, John Winthrop, urged them to create a “city upon a hill” in the New World—a place where they would live in strict accordance with their religious beliefs and set an example for all of Christendom.
The Massachusetts Bay Colony was to play a significant role in the development of the entire New England region, in part because Winthrop and his Puritan colleagues were able to bring their charter with them. Thus the authority for the colony’s government resided in Massachusetts, not in England.
Under the charter’s provisions, power rested with the General Court, which was made up of “freemen” required to be members of the Puritan, or Congregational, Church. This guaranteed that the Puritans would be the dominant political as well as religious force in the colony. The General Court elected the governor, who for most of the next generation would be John Winthrop.
The rigid orthodoxy of the Puritan rule was not to everyone’s liking. One of the first to challenge the General Court openly was a young clergyman named Roger Williams, who objected to the colony’s seizure of Indian lands and advocated separation of church and state. Another dissenter, Anne Hutchinson, challenged key doctrines of Puritan theology. Both they and their followers were banished.
Williams purchased land from the Narragansett Indians in what is now Providence, Rhode Island, in 1636. In 1644, a sympathetic Puritan-controlled English Parliament gave him the charter that established Rhode Island as a distinct colony where complete separation of church and state as well as freedom of religion was practiced.
So‑called heretics like Williams were not the only ones who left Massachusetts. Orthodox Puritans, seeking better lands and opportunities, soon began leaving Massachusetts Bay Colony. News of the fertility of the Connecticut River Valley, for instance, attracted the interest of farmers having a difficult time with poor land. By the early 1630s, many were ready to brave the danger of Indian attack to obtain level ground and deep, rich soil. These new communities often eliminated church membership as a prerequisite for voting, thereby extending the franchise to ever larger numbers of men.
At the same time, other settlements began cropping up along the New Hampshire and Maine coasts, as more and more immigrants sought the land and liberty the New World seemed to offer.
New Netherland and Maryland
Hired by the Dutch East India Company, Henry Hudson in 1609 explored the area around what is now New York City and the river that bears his name, to a point probably north of present-day Albany, New York. Subsequent Dutch voyages laid the basis for their claims and early settlements in the area.
As with the French to the north, the first interest of the Dutch was the fur trade. To this end, they cultivated close relations with the Five Nations of the Iroquois, who were the key to the heartland from which the furs came. In 1617 Dutch settlers built a fort at the junction of the Hudson and the Mohawk Rivers, where Albany now stands.
Settlement on the island of Manhattan began in the early 1620s. In 1624, the island was purchased from local Native Americans for the reported price of $24. It was promptly renamed New Amsterdam.
In order to attract settlers to the Hudson River region, the Dutch encouraged a type of feudal aristocracy, known as the “patroon” system. The first of these huge estates were established in 1630 along the Hudson River. Under the patroon system, any stockholder, or patroon, who could bring 50 adults to his estate over a four-year period was given a 25-kilometer river-front plot, exclusive fishing and hunting privileges, and civil and criminal jurisdiction over his lands. In turn, he provided livestock, tools, and buildings. The tenants paid the patroon rent and gave him first option on surplus crops.
Further to the south, a Swedish trading company with ties to the Dutch attempted to set up its first settlement along the Delaware River three years later. Without the resources to consolidate its position, New Sweden was gradually absorbed into New Netherland, and later, Pennsylvania and Delaware.
In 1632 the Catholic Calvert family obtained a charter for land north of the Potomac River from King Charles I in what became known as Maryland. As the charter did not expressly prohibit the establishment of non-Protestant churches, the colony became a haven for Catholics. Maryland’s first town, St. Mary’s, was established in 1634 near where the Potomac River flows into the Chesapeake Bay.
While establishing a refuge for Catholics, who faced increasing persecution in Anglican England, the Calverts were also interested in creating profitable estates. To this end, and to avoid trouble with the British government, they also encouraged Protestant immigration.
Maryland’s royal charter had a mixture of feudal and modern elements. On the one hand the Calvert family had the power to create manorial estates. On the other, they could only make laws with the consent of freemen (property holders). They found that in order to attract settlers—and make a profit from their holdings—they had to offer people farms, not just tenancy on manorial estates. The number of independent farms grew in consequence. Their owners demanded a voice in the affairs of the colony. Maryland’s first legislature met in 1635.
Colonial-Indian relations
By 1640 the British had solid colonies established along the New England coast and the Chesapeake Bay. In between were the Dutch and the tiny Swedish community. To the west were the original Americans, then called Indians.
Sometimes friendly, sometimes hostile, the Eastern tribes were no longer strangers to the Europeans. Although Native Americans benefited from access to new technology and trade, the disease and thirst for land that the early settlers also brought posed a serious challenge to their long-established way of life.
At first, trade with the European settlers brought advantages: knives, axes, weapons, cooking utensils, fishhooks, and a host of other goods. Those Indians who traded initially had significant advantage over rivals who did not. In response to European demand, tribes such as the Iroquois began to devote more attention to fur trapping during the 17th century. Furs and pelts provided tribes the means to purchase colonial goods until late into the 18th century.
Early colonial-Native-American relations were an uneasy mix of cooperation and conflict. On the one hand, there were the exemplary relations that prevailed during the first half century of Pennsylvania’s existence. On the other were a long series of setbacks, skirmishes, and wars, which almost invariably resulted in an Indian defeat and further loss of land.
The first of the important Native-American uprisings occurred in Virginia in 1622, when some 347 whites were killed, including a number of missionaries who had just recently come to Jamestown.
White settlement of the Connecticut River region touched off the Pequot War in 1637. In 1675 King Philip, the son of the native chief who had made the original peace with the Pilgrims in 1621, attempted to unite the tribes of southern New England against further European encroachment of their lands. In the struggle, however, Philip lost his life and many Indians were sold into servitude.
The steady influx of settlers into the backwoods regions of the Eastern colonies disrupted Native-American life. As more and more game was killed off, tribes were faced with the difficult choice of going hungry, going to war, or moving and coming into conflict with other tribes to the west.
The Iroquois, who inhabited the area below lakes Ontario and Erie in northern New York and Pennsylvania, were more successful in resisting European advances. In 1570 five tribes joined to form the most complex Native-American nation of its time, the “Ho-De-No-Sau-Nee,” or League of the Iroquois. The league was run by a council of 50 representatives drawn from the five member tribes. The council dealt with matters common to all the tribes, but it had no say in how the free and equal tribes ran their day-to-day affairs. No tribe was allowed to make war by itself. The council passed laws to deal with crimes such as murder.
The Iroquois League was a strong power in the 1600s and 1700s. It traded furs with the British and sided with them against the French in the war for the dominance of America between 1754 and 1763. The British might not have won that war otherwise.
The Iroquois League stayed strong until the American Revolution. Then, for the first time, the council could not reach a unanimous decision on whom to support. Member tribes made their own decisions, some fighting with the British, some with the colonists, some remaining neutral. As a result, everyone fought against the Iroquois. Their losses were great and the league never recovered.
Second generation of British colonies
The religious and civil conflict in England in the mid-17th century limited immigration, as well as the attention the mother country paid the fledgling American colonies.
In part to provide for the defense measures England was neglecting, the Massachusetts Bay, Plymouth, Connecticut, and New Haven colonies formed the New England Confederation in 1643. It was the European colonists’ first attempt at regional unity.
The early history of the British settlers reveals a good deal of contention—religious and political—as groups vied for power and position among themselves and their neighbors. Maryland, in particular, suffered from the bitter religious rivalries that afflicted England during the era of Oliver Cromwell. One of the casualties was the colony’s Toleration Act, which was revoked in the 1650s. It was soon reinstated, however, along with the religious freedom it guaranteed.
With the restoration of King Charles II in 1660, the British once again turned their attention to North America. Within a brief span, the first European settlements were established in the Carolinas and the Dutch driven out of New Netherland. New proprietary colonies were established in New York, New Jersey, Delaware, and Pennsylvania.
The Dutch settlements had been ruled by autocratic governors appointed in Europe. Over the years, the local population had become estranged from them. As a result, when the British colonists began encroaching on Dutch claims in Long Island and Manhattan, the unpopular governor was unable to rally the population to their defense. New Netherland fell in 1664. The terms of the capitulation, however, were mild: The Dutch settlers were able to retain their property and worship as they pleased.
As early as the 1650s, the Albemarle Sound region off the coast of what is now northern North Carolina was inhabited by settlers trickling down from Virginia. The first proprietary governor arrived in 1664. The first town in Albemarle, a remote area even today, was not established until the arrival of a group of French Huguenots in 1704.
In 1670 the first settlers, drawn from New England and the Caribbean island of Barbados, arrived in what is now Charleston, South Carolina. An elaborate system of government, to which the British philosopher John Locke contributed, was prepared for the new colony. One of its prominent features was a failed attempt to create a hereditary nobility. One of the colony’s least appealing aspects was the early trade in Indian slaves. With time, however, timber, rice, and indigo gave the colony a worthier economic base.
In 1681 William Penn, a wealthy Quaker and friend of Charles II, received a large tract of land west of the Delaware River, which became known as Pennsylvania. To help populate it, Penn actively recruited a host of religious dissenters from England and the continent—Quakers, Mennonites, Amish, Moravians, and Baptists.
When Penn arrived the following year, there were already Dutch, Swedish, and English settlers living along the Delaware River. It was there he founded Philadelphia, the “City of Brotherly Love.”
In keeping with his faith, Penn was motivated by a sense of equality not often found in other American colonies at the time. Thus, women in Pennsylvania had rights long before they did in other parts of America. Penn and his deputies also paid considerable attention to the colony’s relations with the Delaware Indians, ensuring that they were paid for land on which the Europeans settled.
Georgia was settled in 1732, the last of the 13 colonies to be established. Lying close to, if not actually inside, the boundaries of Spanish Florida, the region was viewed as a buffer against Spanish incursion. But it had another unique quality: The man charged with Georgia’s fortifications, General James Oglethorpe, was a reformer who deliberately set out to create a refuge where the poor and former prisoners would be given new opportunities.
Settlers, slaves, and servants
Men and women with little active interest in a new life in America were often induced to make the move to the New World by the skillful persuasion of promoters. William Penn, for example, publicized the opportunities awaiting newcomers to the Pennsylvania colony. Judges and prison authorities offered convicts a chance to migrate to colonies like Georgia instead of serving prison sentences.
But few colonists could finance the cost of passage for themselves and their families to make a start in the new land. In some cases, ships’ captains received large rewards from the sale of service contracts for poor migrants, called indentured servants, and every method from extravagant promises to actual kidnapping was used to take on as many passengers as their vessels could hold.
In other cases, the expenses of transportation and maintenance were paid by colonizing agencies like the Virginia or Massachusetts Bay Companies. In return, indentured servants agreed to work for the agencies as contract laborers, usually for four to seven years. Free at the end of this term, they would be given “freedom dues,” sometimes including a small tract of land.
Perhaps half the settlers living in the colonies south of New England came to America under this system. Although most of them fulfilled their obligations faithfully, some ran away from their employers. Nevertheless, many of them were eventually able to secure land and set up homesteads, either in the colonies in which they had originally settled or in neighboring ones. No social stigma was attached to a family that had its beginning in America under this semi-bondage. Every colony had its share of leaders who were former indentured servants.
There was one very important exception to this pattern: African slaves. The first black Africans were brought to Virginia in 1619, just 12 years after the founding of Jamestown. Initially, many were regarded as indentured servants who could earn their freedom. By the 1660s, however, as the demand for plantation labor in the Southern colonies grew, the institution of slavery began to harden around them, and Africans were brought to America in shackles for a lifetime of involuntary servitude.
The enduring mystery of the Anasazi
Time-worn pueblos and dramatic cliff towns, set amid the stark, rugged mesas and canyons of Colorado and New Mexico, mark the settlements of some of the earliest inhabitants of North America, the Anasazi (a Navajo word meaning “ancient ones”).
By 500 A.D. the Anasazi had established some of the first villages in the American Southwest, where they hunted and grew crops of corn, squash, and beans. The Anasazi flourished over the centuries, developing sophisticated dams and irrigation systems; creating a masterful, distinctive pottery tradition; and carving multiroom dwellings into the sheer sides of cliffs that remain among the most striking archaeological sites in the United States today.
Yet by the year 1300, they had abandoned their settlements, leaving their pottery, implements, even clothing—as though they intended to return—and seemingly vanished into history. Their homeland remained empty of human beings for more than a century—until the arrival of new tribes, such as the Navajo and the Ute, followed by the Spanish and other European settlers.
The story of the Anasazi is tied inextricably to the beautiful but harsh environment in which they chose to live. Early settlements, consisting of simple pithouses scooped out of the ground, evolved into sunken kivas (underground rooms) that served as meeting and religious sites. Later generations developed the masonry techniques for building square, stone pueblos. But the most dramatic change in Anasazi living was the move to the cliff sides below the flat-topped mesas, where the Anasazi carved their amazing, multilevel dwellings.
The Anasazi lived in a communal society. They traded with other peoples in the region, but signs of warfare are few and isolated. And although the Anasazi certainly had religious and other leaders, as well as skilled artisans, social or class distinctions were virtually nonexistent.
Religious and social motives undoubtedly played a part in the building of the cliff communities and their final abandonment. But the struggle to raise food in an increasingly difficult environment was probably the paramount factor. As populations grew, farmers planted larger areas on the mesas, causing some communities to farm marginal lands, while others left the mesa tops for the cliffs. But the Anasazi couldn’t halt the steady loss of the land’s fertility from constant use, nor withstand the region’s cyclical droughts. Analysis of tree rings, for example, shows that a drought lasting 23 years, from 1276 to 1299, finally forced the last groups of Anasazi to leave permanently.
Although the Anasazi dispersed from their ancestral homeland, their legacy remains in the remarkable archaeological record that they left behind, and in the Hopi, Zuni, and other Pueblo peoples who are their descendants. | http://en.wikibooks.org/wiki/Outline_of_U.S._History/Early_America | 13 |
Fascism
I. Introduction
The term fascism was first used by Italian dictator Benito Mussolini in 1919. The term comes from the Italian word fascio, which means “union” or “league.” It also refers to the ancient Roman symbol of power, the fasces, a bundle of rods bound around an ax, which represented civic unity and the authority of Roman officials to punish wrongdoers.
Fascist movements surfaced in most European countries and in some former European colonies in the early 20th century. Fascist political parties and movements capitalized on the intense patriotism that emerged as a response to widespread social and political uncertainty after World War I (1914-1918) and the Russian Revolution of 1917. With the important exceptions of Italy and Germany, however, fascist movements failed in their attempts to seize political power. In those two countries, fascists won control of the state and attempted to dominate all of Europe, resulting in millions of deaths in the Holocaust and World War II (1939-1945). Because fascism had a decisive impact on European history from the end of World War I until the end of World War II, the period from 1918 to 1945 is sometimes called the fascist era. Fascism was widely discredited after Italy and Germany lost World War II, but it persists today in new forms.
Some scholars view fascism in narrow terms, and some even insist that the ideology was limited to Italy under Mussolini. When the term is capitalized as Fascism, it refers to the Italian movement. But other writers define fascism more broadly to include many movements, from Italian Fascism to contemporary neo-Nazi movements in the United States. This article relies on a very broad definition of fascism, one that includes most movements that aim for total social renewal based on the national community while rejecting liberal democratic institutions.
II. Major Elements
Scholars disagree over how to define the basic elements of fascism. Marxist historians and political scientists (that is, those who base their approach on the writings of German political theorist Karl Marx) view fascism as a form of politics that is cynically adopted by governments to support capitalism and to prevent a socialist revolution. These scholars have applied the label of fascism to many authoritarian regimes that came to power between World War I and World War II, such as those in Portugal, Austria, Poland, and Japan. Marxist scholars also label as fascist some authoritarian governments that emerged after World War II, including regimes in Argentina, Chile, Greece, and South Africa.
Some non-Marxist scholars have dismissed fascism as a form of authoritarianism that is reactionary, responding to political and social developments but without any objective beyond the exercise of power. Some of these scholars view fascism as a crude, barbaric form of nihilism, asserting that it lacks any coherent ideals or ideology. Many other historians and political scientists agree that fascism has a set of basic traits—a fascist minimum—but tend to disagree over what to include in the definition. Scholars disagree, for example, over issues such as whether the concept of fascism includes Nazi Germany and the Vichy regime (the French government set up in southern France in 1940 after the Nazis had occupied the rest of the country).
Beginning in the 1970s, some historians and political scientists began to develop a broader definition of fascism, and by the 1990s many scholars had embraced this approach. This new approach emphasizes the ways in which fascist movements attempt revolutionary change and their central focus on popularizing myths of national or ethnic renewal. Seen from this perspective, all forms of fascism have three common features: anticonservatism, a myth of ethnic or national renewal, and a conception of a nation in crisis.
A. Anticonservatism
Fascist movements usually try to retain some supposedly healthy parts of the nation's existing political and social life, but they place more emphasis on creating a new society. In this way fascism is directly opposed to conservatism—the idea that it is best to avoid dramatic social and political change. Instead, fascist movements set out to create a new type of total culture in which values, politics, art, social norms, and economic activity are all part of a single organic national community. In Nazi Germany, for example, the fascist government in the 1930s tried to create a new Volksgemeinschaft (people's community) built around a concept of racial purity. A popular culture of Nazi books, movies, and artwork that celebrated the ideal of the so-called new man and new woman supported this effort. With this idealized people's community in mind, the government created new institutions and policies (partly as propaganda) to build popular support. But the changes were also an attempt to transform German society in order to overcome perceived sources of national weakness. In the same way, in Italy under Mussolini the government built new stadiums and held large sporting events, sponsored filmmakers, and financed the construction of huge buildings as monuments to fascist ideas. Many scholars therefore conclude that fascist movements in Germany and Italy were more than just reactionary political movements. These scholars argue that these fascist movements also represented attempts to create revolutionary new modern states.
B. Myth of National or Ethnic Renewal
Even though fascist movements try to bring about revolutionary change, they emphasize the revival of a mythical ethnic, racial, or national past. Fascists revise conventional history to create a vision of an idealized past. These mythical histories claim that former national greatness has been destroyed by such developments as the mixing of races, the rise of powerful business groups, and a loss of a shared sense of the nation. Fascist movements set out to regain the heroic spirit of this lost past through radical social transformations. In Nazi Germany, for example, the government tried to "purify" the nation by killing millions of Jews and members of other minority groups. The Nazis believed they could create a harmonious community whose values were rooted in an imaginary past in which there were no differences of culture, "deviant" ideologies, or "undesirable" genetic traits.
Because fascist ideologies place great value on creating a renewed and unified national or ethnic community, they are hostile to most other ideologies. In addition to rejecting conservatism, fascist movements also oppose such doctrines as liberalism, individualism, materialism, and communism. In general, fascists stand against all scientific, economic, religious, academic, cultural, and leisure activities that do not serve their vision of national political life.
C. Idea of a Nation in Crisis
A fascist movement almost always asserts that the nation faces a profound crisis. Sometimes fascists define the nation as identical with the nation-state (a country and people with the same borders), but in other cases the nation is defined as a unique ethnic group with members in many countries. In either case, the fascists present the national crisis as resolvable only through a radical political transformation. Fascists differ over how the transformation will occur. Some see a widespread change in values as coming before a radical political transformation. Others argue that the political transformation must come first and will then be followed by a change in values. Fascists claim that the nation has entered a dangerous age of mediocrity, weakness, and decline. They are convinced that through their timely action they can save the nation from itself. Fascists may assert the need to take drastic action against a nation's "inner" enemies.
Fascists promise that with their help the national crisis will end and a new age will begin that restores the people to a sense of belonging, purpose, and greatness. The end result of the fascist revolution, they believe, will be the emergence of a new man and new woman. This new man and new woman will be fully developed human beings, uncontaminated by selfish desires for individual rights and self-expression and devoted only to an existence as part of the renewed nation's destiny.
III. How Fascist Movements Differ
Because each country's history is unique, each fascist movement creates a particular vision of an idealized past depending on the country's history. Fascist movements sometimes combine quasi-scientific racial and economic theories with these mythical pasts to form a larger justification for the fascist transformation, but also may draw on religious beliefs. Even within one country, separate fascist movements sometimes arise, each creating its own ideological variations based on the movement's particular interpretation of politics and history. In Italy after World War I, for example, the Fascist Party led by Benito Mussolini initially faced competition from another fascist movement led by war hero Gabriele D'Annunzio.
A. Intellectual Foundations
The diversity of fascist movements means that each has its own individual intellectual and cultural foundation. Some early fascist movements were inspired in part by early 20th century social and political thought. In this period the French philosopher Georges Sorel built on earlier radical theories to argue that social change should be brought about through violent strikes and acts of sabotage organized by trade unions. Sorel's emphasis on violence seems to have influenced some proponents of fascism. The late 19th and early 20th centuries also saw an increasing intellectual preoccupation with racial differences. From this development came fascism's tendency toward ethnocentrism—the belief in the superiority of a particular race. The English-born German historian Houston Stewart Chamberlain, for example, proclaimed the superiority of the German race, arguing that Germans descended from genetically superior bloodlines. Some early fascists also interpreted Charles Darwin's theory of evolution to mean that some races of people were inherently superior. They argued that this meant that the “survival of the fittest” required the destruction of supposedly inferior peoples.
But these philosophical influences were not the main inspiration for most fascist movements. Far more important was the example set by the fascist movements in Germany and Italy. Between World War I and World War II fascist movements and parties throughout Europe imitated Italian Fascism and German Nazism. Since 1945 many racially inclined fascist organizations have been inspired by Nazism. These new Nazi movements are referred to as neo-Nazis because they modify Nazi doctrine and because the original Nazi movement inspires them.
B. Views on Race
Though all fascist movements are nationalist, some fascist ideologies regard an existing set of national boundaries as an artificial constraint on an authentic people or ethnic group living within those boundaries. Nazism, for example, sought to extend the frontiers of the German state to include all major concentrations of ethnic Germans. This ethnic concept of Germany was closely linked to an obsession with restoring the biological purity of the race, known as the Aryan race, and the destruction of the allegedly degenerate minorities. The result was not only the mass slaughter of Jews and Gypsies (Roma), but the sterilization or killing of hundreds of thousands of ethnic Germans who were members of religious minorities or mentally or physically disabled, or for some other reason deemed by self-designated race experts not to have lives worth living. The Nazis' emphasis on a purified nation also led to the social exclusion or murder of other alleged deviants, such as Communists, homosexuals, and Jehovah's Witnesses.
The ultranationalism and ethnocentrism of fascist ideologies makes all of them racist. Some forms of fascism are also anti-Semitic (hostile to Jews) or xenophobic (fearful of foreign people). Some fascist movements, such as the Nazis, also favor eugenics—attempts to supposedly improve a race through controlled reproduction. But not all fascist movements have this hostility toward racial and ethnic differences. Some modern forms of fascism, in fact, preach a “love of difference” and emphasize the need to preserve distinct ethnic identities. As a result, these forms of fascism strongly oppose immigration in order to maintain the purity of the nation. Some scholars term this approach differentialism, and point to right-wing movements in France during the 1990s as examples of this form of fascism.
Some modern fascist variants have broken with the early fascist movements in another important way. Many early fascist movements sought to expand the territory under their control, but few modern fascist movements take this position. Instead of attempting to take new territory, most modern fascists seek to racially purify existing nations. Some set as their goal a Europe of ethnically pure nations or a global Aryan solidarity.
C. Attitudes Toward Religion
In addition, fascist movements do not share a single approach to religion. Nazism was generally hostile to organized religion, and Hitler's government arrested hundreds of priests in the late 1930s. Some other early fascist movements, however, tried to identify themselves with a national church. In Italy, for example, the Fascists in the 1930s attempted to gain legitimacy by linking themselves to the Catholic Church. In the same way, small fascist groups in the United States in the 1980s and 1990s combined elements of neo-Nazi or Aryan paganism with Christianity. In all these cases, however, the fascist movements have rejected the original spirit of Christianity by celebrating violence and racial purity.
D. Emphasis on Militarism
Fascist movements also vary in their reliance on military-style organization. Some movements blend elite paramilitary organizations (military groups staffed by civilians) with a large political party led by a charismatic leader. In most cases, these movements try to rigidly organize the lives of an entire population. Fascism took on this military or paramilitary character partly because World War I produced heightened nationalism and militarism in many countries. Even in these movements, however, there were many purely intellectual fascists who never served in the military. Nazi Germany and Italy under Mussolini stand as the most notable examples of a paramilitary style of organization. Since the end of World War II, however, the general public revulsion against war and anything resembling Nazism created widespread hostility to paramilitary political organizations. As a result, fascist movements since the end of World War II have usually relied on new nonparamilitary forms of organization. There have been some fascist movements that have paramilitary elements, but these have been small compared to the fascist movements in Germany and Italy of the 1930s and 1940s. In addition, most of the paramilitary-style fascist movements formed since World War II have lacked a single leader who could serve as a symbol of the movement, or have even intentionally organized themselves into leaderless terrorist cells. Just as most fascist movements in the postwar period downplayed militarism, they have also abandoned some of the more ambitious political programs created in Nazi Germany and Fascist Italy. Specifically, recent movements have rejected the goals of corporatism (government-coordinated economics), the idea that the state symbolizes the people and embodies the national will, and attempts to include all social groups in a single totalitarian movement.
E. Use of Political Rituals
Another feature of fascism that has largely disappeared from movements after World War II is the use of quasi-religious rituals, spectacular rallies, and the mass media to generate mass support. Both Nazism and Italian Fascism held rallies attended by hundreds of thousands, created a new calendar of holidays celebrating key events in the regime's history, and conducted major sporting events or exhibitions. All of this was intended to convince people that they lived in a new era in which history itself had been transformed. In contrast to what fascists view as the absurdity and emptiness of life under liberal democracy, life under fascism was meant to be experienced as historical, life-giving, and beautiful. Since 1945, however, fascist movements have lacked the mass support to allow the staging of such theatrical forms of politics. The movements have not, however, abandoned the vision of creating an entirely new historical era.
IV. Compared to Other Radical Right-Wing Ideologies
Although fascism comes in many forms, not all radical right-wing movements are fascist. In France in the 1890s, for example, the Action Française movement started a campaign to overthrow the democratic government of France and restore the king to power. Although this movement embraced the violence and the antidemocratic tendencies of fascism, it did not develop the fascist myth of revolutionary rebirth through popular power. There have also been many movements that were simply nationalist but with a right-wing political slant. In China, for example, the Kuomintang (The Chinese National People's Party), led by Chiang Kai-shek, fought leftist revolutionaries until Communists won control of China in 1949. Throughout the 20th century this type of right-wing nationalism was common in many military dictatorships in Latin America, Africa, and Asia. Fascism should also be distinguished from right-wing separatist movements that set out to create a new nation-state rather than to regenerate an existing one. This would exclude cases such as the Nazi puppet regime in Croatia during World War II. This regime, known as the Ustaše government, relied on paramilitary groups to govern, and hoped that their support for Nazism would enable Croatia to break away from Yugoslavia. This separatist goal distinguishes the Ustaše from genuine fascist movements.
Fascism also stands apart from regimes that are based on racism but do not pursue the goal of creating a revolutionary new order. In the 1990s some national factions in Bosnia and Herzegovina engaged in ethnic cleansing, the violent removal of targeted ethnic groups with the objective of creating an ethnically pure territory. In 1999 the Serbian government's insistence upon pursuing this policy against ethnic Albanians in the province of Kosovo led to military intervention by the North Atlantic Treaty Organization (NATO). But unlike fascist movements, the national factions in Yugoslavia did not set out to destroy all democratic institutions. Instead these brutal movements hoped to create ethnically pure democracies, even though they used violence and other antidemocratic methods. Another example of a racist, but not fascist, organization was the Ku Klux Klan in the 1920s, which became a national mass movement in the United States. Although racial hatred was central to the Klan's philosophy, its goals were still reactionary rather than revolutionary. The Klan hoped to control black people, but it did not seek to build an entirely new society, as a true fascist movement would have. Since 1945, however, the Klan has become increasingly hostile to the United States government and has established links with neo-Nazi groups. In the 1980s and 1990s this loose alliance of antigovernment racists became America's most significant neo-fascist movement.
V. The Origins of Fascism
Despite the many forms that fascism takes, all fascist movements are rooted in two major historical trends. First, in late 19th-century Europe mass political movements developed as a challenge to the control of government and politics by small groups of social elites or ruling classes. For the first time, many countries saw the growth of political organizations with membership numbering in the thousands or even millions. Second, fascism gained popularity because many intellectuals, artists, and political thinkers in the late 19th century began to reject the philosophical emphasis on rationality and progress that had emerged from the 18th-century intellectual movement known as the Enlightenment.
These two trends had many effects. For example, new forms of popular racism and nationalism arose that openly celebrated irrationality and vitalism—the idea that human life is self-directed and not subject to predictable rules and laws. This line of thinking led to calls for a new type of nation that would overcome class divisions and create a sense of historical belonging for its people. For many people, the death and brutality of World War I showed that rationality and progress were not inherent in humanity, and that a radically new direction had to be taken by Western civilization if it was to survive. World War I also aroused intense patriotism that continued after the war. These sentiments became the basis of mass support for national socialist movements that promised to confront the disorder in the world. Popular enthusiasm for such movements was especially strong in Germany and Italy, which had only become nation-states in the 19th century and whose parliamentary traditions were weak. Despite having fought on opposite sides, both countries emerged from the war to face political instability and a widespread feeling that the nation had been humiliated in the war and by the settlement terms of the Treaty of Versailles. In addition, many countries felt threatened by Communism because of the success of the Bolsheviks during the Russian Revolution.
VI. The First Fascist Movement: Italy
A. Mussolini's Fasci
The first fascist movement developed in Italy after World War I. Journalist and war veteran Benito Mussolini served as the guiding force behind the new movement. Originally a Marxist, by 1909 Mussolini was convinced that a national rather than an international revolution was necessary, but he was unable to find a suitable catalyst or vehicle for the populist revolutionary energies it demanded. At first he looked to the Italian Socialist Party and edited its newspaper Avanti! (Forward!). But when war broke out in Europe in 1914, he saw it as an opportunity to galvanize patriotic energies and create the spirit of heroism and self-sacrifice necessary for the country's renewal. He thus joined the interventionist campaign, which urged Italy to enter the war, and in 1914 founded the newspaper Il Popolo d'Italia (The People of Italy) to press the case for intervention. After Italy declared war on Austria-Hungary in May 1915, Mussolini used Il Popolo d'Italia to persuade Italians that the war was a turning point for their country. Mussolini argued that when the frontline combat soldiers returned from the war, they would form a new elite that would bring about a new type of state and transform Italian society. The new elite would spread community and patriotism, and introduce sweeping changes in every part of society.
Mussolini established the Fasci Italiani di Combattimento (Italian Combat Veteran's League) in 1919 to channel the revolutionary energies of the returning soldiers. The group's first meeting assembled a small group of war veterans, revolutionary syndicalists (socialists who worked for a national revolution as the first step toward an international one), and futurists (a group of poets who wanted Italian politics and art to fuse in a celebration of modern technological society's dramatic break with the past). The Fasci di Combattimento, sometimes known simply as the Fasci, initially adopted a leftist agenda, including democratic reform of the government, increased rights for workers, and a redistribution of wealth.
In the elections of 1919 Fascist candidates won few votes. Fascism gained widespread support only in 1920 after the Socialist Party organized militant strikes in Turin and Italy's other northern industrial cities. The Socialist campaign caused chaos through much of the country, leading to concerns that further Socialist victories could damage the Italian economy. Fear of the Socialists spurred the formation of hundreds of new Fascist groups throughout Italy. Members of these groups formed the Blackshirts—paramilitary squadre (squads) that violently attacked Socialists and attempted to stifle their political activities.
B. Mussolini's Rise to Power
The Fascists gained widespread support as a result of their effective use of violence against the Socialists. Prime Minister Giovanni Giolitti then gave Mussolini's movement respectability by including Fascist candidates in the government coalition bloc that campaigned in the May 1921 elections. The elections gave the Fascists, who organized themselves later that year as the National Fascist Party (PNF), 35 seats in the Italian legislature. The threat from the Socialists weakened, however, and the Fascists seemed to have little chance of winning more power until Mussolini threatened to stage a coup d'état in October 1922. The Fascists showed their militant intentions in the March on Rome, in which about 25,000 black-shirted Fascists staged demonstrations throughout the capital. Although the Italian government moved to crush the protest, King Victor Emmanuel III refused to sign a decree that would have imposed martial law and enabled the military to destroy the Fascists.
Instead the king invited Mussolini to join a coalition government along with Giolitti. Mussolini accepted the bargain, but it was another two years before Fascism became an authoritarian regime. Early in 1925 Mussolini seized dictatorial powers during a national political crisis sparked by the Blackshirts' murder of socialist Giacomo Matteotti, Mussolini's most outspoken parliamentary critic.
C. Fascist Consolidation of Power
Between 1925 and 1931, the Fascists consolidated power through a series of new laws that provided a legal basis for Italy's official transformation into a single-party state. The government abolished independent political parties and trade unions and took direct control of regional and local governments. The Fascists sharply curbed freedom of the press and assumed sweeping powers to silence political opposition. The government created a special court and police force to suppress so-called anti-Fascism. In principle Mussolini headed the Fascist Party and, as head of government, ruled in consultation with the Fascist Grand Council. In reality, however, he increasingly became an autocrat answerable to no one. Mussolini was able to retain power because of his success in presenting himself as an inspired Duce (Leader) sent by providence to make Italy great once more.
The Fascist government soon created mass organizations to regiment the nation's youth as well as adult leisure time. The Fascists also established a corporatist economic system, in which the government, business, and labor unions collectively formulated national economic policies. The system was intended to harmonize the interests of workers, managers, and the state. In practice, however, Fascist corporatism retarded technological progress and destroyed workers' rights. Mussolini also pulled off a major diplomatic success when he signed the Lateran Treaty with the Vatican in 1929, which settled a long-simmering dispute over the Catholic Church's role in Italian politics. This marked the first time since Italian unification that the Catholic Church and the government agreed over their respective roles. Between 1932 and 1934 millions of Italians attended the Exhibition of the Fascist Revolution in Rome, staged by the government to mark Fascism's first ten years in power. By this point the regime could plausibly boast that it had completed the work of national unification begun during the Risorgimento (the 19th-century movement for Italian unification) and had turned Italy into a nation that enjoyed admiration and respect abroad.
For a time it seemed that Italy had recovered from the national humiliation, political chaos, and social division following World War I and was managing to avoid the global economic and political crises caused by the Great Depression. Mussolini could claim that he had led the country through a true revolution with a minimum of bloodshed and repression, restoring political stability, national pride, and economic growth. All over the country, Mussolini's speeches drew huge crowds, suggesting that most Italians supported the Fascist government. Many countries closely watched the Italian corporatist economic experiment. Some hoped that it would prove to be a Third Way—an alternative economic policy between free-market capitalism and communism. Mussolini won the respect of diplomats all over the world because of his opposition to Bolshevism, and he was especially popular in the United States and Britain. To many, the Fascist rhetoric of Italy's rebirth seemed to be turning into a reality.
D. The Fall of Italian Fascism
Two events can be seen as marking the turning point in Fascism's fortunes. First, Adolf Hitler became chancellor of Germany in January 1933, which meant that Mussolini had the support of a powerful fascist ally. Second, Italy invaded Ethiopia in October 1935 (see Italy: The Ethiopian Campaign). In less than a year the Fascist army crushed the poorly equipped and vastly outnumbered Ethiopians. Mussolini's power peaked at this point, as he seemed to be making good on his promise to create an African empire worthy of the descendants of ancient Rome. The League of Nations condemned the invasion and voted to impose sanctions on Italy, but this only made Mussolini a hero of the Italian people, as he stood defiant against the dozens of countries that opposed his militarism. But the Ethiopian war severely strained Italy's military and economic resources. At the same time, international hostility to Italy's invasion led Mussolini to forge closer ties with Hitler, who had taken Germany out of the League of Nations.
As Hitler and Mussolini worked more closely together, they became both rivals and allies. Hitler increasingly came to dictate Mussolini's foreign policy. Both Germany and Italy sent military assistance to support General Francisco Franco's quasi-fascist forces during the Spanish Civil War, which broke out in 1936. The Italian troops in Spain suffered several dramatic losses, however, undermining Mussolini's claim that his Fascist army made Italy a military world power. Then in November 1936 Mussolini announced the existence of the Rome-Berlin Axis, an alignment with Nazi Germany that later hardened into a formal military alliance. Fascism, once simply associated with Italy's resolution of its domestic problems, had become the declared enemy of Britain, France, and the United States, and of many other democratic and most communist countries. Italian Fascism was fatally linked with Hitler's bold plans to take control of much of Europe and Russia. The alignment with Hitler further isolated Italy internationally, leading Mussolini to move the country closer to a program of autarky (economic self-sufficiency without foreign trade). As Italy prepared for war, the government's propaganda became more belligerent, the tone of mass rallies more militaristic, and Mussolini's posturing more vain and delusional. Italian soldiers even started to mimic the goose-step marching style of their Nazi counterparts, though in Italy it was called the Roman step.
Although the Italian Fascists had ridiculed Nazi racism and declared that Italy had no “Jewish problem,” in 1938 the government suddenly issued Nazi-style anti-Semitic laws. The new laws denied that Jews could be Italian. This policy eventually led the Fascist government of the Italian Social Republic—the Nazi puppet government in northern Italy—to give active help to the Nazis when they sent 8,000 Italian Jews to their deaths in extermination camps in the fall of 1943. Mussolini knew his country was ill-prepared for a major European war and he tried to use his influence to broker peace in the years before World War II. But he had become a prisoner of his own militaristic rhetoric and myth of infallibility. When Hitler's armies swept through Belgium into France in the spring of 1940, Mussolini abandoned neutrality and declared war against France and Britain. In this way he locked Italy into a hopeless war against a powerful alliance that eventually comprised the British empire, the Union of Soviet Socialist Republics (USSR), and the United States. Italy's armed forces were weak and unprepared for war, despite Mussolini's bold claims of invincibility. Italian forces suffered humiliating defeats in 1940 and 1941, and Mussolini's popularity in Italy plummeted. In July 1943, faced with imminent defeat at the hands of the Allies despite Nazi reinforcements, the Fascist Grand Council passed a vote of no confidence against Mussolini, removing him from control of the Fascist Party. The king ratified this decision, dismissed Mussolini as head of state and had him arrested.
Most Italians were overjoyed at the news that the supposedly infallible Mussolini had been deposed. The popular consensus behind the regime had evaporated, leaving only the fanaticism of intransigenti (hard-liners). Nevertheless, Nazi Schutzstaffel (SS) commandos rescued Mussolini from his mountain-top prison, and Hitler then put him in control of the Italian Social Republic—the Nazi puppet government in northern Italy. The Nazis kept Mussolini under tight control, however, using him to crush partisans (anti-Fascist resistance fighters) and to delay the defeat of Germany. Partisans finally shot Mussolini as he tried to flee in disguise to Switzerland in April 1945. Meanwhile hundreds of thousands of Italian soldiers endured terrible suffering, either forced to fight alongside the Nazis in Italy or on the Russian front, or to work for the Nazi regime as slave labor.
The rise and fall of Fascism in Italy showed several general features of fascism. First, Italian Fascism fed off a profound social crisis that had undermined the legitimacy of the existing system. Many Europeans supported fascism in the 1930s because of a widespread perception that the parliamentary system of government was fundamentally corrupt and inefficient. Thus it was relatively easy for Italians to support Mussolini's plans to create a new type of state that would transform the country into a world power and restore Italy to the prominence it enjoyed during the Roman Empire and the Renaissance.
Second, Italian Fascism was an uneasy blend of elitism and populism. A revolutionary elite imposed Fascist rule on the people. In order to secure power the movement was forced to collaborate with conservative ruling elites—the bourgeoisie (powerful owners of business), the army, the monarchy, the Church, and state officials. At the same time, however, the Fascist movement made sustained efforts to generate genuine popular enthusiasm and to revolutionize the lives of the Italian people.
Third, Fascism was a charismatic form of politics that asserted the extraordinary capabilities of the party and its leader. The main tool for the Fascistization (conversion to Fascism) of the masses and the creation of the new Fascist man was not propaganda, censorship, education, or terror, or even the large fascist social and military organizations. Instead, the Fascists relied on the extensive use of a ritualized, theatrical style of politics designed to create a sense of a new historical era that abolished the politics of the past. In this sense Fascism was an attempt to confront urbanization, class conflict, and other problems of modern society by making the state itself the object of a public cult, creating a sort of civic religion.
Fourth, Italy embraced the fascist myth that national rebirth demanded a permanent revolution—a constant change in social and political life. To sustain a sense of constant renewal, Italian Fascism was forced by its own militarism to pursue increasingly ambitious foreign policy goals and ever more unrealizable territorial claims. This seems to indicate that any fascist movement that identifies rebirth with imperialist expansion and manages to seize power will eventually exhaust the capacity of the nation to win victory after victory. In the case of Italian Fascism, this exhaustion set in quickly.
A fifth feature of Italian Fascism was its attempt to achieve a totalitarian synthesis of politics, art, society, and culture, although this was a conspicuous failure. Italian Fascism never created a true new man. Modern societies have a mixture of people with differing values and experiences. This diversity can be suppressed but not reversed. The vast majority of Italians may have temporarily embraced Fascist nationalism because of the movement's initial successes, but the people were never truly Fascistized. In short, in its militarized version between World War I and World War II, the fascist vision was bound to lead in practice to a widening gap between rhetoric and reality, goals and achievements.
Finally, the fate of Italian Fascism illustrates how the overall goal of a fascist utopia has always turned to nightmare. Tragically for Italy and the international community, Mussolini embarked on his imperial expansion just as Hitler began his efforts to reverse the Versailles Treaty and reestablish Germany as a major military power. This led to the formation of the Axis alliance, which gave Hitler a false sense of security about the prospects for his imperial schemes. The formation of this alliance helped lead to World War II, and it committed Mussolini to unwinnable military campaigns that resulted in the Allied invasion of Italy in 1943. The death, destruction, and misery of the fighting in Italy was inflicted on a civilian population that had come to reject the Fascist vision of Italian renewal, but whose public displays of enthusiasm for the regime before the war had kept Mussolini in power.
VII. Fascism in Germany: National Socialism
The only fascist movement outside Italy that came to power in peacetime was Germany's National Socialist German Workers Party—the Nazis. The core of the National Socialist program was an ideology and a policy of war against Germany's supposed moral and racial decay and a struggle to begin the country's rebirth. This theme of struggle and renewal dominates the many ideological statements of Nazism, including Adolf Hitler's book Mein Kampf (My Struggle, 1925-1926), speeches by propaganda minister Joseph Goebbels, and Leni Riefenstahl's propaganda film Triumph des Willens (Triumph of the Will, 1935).
All of the Nazi government's actions served this dual purpose of destroying the supposed sickness of the old Germany and creating a healthy new society. The government abolished democratic freedoms and institutions because they were seen as causing national divisions. In their place the government created an authoritarian state, known as the Third Reich, that would serve as the core of the new society. The Nazis promoted German culture, celebrated athleticism and youth, and tried to ensure that all Germans conformed physically and mentally to an Aryan ideal. But in order to achieve these goals, the Nazi regime repressed supposedly degenerate books and paintings, sterilized physically and mentally disabled people, and enslaved and murdered millions of people who were considered enemies of the Reich or "subhuman." This combination of renewal and destruction was symbolized by the pervasive emblem of Nazism, the swastika—a cross with four arms broken at right angles. German propaganda identified the swastika with the rising sun and with rebirth because the bars of the symbol suggest perpetual rotation. To its countless victims, however, the swastika came to signify cruelty, death, and terror.
A. Main Features
There were two features specific to Nazism that combined to make it so extraordinarily destructive and barbaric once in power. The first feature was the Nazi myth of national greatness. This myth suggested that the country was destined to become an imperial and great military power. Underpinning this myth was a concept of the nation that blended romantic notions about national history and character with pseudo-scientific theories of race, genetics, and natural selection. It led naturally to a foreign policy based on the principle of first uniting all ethnic Germans within the German nation, and then creating a vast European empire free of racial enemies. These ideas led to international wars of unprecedented violence and inhumanity.
The second important feature of Nazism was that it developed in the context of a modern economy and society. Even after Germany's defeat in World War I, the country was still one of the most advanced nations in the world in terms of infrastructure, government efficiency, industry, economic potential, and standards of education. Germany also had a deep sense of national pride, belonging, and roots, and a civic consciousness that stressed duty and obedience. In addition, the nation had a long tradition of anti-Semitism and imperialism, and of respect for gifted leaders. The institutions of democracy had only weak roots in Germany, and after World War I democracy was widely rejected as un-German.
B. Hitler's Rise to Power
The dangerous combination of Germany's modernity and its racist, imperialist ultranationalism became apparent after the economic and political failure of the Weimar Republic, the parliamentary government established in Germany following World War I. Unlike Mussolini, Hitler took control of a country that had a strong industrial, military, and governmental power base that was merely dormant after World War I. Hitler also became more powerful than Mussolini because the Nazis simply radicalized and articulated widely held prejudices, whereas the Fascists of Italy had to create new ones. Although the Nazi Party became the largest party in the German legislature after democratic elections in 1932, Hitler was appointed chancellor in January 1933 and quickly suspended constitutional liberties; after President Hindenburg died in 1934, Hitler abolished the separate presidency and declared himself Germany's Führer (leader). Once in control, Hitler was able to insert his fascist vision of the new Germany into a highly receptive political culture. The Third Reich quickly created the technical, organizational, militaristic, and social means to implement its far-reaching schemes for the transformation of Germany and large parts of Europe.
The Nazis' attempts to build a new German empire led to the systematic killings of about six million civilians during the 1940s, and the deaths of millions more as the result of Nazi invasion and occupation—a horror rivaled only by Josef Stalin's rule in the Soviet Union during the 1930s. The Nazis primarily killed Jews, but also targeted homosexuals, people with disabilities, and members of religious minorities such as the Jehovah's Witnesses. All of this killing and destruction stemmed from the Nazis' conviction that non-Germans had sapped the strength of the German nation. At the same time, the Nazis attempted to take control of most of Europe in an effort to build a new racial empire. This effort led to World War II and the deaths of millions of soldiers and civilians. After early successes in the war, Germany found itself facing defeat on all sides. German forces were unable to overcome the tenacity and sheer size of the Soviet military in Eastern Europe, while in Western Europe and North Africa they faced thousands of Allied aircraft, tanks, and ships. Facing certain defeat, Hitler killed himself in April 1945, and Germany surrendered to the Allies in the following month.
Although scholars generally view Italy under Mussolini as the benchmark for understanding fascism in general, the German case shows that not all fascist movements were exactly alike. German National Socialism differed from Italian Fascism in important ways. The most important differences were Nazism's commitment to a more extreme degree of totalitarian control, and its racist conception of the ideal national community.
Hitler's visionary fanaticism called for the Gleichschaltung (coordination) of every possible aspect of life in Germany. The totalitarianism that resulted in Germany went further than that of Italy, although not as far as Nazi propaganda claimed. Italian Fascism lacked the ideological fervor to indulge in systematic ethnic cleansing on the scale seen in Germany. Although the Italian Fascist government did issue flagrantly anti-Semitic laws in 1938, it did not contemplate mass extermination of its Jewish population. In Italy Fascism also was marked by pluralism, compromise, and inefficiency as compared to Nazism. As a result, in Fascist Italy far more areas of personal, social, and cultural life escaped the intrusion of the state than in Nazi Germany. Nevertheless, both Italian Fascism and German National Socialism rested on the same brutal logic of rebirth through what was seen as creative destruction. In Italy this took form in attempts by the Fascist Party to recapture Roman qualities, while in Germany it led the Nazis to attempt to re-Aryanize European civilization.
When Nazism is compared to other forms of fascism, it becomes clear that Nazism was not just a peculiar movement that emerged from Germany's unique history and culture. Instead, Nazism stands as a German variant of a political ideology that was popular to varying degrees throughout Europe between World War I and World War II. As a result of this line of thinking, some historians who study Nazism no longer speculate about what elements of German history led to Nazism. Instead, they try to understand which conditions in the German Weimar Republic allowed fascism to become the country's dominant political force in 1932, and the process by which fascists were able to gain control of the state in 1933. The exceptional nature of the success of fascism in Germany and Italy is especially clear when compared to the fate of fascism in some other countries.
VIII. Fascism in Other Countries from 1919 to 1945
World War I and the global economic depression of the 1930s destabilized nearly all liberal democracies in Europe, even those that had not fought in the war. Amidst this social and political uncertainty, fascism gained widespread popularity in some countries but consistently failed to overthrow any parliamentary system outside of Italy and Germany. In many countries fascism attracted considerable attention in newspaper and radio reports, but the movement never really threatened to disturb the existing political order. This was the case in countries such as Czechoslovakia, Denmark, England, Holland, Iceland, Ireland, Norway, Sweden, and Switzerland. Fascism failed to take root in these countries because no substantial electoral support existed there for a revolution from the far right. In France, Finland, and Belgium, far-right forces with fascistic elements mounted a more forceful challenge in the 1930s to elected governments, but democracy prevailed in these political conflicts. In the Communist USSR, the government was so determined to crush any forms of anticommunist dissent that it was impossible for a fascist movement to form there.
But fascism did represent a significant movement in a handful of European countries. A review of the countries where fascism saw some success but ultimately failed helps explain the more general failure of fascism. These countries included Spain, Portugal, Austria, France, Hungary, and Romania. In these countries fascism was denied the political space in which to grow and take root. Fascist movements were opposed by powerful coalitions of radical right-wing forces, which either crushed or absorbed them. Some conservative regimes adopted features of fascism to gain popularity.
Spain's fascist movement, the Falange Española (Spanish Phalanx), was hobbled by the country's historical lack of a coherent nationalist tradition. The strongest nationalist sentiments originated in the Basque Country in northern Spain and in Catalonia in the northeast. But in both areas the nationalists favored separation rather than the unification of Spain as a nation. The Falange gained some support in the 1930s, but it was dominated by the much stronger coalition of right-wing groups led by General Francisco Franco. The Falangists fought alongside Franco's forces against the country's Republican government in the Spanish Civil War, which began in 1936. But the Falange was too small to challenge the political supremacy of Franco's coalition of monarchists (supporters of royal authority), Catholics, and conservative military forces.
The Republican government killed the Falangist leader José Antonio Primo de Rivera in November 1936. With the loss of this key leader, Franco managed to absorb fascism into his movement by combining the Falange with the Carlists, a monarchist group that included a militia known as the Requetés (Volunteers). The fascism of the Falange retained some influence when Franco became dictator in 1939, but this was primarily limited to putting a radical and youthful face on Franco's repressive regime. Franco's quasi-fascist government controlled Spanish politics until Franco's death in 1975. Franco's reign marked the longest-lived form of fascist political control, but fascist ideology took second place to Franco's more general goal of protecting the interests of Spain's traditional ruling elite.
In Portugal the dictator António de Oliveira Salazar led a right-wing authoritarian government in the 1930s that showed fascist tendencies but was less restrictive than the regimes of other fascist countries. Salazar sought to create a quasi-fascist Estado Novo (New State) based on strict government controls of the economy, but his government was relatively moderate compared to those in Italy, Germany, and Spain. Salazar's conservative authoritarianism was opposed by another movement with fascist tendencies, the National Syndicalists, which hoped to force a more radical fascist transformation of Portugal. But Salazar's government banned the National Syndicalist movement in 1934 and sent its leader, Rolão Preto, into exile in Spain. Salazar continued to rule as the dictator of Portugal until 1968.
In the wake of World War I, Marxist forces on the left and quasi-fascist groups on the right increasingly polarized Austrian politics. Some right-wing forces organized the paramilitary Heimwehr (Home Defense League) to violently attack members of the Socialist Party. Other right-wing forces created an Austrian Nazi party, but this group rejected many basic elements of fascism. The somewhat less extreme Christian Social Party led by Engelbert Dollfuss won power in 1932 through a parliamentary coalition with the Heimwehr. Once in power, Dollfuss created a quasi-fascist regime that resisted incorporation into Hitler's Germany and emphasized the government's ties with the Catholic Church. Dollfuss was killed when the Austrian Nazis attempted a putsch (takeover) in 1934, but the Nazis failed in this effort to take control of the government. The government then suppressed the Nazi party, eliminating the threat of extreme fascism in Austria until Nazi Germany annexed the country in 1938.
The Vichy regime in France stood as one of the most radical quasi-fascist governments during World War II. The regime took its name from the town of Vichy, which was the seat of the pro-German government controlled by the Nazis from 1940 until 1945. The Vichy government shared many characteristics with Nazism, including an official youth organization, a brutal secret police, a reliance on the political rituals of a "civic religion," and vicious anti-Semitic policies that led to the killing of an estimated 65,000 French Jews. The Vichy regime was headed by Henri Philippe Pétain, a fatherly figure who ensured that genuine fascists gained little popular support for their radical plans to rejuvenate France. At the same time, fascists in other parts of the country supported the Nazi occupation, but the Germans never granted real power to these radical forces.
Fascism had a mixed impact on Hungarian politics in the 1920s and 1930s. Some Hungarian leaders hoped that an alliance with Nazi Germany would bring the return of Transylvania, Croatia, and Slovakia—territories that Hungary had lost in World War I. At the same time, however, many Hungarians feared that Germany would try to regain its historical military dominance of the region. Right-wing nationalist groups who favored close ties to Germany flourished in the 1930s, and by 1939 the fascist Arrow Cross movement had become the strongest opposition party. Under the leadership of the radical army officer Ferenc Szálasi, the Arrow Cross sought to enlarge Hungary and hoped to position the country along with Italy and Germany as one of Europe's great powers. The Hungarian government led by Miklós Horthy de Nagybánya supported Hitler's overall regional ambitions and maintained close ties with the Nazi government, but the regime felt threatened by the Arrow Cross's challenge to its authority. Horthy clamped down on the Arrow Cross, even though his own government had fascist tendencies.
During World War II Hungary sent about 200,000 soldiers to fight alongside the German army on the Russian front, and about two-thirds of the Hungarian force was killed. As the war turned against Germany, Hungary began to curtail its support for the Nazis, leading Hitler to send troops to occupy Hungary in 1944. The Nazis installed Szálasi as the head of a puppet government that cooperated with the SS when it began rounding up the country's Jewish population for deportation to Nazi extermination camps. By the end of World War II, fascist Hungarian forces and the Nazis had killed an estimated 550,000 Hungarian Jews. The Arrow Cross party collapsed after the war, and some of its leaders were tried as war criminals.
To the east of Hungary, Romanian fascist forces nearly won control of the government. The Iron Guard, the most violent and anti-Semitic movement in the country, grew rapidly when the Romanian economy was battered by the global depression of the 1930s. As the Iron Guard became more powerful, Romanian ruler King Carol II withdrew his initial support for the movement and in 1938 ordered the execution of its top leaders. Romanian general Ion Antonescu, who was backed by the Iron Guard and by Nazi Germany, demanded that Carol II abdicate his rule. After the king left the country, Antonescu set up a quasi-fascist military dictatorship that included fellow members of the Iron Guard. Intent upon creating their own new order, the Iron Guard assassinated political enemies and seized Jewish property. But the campaign led to economic and political chaos, which convinced Nazi officials that the Iron Guard should be eliminated. In 1941, amidst rumors that the Iron Guard was planning a coup, Antonescu crushed the movement with Nazi approval. Antonescu's army then cooperated with Nazi soldiers to exterminate Jews in the eastern portion of the country in 1941, and thousands more died when the fascist forces expelled them to a remote eastern region of the country. By the end of the war an estimated 364,000 Jews had died in the Romanian Holocaust as a result of this alliance of conservative and fascist forces.
IX. Fascism after World War II
After the world became fully aware of the enormous human suffering that occurred in Nazi concentration camps and extermination centers, many people came to see the defeat of fascism as a historic victory of humanity over barbarism. World War II discredited fascism as an ideology, and after the war most of the world saw levels of sustained economic growth that had eluded most countries in the years after World War I. The economic and political turmoil that had spurred fascist movements in the years after World War I seemed to have disappeared. At the same time fascism could not take root in the conditions of tight social and political control in the USSR. Government controls also prevented fascism from gaining a foothold in Soviet client states in Eastern Europe.
But fascism proved resilient, and new movements adapted the ideology to the changed political environment. Some support for a revival of fascism came from the movement's supporters who were disappointed by the defeat of the Axis powers. In addition, a new generation of ultranationalists and racists who grew up after 1945 hoped to rebuild the fascist movement and were determined to continue the struggle against what they saw as decadent liberalism. During the Cold War, in which the United States and the Soviet Union vied for global dominance, these new fascists focused their efforts on combatting Communism, the archenemy of their movement.
Since 1945 fascism has spread to other countries, notably the United States. In several countries fascist groups have tried to build fascist movements based on historical developments such as fear of immigration, increased concern over ecological problems, and the Cold War. Along with the change in ideology, fascists have adopted new tools, such as rock music and the Internet, to spread their ideas. Some fascist groups have renounced the use of paramilitary groups in favor of a "cultural campaign" for Europeans to recover their "true identity."
Fundamentally, contemporary fascism remains tightly linked to its origins in the early 20th century. Fascism still sets as its goal the overthrow of liberal democratic institutions, such as legislatures and courts, and keeps absolute political power as its ultimate aim. Fascism also retains its emphasis on violence, sometimes spurring horrific incidents. For instance, fascist beliefs motivated the 1995 bombing of the federal building in Oklahoma City, Oklahoma, that killed 168 people and wounded more than 500 others. In Germany, fascist groups in the early 1990s launched scores of firebomb attacks against the homes of immigrants, sometimes killing residents. In 1999, inspired by Nazi ideals of ethnic cleansing, fascist groups conducted a series of bomb attacks in London. The attacks were directed against ethnic minorities, gays, and lesbians.
After World War II, only South Africa saw the emergence of a significant fascist movement that followed the prewar pattern. In South Africa the white supremacist paramilitary movement Afrikaner Weerstandsbeweging (Afrikaner Resistance Movement) organized radical white South Africans to create a new hard-line racial state. Most white South Africans supported the system of racial and economic exploitation of the black majority known as apartheid, but only a small fraction went so far as to support the Afrikaner Resistance Movement. The movement carried out repeated acts of violence and sabotage in the 1980s and especially the 1990s, but remained a minor political force. South Africa's political reforms in the 1990s led to the further reduction in support for the Afrikaner Resistance Movement. In other countries, widespread hostility to fascism made it impossible to create a mass movement coordinated by a paramilitary political party, as Nazi Germany's National Socialists or Romania's Iron Guard had been. As a result, fascists have relied on a number of new strategies to keep the prospect of national revolution open.
X. New Fascist Strategies
Fascist groups have developed many new strategies since World War II, but they have virtually no chance of winning control of the government in any country. Citizens in all countries hope for political stability and economic prosperity, and do not see fascism as a realistic way of achieving these goals. Even in countries where ethnic tensions are strong, such as in some areas that were once part of the USSR or under its control, there is no mass support for visions of a reborn national community based on self-sacrifice, suppression of individualism, and isolation from global culture and trade.
A. Reliance on Dispersed Small Groups
One of the most important new fascist strategies is to form small groups of ideologically committed people willing to dedicate their lives to the fascist cause. In some cases these minor groups turn to terrorism. Since 1945, fascists in Western Europe and the United States formed many thousands of small groups, with memberships ranging from a few hundred to less than ten. These small groups can be very fragile. Many of them are dissolved or change names after a few years, and members sometimes restlessly move through a number of groups or even belong to several at once. Although the groups often use bold slogans and claim that their forces will create a severe social crisis, in practice they remain unable to change the status quo. These groups remain ineffective because they fail to attract mass support, failing even to win significant support from their core potential membership of disaffected white males.
Despite their weaknesses, these small fascist groups cannot be dismissed as insignificant. Some of them have been known to carry out acts of violence against individuals. In 1997 in Denmark, for example, a fascist group was accused of sending bombs through the mail to assassinate political opponents. In the United States, fascists have assaulted and killed African Americans, Jews, and other minorities, and set off scores of bombs. Small fascist groups also present a threat because the fliers they distribute and the marches and meetings they hold can create a local climate of racial intolerance. This encourages discrimination ranging from verbal abuse to murder. In addition, the small size and lack of centralized organization that weakens these groups also makes them nearly impossible for governments to control. If a government stops violence by arresting members of a few groups, the larger fascist network remains intact. This virtually guarantees that the ideology of fascism will survive even if government authorities clamp down on some organizations.
B. Shift to Electoral Politics
In addition to organizing through small groups, some fascists have tried to participate in mainstream party-based electoral politics. In contrast to the first fascist movements, these new fascist parties do not rely on a military branch to fight their opponents, and they tend to conceal their larger fascist agenda. To make fascist ideas seem acceptable, some parties water down their revolutionary agenda in order to win voter support even from people who do not want radical change and a fascist regime. Instead of emphasizing their long-term objectives for change, the fascist parties focus on issues such as the threat of Communism, crime, global economic competition, the loss of cultural identity allegedly resulting from mass immigration, and the need for a strong, inspiring leader to give the nation a direction.
Italy, for example, saw this type of quasi-democratic fascism with the 1946 formation of the Movimento Sociale Italiano (MSI), which hoped to keep fascist ideals alive. In the mid-1990s the MSI managed to widen its support significantly when it renounced the goals of historic Italian Fascism and changed its name to the National Alliance (Alleanza Nazionale, or AN). Although the AN presents itself as comparable to other right-wing parties, its programs still retain significant elements of their fascist origins. During the 1990s several other extreme-right parties gained significant mass support, including the Republicans (Die Republikaner) in Germany, the National Front (Front National, or FN) in France, the Freedom Movement (Die Freiheitlichen) in Austria, the Flemish Bloc (Vlaams Blok) in Belgium, and the Liberal Democratic Party in Russia. All of these groups have some fascistic elements, but reject the revolutionary radicalism of true fascism.
C. Emphasis on Cultural Change
Since World War II, some fascist movements have also shifted their goal from the political overthrow of democratic governments to a general cultural transformation. These movements hope that a cultural transformation will create the necessary conditions to achieve a radical political change. This form of fascism played an important role in the formative phase of the New Right. In the 1960s and 1970s New Right intellectuals criticized both liberal democratic politics and communism, arguing that societies should be organized around ethnic identity. Unlike earlier fascist movements, the New Right agenda did not require paramilitary organizations, uniforms, or a single unifying leader.
As a result of their emphasis on culture and ethnicity, the New Right argues that it is important to maintain a diversity of cultures around the world. But since it favors the preservation of ethnic cultures, the New Right strongly opposes the mixing of cultures that is increasingly common in the United States, Canada, and Europe. As a result, New Right thinkers attack the rise of global culture, the tendencies toward closer ties between countries, and all other trends that encourage the loss of racial identity. These thinkers argue that people who oppose racism in fact want to allow racial identity to be destroyed and are therefore promoting racial hatred. Known as differentialists, these fascists proclaim their love of all cultures, but in practice attack the multiculturalism and tolerance that lies at the heart of liberal democracy. Some political scientists and historians therefore argue that differentialism is really just a thinly disguised form of racism and fascism. Since the 1980s some leading New Right intellectuals have moved away from the fascist vision of a new historical era. However, the ideas that form the basis of the New Right movement continue to exert considerable influence on fascist activists who wish to disguise their true agenda. One example is "Third Positionists," who claim to reject capitalism and communism in their search for a "third way" based on revolutionary nationalism.
D. Attempts to Build a Global Movement
Fascists since World War II have also reshaped fascist ideology by attempting to create an international fascist movement. New Rightists and Third Positionists in Europe condemn cultural and ethnic mixing, and strive to unite fascist forces in Britain, Denmark, France, Italy, and other countries behind a shared vision of a reborn Europe. These fascists thus break with the narrow nationalism that characterized the first fascist movements. At the same time, neo-Nazi groups worldwide have embraced the myth of Aryan superiority, which German fascists used as the basis for war against the rest of humanity. The neo-Nazis hope to build a global movement, and rely on this central element of racism to create a doctrine of white supremacy for all of Europe, Canada, the United States, and other places with substantial populations of white people. The new international character of fascism can also be seen in the pseudo-scholarly industry that publishes propaganda in an academic style to play down, trivialize, or excuse the horrors of Nazism. This approach is sometimes called historical revisionism, although it is separate from a much more general and mainstream approach to history known as revisionism. Some of these self-styled scholars manufacture or distort documentary evidence to “prove” that the Nazis did not create extermination camps that killed millions of Jews during the Holocaust. All professional historians completely reject any attempt to show that the Holocaust never happened, but there continues to be a loosely knit international community of fascist writers who make such claims. The Internet has made it much easier for these writers to spread their ideas and propaganda in a way that is practically impossible to censor. While fascism has no prospect of returning to its former influence, it is set to be a continuous source of ideological and physical attacks on liberal society for the foreseeable future, and a permanent component of many democracies.
Structure and Physiology
Bone Composition and Structure. Our skeleton may seem an inert structure, but it is an active organ, made up of tissue and cells in a continual state of activity throughout a lifetime. Bone tissue is comprised of a mixture of minerals deposited around a protein matrix, which together contribute to the strength and flexibility of our skeletons. Sixty-five percent of bone tissue is inorganic mineral, which provides the hardness of bone. The major minerals found in bone are calcium and phosphorus in the form of an insoluble salt called hydroxyapatite (HA) [chemical formula: Ca10(PO4)6(OH)2]. HA crystals lie adjacent and bound to the organic protein matrix. Magnesium, sodium, potassium, and citrate ions are also present, conjugated to HA crystals rather than forming distinct crystals of their own (1).
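To make the hydroxyapatite formula more concrete, the short sketch below estimates how much of the mineral's mass comes from calcium, elemental phosphorus, and phosphate groups. The only inputs not taken from the text are standard atomic masses, and the results are approximate.

```python
# Rough composition of hydroxyapatite, Ca10(PO4)6(OH)2, from standard atomic masses (g/mol).
ATOMIC_MASS = {"Ca": 40.08, "P": 30.97, "O": 16.00, "H": 1.008}

# Atom counts in one formula unit of Ca10(PO4)6(OH)2.
counts = {"Ca": 10, "P": 6, "O": 6 * 4 + 2, "H": 2}

molar_mass = sum(ATOMIC_MASS[el] * n for el, n in counts.items())
ca_fraction = ATOMIC_MASS["Ca"] * counts["Ca"] / molar_mass
p_fraction = ATOMIC_MASS["P"] * counts["P"] / molar_mass
po4_fraction = (ATOMIC_MASS["P"] + 4 * ATOMIC_MASS["O"]) * 6 / molar_mass

print(f"Molar mass: {molar_mass:.1f} g/mol")        # ~1005 g/mol
print(f"Calcium by mass: {ca_fraction:.1%}")         # ~39.9%
print(f"Phosphorus by mass: {p_fraction:.1%}")       # ~18.5%
print(f"Phosphate (PO4) by mass: {po4_fraction:.1%}")  # ~56.7%
print(f"Ca:P molar ratio: {counts['Ca'] / counts['P']:.2f}")  # ~1.67
```

The 10:6 calcium-to-phosphorus molar ratio (about 1.67) is a commonly cited property of stoichiometric hydroxyapatite, and note that phosphate groups account for over half the mineral's mass even though elemental phosphorus is a smaller fraction.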
The remaining 35% of bone tissue is an organic protein matrix, 90-95% of which is type I collagen. Collagen fibers twist around each other and provide the interior scaffolding upon which bone minerals are deposited (1).
Types of Bone. There are two types of bone tissue: cortical (compact) bone and trabecular (spongy or cancellous) bone (2). Eighty percent of the skeleton is cortical bone, which forms the outer surface of all bones. The small bones of the wrists, hands, and feet are entirely cortical bone. Cortical bone looks solid but actually has microscopic openings that allow for the passage of blood vessels and nerves. The other 20% of skeleton is trabecular bone, found within the ends of long bones and inside flat bones (skull, pelvis, sternum, ribs, and scapula) and spinal vertebrae. Both cortical and trabecular bone have the same mineral and matrix components but differ in their porosity and microstructure: trabecular bone is much less dense, has a greater surface area, and undergoes more rapid rates of turnover (see Bone Remodeling/Turnover below).
There are three phases of bone development: growth, modeling (or consolidation), and remodeling (see figure). During the growth phase, the size of our bones increases. Bone growth is rapid from birth to age two, continues in spurts throughout childhood and adolescence, and eventually ceases in the late teens and early twenties. Although bones stop growing in length by about 20 years of age, they change shape and thickness and continue accruing mass when stressed during the modeling phase. For example, weight training and body weight exert mechanical stresses that influence the shape of bones. Thus, acquisition of bone mass occurs during both the growth and modeling/consolidation phases of bone development. The remodeling phase consists of a constant process of bone resorption (breakdown) and formation that predominates during adulthood and continues throughout life. Beginning around age 34, the rate of bone resorption exceeds that of bone formation, leading to an inevitable loss of bone mass with age (3).
Peak Bone Mass. Bone mass refers to the quantity of bone present, both matrix and mineral. Bone mass increases through adolescence and peaks in the late teen years and into our twenties. The maximum amount of bone acquired is known as peak bone mass (PBM) (see figure) (4, 5). Achieving one’s genetically determined PBM is influenced by several environmental factors, discussed more extensively below (see Determinants of Adult Bone Health below).
Technically, we cannot detect the matrix component of bone, so bone mass cannot be measured directly. We can, however, detect bone mineral by using dual-energy X-ray absorptiometry (DEXA). In this technique, the absorption of photons from an X-ray is a function of the amount of mineral present in the path of the beam. Therefore, bone mineral density (BMD) measures the quantity of mineral present in a given section of bone and is used as a proxy for bone mass (6).
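As a minimal illustration of how a DEXA result becomes a density value, areal BMD is simply the measured bone mineral content divided by the projected area of the scanned region. The numbers below are invented example values, not real scan or reference data.

```python
# Illustrative sketch: areal BMD from a DEXA measurement.
# The inputs are made-up example values, not real scan data.

def areal_bmd(bone_mineral_content_g: float, projected_area_cm2: float) -> float:
    """Areal bone mineral density in g/cm^2 (mineral content / projected area)."""
    return bone_mineral_content_g / projected_area_cm2

example_bmd = areal_bmd(bone_mineral_content_g=35.0, projected_area_cm2=40.0)
print(f"{example_bmd:.3f} g/cm^2")  # 0.875 g/cm^2 for this hypothetical scan
```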
Although BMD is a convenient clinical marker to assess bone mass and is associated with osteoporotic fracture risk, it is not the sole determinant of fracture risk. Bone quality (architecture, strength) and propensity to fall (balance, mobility) also factor into risk assessment and should be considered when deciding upon an intervention strategy (see Osteoporosis).
Bone Remodeling/Turnover. Bone tissue, both mineral and organic matrix, is continually being broken down and rebuilt in a process known as remodeling or turnover. During remodeling, bone resorption and formation are always “coupled”—osteoclasts first dissolve a section of bone and osteoblasts then invade the newly created space and secrete bone matrix (6). The goal of remodeling is to repair and maintain a healthy skeleton, adapt bone structure to new loads, and regulate calcium concentration in extracellular fluids (7). The bone remodeling cycle, which refers to the time required to complete the entire series of cellular events from resorption to final mineralization, lasts approximately 40 weeks (8, 9). Additionally, remodeling units cycle at staggered stages. Thus, any intervention that influences bone remodeling will affect newly initiated remodeling cycles at first, and there is a lag time, known as the “bone remodeling transient,” until all remodeling cycles are synchronized to the treatment exposure (8). Considering the bone remodeling transient and the length of time required to complete a remodeling cycle, a minimum of two years is needed to realize steady-state treatment effects on BMD (10).
The rates of bone tissue turnover differ depending on the type of bone: trabecular bone has a faster rate of turnover than cortical bone. Osteoporotic fracture manifests in trabecular bone, primarily as fractures of the hip and spine, and many osteoporotic therapies target remodeling activities in order to alter bone mass (11).
Bone Cells. The cells responsible for bone formation and resorption are osteoblasts and osteoclasts, respectively. Osteoblasts prompt the formation of new bone by secreting the collagen-containing component of bone that is subsequently mineralized (1). The enzyme alkaline phosphatase is secreted by osteoblasts while they are actively depositing bone matrix; alkaline phosphatase travels to the bloodstream and is therefore used as a clinical marker of bone formation rate. Osteoblasts have receptors for vitamin D, estrogen, and parathyroid hormone (PTH). As a result, these hormones have potent effects on bone health through their regulation of osteoblastic activity.
Once they have finished secreting matrix, osteoblasts either die, become lining cells, or transform into osteocytes, a type of bone cell embedded deep within the organic matrix (9, 12). Osteocytes make up 90-95% of all bone cells and are very long-lived (up to decades) (12). They secrete soluble factors that influence osteoclastic and osteoblastic activity and play a central role in bone remodeling in response to mechanical stress (9, 12, 13).
Osteoclasts erode the surface of bones by secreting enzymes and acids that dissolve bone. More specifically, enzymes degrade the organic matrix and acids solubilize bone mineral salts (1). Osteoclasts work in small, concentrated masses and take approximately three weeks to dissolve bone, at which point they die and osteoblasts invade the space to form new bone tissue. In this way, bone resorption and formation are always “coupled.” End products of bone matrix breakdown (hydroxyproline and amino-terminal collagen peptides) are excreted in the urine and can be used as convenient biochemical measures of bone resorption rates.
Maximum Attainment of Peak Bone Mass. The majority of bone mass is acquired during the growth phase of bone development (see figure) (4, 6). Attaining one’s peak bone mass (PBM) (i.e., the maximum amount of bone) is the product of genetic, lifestyle, and environmental factors (5, 14). Sixty to 80% of PBM is determined by genetics, while the remaining 20-40% is influenced by lifestyle factors, primarily nutrition and physical activity (15). In other words, diet and exercise are known to contribute to bone mass acquisition but can only augment PBM within an individual’s genetic potential.
Acquisition of bone mass during the growth phase is sometimes likened to a “bone bank account” (4, 5). As such, maximizing PBM is important when we are young in order to protect against the consequences of age-related bone loss. However, improvements in bone mineral density (BMD) generally do not persist once a supplement or exercise intervention is terminated (16, 17). Thus, attention to diet and physical activity during all phases of bone development is beneficial for bone mass accrual and skeletal health.
Rate of Bone Loss with Aging. Bone remodeling is a lifelong process, with resorption and formation linked in space and time. Yet the scales tip such that bone loss outpaces bone gain as we age. Beginning around age 34, the rate of bone resorption exceeds the rate of bone formation, leading to an inevitable loss of bone mass with age (see figure) (18). Age-related estrogen reduction is associated with increased bone remodeling activity—both resorption and formation—in both sexes (13). However, the altered rate of bone formation does not match that of resorption; thus, estrogen deficiency contributes to loss of bone mass over time (9, 13). The first three to five years following the onset of menopause ('early menopause') are associated with an accelerated, self-limiting loss of bone mass (3, 18, 19). Subsequent postmenopausal bone loss occurs at a linear rate as we age (3). As we continue to lose bone, we near the threshold for osteoporosis and are at high-risk for fractures of the hip and spine.
Osteomalacia. Osteomalacia, also known as “adult rickets,” is a failure to mineralize bone. Classically, osteomalacia results from vitamin D deficiency (serum 25-hydroxyvitamin D levels <20 nmol/L or <8 ng/mL) and the associated inability to absorb dietary calcium and phosphorus across the small intestine. Plasma calcium concentration is tightly controlled, and the body has a number of mechanisms in place to adjust to fluctuating blood calcium levels. In response to low blood calcium, PTH levels increase and vitamin D is activated. The increase in PTH stimulates bone remodeling activity—both resorption and formation, which are always coupled. Thus, osteoclasts release calcium and phosphorus from bone in order to restore blood calcium levels, and osteoblasts mobilize to replace the resorbed bone. During osteomalacia, however, the deficiency of calcium and phosphorus results in incomplete mineralization of the newly secreted bone matrix. In severe cases, newly formed, unmineralized bone loses its stiffness and can become deformed under the strain of body weight.
Osteopenia. Simply put, osteopenia and osteoporosis are varying degrees of low bone mass. Whereas osteomalacia is characterized by low-mineral and high-matrix content, osteopenia and osteoporosis result from low levels of both. As defined by the World Health Organization (WHO), osteopenia precedes osteoporosis and occurs when one’s bone mineral density (BMD) is between 1 and 2.5 standard deviations (SD) below that of the average young adult (30 years of age) woman (see figure).
Osteoporosis. Osteoporosis is a condition of increased bone fragility and susceptibility to fracture due to loss of bone mass. Clinically, osteoporosis is defined as a BMD that is greater than 2.5 SD below the mean for young adult women (see figure). It has been estimated that fracture risk in adults is approximately doubled for each SD reduction in BMD (6). Common sites of osteoporotic fracture are the hip, femoral neck, and vertebrae of spinal column—skeletal sites rich in trabecular bone.
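The WHO thresholds just described can be expressed as a simple T-score classification. The sketch below also applies the rough rule of thumb quoted above that fracture risk approximately doubles for each standard deviation reduction in BMD; it is an illustration of that statement, not a clinical risk calculator.

```python
# WHO-style classification by T-score (SD below the young-adult reference mean),
# plus the rough "risk doubles per SD" approximation described in the text.

def classify_t_score(t: float) -> str:
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"
    return "osteoporosis"

def approx_relative_fracture_risk(t: float) -> float:
    """Approximate relative risk versus the young-adult mean (T = 0)."""
    return 2.0 ** max(0.0, -t)

for t in (0.0, -1.5, -2.5, -3.0):
    print(t, classify_t_score(t), round(approx_relative_fracture_risk(t), 1))
```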
BMD, the quantity of mineral present per given area/volume of bone, is only a surrogate for bone strength. Although it is a convenient biomarker used in clinical and research settings to predict fracture risk, the likelihood of experiencing an osteoporotic fracture cannot be predicted solely by BMD (6). The risk of osteoporotic fracture is influenced by additional factors, including bone quality (microarchitecture, geometry) and propensity to fall (balance, mobility, muscular strength). Other modifiable and non-modifiable factors also play into osteoporotic fracture risk, and they are generally additive (21). The WHO Fracture Risk Assessment Tool was designed to account for some of these additional risk factors. Once you have your BMD measurement, visit the WHO Web site to calculate your 10-year probability of fracture, taking some of these additional risk factors into account.
Paying attention to modifiable risk factors for osteoporosis is an important component of fracture prevention strategies. For more details about individual dietary factors and osteoporosis, see the Micronutrient Information Center's Disease Index and the LPI Research Newsletter article by Dr. Jane Higdon.
Micronutrient supply plays a prominent role in bone health. Several minerals have direct roles in hydroxyapatite (HA) crystal formation and structure; other nutrients have indirect roles as cofactors or as regulators of cellular activity (22, 23). Table 1 below lists the dietary reference intakes (DRIs) for micronutrients important to bone health. The average dietary intake of Americans (aged 2 years and older) is also provided for comparative purposes (24).
|Table 1. DRIs for Micronutrients Important to Bone Health|
|Micronutrient||RDA or AI*||UL (≥19 y)||Mean intake (≥2 y, all food sources) (24)|
|Calcium||Men: 1,000 mg/d (19-70y), 1,200 mg/d (>70y); Women: 1,000 mg/d (19-50y), 1,200 mg/d (>50y)||Men & Women: 2,500 mg/d (19-50y), 2,000 mg/d (>50y)||-|
|Phosphorus||Men & Women: 700 mg/d||Men & Women: 4 g/d (19-70y), 3 g/d (>70y)||-|
|Fluoride||Men: 4 mg/d*; Women: 3 mg/d*||Men & Women: 10 mg/d||-|
|Magnesium||Men: 400 mg/d (19-30y), 420 mg/d (>31y); Women: 310 mg/d (19-30y), 320 mg/d (>31y)||Men & Women: 350 mg/d [a]||-|
|Sodium||Men & Women: 1.5 g/d (19-50y)*, 1.3 g/d (51-70y)*, 1.2 g/d (>70y)*||Men & Women: 2.3 g/d||-|
|Vitamin A||Men: 900 mcg (3,000 IU)/d; Women: 700 mcg (2,333 IU)/d||Men & Women: 3,000 mcg (10,000 IU)/d [b]||-|
|Vitamin D||Men & Women: 15 mcg (600 IU)/d (19-70y), 20 mcg (800 IU)/d (>70y)||Men & Women: 100 mcg (4,000 IU)/d||-|
|Vitamin K||Men: 120 mcg/d*; Women: 90 mcg/d*||ND||80 mcg/d|
|Vitamin C||Men: 90 mg/d; Women: 75 mg/d||Men & Women: 2,000 mg/d||-|
|Vitamin B6||Men: 1.3 mg/d (19-50y), 1.7 mg/d (>50y); Women: 1.3 mg/d (19-50y), 1.5 mg/d (>50y)||Men & Women: 100 mg/d||-|
|Folate||Men & Women: 400 mcg/d||Men & Women: 1,000 mcg/d [c]||-|
|Vitamin B12||Men & Women: 2.4 mcg/d||ND||-|
|Abbreviations: RDA, recommended dietary allowance; AI, adequate intake; UL, tolerable upper intake level; y, years; d, day; g, gram; mg, milligram; mcg, microgram; IU, international units; ND, not determinable; -, not listed|
[a] Applies only to the supplemental form
[b] Applies only to preformed retinol
[c] Applies to the synthetic form in fortified foods and supplements
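For readers who track intakes programmatically, the sketch below shows one way to use values like those in Table 1 as a simple lookup. Only calcium and vitamin D are included, with values transcribed from the table for adults aged 19-50; the structure and function names are illustrative, not part of any official tool.

```python
# Minimal sketch: checking a daily intake against the RDA and UL values in Table 1.
# Values are for adults aged 19-50 (calcium in mg/d, vitamin D in IU/d).

DRI = {
    # nutrient: (RDA, UL); None means the UL is not included in this sketch
    "calcium_mg": (1000, 2500),
    "vitamin_d_iu": (600, None),
}

def assess_intake(nutrient: str, daily_intake: float) -> str:
    rda, ul = DRI[nutrient]
    if ul is not None and daily_intake > ul:
        return "above the UL"
    return "meets the RDA" if daily_intake >= rda else "below the RDA"

print(assess_intake("calcium_mg", 800))    # below the RDA
print(assess_intake("calcium_mg", 2700))   # above the UL
print(assess_intake("vitamin_d_iu", 800))  # meets the RDA
```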
Calcium. Calcium is the most common mineral in the human body. About 99% of the calcium in the body is found in bones and teeth, while the other 1% is found in blood and soft tissues. Calcium levels in the blood must be maintained within a very narrow concentration range for normal physiological functioning, namely muscle contraction and nerve impulse conduction. These functions are so vital to survival that the body will demineralize bone to maintain normal blood calcium levels when calcium intake is inadequate.
In response to low blood calcium, parathyroid hormone (PTH) is secreted. PTH targets three main axes in order to restore blood calcium concentration: (1) vitamin D is activated (see the section on vitamin D below), (2) filtered calcium is retained by the kidneys, and (3) bone resorption is induced (1). It is critical to obtain enough dietary calcium in order to balance the calcium taken from our bones in response to fluctuating blood calcium concentrations.
Several randomized, placebo-controlled trials (RCTs) have tested whether calcium supplementation reduces age-related bone loss and fracture incidence in postmenopausal women. In the Women’s Health Initiative (WHI), 36,282 healthy, postmenopausal women (aged 50 to 79 years; mean age 62 years) were randomly assigned to receive placebo or 1,000 mg calcium carbonate and 400 IU vitamin D3 daily (25). After a mean of seven years of follow-up, the supplement group had significantly less bone loss at the hip. A 12% reduction in the incidence of hip fracture in the supplement group did not reach statistical significance, possibly due to the low rates of absolute hip fracture in the 50 to 60 year age range. The main adverse event reported in the supplement group was an increased proportion of women with kidney stones. Another RCT assessed the effect of 1,000 mg of calcium citrate versus placebo on bone density and fracture incidence in 1,472 healthy postmenopausal women (aged 74±4 years) (26). Calcium had a significant beneficial effect on bone mineral density (BMD) but an uncertain effect on fracture rates. The high incidence of constipation with calcium supplementation may have contributed to poor compliance, which limits data interpretation and clinical efficacy. Hip fracture was significantly reduced in an RCT involving 1,765 healthy, elderly women living in nursing homes (mean age 86±6 years) given 1,200 mg calcium (as tricalcium phosphate) and 800 IU vitamin D3 daily for 18 months (27). The number of hip fractures was 43% lower and the number of nonvertebral fractures was 32% lower in women treated with calcium and vitamin D3 supplements compared to placebo. While there is a clear treatment benefit in this trial, the institutionalized elderly population is known to be at high risk for vitamin deficiencies and fractures and may not be representative of the general population.
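To see why a 12% relative reduction can fail to reach statistical significance when absolute fracture rates are low, the arithmetic below works through a purely hypothetical baseline rate; the 2-per-1,000 figure is invented for illustration and is not taken from the WHI report.

```python
# Hypothetical arithmetic: relative vs. absolute risk reduction when events are rare.
# The baseline rate below is made up for illustration only.

baseline_rate = 0.002        # hypothetical: 2 hip fractures per 1,000 women per year
relative_reduction = 0.12    # 12% lower incidence in the supplement group (as in the WHI)

treated_rate = baseline_rate * (1 - relative_reduction)
absolute_reduction = baseline_rate - treated_rate

print(f"Absolute reduction: {absolute_reduction * 1000:.2f} fractures per 1,000 women per year")
print(f"Women needed to treat for one year to prevent one fracture: {1 / absolute_reduction:.0f}")
```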
Overall, the majority of calcium supplementation trials (and meta-analyses thereof) show a positive effect on BMD, although the size of the effect is modest (3, 7, 28, 29). Furthermore, the response to calcium supplementation may depend on habitual calcium intake and age: those with chronic low intakes will benefit most from supplementation (7, 29), and women within the first five years after menopause are somewhat resistant to calcium supplementation (7, 10).
The current recommendations in the U.S. for calcium are based on a combination of balance data and clinical trial evidence, and they appear to be set at levels that support bone health (see table 1 above) (30, 31). Aside from the importance of meeting the RDA, calcium is a critical adjuvant for therapeutic regimens used to treat osteoporosis (7, 11). The therapy (e.g., estrogen replacement, pharmaceutical agent, and physical activity) provides a bone-building stimulus that must be matched by raw materials (nutrients) obtained from the diet. Thus, calcium supplements are a necessary component of any osteoporosis treatment strategy.
A recent meta-analysis (32) and prospective study (33) have raised concern over the effect of calcium supplements, either alone or with vitamin D, on the risk of cardiovascular events. Although these analyses raise an issue that needs further attention, there is insufficient evidence available at this time to definitively refute or support the claims that calcium supplementation increases the risk of cardiovascular disease. For more extensive discussion of this issue, visit the LPI Spring/Summer 2012 Research Newsletter or the LPI News Article.
Phosphorus. More than half the mass of bone mineral is comprised of phosphate, which combines with calcium to form HA crystals. In addition to this structural role, osteoblastic activity relies heavily on local phosphate concentrations in the bone matrix (11, 34). Given its prominent functions in bone, phosphorus deficiency could contribute to impaired bone mineralization (34). However, in healthy individuals, phosphorus deficiency is uncommon, and there is little evidence that phosphorus deficiency affects the incidence of osteoporosis (23). Excess phosphorus intake has negligible effects on calcium excretion and has not been linked to a negative impact on bone (35).
Fluoride. Fluoride has a high affinity for calcium, and 99% of our body fluoride is stored in calcified tissues, i.e., teeth and bones (36). In our teeth, very dense HA crystals are embedded in collagen fibers. The presence of fluoride in the HA crystals (fluoroapatite) enhances resistance to destruction by plaque bacteria (1, 36), and fluoride has proven efficacy in the prevention of dental caries (37).
While fluoride is known to stimulate bone formation through direct effects on osteoblasts (38), high-dose fluoride supplementation may not benefit BMD or reduce fracture rates (39, 40). The presence of fluoride in HA increases the crystal size and contributes to bone fragility; thus, uncertainties remain about the quality of newly formed bone tissue with fluoride supplementation (9, 23).
Chronic intake of fluoridated water, on the other hand, may benefit bone health (9, 36). Two large prospective studies comparing fracture rates between fluoridated and non-fluoridated communities demonstrate that long-term, continuous exposure to fluoridated water (1 mg/L) is safe and associated with reduced incidence of fracture in elderly individuals (41, 42).
Magnesium. Magnesium (Mg) is a major mineral with essential structural and functional roles in the body. It is a critical component of our skeleton, with 50-60% of total body Mg found in bone where it colocalizes with HA, influencing the size and strength of HA crystals (23). Mg also serves a regulatory role in mineral metabolism. Mg deficiency is associated with impaired secretion of PTH and end-organ resistance to the actions of PTH and 1,25-dihydroxyvitamin D3 (43). Low dietary intake of Mg is common in the U.S. population (24), and it has therefore been suggested that Mg deficiency could impair bone mineralization and represent a risk factor for osteoporosis.
However, observational studies of the association between Mg intake and bone mass or bone loss have produced mixed results, with most showing no association (34). The effect of Mg supplementation on trabecular bone density in postmenopausal women was assessed in one controlled intervention trial (44). Thirty-one postmenopausal women (mean age, 57.6±10.6 years) received two to six tablets of magnesium hydroxide (125 mg each) daily, depending on individual tolerance, for six months, followed by two tablets daily for another 18 months. Twenty-three age-matched osteoporotic women who refused treatment served as controls. After one year of Mg supplementation, there was either an increase or no change in bone density in 27 out of 31 patients; bone density was significantly decreased in controls after one year. Although encouraging, this is a very small study, and only ten Mg-supplemented patients persisted into the second year.
Sodium. Sodium is thought to influence skeletal health through its impact on urinary calcium excretion (34). High-sodium intake increases calcium excretion by the kidneys. If the urinary calcium loss is not compensated for by increased intestinal absorption from dietary sources, bone calcium will be mobilized and could potentially affect skeletal health. However, even with the typical high sodium intakes of Americans (2,500 mg or more per day), the body apparently increases calcium absorption efficiency to account for renal losses, and a direct connection between sodium intake and abnormal bone status in humans has not been reported (34, 45). Nonetheless, compensatory mechanisms in calcium balance may diminish with age (11), and keeping sodium within recommended levels is associated with numerous health benefits.
Vitamin A. Both vitamin A deficiency and excess can negatively affect skeletal health. Vitamin A deficiency is a major public health concern worldwide, especially in developing nations. In growing animals, vitamin A deficiency causes bone abnormalities due to impaired osteoclastic and osteoblastic activity (46). These abnormalities can be reversed upon vitamin A repletion (47).
In animals, vitamin A toxicity (hypervitaminosis A) is associated with poor bone growth, loss of bone mineral content, and increased rate of fractures (22). Case studies in humans have indicated that extremely high vitamin A intakes (100,000 IU/day or more, several fold above the tolerable upper intake level [UL]; see table 1 above) are associated with hypercalcemia and bone resorption (48-50).
The question remains, however, whether habitual, excessive vitamin A intake has a negative effect on bone (22, 51, 52). There is some observational evidence that high vitamin A intake (generally in supplement users and at intake levels >1,500 mcg [5,000 IU]/day) is associated with an increased risk of osteoporosis and hip fracture (53-55). However, methods to assess vitamin A intake and status are notoriously unreliable (56), and the observational studies evaluating the association of vitamin A status or vitamin A intake with bone health report inconsistent results (57, 58). At this time, striving for the recommended dietary allowance (RDA) for vitamin A (see table 1 above) is an important and safe goal for optimizing skeletal health.
Vitamin D. The primary function of vitamin D is to maintain calcium and phosphorus absorption in order to supply the raw materials of bone mineralization (9, 59). In response to low blood calcium, vitamin D is activated and promotes the active absorption of calcium across the intestinal cell (59). In conjunction with PTH, activated 1,25-dihydroxyvitamin D3 retains filtered calcium by the kidneys. By increasing calcium absorption and retention, 1,25-dihydroxyvitamin D3 helps to offset calcium lost from the skeleton.
Low circulating 25-hydroxyvitamin D3 (the storage form of vitamin D3) triggers a compensatory increase in PTH, a signal to resorb bone. The Institute of Medicine determined that maintaining a serum 25-hydroxyvitamin D3 level of 50 nmol/L (20 ng/ml) benefits bone health across all age groups (31). However, debate remains over the level of serum 25-hydroxyvitamin D3 that corresponds to optimum bone health. The authors of a recent review of clinical trial data concluded that serum 25-hydroxyvitamin D3 should be maintained at 75-110 nmol/L (30-44 ng/ml) for optimal protection against fracture and falls with minimal risk of hypercalcemia (60). The level of intake associated with this higher serum 25-hydroxyvitamin D3 range is 1,800 to 4,000 IU per day, significantly higher than the current RDA (see table 1 above) (60).
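Because this section switches between nmol/L and ng/mL for serum 25-hydroxyvitamin D3, and between mcg and IU for vitamin D intake, the small sketch below collects the conversions. The conventional factor of roughly 2.5 (more precisely about 2.496) is consistent with the paired values quoted in the text (e.g., 50 nmol/L and 20 ng/mL), and 1 mcg = 40 IU matches the 15 mcg (600 IU) figure in Table 1.

```python
# Unit conversions used in this section.

NMOL_PER_NGML = 2.496   # serum 25-hydroxyvitamin D: 1 ng/mL is about 2.496 nmol/L
IU_PER_MCG = 40         # vitamin D intake: 1 mcg = 40 IU

def ngml_to_nmol(ng_per_ml: float) -> float:
    return ng_per_ml * NMOL_PER_NGML

def mcg_to_iu(mcg: float) -> float:
    return mcg * IU_PER_MCG

print(round(ngml_to_nmol(20)))                            # ~50 nmol/L, the IOM target
print(round(ngml_to_nmol(30)), round(ngml_to_nmol(44)))   # ~75-110 nmol/L range
print(mcg_to_iu(15), mcg_to_iu(20))                       # 600 IU, 800 IU
```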
As mentioned in the Calcium section above, several randomized controlled trials (and meta-analyses) have shown that combined calcium and vitamin D supplementation decreases fracture incidence in older adults (29, 61-63). The efficacy of vitamin D supplementation may depend on habitual calcium intake and the dose of vitamin D used. In combination with calcium supplementation, the dose of vitamin D associated with a protective effect is 800 IU or more per day (29, 64). In further support of this value, a recent dosing study performed in 167 healthy, postmenopausal, white women (aged 57 to 90 years) with vitamin D insufficiency (15.6 ng/mL at baseline) demonstrated that 800 IU/d of vitamin D3 achieved a serum 25-hydroxyvitamin D3 level greater than 20 ng/mL (65). The dosing study, which included seven groups ranging from 0 to 4,800 IU per day of vitamin D3 plus calcium supplementation for one year, also revealed that serum 25-hydroxyvitamin D3 response was curvilinear and plateaued at approximately 112 nmol/L (45 ng/mL) in subjects receiving more than 3,200 IU per day of vitamin D3.
Some trials have evaluated the effect of high-dose vitamin D supplementation on bone health outcomes. In one RCT, high-dose vitamin D supplementation was no better than the standard dose of 800 IU/d for improving bone mineral density (BMD) at the hip and lumbar spine (66). In particular, 297 postmenopausal women with low bone mass (T-score ≤-2.0) were randomized to receive high-dose (20,000 IU vitamin D3 twice per week plus 800 IU per day) or standard-dose (placebo plus 800 IU per day) for one year; both groups also received 1,000 mg elemental calcium per day. After one year, both groups had reduced serum PTH, increased serum 25-hydroxyvitamin D3, and increased urinary calcium/creatinine ratio, although to a significantly greater extent in the high-dose group. BMD was similarly unchanged or slightly improved in both groups at all measurement sites. In the Vital D study, 2,256 elderly women (aged 70 years and older) received a single annual dose of 500,000 IU of vitamin D3 or placebo administered orally in the autumn or winter for three to five years (67). Calcium intake was quantified annually by questionnaire; both groups had a median daily calcium intake of 976 mg. The vitamin D group experienced significantly more falls and fractures compared to placebo, particularly within the first three months after dosing. Not only was this regimen ineffective at lowering risk, it suggests that the safety of infrequent, high-dose vitamin D supplementation warrants further study.
The RDAs for calcium and vitamin D go together, and the requirement for one nutrient assumes that the need for the other nutrient is being met (31). Thus, the evidence supports the use of combined calcium and vitamin D supplements in the prevention of osteoporosis in older adults.
Vitamin K. The major function of vitamin K1 (phylloquinone) is as a cofactor for a specific enzymatic reaction that modifies proteins to a form that facilitates calcium-binding (68). Although only a small number of vitamin-K-dependent proteins have been identified, four are present in bone tissue: osteocalcin (also called bone GLA protein), matrix GLA protein (MGP), protein S, and Gas 6 (68, 69). The putative role of vitamin K in bone biology is attributed to its role as cofactor in the carboxylation of these glutamic acid (GLA)-containing proteins (70).
There is observational evidence that diets rich in vitamin K are associated with a decreased risk of hip fracture in both men and women; however, the association between vitamin K intake and BMD is less certain (70). It is possible that a higher intake of vitamin K1, which is present in green leafy vegetables, is a marker of a healthy lifestyle that is responsible for driving the beneficial effect on fracture risk (68, 70). Furthermore, a protective effect of vitamin K1 supplementation on bone loss has not been confirmed in randomized controlled trials (69-71).
Vitamin K2 (menaquinone) at therapeutic doses (45 mg/day) is used in Japan to treat osteoporosis (see the Micronutrient Information Center’s Disease Index). Although a 2006 meta-analysis reported an overall protective effect of menaquinone-4 (MK-4) supplementation on fracture risk at the hip and spine (72), more recent data have not corroborated a protective effect of MK-4 and may change the outcome of the meta-analysis if included in the dataset (70).
A double-blind, placebo-controlled intervention performed in 2009 observed no effect of either vitamin K1 (1 mg/d) or MK-4 (45 mg/d) supplementation on markers of bone turnover or BMD among healthy, postmenopausal women (N=381) receiving calcium and vitamin D supplements (69). In the Postmenopausal Health Study II, the effect of supplemental calcium, vitamin D, and vitamin K (in fortified dairy products) and lifestyle counseling on bone health was examined in healthy, postmenopausal women (73, 74). One hundred fifty women (mean age 62 years) were randomly assigned to one of four groups: (1) 800 mg calcium plus 10 mcg vitamin D3 (N=26); (2) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg vitamin K1 (N=26); (3) 800 mg calcium, 10 mcg vitamin D3, plus 100 mcg MK-7 (N=24); and (4) control group receiving no dietary intervention or counseling. Supplemental nutrients were delivered via fortified milk and yoghurt, and subjects were advised to consume one portion of each on a daily basis and to attend biweekly counseling sessions during the one-year intervention. BMD significantly increased in all three treatments compared to controls. Between the three diet groups, a significant effect of K1 or MK-7 on BMD remained only at the lumbar spine (not at hip and total body) after controlling for serum vitamin D and calcium intake. Overall, the positive influence on BMD was attributed to the combined effect of diet and lifestyle changes associated with the intervention, rather than with an isolated effect of vitamin K or MK-7 (73).
We often discuss the mineral aspect of bone, but the organic matrix is also an integral aspect of bone quality and health. Collagen makes up 90% of the organic matrix of bone. Type I collagen fibers twist around each other in a triple helix and become the scaffold upon which minerals are deposited.
Vitamin C is a required cofactor for the hydroxylation of lysine and proline during collagen synthesis by osteoblasts (75). In guinea pigs, vitamin C deficiency is associated with defective bone matrix production, both quantity and quality (76). Unlike humans and guinea pigs, rats can synthesize ascorbic acid on their own. Using a special strain of rats with a genetic defect in ascorbic acid synthesis (Osteogenic Disorder Shionogi [ODS] rats), researchers can mimic human scurvy by feeding these animals a vitamin C-deficient diet (77). Ascorbic acid-deficient ODS rats have a marked reduction in bone formation with no defect in bone mineralization (78). More specifically, ascorbic acid deficiency impairs collagen synthesis, the hydroxylation of collagenous proline and lysine residues, and osteoblastic adhesion to bone matrix (78).
In observational studies, vitamin C intake and status is inconsistently associated with bone mineral density and fracture risk (22). A double-blind, placebo-controlled trial was performed with the premise that improving the collagenous bone matrix will enhance the efficacy of mineral supplementation to counteract bone loss (75). Sixty osteopenic women (35 to 55 years of age) received a placebo comprised of calcium and vitamin D (1,000 mg calcium carbonate plus 250 IU vitamin D) or this placebo plus CB6Pro (500 mg vitamin C, 75 mg vitamin B6, and 500 mg proline) daily for one year. In contrast to controls receiving calcium plus vitamin D alone, there was no bone loss detected in the spine and femur in the CB6Pro group.
High levels of a metabolite known as homocysteine (hcy) are an independent risk factor for cardiovascular disease (CVD) (see the Disease Index) and may also be a modifiable risk factor for osteoporotic fracture (22). A link between hcy and the skeleton was first noted in studies of homocystinuria, a metabolic disorder characterized by exceedingly high levels of hcy in the plasma and urine. Individuals with homocystinuria exhibit numerous skeletal defects, including reduced bone mineral density (BMD) and osteopenia (79). In vitro studies indicate that a metabolite of hcy inhibits lysyl oxidase, an enzyme involved in collagen cross-linking, and that elevated hcy itself may stimulate osteoclastic activity (80-82).
The effect of more subtle elevations of plasma hcy on bone health is more difficult to demonstrate, and observational studies in humans report conflicting results (79, 83). Some report an association between elevated plasma hcy and fracture risk (84-86), while others find no relationship (87-89). A recent meta-analysis of 12 observational studies reported that elevated plasma homocysteine is associated with increased risk of incident fracture (90).
Folate, vitamin B12, and vitamin B6 help keep blood levels of hcy low; thus, efforts to reduce plasma hcy levels by meeting recommended intake levels for these vitamins may benefit bone health (83). Few intervention trials evaluating the effect of hcy-lowering therapy on bone health outcomes have been conducted. In one trial, 5,522 participants (aged 55 years and older) in the Heart Outcomes Prevention Evaluation (HOPE) 2 trial were randomized to receive daily hcy-lowering therapy (2.5 mg folic acid, 50 mg vitamin B6, and 1 mg vitamin B12) or placebo for a mean duration of five years (91). Notably, HOPE 2 participants were at high risk for cardiovascular disease, having preexisting CVD, diabetes mellitus, or another CVD risk factor. Although plasma hcy levels were reduced in the treatment group, there were no significant differences between treatment and placebo in the incidence of skeletal fracture. A randomized, double-blind, placebo-controlled intervention is under way that will assess the effect of vitamin B12 and folate supplementation on fracture incidence in elderly individuals (92). During the B-PROOF (B-vitamins for the Prevention Of Osteoporotic Fracture) trial, 2,919 subjects (65 years and older) with elevated hcy (≥12 micromol/L) will receive placebo or a daily tablet with 500 mcg B12 plus 400 mcg folic acid for two years (both groups also receive 15 mcg [600 IU] vitamin D daily). The first results are expected in 2013 and may help clarify the relationship between hcy, B-vitamin status, and osteoporotic hip fracture.
Smoking. Cigarette smoking has an independent, negative effect on bone mineral density (BMD) and fracture risk in both men and women (93, 94). Several meta-analyses have been conducted to assess the relationship between cigarette smoking and bone health. When data from a number of similar studies are pooled, they show a consistent, significant reduction in bone mass and an increased risk of fracture in smokers compared to non-smokers (95-97). The effects were dose-dependent and strongly associated with age. Smoking cessation may slow or partially reverse the bone loss caused by years of smoking.
Unhealthy lifestyle habits and low body weight present in smokers may contribute to the negative impact on bone health (93, 94). Additionally, smoking leads to alterations in hormone (e.g., 1,25-dihydroxyvitamin D3 and estrogen) production and metabolism that could affect bone cell activity and function (93, 94). The deleterious effects of smoking on bone appear to be reversible; thus, efforts to stop smoking will benefit many aspects of general health, including bone health.
Alcohol. Chronic light alcohol intake is associated with a positive effect on bone density (98). Assuming one standard drink contains 10 g of ethanol, this level of intake translates to one drink per day for women and two drinks per day for men (98). The effect of higher alcohol intakes (11-30 g ethanol per day) on BMD is more variable and may depend on age, gender, hormonal status, and type of alcoholic beverage consumed (98). At the other end of the spectrum, chronic alcoholism has a documented negative effect on bone and increases fracture risk (98). Alcoholics consuming 100-200 g ethanol per day have low bone density, impaired osteoblastic activity, and metabolic abnormalities that compromise bone health (98, 99).
Physical Activity. Physical activity is highly beneficial to skeletal health across all stages of bone development. Regular resistance exercise helps to reduce osteoporotic fracture risk for two reasons: it both directly and indirectly increases bone mass, and it reduces falling risk by improving strength, balance, and coordination (100).
Physical activity increases bone mass because mechanical forces imposed on bone induce an adaptive osteogenic (bone-forming) response. Bone adjusts its strength in proportion to the degree of bone stress (1), and the intensity and novelty of the load, rather than number of repetitions or sets, matter for building bone mass (101). The American College of Sports Medicine suggests that adults engage in the following exercise regimen in order to maintain bone health (see table 2 below) (100):
|Table 2. Exercise recommendations for bone health according to the American College of Sports Medicine|
|MODE||Weight-bearing endurance activities||Tennis, stair climbing, jogging|
|Activities that involve jumping||Volleyball, basketball|
|Resistance exercise||Weight lifting|
|INTENSITY||Moderate to high|
|FREQUENCY||Weight-bearing endurance activities||3-5 times per week|
|Resistance exercise||2-3 times per week|
|DURATION||30-60 minutes per day||Combination of weight-bearing endurance activities, activities that involve jumping, and resistance exercise that targets all major muscle groups|
Additionally, the ability of the skeleton to respond to physical activity can be either constrained or enabled by nutritional factors. For example, calcium insufficiency diminishes the effectiveness of mechanical loading to increase bone mass, and highly active people who are malnourished are at increased fracture risk (2, 100). Thus, exercise can be detrimental to bone health when the body is not receiving the nutrients it needs to remodel bone tissue in response to physical activity.
Micronutrients play a prominent role in bone health. The emerging theme with supplementation trials seems to be that habitual intake influences the efficacy of the intervention. In other words, correcting a deficiency and meeting the RDAs of micronutrients involved in bone health will improve bone mineral density (BMD) and benefit the skeleton (see table 1). To realize lasting effects on bone, the intervention must persist throughout a lifetime. At all stages of life, high impact and resistance exercise in conjunction with adequate intake of nutrients involved in bone health are critical factors in maintaining a healthy skeleton and minimizing bone loss.
The preponderance of clinical trial data supports supplementation with calcium and vitamin D in older adults as a preventive strategy against osteoporosis. Habitual, high intake of vitamin A at doses >1,500 mcg (5,000 IU) per day may negatively impact bone. Although low dietary vitamin K intake is associated with increased fracture risk, RCTs have not supported a direct role for vitamin K1 (phylloquinone) or vitamin K2 (menaquinone) supplementation in fracture risk reduction. The other micronutrients important to bone health (phosphorus, fluoride, magnesium, sodium, and vitamin C) have essential roles in bone, but clinical evidence in support of supplementation beyond recommended levels of intake to improve BMD or reduce fracture incidence is lacking.
Many Americans, especially the elderly, are at high risk for deficiencies of several micronutrients (24). Some of these nutrients are critical for bone health, and the LPI recommends supplemental calcium, vitamin D, and magnesium for healthy adults (see the LPI Rx for Health).
Written in August 2012 by:
Giana Angelo, Ph.D.
Linus Pauling Institute
Oregon State University
Reviewed in August 2012 by:
Connie M. Weaver, Ph.D.
Distinguished Professor and Department Head
Department of Nutrition Science
This article was underwritten, in part, by a grant from
Bayer Consumer Care AG, Basel, Switzerland.
Copyright 2012-2013 Linus Pauling Institute
The Linus Pauling Institute Micronutrient Information Center provides scientific information on the health aspects of dietary factors and supplements, foods, and beverages for the general public. The information is made available with the understanding that the author and publisher are not providing medical, psychological, or nutritional counseling services on this site. The information should not be used in place of a consultation with a competent health care or nutrition professional.
The information on dietary factors and supplements, foods, and beverages contained on this Web site does not cover all possible uses, actions, precautions, side effects, and interactions. It is not intended as nutritional or medical advice for individual problems. Liability for individual actions or omissions based upon the contents of this site is expressly disclaimed.
| http://lpi.oregonstate.edu/infocenter/bonehealth.html | 13
51 | Sticky, in the social sciences and particularly economics, describes a situation in which a variable is resistant to change. Sticky prices are an important part of macroeconomic theory since they may be used to explain why markets might not reach equilibrium in the short run or even possibly the long-run. Nominal wages may also be sticky. Market forces may reduce the real value of labour in an industry, but wages will tend to remain at previous levels in the short run. This can be due to institutional factors such as price regulations, legal contractual commitments (e.g. office leases and employment contracts), labour unions, human stubbornness, human needs, or self-interest. Stickiness may apply in one direction. For example, a variable that is "sticky downward" will be reluctant to drop even if conditions dictate that it should. However, in the long run it will drop to the equilibrium level.
Economists tend to cite four possible causes of price stickiness: menu costs, money illusion, imperfect information with regard to price changes, and fairness concerns. Robert Hall cites incentive and cost barriers on the part of firms to help explain stickiness in wages.
Examples of stickiness
Many firms, during recessions, lay off workers. Yet many of these same firms are reluctant to begin hiring, even as the economic situation improves. This can result in slow job growth during a recovery. Wages, prices, and employment levels can all be sticky. Normally, a variable oscillates according to changing market conditions, but when stickiness enters the system, oscillations in one direction are favored over the other, and the variable exhibits "creep"—it gradually moves in one direction or another. This is also called the "ratchet effect". Over time a variable will have ratcheted in one direction.
For example, in the absence of competition, firms rarely lower prices, even when production costs decrease (i.e. supply increases) or demand drops. Instead, when production becomes cheaper, firms take the difference as profit, and when demand decreases they are more likely to hold prices constant, while cutting production, than to lower them. Therefore, prices are sometimes observed to be sticky downward, and the net result is one kind of inflation.
Prices in an oligopoly can often be considered sticky upward. The kinked demand curve (elastic above the current market-clearing price, inelastic below it) means that firms must match their competitors' price reductions to maintain market share, while gaining little from unilateral price increases.
Note: For a general discussion of asymmetric upward- and downward-stickiness with respect to upstream prices see Asymmetric price transmission.
Modeling sticky prices
Economists have tried to model sticky prices in a number of ways. These models can be classified as either time-dependent, where firms change prices with the passage of time and decide to change prices independently of the economic environment, or state-dependent, where firms decide to change prices in response to changes in the economic environment. The differences can be thought of as differences in a two-stage process: In time-dependent models, firms decide to change prices and then evaluate market conditions; In state-dependent models, firms evaluate market conditions and then decide how to respond.
In time-dependent models price changes are staggered exogenously, so a fixed percentage of firms change prices at a given time. There is no selection as to which firms change prices. Two commonly used time-dependent models are based on papers by John B. Taylor and Guillermo Calvo. In Taylor (1980), firms change prices every nth period. In Calvo (1983), firms change prices at random. In both models the choice of changing prices is independent of the inflation rate.
The Taylor model is one where firms set the price knowing exactly how long the price will last (the duration of the price spell). Firms are divided into cohorts, so that each period the same proportion of firms reset their price. For example, with two period price-spells, half of the firms reset their price each period. Thus the aggregate price level is an average of the new price set this period and the price set last period and still remaining for half of the firms. In general, if price-spells last for n periods, a proportion of 1/n firms reset their price each period and the general price is an average of the prices set now and in the preceding n-1 periods. At any point in time, there will be a uniform distribution of ages of price-spells: (1/n) will be new prices in their first period, 1/n in their second period, and so on until 1/n will be n periods old. The average age of price-spells will be (n+1)/2 (if you count the first period as 1).
In the Calvo staggered contracts model, there is a constant probability h that the firm can set a new price. Thus a proportion h of firms can reset their price in any period, whilst the remaining proportion (1-h) keep their price constant. In the Calvo model, when a firm sets its price, it does not know how long the price-spell will last. Instead, the firm faces a probability distribution over possible price-spell durations. The probability that the price-spell lasts at least i periods is (1-h)^(i-1), and the expected duration of a spell is 1/h. For example, if h=0.25, then a quarter of firms will reset their price each period, and the expected duration of a price-spell is 4 periods. There is no upper limit to how long price-spells may last: although the probability becomes small over time, it is always strictly positive. Unlike the Taylor model where all completed price-spells have the same length, there will at any time be a distribution of completed price-spell lengths.
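As an illustration (not part of the original article), the short Python sketch below simulates Calvo-style resetting and checks that the average completed price-spell length comes out near 1/h; the parameter values and function name are my own choices.

```python
import random

def average_spell_length(h=0.25, n_spells=100_000, seed=0):
    """Simulate Calvo pricing: each period a price is reset with probability h.

    A completed price-spell therefore ends each period with probability h,
    so its expected length is 1/h (4 periods when h = 0.25).
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(n_spells):
        length = 1
        while rng.random() > h:   # the price survives another period with probability 1-h
            length += 1
        total += length
    return total / n_spells

print(average_spell_length(h=0.25))   # prints a value close to 4.0
```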
In state-dependent models the decision to change prices is based on changes in the market and is not related to the passage of time. Most models relate the decision to change prices to menu costs. Firms change prices when the benefit of changing a price becomes larger than the menu cost of changing it. Price changes may be bunched or staggered over time. Prices adjust faster, and the real effects of monetary shocks die out sooner, under state-dependent pricing than under time-dependent pricing. Examples of state-dependent models include the one proposed by Golosov and Lucas and one suggested by Dotsey, King, and Wolman.
Significance in macroeconomics
Sticky prices play an important role in Keynesian macroeconomic theory, especially in new Keynesian thought. Keynesian macroeconomists suggest that markets fail to clear because prices fail to drop to market clearing levels when there is a drop in demand. Economists have also looked at sticky wages as an explanation for why there is unemployment. Huw Dixon and Claus Hansen showed that even if only part of the economy has sticky prices, this can influence prices in other sectors and lead to prices in the rest of the economy becoming less responsive to changes in demand. Thus price and wage stickiness in one sector can "spill over" and lead to the economy behaving in a more Keynesian way.
Mathematical example: a little price stickiness can go a long way.
To see how a small sector with a fixed price can affect the way the rest of the flexible prices behave, suppose that there are two sectors in the economy: a proportion a with flexible prices Pf and a proportion 1-a that are affected by menu costs, with sticky prices Pm. Suppose that the flexible-price sector has a market clearing condition of the following form:

Pf / P = c,

where P = Pf^a Pm^(1-a) is the aggregate price index (which would result if consumers had Cobb-Douglas preferences over the two goods). The equilibrium condition says that the real flexible price equals some constant c (for example, c could be real marginal cost). Now we have a remarkable result: no matter how small the menu cost sector, so long as a<1, the flexible prices get "pegged" to the fixed price. Using the aggregate price index, the equilibrium condition becomes

Pf / (Pf^a Pm^(1-a)) = c,

which implies that

Pf = c^(1/(1-a)) Pm.

What this result says is that no matter how small the sector affected by menu costs, it will tie down the flexible price. In macroeconomic terms all nominal prices will be sticky, even those in the potentially flexible price sector, so that changes in nominal demand will feed through into changes in output in both the menu-cost sector and the flexible price sector.
Now, this is of course an extreme result, arising because the real rigidity takes the form of a constant real marginal cost. For example, if we allowed the real marginal cost c(Y) to vary with aggregate output Y, then we would have

Pf = c(Y)^(1/(1-a)) Pm,

so that the flexible prices would vary with output Y. However, the presence of the fixed prices in the menu-cost sector would still act to dampen the responsiveness of the flexible prices, although this would now depend upon the size of the menu-cost sector a, the sensitivity of c(Y) to Y, and so on.
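As a numeric check of the reconstructed equations above (my own sketch, not from the article; the symbol c and the parameter values are illustrative assumptions), the snippet below solves the flexible-sector equilibrium condition and shows that the flexible price is simply proportional to the fixed price Pm for any a < 1:

```python
def flexible_price(p_m, a, c):
    """Solve Pf / (Pf**a * Pm**(1 - a)) = c for Pf.

    Rearranging gives Pf**(1 - a) = c * Pm**(1 - a), i.e.
    Pf = c**(1 / (1 - a)) * Pm, so the flexible price moves
    one-for-one with the fixed price Pm whenever a < 1.
    """
    return c ** (1.0 / (1.0 - a)) * p_m

for a in (0.5, 0.9, 0.99):            # the menu-cost sector shrinks as a approaches 1
    print(a, flexible_price(p_m=1.0, a=a, c=1.02))
```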
Sticky information is a term used in macroeconomics to refer to the fact that agents at any particular time may be basing their behavior on information that is old and does not take into account recent events. The first model of Sticky information was developed by Stanley Fischer in his 1977 article. He adopted a "staggered" or "overlapping" contract model. Suppose that there are two unions in the economy, who take turns to choose wages. When it is a union's turn, it chooses the wages it will set for the next two periods. In contrast to John B. Taylor's model where the nominal wage is constant over the contract life, in Fischer's model the union can choose a different wage for each period over the contract. The key point is that at any time t, the union setting its new contract will be using the up to date latest information to choose its wages for the next two periods. However, the other union is still choosing its wage based on the contract it planned last period, which is based on the old information.
The importance of sticky information in Fischer's model is that whilst wages in some sectors of the economy are reacting to the latest information, those in other sectors are not. This has important implications for monetary policy. A sudden change in monetary policy can have real effects, because of the sector where wages have not had a chance to adjust to the new information.
The idea of Sticky information was later developed by N. Gregory Mankiw and Ricardo Reis. This added a new feature to Fischer's model: there is a fixed probability that you can replan your wages or prices each period. Using quarterly data, they assumed a value of 25%: that is, each quarter 25% of randomly chosen firms/unions can plan a trajectory of current and future prices based on current information. Thus if we consider the current period: 25% of prices will be based on the latest information available; the rest on information that was available when they last were able to replan their price trajectory. Mankiw and Reis found that the model of sticky information provided a good way of explaining inflation persistence.
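To make the arithmetic concrete (an illustrative sketch of my own, not from Mankiw and Reis), with a 25% chance of replanning each quarter the share of firms working from information that is k quarters old follows a geometric distribution:

```python
def information_vintages(update_prob=0.25, max_age=8):
    """Share of firms whose plans rest on information that is k quarters old.

    Each quarter a fraction `update_prob` of firms replans with current
    information, so a fraction update_prob * (1 - update_prob)**k is still
    working from information set k quarters ago.
    """
    return {k: update_prob * (1 - update_prob) ** k for k in range(max_age + 1)}

for age, share in information_vintages().items():
    print(f"information {age} quarters old: {share:.1%} of firms")
# 0 quarters old: 25.0%, 1 quarter old: 18.8%, 2 quarters old: 14.1%, ...
```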
Evaluation of sticky information models
Sticky information models do not have nominal rigidity: firms or unions are free to choose different prices or wages for each period. It is the information that is sticky, not the prices. Thus when a firm gets lucky and can re-plan its current and future prices, it will choose a trajectory of what it believes will be the optimal prices now and in the future. In general, this will involve setting a different price every period covered by the plan.
This is at odds with the empirical evidence on prices. There are now many studies of price rigidity in different countries: the US, the Eurozone, the UK, and others. These studies all show that whilst there are some sectors where prices change frequently, there are also other sectors where prices remain fixed over time. The lack of sticky prices in the sticky information model is inconsistent with the behavior of prices in most of the economy. This has led to attempts to formulate a "dual stickiness" model that combines sticky information with sticky prices.
- Taylor, John B. (1980), “Aggregate Dynamics and Staggered Contracts,” Journal of Political Economy. 88(1), 1-23.
- Calvo, Guillermo A. (1983), “Staggered Prices in a Utility-Maximizing Framework,” Journal of Monetary Economics. 12(3), 383-398.
- Oleksiy Kryvtsov and Peter J. Klenow. "State-Dependent or Time-Dependent Pricing: Does It Matter For Recent U.S. Inflation?" The Quarterly Journal of Economics, MIT Press, vol. 123(3), pages 863-904, August.
- Mikhail Golosov & Robert E. Lucas Jr., 2007. "Menu Costs and Phillips Curves," Journal of Political Economy, University of Chicago Press, vol. 115, pages 171-199.
- Dotsey M, King R, Wolman A State-Dependent Pricing And The General Equilibrium Dynamics Of Money And Output, Quarterly Journal of Economics, volume 114, pages 655-690.
- Dixon, Huw and Hansen, Claus A mixed industrial structure magnifies the importance of menu costs, European Economic Review, 1999, pages 1475–1499.
- Dixon, Huw Nominal wage flexibility in a partly unionised economy, The Manchester School of Economic and Social Studies, 1992, 60, 295-306.
- Dixon, Huw Macroeconomic Price and Quantity responses with heterogeneous Product Markets, Oxford Economic Papers, 1994, vol. 46(3), pages 385-402, July.
- Dixon (1992), Proposition 1 page 301
- Fischer, S. (1977): “Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule,” Journal of Political Economy, 85(1), 191–205.
- Mankiw, N.G. and R. Reis (2002) "Sticky Information Versus Sticky Prices: A Proposal To Replace The New Keynesian Phillips Curve," Quarterly Journal of Economics, 117(4), 1295–1328
- V. V. Chari, Patrick J. Kehoe, Ellen R. McGrattan (2008), New Keynesian Models: Not Yet Useful for Policy Analysis, Federal Reserve Bank of Minneapolis Research Department Staff Report 409
- Edward S. Knotec II. (2010), A Tale of Two Rigidities: Sticky Prices in a Sticky-Information Environment. Journal of Money, Credit and Banking 42:8, 1543–1564
- Peter J. Klenow & Oleksiy Kryvtsov, 2008. "State-Dependent or Time-Dependent Pricing: Does It Matter for Recent U.S. Inflation?," The Quarterly Journal of Economics, MIT Press, vol. 123(3), pages 863-904,
- Luis J. Álvarez & Emmanuel Dhyne & Marco Hoeberichts & Claudia Kwapil & Hervé Le Bihan & Patrick Lünnemann & Fernando Martins & Roberto Sabbatini & Harald Stahl & Philip Vermeulen & Jouko Vilmunen, 2006. "Sticky Prices in the Euro Area: A Summary of New Micro-Evidence," Journal of the European Economic Association, MIT Press, vol. 4(2-3), pages 575-584,
- Philip Bunn & Colin Ellis, 2012. "Examining The Behaviour Of Individual UK Consumer Prices," Economic Journal, Royal Economic Society, vol. 122(558), pages F35-F55
- Knotec (2010)
- Bill Dupor, Tomiyuki Kitamura, Takayuki Tsuruga, Integrating Sticky Prices and Sticky Information, Review of Economics and Statistics, August 2010, Vol. 92, No. 3, Pages 657-669
- Arrow, Kenneth J.; Hahn, Frank H. (1973). General competitive analysis. Advanced textbooks in economics 12 (1980 reprint of (1971) San Francisco, CA: Holden-Day, Inc. Mathematical economics texts. 6 ed.). Amsterdam: North-Holland. ISBN 0-444-85497-5. MR 439057.
- Fisher, F. M. (1983). Disequilibrium foundations of equilibrium economics. Econometric Society Monographs (1989 paperback ed.). New York: Cambridge University Press. p. 248. ISBN 978-0-521-37856-7.
- Gale, Douglas (1982). Money: in equilibrium. Cambridge economic handbooks 2. Cambridge, U.K.: Cambridge University Press. p. 349. ISBN 978-0-521-28900-9.
- Gale, Douglas (1983). Money: in disequilibrium. Cambridge economic handbooks. Cambridge, U.K.: Cambridge University Press. p. 382. ISBN 978-0-521-26917-9.
- Grandmont, Jean-Michel (1985). Money and value: A reconsideration of classical and neoclassical monetary economics. Econometric Society Monographs 5. Cambridge University Press. p. 212. ISBN 978-0-521-31364-3. MR 934017.
- Grandmont, Jean-Michel, ed. (1988). Temporary equilibrium: Selected readings. Economic Theory, Econometrics, and Mathematical Economics. Academic Press. p. 512. ISBN 978-0-12-295146-6. MR 987252.
- Herschel I. Grossman, 1987.“monetary disequilibrium and market clearing” in The New Palgrave: A Dictionary of Economics, v. 3, pp. 504–06.
- The New Palgrave Dictionary of Economics, 2008, 2nd Edition. Abstracts:
- "monetary overhang" by Holger C. Wolf.
- "non-clearing markets in general equilibrium" by Jean-Pascal Bénassy.
- "fixprice models" by Joaquim Silvestre. "inflation dynamics" by Timothy Cogley.
- "temporary equilibrium" by J.-M. Grandmont. | http://en.m.wikipedia.org/wiki/Sticky_(economics) | 13 |
21 | This lesson introduces the concept of monopoly. It calls upon students to consider how monopoly power might affect the quality and price of goods and services offered to consumers. In light of what they learn about the history of trusts and the Sherman AntiTrust Act, the students write editorials, stating and explaining their views about laws prohibiting monopolies. Finally, students consider the effect that the Internet has had on the potential of companies to become entrenched as monopolies in our national and global economies.
- Define monopoly.
- Explain the market power that monopolies can exert.
- Evaluate American laws prohibiting monopolies.
This lesson is intended to help students develop an understanding of economic monopolies. It introduces the Sherman Anti-Trust Act, which forbade the establishment and practices of economic monopolies in the United States. Working as newspaper editorialists, students explain whether or not they believe that monopolies should be prohibited in free market economic systems. The students also consider the ways in which today's technological infrastructure has influenced the capacity of companies to establish themselves as monopolies. Finally, the students create radio broadcasts explaining the nature of monopolies today.
Monopoly: This EconEdLink glossary provides a large number of definitions of economic concepts.
Monopoly Defined: This page provides a print-out for students that defines monopolies.
Monopoly Defined Handout
Standard Oil and the Sherman Anti-Trust Act: This EconEdLink worksheet provides information and questions for students to answer related to monopolies.
Student Responses to Editorials: This worksheet allows students to write if they felt that the editorial that they read was well-written or not.
Student Responses to Editorials
Edublogs: At this site, students and teachers can set up their own blogs and post their own editorials.
Technology and Monopolies: This worksheet helps students to understand how advances in technology are related to monopolies.
Technology and Monopolies
Odeo Enterprise: This site allows students to create their own podcasts.
Creating a Monopoly: This EconEdLink worksheet asks students to develop their own monopoly.
Creating a Monopoly
To begin this lesson, tell the students that you want to purchase a pen from somebody. Ask whether any of them would be willing to sell you a pen. Tell them to write down on a piece of paper the price that they would charge for a pen--using the pen that they would be willing to sell. After the students have completed this short exercise, ask them what they wrote down. Also ask them to help you decide which pen to purchase: what information should you think about in making your decision about which pen to purchase? The students may suggest that you should think about which pen you want, and that you should try to purchase it for the lowest possible price. If the students do not suggest these ideas on their own, raise them for the students. Ask them to explain why these ideas make sense.
Now tell the students to imagine that one student in the class owned all of the pens in the classroom, and that you have decided that you would buy a pen only from somebody in the class. Ask them how this scenario might influence the price of the pen and the quality of the pen being sold. Here you would like to hear the students state that if one person owned all of the pens, that person could charge more money for them and sell lower-quality pens. Ask the students to explain why this is true. They should recognize that since only one person was selling pens, this individual would not have to worry about either the price set by other people or the quality of the pens that other people were selling. Tell the students that this scenario is an example of a monopoly.
Now show the students the definition of monopoly and use it as a transparency. Ask the students to explain this definition in their own words. Then shift the discussion: ask the students if they think it is fair for monopolies to exist. Urge them to support their opinions. As the students share their opinions take notes on the board. Encourage the students to express ideas that both support and oppose monopolies.
Now tell students that they should learn how the U.S. government views economic monopolies. Invite them to read and complete the worksheet entitled Standard Oil and the Sherman Anti-Trust Laws , available here. After the students have completed this work, reconvene the class. Call on students to share their answers with one another. Click here to view possible answers to the Standard Oil and Sherman Anti-Trust Laws worksheet.
Ask the students why companies might want to be monopolies. Help them understand that in a free-market economic system, people work to make money and companies exist to make profit: individual companies want to make as much profit as possible. Certainly companies would love to be monopolies if this meant that they could make greater profits, as it most certainly would.
Ask the students how companies might try to become monopolies. There are several possible answers. Companies might lower their prices in order to attract customers away from their competition. Companies might also try to produce the best product or service available at the lowest cost in order to attract new customers. Companies can also become monopolies by inventing new products and acquiring a patent to prevent others from copying their products. At times the government establishes monopolies when policy makers believe it is in the public's best interest. For example, municipalities typically grant monopoly status to electric companies since it would be too expensive for several electric companies to compete in the same community. When the government does establish monopolies, it typically regulates them to insure that they do not unfairly raise prices or lower quality.
Help the students understand that, without government regulation, companies that become monopolies may lower the quality of their products or services (perhaps by spending less money producing them), or they may raise the price of the goods and services that they sell. Ask the students why companies might do this. The students should recognize that companies can reduce quality or raise prices if they no longer face competition. As appropriate refer to the pen-selling example to underscore this point.
Introduce the point that the U.S. government seeks in various ways to foster competition. Help the students understand that while individual firms might want to be monopolies and enjoy the benefits of monopoly status, the U.S. government does not think that monopolies are good for our nation, since monopolies can raise prices and lower the quality of goods and services.
Now introduce a writing assignment. Tell the students that for the next part of this assignment they should pretend they are newspaper editorialists. To clarify the task, ask the students what they know about the job of a newspaper editorialist. If anybody states that editorialists write opinion pieces for the newspaper, underscore that response. If the students do not know this, tell them. Remind the students, however, that editorialists cannot simply write their opinion and expect others to accept it. They must justify their opinions with high quality reasoning. Remind the students that editors write for the public. On this assignment, therefore, classmates will read one another's editorials, and comment on them, upon completion. Ask the students to respond to the interactive question below.
- Pretend that you are an editorialist, and write an editorial considering whether or not you believe that monopolies should be illegal.
After the students have completed this work, tell them to form groups of three. Explain that in these groups they should read one another's opinions and respond to them. They should state whether they agree or disagree with the writer's conclusion. They also should explain why they agree or disagree by commenting on how the writer uses facts and reasons to support his or her conclusion. The Student Responses to Editorials can be used as a handout. Be sure to encourage the students to read the comments that their group mates write about their opinions.
[NOTE: As an alternative to the above interactive you may choose to have the students work in a "blog" setting. Using the website Edublogs
or another "blog" site of your choice set up a "blog" and have the students create their editorials. Once each student has created an editorial have the students comment on one another's editorial piece so they can complete the group work portion of the class.]
Now remind the students that the U.S. government outlawed monopolies at the end of the nineteenth century. Suggest that much has changed in American society between the 1890s and today. Ask the students what they can think of that has changed. Among many other things, the students may mention an infusion of computer technology. They should recognize that not all of the innovations we take for granted today--the personal computer, the World Wide Web, e-mail, instant text messaging, podcasts, open-source materials, etc.--were widely available 10 years ago, let alone 110 years ago. Ask the students to complete the worksheet entitled Technology and Monopolies , working in groups of two to three. This worksheet asks the students to consider whether or not they think recent technological advances would make it easier and cheaper to start businesses. They are then asked how these advances might affect the ability of individual companies to establish themselves as monopolies. After the students have completed this work, reconvene the class and invite students to share their answers with one another. Lead a discussion in which students consider the influence that the technology available in recent years has had on firms seeking to establish themselves as monopolies. During this discussion, ask the students if they think it is important for the U.S. government to continue to still have a law prohibiting the establishment of monopolies. During this discussion, urge the students to support their opinions thoughtfully.
In this lesson, students have learned about the role that monopolies play in economies. They have learned that monopolies are outlawed by the U.S. government. They have learned why companies would want to be monopolies -- i.e., because monopoly power sometimes enables companies to charge higher rates for their products/services, generating greater profit. In addition to considering their own perspectives on monopolies, the students have thought about their classmates' perspectives. Finally, the students have examined the influence of today's technology has on the ability of companies to establish themselves as monopolies.
Tell the students that in order to demonstrate their knowledge of monopolies, they should develop a radio interview, working in groups of two or three, in which the participants explain the nature of monopolies, the ways in which today's technological infrastructure has influenced the establishment of monopolies and whether or not they think monopolies should be illegal today. The students might particularly enjoy making podcast interviews. If you choose to have them make podcasts, consider using Odeo, an excellent resource. You can link to the Odeo site from here . The Odeo website has very user-friendly directions. Students can even call into Odeo via telephone to create their podcasts. If you would prefer not to use podcasts, you can simply ask the students to develop presentations which they can perform before the class. If you choose to do the activity in this way, ask the students who are not performing to write down one idea they learn from each presentation.
Assign the students to develop a plan to create their own monopoly. Having created their plans, they should also analyze them to determine what effect the plans might have on the greater economy. To begin this step, ask the students, working in groups of two or three, to complete the worksheet entitled Creating a Monopoly. After the students have completed this worksheet, reconvene the class. Invite students to share their answers with one another. During this discussion encourage the students to consider how efforts to create a monopoly might negatively influence the quality of goods and services that would be available to consumers. Urge the students to support their ideas thoughtfully.
“It seems like a very good lesson. I wish I had been able to print the material to use in my classroom.”
“Great lesson. I used this for a group of middle school students. I'm glad I noticed the student's version. I also used the board game Monopoly as an extension. Students grasped the concept more fully after playing the game.” | http://www.econedlink.org/lessons/index.php?lid=686&type=educator | 13 |
14 | Oil and Gas: Characteristics and Occurrence
Oil and Gas Reservoirs
Hydrocarbons and their associated impurities occur in rock formations that are usually buried thousands of feet or metres below the surface. Scientists and engineers often call rock formations that hold hydrocarbons "reservoirs." Oil does not flow in underground rivers or pool up in subterranean lakes, contrary to what some people think. And, as you've learned, gasoline and other refined hydrocarbons do not naturally occur in pockets under the ground, just waiting to be drilled for. Instead, crude oil and natural gas occur in buried rocks and, once produced from a well, companies have to refine the crude oil and process the natural gas into useful products. Further, not every rock can hold hydrocarbons. To serve as an oil and gas reservoir, rocks have to meet several criteria.
(fig. 5.1) :: a pore is a small open space ::
Characteristics of Reservoir Rocks
Nothing looks more solid than a rock. Yet, choose the right rock-say, a piece of sandstone or limestone-and look at it under a microscope. You see many tiny openings or voids. Geologists call these tiny openings "pores" (fig. 5.1). A rock with pores is "porous" and a porous rock has "porosity."
A reservoir rock is also permeable-that is, its pores are connected (fig. 5.2). If hydrocarbons are in the pores of a rock, they must be able to move out of them. Unless hydrocarbons can move from pore to pore, they remain locked in place, unable to flow into a well. A suitable reservoir rock must therefore be porous, permeable, and contain enough hydrocarbons to make it economically feasible for the operating company to drill for and produce them.
(fig. 5.2) :: connected pores give a rock permeability ::
Origin and Accumulation of Oil and Gas
To understand how hydrocarbons get into buried rocks, visualize an ancient sea teeming with vast numbers of living organisms. Some are fishes and other large swimming beasts; others, however, are so small that you cannot see them without a strong magnifying glass or a microscope. Although they are small, they are very abundant. Millions and millions of them live and die daily. It is these tiny and plentiful organisms that many scientists believe gave rise to oil and gas.
When these tiny organisms died millions of years ago, their remains settled to the bottom. Though very small, as thousands of years went by, enormous quantities of this organic sediment accumulated in thick deposits on the seafloor. The organic material mixed with the mud and sand on the bottom. Ultimately, many layers of sediments built up until they became hundreds or thousands of feet (metres) thick.
The tremendous weight of the overlying sediments created great pressure and heat on the deep layers. The heat and pressure changed the deep layers into rock. At the same time, heat, pressure, and other forces changed the dead organic material in the layers into hydrocarbons: crude oil and natural gas.
Meanwhile, geological action created cracks, or faults, in the earth's crust. Earth movement folded layers of rock upward and downward. Molten rock thrusted upward, altering the shape of the surrounding beds. Disturbances in the earth shoved great blocks of land upward, dropped them downward, and moved them sideways. Wind and water eroded formations, earthquakes buried them, and new sediments fell onto them. Land blocked a bay's access to open water, and the resulting inland sea evaporated. Great rivers carried tons of sediment; then dried up and became buried by other rocks. In short, geological forces slowly but constantly altered the very shape of the earth. These alterations in the layers of rock are important because, under the right circumstances, they can trap and store hydrocarbons.
Even while the earth changed, the weight of overlying rocks continued to push downward, forcing hydrocarbons out of their source rocks. Seeping through subsurface cracks and fissures, oozing through small connections between rock grains, the hydrocarbons moved upward. They moved until a subsurface barrier stopped them or until they reached the earth's surface, as they did at Oil Creek. Most of the hydrocarbons, however, did not reach the surface. Instead, they became trapped and stored in a layer of subsurface rock. Today, the oil industry seeks petroleum that was formed and trapped millions of years ago.
A hydrocarbon reservoir has a distinctive shape, or configuration, that prevents the escape of hydrocarbons that migrate into it. Geologists classify reservoir shapes, or traps, into two types: structural traps and stratigraphic traps.
Structural traps form because of a deformation in the rock layer that contains the hydrocarbons. Two examples of structural traps are fault traps and anticlinal traps (fig. 5.3).
(fig. 5.3) :: fault traps and anticlinal traps ::
A fault is a break in the layers of rock. A fault trap occurs when the formations on either side of the fault move. The formations then come to rest in such a way that, when petroleum migrates into one of the formations, it becomes trapped there. Often, an impermeable formation on one side of the fault moves opposite a porous and permeable formation on the other side. The petroleum migrates into the porous and permeable formation. Once there, it cannot get out because the impervious layer at the fault line traps it.
Anticlinal Traps
An anticline is an upward fold in the layers of rock, much like a domed arch in a building. The oil and gas migrate into the folded porous and permeable layer and rise to the top. They cannot escape because of an overlying bed of impermeable rock.
Stratigraphic traps form when other beds seal a reservoir bed or when the permeability changes within the reservoir bed itself. In one stratigraphic trap, a horizontal, impermeable rock layer cuts off, or truncates, an inclined layer of petroleum-bearing rock (fig. 5.4A). Sometimes a petroleum-bearing formation pinches out-that is, an impervious layer cuts it off (fig. 5.4B). Other stratigraphic traps are lens-shaped: impervious layers surround the hydrocarbon-bearing rock (fig. 5.4C). Still another occurs when the porosity and permeability change within the reservoir itself. The upper reaches of the reservoir are nonporous and impermeable; the lower part is porous and permeable and contains hydrocarbons (fig. 5.4D).
(fig. 5.4A) (fig. 5.4B)
(fig. 5.4C) (fig. 5.4D)
Many other traps occur. In a combination trap, for example, more than one kind of trap forms a reservoir. A faulted anticline is an example. Several faults cut across the anticline. In some places, the faults trap oil and gas (fig. 5.5). Another trap is a piercement dome. In this case, a molten substance-salt is a common one-pierced surrounding rock beds. While molten, the moving salt deformed the horizontal beds. Later, the salt cooled and solidified and some of the deformed beds trapped oil and gas (fig. 5.6). Spindletop was formed by a piercement dome.
(fig. 5.5) (fig. 5.6)
From The Primer of Oilwell Drilling, 6th edition Copyright © 2001 Petroleum Extension Service (PETEX®) of The University of Texas at Austin. All rights reserved | http://www.blueridgegroup.com/v4.0/index-1.html | 13 |
18 | In this chapter fluids at rest will be studied. Mass density, weight density, pressure, fluid pressure, buoyancy, and Pascal's principle will be discussed. In the following, the symbol ( ρ ) is pronounced " rho ."
Example 1 : The mass density of steel is 7.8 gr /cm3. A chunk of steel has a volume of 141cm3. Determine (a) its mass in grams and (b) its weight density in N/m3. Solve before looking at the solution. Use horizontal fraction bars.
Solution: (a) Since ρ = M / V ; M = ρV ; M = (7.8 gr / cm3) (141 cm3 ) ; M = 1100 grams.
Before going to Part (b), let's first convert (gr/cm3) to its Metric version (kg/m3). Use horizontal fraction bars.
7.8 gr / cm3 = 7.8 (0.001kg) / (0.01m)3 = 7800 kg/m3.
1 kg is equal to 1000gr. This means that 1 gr is 0.001kg as is used above.
Also, 1 m is 100cm. This means that 1cm is 0.01m. Cubing each results in: 1cm3 = 0.000001m3, as is used above. Now, let's solve Part (b).
(b) D = ρg ; D = [7800 kg /m3] [ 9.8 m/s2] = 76000 N /m3.
Not only should you write Part (b) with horizontal fraction bars, but you should also check the correctness of the units.
Example 2 : A piece of aluminum weighs 31.75N. Determine (a) its mass and (b) its volume if the mass density of aluminum is 2.7gr/cm3.
Solution: (a) w = Mg ; M = w / g ; M = 31.75N / [9.8 m/s2] ; M = 3.2 kg ; M = 3200 grams.
(b) ρ = M / V ; V = M / ρ ; V = 3200gr / [2.7 gr /cm3] = 1200 cm3.
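The arithmetic in Examples 1 and 2 can be checked with a few lines of Python (a sketch of my own; the chapter itself asks only for hand calculation with horizontal fraction bars):

```python
G = 9.8  # m/s^2

def mass_from_weight(weight_newtons):
    return weight_newtons / G                 # M = w / g, in kg

def volume_from_mass(mass_grams, rho_g_per_cm3):
    return mass_grams / rho_g_per_cm3         # V = M / rho, in cm^3

m = mass_from_weight(31.75)                   # about 3.2 kg of aluminum (Example 2a)
v = volume_from_mass(m * 1000, 2.7)           # about 1200 cm^3 (Example 2b)
print(m, v)
```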
Example 3 : The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold that is known to be an alloy of gold and copper has a mass of 7.55kg and a volume of 534 cm3. Calculate the mass percentage of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process. Do your best to solve it by yourself first.
Solution: Two equations can definitely be written down. The sum of masses as well as the sum of volumes are given. The formula M = ρV is applicable to both metals. Mgold = ρgoldVgold and Mcopper = ρcopperVcopper . Look at the following as a system of two equations in two unknowns:
|Mg + Mc = 7550 gr||ρgVg + ρcVc = 7550||19.3 Vg+ 8.9Vc = 7550||19.3 Vg + 8.9Vc =7550|
|Vg + Vc = 534 cm3||Vg + Vc = 534||Vg + Vc = 534||Vg = 534 - Vc|
Substituting for Vg in the first equation yields:
19.3 (534 - Vc) + 8.9 Vc = 7550 ; 10306.2 -10.4Vc = 7550 ; 2756.2 = 10.4Vc ; Vc = 265 cm3.
Since Vg = 534 - Vc ; therefore, Vg = 534 - 265 = 269 cm3.
The masses are: Mg = ρgVg ; Mg = (19.3 gr/cm3) ( 269 cm3 ) = 5190 gr ; Mc = 2360 gr.
The mass percentage of gold in the alloy (7550gr) is Mgold / Malloy = (5190/7550) = 0.687 = 68.7 %
Karat means the number of portions out of 24 portions. [68.7 / 100] = [ x / 24] ; x = 16.5 karat.
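A quick way to verify Example 3 is to solve the same two equations numerically; the following sketch (variable names are mine) reproduces the volumes, the gold mass, and the karat figure:

```python
# M_g + M_c = 7550 g and V_g + V_c = 534 cm^3, with M = rho * V for each metal
rho_gold, rho_copper = 19.3, 8.9              # g/cm^3
total_mass, total_volume = 7550.0, 534.0

# Substitute V_g = total_volume - V_c into rho_gold*V_g + rho_copper*V_c = total_mass
v_copper = (rho_gold * total_volume - total_mass) / (rho_gold - rho_copper)
v_gold = total_volume - v_copper
m_gold = rho_gold * v_gold

print(v_copper, v_gold, m_gold)               # ~265 cm^3, ~269 cm^3, ~5190 g
print(24 * m_gold / total_mass)               # ~16.5 karat (68.7% gold by mass)
```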
Pressure is defined as force per unit area. Let's use lower case p for pressure; therefore, p = F / A. The SI unit for pressure is N/m2 called " Pascal." The American unit is lbf / ft2. Two useful commercial units are: kgf / cm2 and lbf / in2 or psi.
Example 4: Calculate the average pressure that a 120-lbf table exerts on the floor by each of its four legs if the cross-sectional area of each leg is 1.5 in2.
Solution: p = F / A ; p = 120lbf / (4x 1.5 in2) = 20 lbf / in2 or 20 psi.
Example 5: (a) Calculate the weight of a 102-gram mass piece of metal. If this metal piece is rolled to a square sheet that is 1.0m on each side, and then spread over the same size (1.0m x 1.0m ) table, (b) what pressure would it exert on the square table?
Solution: (a) w = Mg ; w = (0.102 kg)(9.8 m/s2) ; w = 1.0 N
(b) p = F / A ; p = 1.0N / (1.0m x 1.0m) ; p = 1.0 N/m2 ; p = 1.0 Pascal (1.0Pa)
As you may have noticed, 1 Pa is a small amount of pressure. The atmospheric pressure is 101,300 Pa. We may say that the atmospheric pressure is roughly 100,000 Pa, or 100kPa We will calculate this later.
Fluid Pressure: Both liquids and gases are considered fluids. The study of fluids at rest is called Fluid Statics. The pressure in stationary fluids depends on weight density, D, of the fluid and depth, h, at which pressure is to be calculated. Of course, as we go deeper in a fluid, its density increases slightly because at lower points, there are more layers of fluid pressing down causing the fluid to be denser. For liquids, the variation of density with depth is very small for relatively small depths and may be neglected. This is because of the fact that liquids are incompressible. For gases, the density increase with depth becomes significant and may not be neglected. Gases are called compressible fluids. If we assume that the density of a fluid remains fairly constant for relatively small depths, the formula for fluid pressure my be written as:
p = hD or p = h ρg
where ρ is the mass density and D is the weight density of the fluid.
Example 6: Calculate (a) the pressure due to just water at a depth of 15.0m below lake surface. (b) What is the total pressure at that depth if the atmospheric pressure is 101kPa? (c) Also find the total external force on a spherical research chamber which external diameter is 5.0m. Water has a mass density of ρ = 1000 kg/m3.
Solution: (a) p = hD ; p = h ρg ; p = (15.0m)(1000 kg/m3)(9.8 m/s2) = 150,000 N /m2 or Pa.
(b) [p total] external = p liquid + p atmosphere ; [p total] external = 150,000Pa + 101,000Pa = 250,000Pa.
(c) p = F / A ; solving for F, yields: F = pA ; Fexternal = (250,000 N/m2)(4π)(2.5m)2 = 20,000,000 N.
F = 2.0x10^7 N (How many millions?!)
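The chain of calculations in Example 6 can also be written as a short script (a sketch of my own; 4πR^2 is the outside surface area of the spherical chamber):

```python
import math

RHO_WATER = 1000.0     # kg/m^3
G = 9.8                # m/s^2
P_ATM = 101_000.0      # Pa

def gauge_pressure(depth_m, rho=RHO_WATER):
    return depth_m * rho * G                  # p = h * rho * g

p_water = gauge_pressure(15.0)                # ~150,000 Pa (Example 6a)
p_total = p_water + P_ATM                     # ~250,000 Pa (Example 6b)
force = p_total * 4 * math.pi * 2.5 ** 2      # sphere of 5.0 m diameter (Example 6c)
print(p_water, p_total, force)                # force is about 2.0e7 N
```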
Chapter 11 Test Yourself 1:
1) Average mass density, ρ is defined as (a) mass of unit volume (b) mass per unit volume (c) a &b. click here
2) Average weight density, D is defined as (a) weight per unit volume (b) mass of unit volume times g (c) both a & b.
3) D = ρg is correct because (a) w = Mg (b) D is weight density and ρ is mass density (c) both a & b.
4) 4.0cm3 of substance A has a mass of 33.0grams, and 8.0cm3 of substance B has a mass of 56.0 grams. (a) A is denser than B (b) B is denser than A (c) Both A and B have the same density. click here
Problem: 1gram was originally defined to be the mass of 1cm3 of pure water. Answer the following question by first doing the calculations. Make sure to write down neatly with horizontal fraction bars. click here
5) On this basis, one suitable unit for the mass density of water is (a) 1 cm3/gr (b) 1 gr/cm3 (c) both a & b.
6) We know that 1kg = 1000gr. We may say that (a) 1gr = (1/1000)kg (b) 1gr = 0.001kg (c) both a & b.
7) We know that 1m = 100cm. We may say that (a) 1m3 = 100cm3 (b) 1m3 = 10000cm3 (c) 1m3 = 1000,000cm3.
8) We know that 1cm = 0.01m. We may write (a) 1cm3 = 0.000001m3 (b) 1cm3 = 0.001m3 (c) 1cm3 = 0.01m3.
9) Converting gr/cm3 to kg/m3 yields: (a)1gr/cm3 = 1000 kg/m3 (b)1gr/cm3 = 100 kg/m3 (c)1gr/cm3 = 10 kg/m3.
10) From Q9, the mass density of water is also (a) 1000 kg/m3 (b) 1 ton/m3, because 1ton=1000kg (c) both a & b.
11) Aluminum is 2.7 times denser than water. Since ρwater = 1000kg/m3 ; therefore, ρAlum. = (a) 2700kg/m3 (b) 27kg/m3 (c) 27000kg/m3. click here
12) Mercury has a mass density of 13.6 gr/cm3. In Metric units (kg/m3), its density is (a) 1360 kg/m3 (b) 13600 kg/m3 (c) 0.00136kg/m3.
13) The weight density of water is (a) 9.8 kg/m3 (b) 9800kg/m3 (c) 9800N/m3. click here
14) The volume of a piece of copper is 0.00247m3. Knowing that copper is 8.9 times denser than water, first find the mass density of copper in Metric units and then find the mass of the copper piece. Ans. : (a) 44kg (b) 22kg (c) 16kg.
Problem: The weight of a gold sphere is 1.26N. The mass density of gold is ρgold = 19300kg/m3.
15) The weight density, D, of gold is (a) 1970 N/m3 (b) 189000 N/m3 (c) 100,000 N/m3.
16) The volume of the gold sphere is (a) 6.66x10-6m3 (b) 6.66cm3 (c) both a & b. click here
17) The radius of the gold sphere is (a) 1.167cm (b) 0.9523cm (c) 2.209cm.
18) Pressure is defined as (a) force times area (b) force per unit area (c) force per length.
19) The Metric unit for pressure is (a) N/m3 (b) N/cm3 (c) N/m2. click here
20) Pascal is the same thing as (a) lbf / ft2 (b) N/m2 (c) lbf / in2.
21) psi is (a) lbf / ft2 (b) N/m2 (c) lbm / in2 (d) none of a, b, or c.
22) A solid brick may be placed on a flat surface on three different sides that have three different surface areas. To create the greatest pressure it must be placed on its (a) largest side (b) smallest side (c) middle-size side.
Problem: 113 grams is about 4.00 ounces. A 102-gram mass is 0.102kg. The weight of a 0.102kg mass is 1.00N. Verify this weight. If a 102-gram piece of, say, copper that weighs 1.00N is hammered or rolled to a flat sheet (1.00m by 1.00m), how thin would that be? Maybe one tenth of 1 mm? Note that a (1m) by (1m) rectangular sheet of metal may be viewed as a rectangular box whose height or thickness is very small, like a sheet of paper. If you place your hand under such a thin sheet of copper, would you hardly feel any pressure? Answer the following questions:
23) The weight density of copper that is 8.9 times denser than water is (a) 8900N/m2 (b) 1000N/m3 (c) 87220N/m3.
24) The volume of a 0.102kg or 1.00N piece (sheet) of copper is (a) 1.15x10^-5 m3 (b) 1.15x10^5 m3 (c) 8900m3.
25) For a (1m)(1m) = 1m2 base area of the sheet, its height or thickness is (a) 1.15x10^-5 m (b) 1.15x10^5 m (c) 8900m.
26) The small height (thickness) in Question 25 is (a) 0.0115mm (b) 0.0115cm (c) 890cm.
27) The pressure (force / area) or (weight / area) that the above sheet generates is (a) 1N/1m2 (b) 1 Pascal (c) both a & b.
28) Compared to pressures in water pipes or car tires, 1 Pascal of pressure is (a) a great pressure (b) a medium pressure (c) a quite small pressure.
29) The atmospheric pressure is roughly (a) 100Pa (b) 100,000 Pa (c) 100kPa (d) both b & c.
30) The atmospheric pressure is (a) 14.7 psi (b) 1.0 kgf/m2 (c) 1.0 kgf/cm2 (d) a & c.
Gravity pulls the air molecules around the Earth toward the Earth's center. This makes the air layers denser and denser as we move from outer space toward the Earth's surface. It is the weight of the atmosphere that causes the atmospheric pressure. The depth of the atmosphere is about 60 miles. If we go 60 miles above the Earth's surface, air molecules become so scarce that we might travel one meter and not collide with even a single molecule (a good vacuum!). Vacuum establishes the basis for absolute zero pressure. Any gas pressure measured with respect to vacuum is called "absolute pressure."
Calculation of the Atmospheric Pressure:
The trick to calculate the atmospheric pressure is to place a 1-m long test tube filled with mercury inverted over a pot of mercury such that air can not get in the tube. Torricelli ( Italian) was the first to try this. The figure is shown above. In doing this, we will see that the mercury level drops to 76.0cm or 30.0 inches if the experiment is performed at ocean level. The top of the tube lacks air and does not build up air pressure. This device acts as a balance. If it is taken to the top of a high mountain where there is a smaller number of air layers above one's head, the mercury level goes down. This device can even be calibrated to measure elevation for us based on air pressure.
The pressure that the 76.0-cm column of mercury generates is equal the pressure that the same diameter column of air generates but with a length of 60 miles ( from the Earth's surface all the way up to the no-air region).
Using the formula for pressure ( p = F / A ), the pressure of the mercury column or the atmospheric pressure can be calculated as follows:
patm = the mercury weight / the tube cross-sectional Area. ( Write using horiz. fraction bars).
patm = (VHg)(DHg) / A = (A)(hHg)(DHg) / A = hHgDHg . Note that the tube's volume = VHg. = (base area) (height) = (A)(hHg.).
patm = hHgDHg (This further verifies the formula for pressure in a fluid).
In Torricelli's experiment, hHg = 76.0cm and DHg = 13.6 grf /cm3 ; therefore ,
patm = ( 76.0cm )( 13.6 grf /cm3 ) = 1033.6 grf / cm2
Converting grf to kgf results in patm = 1.0336 kgf / cm2
To 2 significant figures, this result is a coincidence: patm = 1.0 kgf /cm2.
If you softly place a 2.2 lbf (or 1.0 kgf ) weight over your finger nail ( A = 1 cm2 almost), you will experience a pressure of 1.0 kgf / cm2 (somewhat painful) that is equivalent to the atmospheric pressure. The atmosphere is pressing with a force of 1 kgf = 9.8 N on every cm2 of our bodies and we are used to it. This pressure acts from all directions perpendicular to our bodies surfaces at any point. An astronaut working outside a space station must be in a very strong suit that can hold 1 atmosphere of pressure inside compared to the zero pressure outside and not explode.
Example 7: Convert the atmospheric pressure from 1.0336 kgf / cm2 to lbf / in2 or psi.
Solution: 1 kgf = 2.2 lbf and 1 in. = 2.54 cm. Convert and show that patm = 14.7 psi.
Example 8: Convert the atmospheric pressure from 1.0336 kgf / cm2 to N / m2 or Pascals (Pa).
Solution: 1 kgf = 9.8N and 1 m = 100 cm. Convert and show that patm = 101,300 Pa.
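A sketch of the Torricelli calculation and the unit conversions in Examples 7 and 8 (the conversion factors are the ones quoted in the text):

```python
h_mercury_cm = 76.0            # mercury column height at ocean level
d_mercury = 13.6               # weight density of mercury, grf/cm^3

p_atm_grf_cm2 = h_mercury_cm * d_mercury      # 1033.6 grf/cm^2
p_atm_kgf_cm2 = p_atm_grf_cm2 / 1000.0        # ~1.03 kgf/cm^2

# Example 7: kgf/cm^2 -> psi, using 1 kgf = 2.2 lbf and 1 in. = 2.54 cm
p_atm_psi = p_atm_kgf_cm2 * 2.2 * 2.54 ** 2   # ~14.7 psi

# Example 8: kgf/cm^2 -> Pa, using 1 kgf = 9.8 N and 1 m^2 = 10,000 cm^2
p_atm_pa = p_atm_kgf_cm2 * 9.8 * 10_000       # ~101,300 Pa

print(p_atm_kgf_cm2, p_atm_psi, p_atm_pa)
```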
Example 9: The surface area of an average size person is almost 1m2. Calculate the total force that the atmosphere exerts on such person.
Solution: p = F / A ; F = pA ; F = ( 101,300 N/m2 )( 1 m2 ) = 100,000 N.
F = ( 1.0336 kgf / cm2 )( 10,000 cm2 ) = 10,000 kgf = 10 ton force.
Example 10: A submarine with a total outer area of 2200m2 is at a depth of 65.0m below ocean surface. The density of ocean water is 1030 kg/m3. Calculate (a) the pressure due to water at that depth, (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2.
Solution: (a) p = hD ; p = h ρg ; p = (65.0m)(1030 kg/m3)(9.81 m/s2) = 657,000 N /m2 or Pa.
(b) [p total] external = p liquid + p atmosphere ; [ p total ]external = 657,000Pa + 101,000Pa = 758,000Pa.
(c) p = F / A ; solving for F yields: F = pA ; F = (758,000 N/m2)(2200 m2) = 1.67x10^9 N.
Buoyancy, Archimedes' Principle:
When a non-dissolving object is submerged in a fluid (liquid or gas), the fluid exerts an upward force onto the object that is called the buoyancy force (B). The magnitude of the buoyancy force is equal to the weight of displaced fluid. The formula for buoyancy is therefore,
B = Vobject Dfluid
Example 11: Calculate the downward force necessary to keep a 1.0-lbf basketball submerged under water knowing that its diameter is 1.0ft. The American unit for the weight density of water is Dwater = 62.4 lbf /ft3.
Solution: The volume of the basketball (sphere) is: Vobject = (4/3) π R3 = (4/3)(3.14)(0.50 ft)3 = 0.523 ft3.
The upward force (buoyancy) on the basketball is: B = Vobject Dfluid = (0.523 ft3)(62.4 lbf / ft3) = 33 lbf .
Water pushes the basketball up with a force of magnitude 33 lbf while gravity pulls it down with a force of 1.0 lbf (its weight); therefore, a downward force of 32 lbf is needed to keep the basketball fully under water. The force diagram is shown below:
A Good Link to Try: http://www.mhhe.com/physsci/physical/giambattista/fluids/fluids.html .
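Example 11 can be checked with a few lines of code (a sketch of my own, in the same American units used above):

```python
import math

D_WATER = 62.4                                # weight density of water, lbf/ft^3

def buoyancy(volume_ft3, fluid_weight_density=D_WATER):
    return volume_ft3 * fluid_weight_density  # B = V_object * D_fluid

v_ball = (4.0 / 3.0) * math.pi * 0.50 ** 3    # 1.0-ft diameter sphere, ~0.52 ft^3
b = buoyancy(v_ball)                          # ~33 lbf upward
hold_down = b - 1.0                           # the ball itself weighs 1.0 lbf
print(v_ball, b, hold_down)                   # ~32 lbf must push the ball down
```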
Example12: Calculate the necessary upward force to keep a (5.0cm)(4.0cm)(2.0cm)-rectangular aluminum bar from sinking when submerged in water knowing that Dwater = 1 grf / cm3 and DAl = 2.7 grf / cm3.
Solution: The volume of the bar is Vobject = (5.0cm)(4.0cm)(2.0cm) = 40cm3.
The buoyancy force is: B = Vobject Dfluid = (40cm3)(1 grf / cm3) = 40grf.
The weight of the bar in air is w = Vobject Dobject = (40cm3)(2.7 grf / cm3) = 110grf.
Water pushes the bar up with a force of magnitude 40. grf while gravity pulls it down with 110grf ; therefore, an upward force of 70 grf is needed to keep the bar fully under water and to avoid it from sinking. The force diagram is shown below:
Example13: A boat has a volume of 40.0m3 and a mass of 2.00 tons. What load will push 75.0% of its volume into water? Each metric ton is 1000 kg. Let g = 9.81 m/s2.
Solution: Vobj = 0.750 x 40.0m3 = 30.0m3.
B =Vobject Dfluid = (30.0m3)(1000 kg /m3)(9.81 m/s2) = 294,000N.
w = Mg = (2.00 x 10^3 kg)(9.81 m/s2) = 19600N.
F = B - w = 294,000N - 19600N = 274,000N.
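Example 13 in code form (an illustrative sketch; metric units as in the text):

```python
G = 9.81                   # m/s^2
RHO_WATER = 1000.0         # kg/m^3

submerged_volume = 0.750 * 40.0                 # 75% of the 40.0 m^3 hull
buoyancy = submerged_volume * RHO_WATER * G     # ~294,000 N
boat_weight = 2.00e3 * G                        # 2.00 metric tons expressed in newtons
max_load = buoyancy - boat_weight               # ~274,000 N
print(buoyancy, boat_weight, max_load)
```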
An important and useful principle in fluid statics is the " Pascal's Principle." Its statement is as follows: The pressure imposed at any point of a confined fluid transmits itself to all points of that fluid without significant losses.
One application of the Pascal's principle is the mechanism in hydraulic jacks. As shown in the figure, a small force, f, applied to a small piston of area, a ,imposes a pressure onto the liquid (oil) equal to f/a. This pressure transmits throughout the oil as well as onto the internal boundaries of the jack specially under the big piston. On the big piston, the big load F, pushes down over the big area A. This pressure is F/A . The two pressures must be equal, according to Pascal's principle. We may write:
f /a = F/A
For balance, the force that pushes down on the big piston is much greater in magnitude than the force that pushes down on the small piston; however, the small piston must go through a big displacement in order for the big piston to go through a small displacement.
Example 14: In a hydraulic jack the diameters of the small and big pistons are 2.00cm and 26.00cm, respectively. A truck that weighs 33800N is to be lifted by the big piston. Find (a) the force that has to push the smaller piston down, and (b) the pressure under each piston.
Solution: (a) a = π r2 = π (1.00cm)2 = 3.14 cm2 ; A = π R2 = π (13.00cm)2 = 530.66 cm2
f / a = F / A ; f / 3.14cm2 = 33800N / 530.66cm2 ; f = 200N
(b) p = f /a = 63.7 N/cm2 ; p = F / A = 63.7 N/cm2.
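Pascal's principle in Example 14 amounts to scaling the load by the square of the piston-diameter ratio. The short Python sketch below is not part of the original text and uses illustrative names only.

```python
import math

def required_effort(load_n, small_diam_cm, big_diam_cm):
    """From f/a = F/A: f = F * (a/A) = F * (d_small/d_big)**2."""
    return load_n * (small_diam_cm / big_diam_cm) ** 2

f = required_effort(33_800.0, 2.00, 26.00)   # ~200 N on the small piston, part (a)
a = math.pi * (2.00 / 2.0) ** 2              # small-piston area, cm^2
print(round(f), round(f / a, 1))             # shared pressure ~63.7 N/cm^2, part (b)
```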
Chapter 11 Test Yourself 2:
1) In Torricelli's experiment of measuring the atmospheric pressure at the ocean level, the height of mercury in the tube is (a) 76.0cm (b) 7.6cm (c) 760mm (d) a & c.
2) The space at the top of the tube (above the mercury) in Torricelli's experiment is (a) at regular pressure (b) almost vacuum (c) at a very small amount of mercury vapor pressure, because due to the vacuum a slight amount of mercury evaporates and creates a minimal mercury vapor pressure (d) both b & c.
3) The pressure inside a stationary fluid (liquid or gas) depends on (a) mass density and depth (b) weight density and depth (c) depth only, regardless of the fluid type.
4) A pressure gauge placed 50.0m below ocean surface measures (a) a higher (b) a lower (c) the same pressure compared to a gauge that is placed at the same depth in a lake.
5) The actual pressure at a certain depth in an ocean on a planet that has an atmosphere is equal to (a) just the liquid pressure (b) the liquid pressure + the atmospheric pressure (c) the atmospheric pressure only.
6) The formula that calculates the pressure at a certain depth in a fluid is (a) p = h ρ (b) p = hD (c) p = h ρg (d) both b & c.
Problem: Mercury is a liquid metal that is 13.6 times denser than water. Answer the following questions:
7) The mass density of mercury is (a) 13600 kg/m3 (b) 13.6 ton / m3 (c) both a & b.
8) The weight density of mercury is (a) 130,000N/m3 (b) 1400N/m3 (c) 20,100N/m3.
9) In a mercury tank, the liquid pressure at a depth of 2.4m below the mercury surface is (a) 213000N/m2 (b) 312000N/m3 (c) 312000N/m2.
10) In the previous question, the total pressure at that depth is (a) 412000 N/m3 (b) 212000N/m2 (c) 412000 N/m2.
Problem: This problem shows the effect of depth on liquid pressure. A long, vertical, and narrow steel pipe of 1.0cm in diameter is connected to a spherical barrel of internal diameter 1.00m. The barrel is also made of steel and can withstand an internal total force of 400,000N. The barrel is gradually filled with water through the thin and long pipe on its top while allowing the air out of the tank. When the spherical part of the tank is full of water, further filling makes the water level in the thin pipe go up fast (Refer to Problem 7 at the very end of this chapter for a suitable figure). As the water level goes up, it quickly builds up pressure, p = hD. If the pipe is 15.5m long, for example, answer Questions 11 through 14.
11) The liquid pressure at the center of the barrel is (a) 151900N/m2 (b) 156800N/m2 (c) 16000kg/m2.
12) The total internal area of the barrel is (a) 0.785m2. (b) 3.141m2. (c) 1.57m2.
13) The total force on the internal surface of the sphere is (a) 246000N (b) 123000N (c) 492000N.
14) Based on the results in the previous question, the barrel (a) withstands the pressure (b) does not withstand the pressure.
15) The liquid pressure at a depth of 10m ( 33ft ) below water on this planet is roughly (a) 200,000Pa (b) 100,000Pa (c) 300,000Pa.
16) Since the atmospheric pressure is also roughly 100,000 Pa, we may say that every 10m of water depth or height is equivalent to (a) 1 atmosphere of pressure (b) 2 atmospheres of pressure (c) 3 atmospheres of pressure.
17) In Torricelli's experiment, if the formula P = hD is used to calculate the pressure caused by 0.760m of mercury, the value of atmospheric pressure becomes (a) 9800 N/m2 (b) 98000 N/m2 (c) 101,000 N/m2. Perform the calculation. ρmercury = 13600kg/m3.
18) To convert 101,000 N/m2 or the atmospheric pressure to lbf /in2 or psi, one may replace (N) by 0.224 lbf and (m) by 39.37 in. The result of the conversion is (a) 25.4psi (b) 14.7psi (c) 16.2psi. Perform the calculation.
19) To convert 101,000 N/m2 or the atmospheric pressure to kgf /cm2, one may replace (N) by 0.102 kgf and (m) by 100cm. The result of the conversion is (a) 1.0 kgf /cm2 (b) 2.0 kgf /cm2 (c) 3.0kgf /cm2. Perform the calculation.
20) Due to the atmospheric pressure, every cm2 of our bodies is under a force of (a) 1.0kgf (b) 9.8N (c) both a & b. click here
21) An example of an area approximately close to 1cm2 is the size of (a) a finger nail (b) a quarter (c) a dollar coin.
22) The formula that calculates the area of a sphere is Asphere = (a) πr2 (b) 2πr2 (c) 4πr2.
23) The force due to liquid pressure on a 5.0m diameter spherical chamber that is at a depth of 40.0m below ocean surface is (a) 3.14x106N (b) 3.08x107N (c) 6.16x106N.
24) Buoyancy for a submerged object in a non-dissolving liquid is (a) the upward force that the liquid exerts on that object (b) equal to the mass of the displaced fluid (c) equal to the weight of the displaced fluid (d) a & c.
25) The direction of the buoyancy force is (a) always downward (b) always upward (c) sometimes upward and sometimes downward.
26) The buoyancy on a cube 0.080m on each side and fully submerged in water is (a) 5.02N (b) 63N (c) 0.512N.
27) If the cube in the previous question is made of aluminum ( ρ = 2700 kg/m3), it has a weight of (a) 13.5N (b) 170N (c) 0.189N.
28) The force necessary to keep the cube in the previous question from sinking in water is (a) 107N (b) 8.5N (c) 7.0N.
Problem: A (12.0m)(50.0m)(8.0m-height)-barge has an empty mass of 1250 tons. For safety reasons, to prevent it from sinking, only 6.0m of its height is allowed to go under water. Answer the following questions:
29) The total volume of the barge is (a) 480m3 (b) 60m3 (c) 4800m3.
30) The effective (safe) volume of the barge that can be submerged in water is (a) 3600m3 (b) 50m3 (c) 360m3.
31) The buoyancy force on the barge when submerged in water to its safe height is (a) 1.83x106N (b) 5.43x108N (c) 3.53x107N.
32) The safe load that the barge can carry is (a) Buoyancy + its empty weight (b) Buoyancy - its empty weight (c) Buoyancy - its volume.
33) The mass of the barge in kg is (a) 1.25x103 ton (b) 1.25x106 kg (c) a & b.
34) The weight of the barge in N is (a) 1.23x107 N (b) 2.26x107 N (c) neither a nor b.
35) The safe load in N that the barge can carry is (a) 3.41x107N (b) 2.3x107N (c) 2.53x107N.
36) The safe load in metric tons is (a) 1370 ton (b) 2350 ton (c) 5000 ton.
37) According to Pascal's principle, a pressure imposed (a) on any fluid (b) on a confined fluid (c) on a mono-atomic fluid, transmits itself to all points of that fluid without any significant loss.
Problem: In a hydraulic jack the diameter of the big cylinder is 10.0 times the diameter of the small cylinder. Answer the following questions:
38) The ratio of the areas (of the big piston to the small piston) is (a) 10.0 (b) 100 (c) 50.0.
39) The ratio of the applied forces (on the small piston to that of the big piston) is (a) 1/100 (b) 1/10 (c) 1/25.
40) If the applied force to the small piston is 147.0N, the mass of the car it can lift is (a) 1200kg (b) 3500kg (c) 1500kg.
1) The mass density of mercury is 13.6 gr /cm3. A cylindrical vessel that has a height of 8.00cm and a base radius of 4.00cm is filled with mercury. Find (a) the volume of the vessel. Calculate the mass of mercury in (b) grams, (c) kg, and (d) find its weight both in N and kgf. Note that 1kgf = 9.81N.
2) A piece of copper weighs 49N. Determine (a) its mass and (b) its volume. The mass density of copper is 8.9gr/cm3.
3) The mass densities of gold and copper are 19.3 gr/cm3 and 8.9 gr/cm3, respectively. A piece of gold necklace has a mass of 51.0 grams and a volume of 3.50 cm3. Calculate (a) the mass percentage and (b) the karat of gold in the alloy assuming that the volume of the alloy is equal to the volume of copper plus the volume of gold. In other words, no volume is lost or gained as a result of the alloying process.
4) Calculate the average pressure that a 32-ton ten-wheeler truck exerts on the ground by each of its ten tires if the contact area of each tire with the ground is 750 cm2. 1 ton = 1000kg. Express your answers in (a) Pascal, (b) kgf/cm2, and (c) psi.
5) Calculate (a) the water pressure at a depth of 22.0m below ocean surface. (b) What is the total pressure at that depth if the atmospheric pressure is 101,300Pa? (c) Find the total external force on a shark that has an external total surface area of 32.8 ft2. Ocean water has a mass density of ρ = 1030 kg/m3.
6) A submarine with a total outer area of 1720m2 is at a depth of 33.0m below ocean surface. The mass density of ocean water is 1025 kg/m3. Calculate (a) the pressure due to water at that depth. (b) the total external pressure at that depth, and (c) the total external force on it. Let g = 9.81 m/s2.
7) In the figure shown, calculate the liquid pressure at the center of the barrel if the narrow pipe is filled up to (a) Point A, (b) Point B, and (c) Point C.
Using each pressure you find (in Parts a, b, and c) as the average pressure inside the barrel, calculate (d) the corresponding internal force on the barrel in each case.
If it takes 4.00x107N for the barrel to rupture, (e) at what height of water in the pipe will that happen?
A sphere = 4πR2 and g = 9.81m/s2.
8) In problem 7, why is it not necessary to add the atmospheric pressure to the pressure you find for each case?
9) A volleyball has a diameter of 25.0cm and weighs 2.0N. Find (a) its volume. What downward force can keep it submerged in a type of alcohol that has a mass density of 834 kg/m3 (b) in Newtons and (c) lb-force? Vsphere=(4/3)πR3.
10) What upward force is needed to keep a 560cm3 solid piece of aluminum completely under water, preventing it from sinking, (a) in Newtons, and (b) in lbf? The mass density of aluminum is 2700kg/m3.
11) A boat has a volume of 127m3 and weighs 7.0 x104N. For safety, no more than 67.0% of its volume should be in water. What maximum load (a) in Newtons, (b) in kgf, (c) in ton-force, and (d) in lbf can be put in it?
1) 402cm3, 5470grams, 5.47kg, 53.7N & 5.47kgf 2) 5.0kg , 562 cm3
3) 72.3% , 17.4 karat
4) 420 kPa, 4.3 kgf/cm2, 61 psi
5) 222kPa, 323kPa, 969 kN
6) 332kPa, 433kPa, 7.45x108N
7) 176kPa, 225kPa, 255kPa, 2.00x107N, 2.55x107N, 2.88x107N,
36.1m above the barrel's center
8) For students to answer 9) 8.18x10-3m3, 64.9N, 14.6 lbf
10) 9.3N, 2.1 lbf 11) 765000 N, 78000 kgf, 78 ton-force, 170,000 lbf | http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter11.htm | 13 |
25 | Inequality and equity
Equity means fairness or evenness, and achieving it is considered to be an economic objective. Despite the general recognition of the desirability of fairness, it is often regarded as too normative a concept given that it is difficult to define and measure. However, for most economists, equity relates to how fairly income and opportunity are distributed between different groups in society.
The opposite of equity is inequality, and this can arise in two main ways:
Inequality of outcome
Inequality of outcome from economic transactions occurs when some individuals gain much more than others from an economic transaction. For example, individuals who sell their labour to a single buyer, a monopsonist, may receive a much lower wage than those who sell their labour to a firm in a very competitive market. Differences in income are an important type of inequality of outcome.
Inequality of opportunity
Inequality of opportunity occurs when individuals are denied access to institutions or employment, which limits their ability to benefit from living in a market economy. For example, children from poor homes may be denied access to high quality education, which limits their ability to achieve high levels of income in the future.
Does inequality serve a purpose?
Market economies rely on the price mechanism to allocate resources. This means that economic resources are allocated through prices, which reflect demand and supply and which operate via incentives. For example, rising wages act as an incentive for labour to become more employable, and provide a reward for those that do. Inequality, therefore, acts as an incentive to improve and to specialise in producing those goods, services, and resources that command the highest reward.
However, critics of unregulated market economies question the need for the vast differences in income that exist in the UK and many other economies, arguing that significantly smaller differences would create a sufficient incentive to reward effort, ability, and wealth creation. For example, an average wage of £5,000 per week for a professional footballer would be more than sufficient to encourage gifted young footballers to want to become professional players. This compares with wages of over £100,000 per week for the best players.
Measuring inequality of income
The two main methods for measuring inequality are the Lorenz curve and the Gini index.
The Lorenz curve
A Lorenz curve shows the % of income earned by a given % of the population.
A ‘perfect’ income distribution would be one where each % received the same % of income.
Perfect equality would be, for example, where 60% of the population gain 60% of national income. In the above Lorenz curve, 60% of the population gain only 20% of the income, hence the curve diverges from the line of perfect equality of income.
The further the Lorenz curve is from the 45 degree line, the less equal is the distribution of income.
Changes in the position of the Lorenz curve indicate changes in the distribution of income. In this example, the curve for 2010 is further away from the line of equal distribution than the curve for 1990, implying a wider distribution of income.
The Gini co-efficient and index
The Gini co-efficient or index is a mathematical device used to compare income distributions over time and between economies. The Gini co-efficient can be used in conjunction with the Lorenz curve. It is calculated by comparing the area between the 45-degree line and the Lorenz curve with the total area under the 45-degree line. In terms of the Gini index, the closer the number is to 100, the greater the degree of inequality.
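To make the calculation concrete, the sketch below (not part of the original text) computes a Gini index on the 0-100 scale from a list of incomes, using a standard closed-form expression for sorted data; the sample incomes are purely illustrative.

```python
def gini_index(incomes):
    """Gini index on a 0-100 scale: 0 = perfect equality, 100 = maximal inequality."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    gini = (2.0 * weighted) / (n * total) - (n + 1.0) / n   # closed form for sorted data
    return 100.0 * gini

print(round(gini_index([12_000, 18_000, 25_000, 40_000, 105_000])))   # ~42 for this sample
```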
Although inequality has steadily increased over the last 30 years, the double dip recession had a greater impact on middle and higher income levels, and the Index fell from 36 to 34 between 2009 and 2011.
See also: Gini index
Equity and efficiency
When markets are free from imperfections, such as information failure and externalities, resources will be allocated in such a way that efficiency will be achieved. In attempting to achieve equity, governments may intervene and distort the workings of the market so that while equity is gained, efficiency is lost. For example, welfare payments help narrow the gap between rich and poor, but they may create moral hazard and produce a disincentive effect, so that welfare recipients remain dependent on welfare payments into the long run. This is known as the equity-efficiency trade-off.
Causes of inequality of income
There are several possible reasons for the widening gap between rich and poor in the UK:
The increased labour market participation of married females, greatly adding to the incomes of married-couple households.
Single parents, who constitute a large share of the lowest quintile, have done proportionately badly as benefits have not kept pace with earnings - benefits are linked to inflation, not to average earnings.
The same is true for the increasing numbers of low paid pensioners.
The wages of skilled workers have risen in comparison with unskilled.
Trade union power has been eroded which, along with the abolition of Wage Councils in 1993, has meant that protection for low paid has decreased.
The rise of the ‘unofficial labour market’ means that there appears to be a growing number of low paid immigrant workers, who work for cash and are paid much less than the national minimum wage.
A reduction in the level of progressiveness of the tax and benefits system, which occurred from the early 1980s. Prior to this, top marginal tax rates had a considerable re-distributive effect.
A rise of share ownership, and increasingly profitable performance of the stock market, which has increased incomes for shareholders.
Given that the general trend is for house prices and shares prices to be greater than the general price level, owners of property and financial assets, such as shares, have generally done much better than non-owners. Rising house and share prices have increased personal wealth which can be translated into spending via equity withdrawal.
A rapid increase in executive pay, often referred to as elite compensation. However, it can be argued that high pay levels are necessary to avoid a Lemons Problem. Those subscribing to this view argue that unless executive pay rates are set above the market rate, the recruitment process would be clogged-up with poor quality applicants.
An increasingly flexible labour market, with more workers being employed part-time, as opposed to full time, would help increase income inequalities.
See: income gap
See also: policies to reduce inequality and poverty | http://economicsonline.co.uk/Managing_the_economy/Inequality_and_equity.html | 13
20 | 11. Investigating the shapes of graphs
It is a useful skill to be able to draw an accurate (representative) sketch graph of a function of a variable. It can aid in the understanding of a topic, and moreover, it can aid those who might find the mental envisaging of some of the more complex functions very difficult.
Often one refers to the "vertex" of a quadratic function, but what is this?
The vertex is the point where the graph changes direction, but with the new skills of differentiation this can be generalised (rather helpfully):
A stationary point is a point on the graph of y = f(x) at which the gradient, dy/dx, is equal to zero.
This is simple to explain in words. One is basically finding all values of "x" (and hence the coordinates of the corresponding points, if it is required) of the places on the graph where the gradient is equal to zero.
First one must calculate the derivative, such that one is able to calculate the value of "x" for which this is zero, and hence the gradient is zero.
Hence, one now uses the original function to obtain the "y" value, and thence the coordinate of the point:
Hence there is a stationary point, or vertex at (-1, 2).
One can check this using the rules about the transformation of graphs, along with the completion of the square technique.
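The calculation above can be reproduced with a computer algebra system. The expression for the curve itself is not reproduced in this text, so the sketch below assumes, purely for illustration, the quadratic y = x^2 + 2x + 3, which is consistent with the vertex (-1, 2) found above.

```python
import sympy as sp

x = sp.symbols('x')
y = x**2 + 2*x + 3        # assumed function; the original expression is not shown in this text

dy = sp.diff(y, x)                          # the gradient function
stationary_x = sp.solve(sp.Eq(dy, 0), x)    # x-values where the gradient is zero
points = [(xv, y.subs(x, xv)) for xv in stationary_x]
print(points)                               # [(-1, 2)], the vertex quoted above
```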
Maximum and minimum points
It is evident that there are different types of these stationary points. One can envisage simply that there are those points whose gradient is positive, then becomes zero, and is thereafter negative (maxima), and those points whose gradient change is from negative to zero to positive (minima).
One could perform an analysis upon these points to check whether they are maxima, or minima.
1. For the stationary point calculated in the previous example, deduce whether it is a point of local maximum, or local minimum.
One obtained the point (-1, 2) on the graph of:
One can therefore take an "x" value either side of this stationary point, and calculate the gradient.
This is evidently negative.
This is evidently positive.
Hence the gradient has gone from negative, to zero, to positive; and therefore the stationary point is a local minimum.
It is important that one understands that these "minima" and "maxima" are with reference to the local domain. This means that one can have several points of local minimum, or several points of local maximum on the same graph (the maximum is not the single point whose value of "y" is greatest, and the minimum is not the single point whose value of "y" is least).
An application to roots of equations
It is evident, and has been shown previously, that one can obtain the roots of an equation through the analysis, and calculation of the points of intersection of two functions (when graphed). It is evident why this is true; for example:
It is therefore simple to deduce that:
Are the real roots to the equation.
This is correct, and is useful knowledge when conjoined with the knowledge of stationary points, and basic sketching skills.
Consider that one wishes to calculate the roots of the equation:
These roots (if they are real) are graphically described as the intersections of the lines:
Hence one would plot both graphs, and calculate the points of intersection.
However, it is often the case that one will merely want to know how many real roots there are to an equation, and hence the work on sketch graphs is relevant.
One does not need to know the accurate roots, merely the number of them, and hence it is useful to learn how to plot a good sketch graph.
First one would calculate the stationary points of one of the functions, and then one could deduce their type. This could then be sketched onto a pair of axes. Repetition of this with the second function would lead to a clear idea of where the intersections may (or may not) be, and therefore one can not only give the number of real roots to the equation, but also approximations as to the answer (these are usually given as inequalities relating the position of the x-coordinate of the intersection to those of the stationary points).
There is a much better, and usually more powerful technique for calculating the type of stationary point one is dealing with than the method described earlier.
If one is to think of the derivative of a function to be the "rate of change" of the function, the second derivative is the "rate of change, of the rate of change" of a function.
This is a difficult-sounding phrase, but it is a rather easy concept. In some functions, one will have noticed that the derivative involves a value of "x", and hence the gradient changes along the curve. A straight line has a constant value as its derivative, and hence it has a constant gradient. A constant gradient is not changing; its "rate of change of the gradient" is zero.
A derivative that has a term of a variable within it will have a second derivative other than zero. This is because at any given value of "x" on the curve, there is a different gradient, and hence one can calculate how this changes.
1. What is the second derivative of the function of "x":
This is simple; begin by calculating the first derivative:
Now, one would like to know what the rate of change, of this gradient is, hence one can calculate the derivative of the derivative (the second derivative):
One should be aware that the second derivative is notated in two ways:
The former is pronounced "f two-dashed x", and the latter, "d two y, d x squared".
The application of this to minima and maxima is useful.
In many cases, this is a much more powerful tool than the original testing with values above and below the stationary point.
1. Demonstrate (through the use of the second derivative) why the stationary point calculated in the earlier example in this section produced a local minimum.
First one can assert the function:
Now, the derivative:
Finally, the second derivative:
Hence the point is a local minimum, and moreover, the graph bends upwards, and does not have a local maximum.
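The second derivative test is equally easy to automate. A minimal sketch (again assuming the illustrative quadratic y = x^2 + 2x + 3, since the original expression is not shown in this text):

```python
import sympy as sp

x = sp.symbols('x')
y = x**2 + 2*x + 3                 # assumed quadratic, as in the earlier sketch

d1 = sp.diff(y, x)
d2 = sp.diff(y, x, 2)              # the "rate of change of the gradient"
for xv in sp.solve(sp.Eq(d1, 0), x):
    concavity = d2.subs(x, xv)
    kind = "local minimum" if concavity > 0 else "local maximum" if concavity < 0 else "inconclusive"
    print(xv, y.subs(x, xv), kind)   # -1 2 local minimum
```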
Graphs of other functions
Functions other than the simple polynomial that has been considered so far can be treated with the same method.
One is already aware that these graphs of fractional, or negative indices of "x" are differentiable using the same rule for differentiating powers of "x" as the positive, integer powers use.
One does have to be slightly more careful however, as there are some points on these graphs that are undefined (the square root of any negative value, for instance, is not defined in the set of real numbers).
One should simply apply the same principles.
First one must find the derivative (it might be a good idea to find the second derivative at the same time, so as to do all of the calculus first):
(Note, in this example one might wish to write down the original expression of "y" as a positive, and negative power of "x", it will aid, one would imagine, the understanding of the situation).
Now one can find the stationary points:
Hence, there are stationary points at (-1, -2), and (1, 2).
Now one can identify them, in turn:
Hence there is a point of local maximum at (-1, -2), and a point of local minimum at (1, 2).
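The same machinery handles negative powers of x. The worked function is not printed in this text, but y = x + 1/x matches both stationary points quoted above, so it is used here purely as an illustrative assumption.

```python
import sympy as sp

x = sp.symbols('x')
y = x + 1/x                                # assumed; consistent with the points (-1, -2) and (1, 2)

d1, d2 = sp.diff(y, x), sp.diff(y, x, 2)   # d1 = 1 - 1/x**2, d2 = 2/x**3
for xv in sp.solve(sp.Eq(d1, 0), x):
    kind = "local minimum" if d2.subs(x, xv) > 0 else "local maximum"
    print((xv, y.subs(x, xv)), kind)
# expected: (-1, -2) local maximum and (1, 2) local minimum
```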
One thing that one should be aware of is that sometimes one will encounter a change in the gradient of a curve that is from positive, through zero, to positive, or from negative, through zero, to negative; this is a point of inflexion. Although this is not strictly in the syllabus, it is useful to know, and can help to explain stationary points of this kind when they appear in certain graphs.
Read these other OCR Core 1 notes:
- Coordinates, points, and lines
- Some important graphs
- Index Notation
- Graphs of nth power functions
- Transforming graphs
- Investigating the shapes of graphs
- Applications of differentiation | http://www.thestudentroom.co.uk/wiki/Revision:OCR_Core_1_-_Investigating_the_shapes_of_graphs | 13 |
14 | Native Americans in the United States are the indigenous peoples from the regions of North America now encompassed by the continental United States, including parts of Alaska. They comprise a large number of distinct tribes, states, and ethnic groups, many of which survive as intact political communities. There has been a wide range of terms used to describe them and no consensus has been reached among indigenous members as to what they prefer to be called collectively. They have been known as American Indians, Amerindians, Amerinds, Aboriginal, Indians, Indigenous, Original Americans, Red Indians, or Red Men.
European colonization of the Americas was a period of conflict between Old and New World cultures. Most of the written historical record about Native Americans began with European contact. Ideologies clashed, old world diseases decimated, religious institutions challenged, and technologies were exchanged in what would be one of the greatest and most devastating meetings of cultures in the history of the world. Native Americans lived in hunter/farmer subsistence societies with much fewer societal constraints and institutional structures, and much less focus on material goods and market transactions, than the rigid, institutionalized, market-based, materialistic, and tyrannical societies of Western Europe. The differences between the two societies were vast enough to make for significant misunderstandings and create long-lasting cultural conflicts.
As the colonies revolted against England and established the United States of America, the ideology of Manifest Destiny became ingrained in the American psyche. The ideas of civilizing Native Americans (as conceived by George Washington, Thomas Jefferson, and Henry Knox) and of assimilation (whether voluntary, as with the Choctaw, or forced) were consistent policies through successive American administrations. Major resistance to American expansion, the so-called “Indian Wars,” was nearly constant up until the 1890s.
Native Americans today have a special relationship with the United States of America. They can be found as nations, tribes, or bands of Native Americans who have sovereignty or independence from the government of the United States, and whose societies and cultures still flourish amidst a larger American populace descended from immigrants (European, African, Asian, Middle Eastern). Native Americans who had not already gained U.S. citizenship under other provisions, such as treaty terms, were granted citizenship in 1924 by the Congress of the United States.
According to the still-debated New World migration model, a migration of humans from Eurasia to the Americas took place via Beringia, a land bridge which formerly connected the two continents across what is now the Bering Strait. The minimum time depth by which this migration had taken place is confirmed at c. 12,000 years ago, with the upper bound (or earliest period) remaining a matter of some unresolved contention. These early Paleoamericans soon spread throughout the Americas, diversifying into many hundreds of culturally distinct nations and tribes. According to the oral histories of many of the indigenous peoples of the Americas, they have been living there since their genesis, described by a wide range of traditional creation accounts.
After 1492 European exploration of the Americas revolutionized how the Old and New Worlds perceived themselves. One of the first major contacts, in what would be called the American Deep South, occurred when conquistador Juan Ponce de León landed in La Florida in April of 1513. Ponce de León was later followed by other Spanish explorers like Pánfilo de Narváez in 1528 and Hernando de Soto in 1539.
The European exploration and subsequent colonization obliterated some Native American populations and cultures. Others re-organized to form new cultural groups. From the 16th through the 19th centuries, the population of Native Americans declined for the following reasons: epidemic diseases brought from Europe, along with violence at the hands of European explorers and colonists; displacement from their lands; internal warfare; enslavement; and a high rate of intermarriage. Most mainstream scholars believe that, among the various contributing factors, epidemic disease was the overwhelming cause of the population decline of the American natives because of their lack of immunity to new diseases brought from Europe.
European explorers and settlers brought infectious diseases to North America against which the Native Americans had no natural immunity. Chicken pox and measles, though common and rarely fatal among Europeans, often proved deadly to Native Americans. Smallpox proved particularly deadly to Native American populations. Epidemics often immediately followed European exploration and sometimes destroyed entire village populations. While precise figures are difficult to determine, some historians estimate that up to 80% of some Native populations died due to European diseases after first contact.
In 1618–1619, smallpox wiped out 90% of the Massachusetts Bay Native Americans. Historians believe Mohawk Native Americans were infected after contact with children of Dutch traders in Albany in 1634. The disease swept through Mohawk villages, reaching Native Americans at Lake Ontario in 1636, and the lands of the Iroquois by 1679, as it was carried by Mohawks and other Native Americans who traveled the trading routes. The high rate of fatalities caused breakdowns in Native American societies and disrupted generational exchanges of culture.
Similarly, after initial direct contact with European explorers in the 1770s, smallpox rapidly killed at least 30% of Northwest Coast Native Americans. For the next 80 to 100 years, smallpox and other diseases devastated native populations in the region. Puget Sound area populations once as high as 37,000 were reduced to only 9,000 survivors by the time settlers arrived en masse in the mid-19th century.
Smallpox epidemics in 1780–1782 and 1837–1838 brought devastation and drastic depopulation among the Plains Indians. By 1832, the federal government established a smallpox vaccination program for Native Americans (The Indian Vaccination Act of 1832). It was the first program created to address a health problem of American Indians.
In the sixteenth century Spaniards and other Europeans brought horses to the Americas. The reintroduction of horses resulted in benefits to Native Americans. As they adopted the animals, they began to change their cultures in substantial ways, especially by extending their ranges. Some of the horses escaped and began to breed and increase their numbers in the wild. Horses had originated naturally in North America and migrated westward via the Bering Land Bridge to Asia. The early American horse was game for the earliest humans and was hunted to extinction about 7,000 BC, just after the end of the last glacial period.
The re-introduction of the horse to North America had a profound impact on Native American culture of the Great Plains. The tribes trained and used the horses to ride and to carry packs or pull travois, to expand their territories markedly, more easily exchange goods with neighboring tribes, and more easily hunt game. They fully incorporated the use of horses into their societies, including using the horses to conduct warring raids.
Native American societies reminded Europeans of a golden age only known to them in folk history. The idea of freedom and democratic ideals was born in the Americas because "it was only in America" that Europeans from 1500 to 1776 knew of societies that were "truly free."
Natural freedom is the only object of the polity of the [Native Americans]; with this freedom do nature and climate rule alone amongst them ... [Native Americans] maintain their freedom and find abundant nourishment . . . [and are] people who live without laws, without police, without religion. - Jean Jacques Rousseau, Jesuit and Savage in New France
The Iroquois nations' political confederacy and democratic government has been credited as one of the influences on the Articles of Confederation and the United States Constitution. However, there is heated debate among historians about the importance of their contribution. Although Native American governmental influence is debated, it is a historical fact that several founding fathers had contact with the Iroquois, and prominent figures such as Thomas Jefferson and Benjamin Franklin were involved with their stronger and larger native neighbor-- the Iroquois.
During the American Revolution, the newly proclaimed United States competed with the British for the allegiance of Native American nations east of the Mississippi River. Most Native Americans who joined the struggle sided with the British, hoping to use the American Revolutionary War to halt further colonial expansion onto Native American land. Many native communities were divided over which side to support in the war. The first native community to sign a treaty with the new United States Government was the Lenape. For the Iroquois Confederacy, the American Revolution resulted in civil war. Cherokees split into a neutral (or pro-American) faction and the anti-American Chickamaugas, led by Dragging Canoe.
Frontier warfare during the American Revolution was particularly brutal, and numerous atrocities were committed by settlers and native tribes alike. Noncombatants suffered greatly during the war. Military expeditions on each side destroyed villages and food supplies to reduce the ability of people to fight, as in frequent raids in the Mohawk Valley and western New York. The largest of these expeditions was the Sullivan Expedition of 1779, in which American troops destroyed more than 40 Iroquois villages to neutralize Iroquois raids in upstate New York. The expedition failed to have the desired effect: Native American activity became even more determined.
American Indians have played a central role in shaping the history of the nation, and they are deeply woven into the social fabric of much of American life ... During the last three decades of the twentieth century, scholars of ethnohistory, of the "new Indian history," and of Native American studies forcefully demonstrated that to understand American history and the American experience, one must include American Indians. - Robbie Ethridge, Creek Country.
The British made peace with the Americans in the Treaty of Paris (1783), through which they ceded vast Native American territories to the United States without informing the Native Americans. The United States initially treated the Native Americans who had fought with the British as a conquered people who had lost their lands. Although many of the Iroquois tribes went to Canada with the Loyalists, others stayed in New York and western territories and tried to maintain their lands. Nonetheless, the state of New York made a separate treaty with the Iroquois and put up for sale land that had previously been their territory. The state established a reservation near Syracuse for the Onondagas, who had been allies of the colonists.
The United States was eager to expand, to develop farming and settlements in new areas, and to satisfy land hunger of settlers from New England and new immigrants. The national government initially sought to purchase Native American land by treaties. The states and settlers were frequently at odds with this policy.
European nations often sent Native Americans, some against their will and others willingly, to the Old World as objects of curiosity. They often entertained royalty and were sometimes exploited for commercial purposes. Christianization of Native Americans was a chartered purpose for some European colonies.
American policy toward Native Americans continued to evolve after the American Revolution. George Washington and Henry Knox believed that Native Americans were equals but that their society was inferior. Washington formulated a policy to encourage the "civilizing" process. His six-point plan for civilization included:
1. impartial justice toward Native Americans
2. regulated buying of Native American lands
3. promotion of commerce
4. promotion of experiments to civilize or improve Native American society
5. presidential authority to give presents
6. punishing those who violated Native American rights.
Robert Remini, a historian, wrote that "once the Indians adopted the practice of private property, built homes, farmed, educated their children, and embraced Christianity, these Native Americans would win acceptance from white Americans." The United States appointed agents, like Benjamin Hawkins, to live among the Native Americans and to teach them how to live like whites.
How different would be the sensation of a philosophic mind to reflect that instead of exterminating a part of the human race by our modes of population that we had persevered through all difficulties and at last had imparted our Knowledge of cultivating and the arts, to the Aboriginals of the Country by which the source of future life and happiness had been preserved and extended. But it has been conceived to be impracticable to civilize the Indians of North America - This opinion is probably more convenient than just. - Henry Knox to George Washington, 1790s.
In the late eighteenth century, reformers starting with Washington and Knox, in efforts to "civilize" or otherwise assimilate Native Americans (as opposed to relegating them to reservations), adopted the practice of educating native children. The Civilization Fund Act of 1819 promoted this civilization policy by providing funding to societies (mostly religious) who worked on Native American improvement. Native American Boarding Schools, which were run primarily by Christian missionaries, often proved traumatic to Native American children, who were forbidden to speak their native languages, taught Christianity and denied the right to practice their native religions, and in numerous other ways forced to abandon their Native American identities and adopt European-American culture. There were many documented cases of sexual, physical and mental abuse occurring at these schools.
The Indian Citizenship Act of 1924 granted U.S. citizenship to all Native Americans. Prior to the passage of the act, nearly two-thirds of Native Americans were already U.S. citizens. The earliest recorded date of Native Americans becoming U.S. citizens was in 1831, when the Mississippi Choctaw became citizens after the United States Legislature ratified the Treaty of Dancing Rabbit Creek. Under article XIV of that treaty, any Choctaw who elected not to move to Native American Territory could become an American citizen if he registered and stayed on designated lands for five years after treaty ratification. Citizenship could also be obtained by:
1. Treaty Provision (as with the Mississippi Choctaw)
2. Allotment under the Act of February 8, 1887
3. Issuance of Patent in Fee Simple
4. Adopting Habits of Civilized Life
5. Minor Children
6. Citizenship by Birth
7. Becoming Soldiers and Sailors in the U.S. Armed Forces
9. Special Act of Congress.
Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled, That all noncitizen Native Americans born within the territorial limits of the United States be, and they are hereby, declared to be citizens of the United States: Provided, That the granting of such citizenship shall not in any manner impair or otherwise affect the right of any Native American to tribal or other property. - Indian Citizenship Act of 1924
Manifest Destiny had serious consequences for Native Americans since continental expansion implicitly meant the occupation of Native American land. Manifest Destiny was an explanation or justification for expansion and westward movement, or, in some interpretations, an ideology or doctrine which helped to promote the process of civilization. Advocates of Manifest Destiny believed that expansion was not only good, but that it was obvious and certain. The term was first used primarily by Jacksonian Democrats in the 1840s to promote the annexation of much of what is now the Western United States (the Oregon Territory, the Texas Annexation, and the Mexican Cession).
What a prodigious growth this English race, especially the American branch of it, is having! How soon will it subdue and occupy all the wild parts of this continent and of the islands adjacent. No prophecy, however seemingly extravagant, as to future achievements in this way [is] likely to equal the reality. - Rutherford Birchard Hayes, U.S. President, January 1, 1857, Personal Diary.
The age of Manifest Destiny, which came to be known as "Indian Removal", gained ground. Although some humanitarian advocates of removal believed that Native Americans would be better off moving away from whites, an increasing number of Americans regarded the natives as nothing more than "savages" who stood in the way of American expansion. Thomas Jefferson believed that while Native Americans were the intellectual equals of whites, they had to live like the whites or inevitably be pushed aside by them. Jefferson's belief, rooted in Enlightenment thinking, that whites and Native Americans would merge to create a single nation did not last, and he began to believe that the natives should emigrate across the Mississippi River and maintain a separate society.
Conflicts generally known as "Indian Wars" broke out between the colonial/American government and Native American societies. U.S. government authorities entered into numerous treaties during this period but later abrogated many for various reasons; however, many treaties are considered "living" documents. Major conflicts east of the Mississippi River include the Pequot War, Creek War, and Seminole Wars. Native American Nations west of the Mississippi were numerous and were the last to submit to U.S. authority. Notably, a multi-tribal army led by Tecumseh, a Shawnee chief, fought a number of engagements during the period 1811-12, known as Tecumseh's War. In the latter stages, Tecumseh's group allied with the British forces in the War of 1812 and was instrumental in the conquest of Detroit. Other military engagements included Native American victories at the Battle of the Wabash in 1791, the Seminole Wars, and the Battle of Little Bighorn in 1876. Defeats included the Creek War of 1813-14, the Sand Creek Massacre in 1864, and Wounded Knee in 1890. These conflicts were catalysts to the decline of dominant Native American culture.
The Indian (was thought) as less than human and worthy only of extermination. We did shoot down defenseless men, and women and children at places like Camp Grant, Sand Creek, and Wounded Knee. We did feed strychnine to red warriors. We did set whole villages of people out naked to freeze in the iron cold of Montana winters. And we did confine thousands in what amounted to concentration camps. - Wellman, The Indian Wars of the West, 1934
In the nineteenth century, the incessant westward expansion of the United States incrementally compelled large numbers of Native Americans to resettle further west, often by force and almost always reluctantly, a practice long held to be illegal given the status of the Hopewell Treaty of 1785. Under President Andrew Jackson, the United States Congress passed the Indian Removal Act of 1830, which authorized the President to conduct treaties to exchange Native American land east of the Mississippi River for lands west of the river. As many as 100,000 Native Americans eventually relocated in the West as a result of this Indian Removal policy. In theory, relocation was supposed to be voluntary and many Native Americans did remain in the East. In practice, great pressure was put on Native American leaders to sign removal treaties.
The most egregious violation of the stated intention of the removal policy took place under the Treaty of New Echota, which was signed by a dissident faction of Cherokees but not the elected leadership. President Jackson rigidly enforced the treaty, which resulted in the deaths of an estimated 4,000 Cherokees on the Trail of Tears. About 17,000 Cherokees along with approximately 2,000 black slaves held by Cherokees were removed from their homes.
Native American Removal forced or coerced the relocation of major Native American groups in the Eastern United States, resulting directly and indirectly in the deaths of tens of thousands. Tribes were generally located to reservations on which they could more easily be separated from traditional life and pushed into European-American society. Some southern states additionally enacted laws in the 19th century forbidding non-Native American settlement on Native American lands, with the intention to prevent sympathetic white missionaries from aiding the scattered Native American resistance.
The resulting increase in contact with the world outside of the reservation system brought profound changes to American Indian culture. "The war," said the U.S. Indian commissioner in 1945, "caused the greatest disruption of Indian life since the beginning of the reservation era", affecting the habits, views, and economic well-being of tribal members. The most significant of these changes was the opportunity—as a result of wartime labor shortages—to find well-paying work. Yet there were losses to contend with as well. Altogether, 1,200 Pueblo Indians served in World War II; only about half came home alive. In addition, many more Navajo served as code talkers for the military in the Pacific; the code they made was never cracked by the Japanese.
There are 561 federally recognized tribal governments in the United States. These tribes possess the right to form their own government, to enforce laws (both civil and criminal), to tax, to establish requirements for membership, to license and regulate activities, to zone and to exclude persons from tribal territories. Limitations on tribal powers of self-government include the same limitations applicable to states; for example, neither tribes nor states have the power to make war, engage in foreign relations, or coin money (this includes paper currency).
Many Native Americans and advocates of Native American rights point out that the US Federal government's claim to recognize the "sovereignty" of Native American peoples falls short, given that the US still wishes to govern Native American peoples and treat them as subject to US law. True respect for Native American sovereignty, according to such advocates, would require the United States federal government to deal with Native American peoples in the same manner as any other sovereign nation, handling matters related to relations with Native Americans through the Secretary of State, rather than the Bureau of Indian Affairs. The Bureau of Indian Affairs reports on its website that its "responsibility is the administration and management of land held in trust by the United States for American Indians, Indian tribes, and Alaska Natives." Many Native Americans and advocates of Native American rights believe that it is condescending for such lands to be considered "held in trust" and regulated in any fashion by a foreign power, whether the US Federal Government, Canada, or any other non-Native American authority.
According to 2003 United States Census Bureau estimates, a little over one third of the 2,786,652 Native Americans in the United States live in three states: California at 413,382, Arizona at 294,137 and Oklahoma at 279,559.
As of 2000, the largest tribes in the U.S. by population were Navajo, Cherokee, Choctaw, Sioux, Chippewa, Apache, Blackfeet, Iroquois, and Pueblo. In 2000, eight of ten Americans with Native American ancestry were of mixed blood. It is estimated that by 2100 that figure will rise to nine out of ten. In addition, there are a number of tribes that are recognized by individual states, but not by the federal government. The rights and benefits associated with state recognition vary from state to state.
Some tribal nations have been unable to establish their heritage and obtain federal recognition. The Muwekma Ohlone of the San Francisco bay area are pursuing litigation in the federal court system to establish recognition. Many of the smaller eastern tribes have been trying to gain official recognition of their tribal status. The recognition confers some benefits, including the right to label arts and crafts as Native American and permission to apply for grants that are specifically reserved for Native Americans. But gaining recognition as a tribe is extremely difficult; to be established as a tribal group, members have to submit extensive genealogical proof of tribal descent.
Military defeat, cultural pressure, confinement on reservations, forced cultural assimilation, outlawing of native languages and culture, termination policies of the 1950s and 1960s and earlier, slavery and poverty, have had deleterious effects on Native Americans' mental and physical health. Contemporary health problems suffered disproportionately include alcoholism, heart disease, diabetes, and suicide.
As recently as the 1970s, the Bureau of Indian Affairs was still actively pursuing a policy of "assimilation", dating at least to the Indian Citizenship Act of 1924. The goal of assimilation—plainly stated early on—was to eliminate the reservations and steer Native Americans into mainstream U.S. culture. In July 2000 the Washington state Republican Party adopted a resolution of termination for tribal governments. As of 2004, there are still claims of theft of Native American land for the coal and uranium it contains.
In the state of Virginia, Native Americans face a unique problem. Virginia has no federally recognized tribes, largely due to Walter Ashby Plecker. In 1912, Plecker became the first registrar of the state's Bureau of Vital Statistics, serving until 1946. Plecker believed that the state's Native Americans had been "mongrelized" with its African American population. A law passed by the state's General Assembly recognized only two races, "white" and "colored". Plecker pressured local governments into reclassifying all Native Americans in the state as "colored", leading to the destruction of records on the state's Native American community.
Maryland also has a non-recognized tribal nation—the Piscataway Indian Nation.
In order to receive federal recognition and the benefits it confers, tribes must prove their continuous existence since 1900. The federal government has so far refused to bend on this bureaucratic requirement. A bill currently before U.S. Congress to ease this requirement has been favorably reported out of a key Senate committee, being supported by both of Virginia's senators, George Allen and John Warner, but faces opposition in the House from Representative Virgil Goode, who has expressed concerns that federal recognition could open the door to gambling in the state.
In the early 21st century, Native American communities remain an enduring fixture on the United States landscape, in the American economy, and in the lives of Native Americans. Communities have consistently formed governments that administer services like firefighting, natural resource management, and law enforcement. Most Native American communities have established court systems to adjudicate matters related to local ordinances, and most also look to various forms of moral and social authority vested in traditional affiliations within the community. To address the housing needs of Native Americans, Congress passed the Native American Housing and Self Determination Act (NAHASDA) in 1996. This legislation replaced public housing, and other 1937 Housing Act programs directed towards Indian Housing Authorities, with a block grant program directed towards Tribes.
Despite the ongoing political and social issues surrounding Native Americans' position in the United States, there has been relatively little public opinion research on attitudes toward them among the general public. In a 2007 focus group study by the nonpartisan Public Agenda organization, most non-Indians admitted they rarely encounter Native Americans in their daily lives. While sympathetic toward Native Americans and expressing regret over the past, most people had only a vague understanding of the problems facing Native Americans today. For their part, Native Americans told researchers that they believed they continued to face prejudice and mistreatment in the broader society.
LeCompte also endured taunting on the battlefield. "They ridiculed him and called him a 'drunken Indian.' They said, 'Hey, dude, you look just like a haji--you'd better run.' They call the Arabs 'haji.' I mean, it's one thing to worry for your life, but then to have to worry about friendly fire because you don't know who in the hell will shoot you?" - Tammie LeCompte, May 25, 2007, Soldier highlights problems in U.S. Army
Conflicts between the federal government and Native Americans occasionally erupt into violence. Perhaps one of the more noteworthy incidents in recent history is the Wounded Knee incident in the small town of Wounded Knee, South Dakota. On February 27, 1973, the town was surrounded by federal law enforcement officials and the United States military. The town itself was under the control of members of the American Indian Movement, which was protesting a variety of issues important to the organization. Two members of AIM were killed and one United States Marshal was paralyzed as a result of gunshot wounds. In the aftermath of the conflict, one man, Leonard Peltier, was arrested and sentenced to life in prison, while another, John Graham, was extradited to the U.S. as late as 2007 to stand trial for killing, months after the standoff, a Native American woman whom he believed to be an FBI informant.
He is ignoble—base and treacherous, and hateful in every way. Not even imminent death can startle him into a spasm of virtue. The ruling trait of all savages is a greedy and consuming selfishness, and in our Noble Red Man it is found in its amplest development. His heart is a cesspool of falsehood, of treachery, and of low and devilish instincts ... The scum of the earth! - Mark Twain, 1870, The Noble Red Man
Intertribal and interracial mixing was common among Native American tribes making it difficult to clearly identify which tribe an individual belonged to. Bands or entire tribes occasionally split or merged to form more viable groups in reaction to the pressures of climate, disease and warfare. A number of tribes practiced the adoption of captives into their group to replace their members who had been captured or killed in battle. These captives came from rival tribes and later from European settlers. Some tribes also sheltered or adopted white traders and runaway slaves and Native American-owned slaves. So a number of paths to genetic mixing existed.
In later years, such mixing, however, proved an obstacle to qualifying for recognition and assistance from the U.S. federal government or for tribal money and services. To receive such support, Native Americans must belong to and be certified by a recognized tribal entity. This has taken a number of different forms as each tribal government makes its own rules while the federal government has its own set of standards. In many cases, qualification is based upon the percentage of Native American blood, or the "blood quantum" of an individual seeking recognition. To attain such certainty, some tribes have begun requiring genetic genealogy (DNA testing). Requirements for tribal certification vary widely by tribe. The Cherokee require only a descent from a Native American listed on the early 20th century Dawes Rolls while federal scholarships require enrollment in a federally recognized tribe as well as a Certificate of Degree of Indian Blood card showing at least a one-quarter Native American descent. Tribal rules regarding recognition of members with Native American blood from multiple tribes are equally diverse and complex.
Tribal membership conflicts have led to a number of activist groups, legal disputes, and court cases. One example is that of the Cherokee freedmen, descendants of slaves once owned by Cherokees. The Cherokee had allied with the Confederate States of America in the American Civil War and, after the war, were forced by the federal government, in an 1866 treaty, to free their slaves and make them citizens. The freedmen were later disallowed as tribe members on the grounds that they did not have "Indian blood". However, in March 2006, the Judicial Appeals Tribunal, the Cherokee Nation's highest court, ruled that Cherokee freedmen are full citizens of the Cherokee Nation. The court declared that the Cherokee freedmen retain citizenship, voting rights, and other privileges despite attempts to keep them off the tribal rolls for not having identifiable "Indian" blood. In March 2007, however, the Cherokee Nation of Oklahoma passed a referendum requiring members to have descent from at least one Native American ancestor on the Dawes Rolls. More than 1,200 freedmen lost their tribal membership after more than 100 years of participation.
In the 20th century, people among white ethnic groups began to claim descent from an "American Indian princess", often a Cherokee. The prototypical "American Indian princess" was Pocahontas, and, in fact, descent from her is a frequent claim. However, the American Indian "princess" is a false concept, derived from the application of European concepts to Native Americans, as also seen in the naming of war chiefs as "kings". Descent from "Indian braves" is also sometimes claimed.
Descent from Native Americans became fashionable not only among whites claiming prestigious colonial descent but also among whites seeking to claim connection to groups with distinct folkways that would differentiate them from the mass culture. Large influxes of recent immigrants with unique social customs may have been partially an object of envy. Among African Americans, the desire to be more than black was sometimes expressed in claims of Native American descent. Those passing as white might use the slightly more acceptable Native American ancestry to explain inconvenient details of their heritage.
Native Americans have been depicted by American artists in various ways at different historical periods. During the early period of European colonization, in the late sixteenth century, the artist John White made watercolors and engravings of the people native to the southeastern states. John White's images were, for the most part, faithful likenesses of the people he observed. Later the artist Theodore de Bry used White's original watercolors to make a book of engravings entitled A briefe and true report of the new found land of Virginia. In his book, de Bry often altered the poses and features of White's figures to make them appear more European, probably in order to make his book more marketable to a European audience. During the period when White and de Bry were working, as Europeans were first coming into contact with Native Americans, European interest in and curiosity about Native American cultures was strong, which would have created the demand for a book like de Bry's.
Several centuries later, during the construction of the Capitol building in the early nineteenth century, the U.S. government commissioned a series of four relief panels to crown the doorway of the Rotunda. The reliefs encapsulate a vision of European-Native American relations that had assumed mythico-historical proportions by the nineteenth century. The four panels depict: The Preservation of Captain Smith by Pocahontas (1825) by Antonio Capellano, The Landing of the Pilgrims (1825) and The Conflict of Daniel Boone and the Indians (1826–27) by Enrico Causici, and William Penn's Treaty with the Indians (1827) by Nicholas Gevelot. The reliefs present idealized versions of the Europeans and the Native Americans, in which the Europeans appear refined and genteel and the natives appear ferocious and savage. The Whig representative from Virginia, Henry A. Wise, voiced a particularly astute summary of how Native Americans would read the messages contained in all four reliefs: "We give you corn, you cheat us of our lands: we save your life, you take ours." While many nineteenth-century images of Native Americans conveyed similarly negative messages, there were artists, such as Charles Bird King, who sought to express a more realistic image of the Native Americans.
Native American roles in television and movies were at first played by European Americans dressed in mock traditional attire. Such instances include The Last of the Mohicans (1920), Hawkeye and the Last of the Mohicans (1957), and F Troop (1965-67). In later decades Native American actors such as Jay Silverheels, in The Lone Ranger television series (1949-57), and Iron Eyes Cody came to prominence; however, roles remained minor and not reflective of Native American culture. By the 1970s some Native American roles in movies had become more reality based. Little Big Man (1970), Billy Jack (1971), and The Outlaw Josey Wales (1976) depicted Native Americans in minor supporting roles. Dances with Wolves (1990), The Last of the Mohicans (1992), and Smoke Signals (1998) employed Native American actors, culture, and languages to convey a greater sense of authenticity.
The use of Native American mascots in sports has become a contentious issue in the United States and Canada. Americans have a history of "playing Indian" that dates back to at least the 1700s. Many individuals admire the heroism and romanticism evoked by the classic Native American image, but many others, especially Native Americans, view the use of mascots as both offensive and demeaning. Despite the concerns that have been raised, many Native American mascots are still used in American sports from the elementary to the professional level.
[Trudie Lamb Richmond doesn't] know what to say when kids argue, 'I don't care what you say, we are honoring you. We are keeping our Indian.' ... What if it were 'our black' or 'our Hispanic'? - Amy D'orio quoting Trudie Lamb Richmond, March 1996, Indian Chief Is Mascot No More
In August 2005, the National Collegiate Athletic Association (NCAA) banned the use of "hostile and abusive" Native American mascots from postseason tournaments. An exception was made to allow the use of tribal names if approved by that tribe (such as the Seminole Tribe of Florida approving the use of their name as the mascot for Florida State University). The use of Native American themed team names in U.S. professional sports is widespread and often controversial, with examples such as Chief Wahoo of the Cleveland Indians and the Washington Redskins.
"Could you imagine people mocking African Americans in black face at a game?" he said. "Yet go to a game where there is a team with an Indian name and you will see fans with war paint on their faces. Is this not the equivalent to black face?" - Teaching Tolerance, May 9, 2001, Native American Mascots Big Issue in College Sports
The term Native American was originally introduced in the United States by anthropologists as a more accurate term for the indigenous people of the Americas, as distinguished from the people of India. Because of the widespread acceptance of this newer term in and outside of academic circles, some people believe that Indians is outdated or offensive. People from India (and their descendants) who are citizens of the United States are known as Indian Americans or Asian Indians.
Criticism of the neologism Native American, however, comes from diverse sources. Some American Indians have misgivings about the term Native American. Russell Means, a famous American Indian activist, opposes the term Native American because he believes it was imposed by the government without the consent of American Indians. He has also argued that this use of the word Indian derives not from a confusion with India but from a Spanish expression En Dio, meaning "in God". Furthermore, some American Indians question the term Native American because, they argue, it serves to ease the conscience of "white America" with regard to past injustices done to American Indians by effectively eliminating "Indians" from the present. Still others (both Indians and non-Indians) argue that Native American is problematic because "native of" literally means "born in," so any person born in the Americas could be considered "native". However, very often the compound "Native American" will be capitalized in order to differentiate this intended meaning from others. Likewise, "native" (small 'n') can be further qualified by formulations such as "native-born" when the intended meaning is only to indicate place of birth or origin.
A 1995 US Census Bureau survey found that more American Indians in the United States preferred American Indian to Native American. Nonetheless, most American Indians are comfortable with Indian, American Indian, and Native American, and the terms are often used interchangeably. The traditional term is reflected in the name chosen for the National Museum of the American Indian, which opened in 2004 on the Mall in Washington, D.C.
Recently, the U.S. Census Bureau has introduced the "Asian-Indian" category to avoid ambiguity when sampling the Indian-American population.
Though cultural features, language, clothing, and customs vary enormously from one tribe to another, there are certain elements which are encountered frequently and shared by many tribes. Early hunter-gatherer tribes made stone weapons from around 10,000 years ago; as the age of metallurgy dawned, newer technologies were used and more efficient weapons produced. Prior to contact with Europeans, most tribes used similar weaponry. The most common implements were the bow and arrow, the war club, and the spear. Quality, material, and design varied widely. Native American use of fire both helped provide insects for food and altered the landscape of the continent to help the human population flourish.
Large mammals like mammoths and mastodons were largely extinct by around 8,000 B.C.; overhunting by early American Indians is one commonly cited explanation, alongside climate change. Native Americans switched to hunting other large game, such as bison. The Great Plains tribes were still hunting the bison when they first encountered the Europeans. Acquiring horses from the Spanish and learning to ride in the 17th century greatly altered the natives' culture, changing the way in which they hunted large game. In addition, horses became a central feature of Native lives and a measure of wealth.
Before the formation of tribal structures, social organization was dominated by gentes (kinship-based clans).
The Iroquois, living around the Great Lakes and extending east and north, used strings or belts called wampum that served a dual function: the knots and beaded designs mnemonically chronicled tribal stories and legends, and further served as a medium of exchange and a unit of measure. The keepers of the articles were seen as tribal dignitaries.
Pueblo peoples crafted impressive items associated with their religious ceremonies. Kachina dancers wore elaborately painted and decorated masks as they ritually impersonated various ancestral spirits. Sculpture was not highly developed, but carved stone and wood fetishes were made for religious use. Superior weaving, embroidered decorations, and rich dyes characterized the textile arts. Both turquoise and shell jewelry were created, as were high-quality pottery and formalized pictorial arts.
Navajo spirituality focused on the maintenance of a harmonious relationship with the spirit world, often achieved by ceremonial acts, usually incorporating sandpainting. The colors—made from sand, charcoal, cornmeal, and pollen—depicted specific spirits. These vivid, intricate, and colorful sand creations were erased at the end of the ceremony.
Native American agriculture started about 7,000 years ago in the area of present-day Illinois. The first crop the Native Americans grew was squash, the first of several crops they learned to domesticate. Others included cotton, sunflower, pumpkins, watermelon, tobacco, goosefoot, and sumpweed. The most important crop the Native Americans raised was maize. It was first domesticated in Mesoamerica and spread north, reaching eastern America about 2,000 years ago. This crop was important to the Native Americans because it was part of their everyday diet, it could be stored in underground pits during the winter, and no part of it was wasted: the husk was made into craft items and the cob was used as fuel for fires. By 800 A.D. the Native Americans had established three main crops, beans, squash, and corn, known as the "three sisters". Agriculture in the southwest started around 4,000 years ago when traders brought cultigens from Mexico. Because the climate there ranged from cool, moist mountain regions to dry, sandy desert soils, some ingenuity was required for agriculture to succeed. Innovations of the time included irrigation to bring water into the dry regions and the selection of seeds based on their traits. In the southwest, beans were grown self-supporting, much as they are grown today; in the east, however, they were planted next to corn so that the vine could climb the stalk. Gender roles in Native American agriculture varied from region to region. In the southwest, men prepared the soil with hoes, while the women were in charge of planting, weeding, and harvesting the crops. In most other regions, the women were in charge of everything, including clearing the land, an immense chore since the Native Americans rotated fields frequently. There are stories about how Squanto showed the Pilgrims how to put fish in the fields to act as a fertilizer, but this story is not true. Native Americans did plant beans next to corn; the beans would replace the nitrogen the corn took from the ground. They also set controlled fires to burn weeds, which put nutrients back into the ground. If this did not work, they would simply abandon the field and find a new spot.
Some of the tools the Native Americans used were the hoe, the maul, and the dibber. The hoe was the main tool used to till the land and prepare it for planting, and then used for weeding. The first versions were made out of wood and stone; when the settlers brought iron, Native Americans switched to iron hoes and hatchets. The dibber was essentially a digging stick, used to plant the seed. Once the plants were harvested, they were prepared by the women for eating. The maul was used to grind the corn into meal, which was eaten as mash or made into corn bread.
No particular religion or religious tradition is hegemonic among Native Americans in the United States. Most self-identifying and federally recognized Native Americans claim adherence to some form of Christianity, some of these being cultural and religious syntheses unique to the particular tribe. Traditional Native American spiritual rites and ceremonies are maintained by many Americans of both Native and non-Native identity. These spiritualities may accompany adherence to another faith, or can represent a person's primary religious identity. While much Native American spirituality exists in a tribal-cultural continuum, and as such cannot be easily separated from tribal identity itself, certain other more clearly defined movements have arisen among traditional Native American practitioners, these being identifiable as religions in the clinical sense. The Midewiwin Lodge is a traditional medicine society inspired by the oral traditions and prophecies of the Ojibwa (Chippewa) and related tribes. Traditional practices include the burning of sacred herbs (tobacco, sweetgrass, sage, etc.), the sweat lodge, fasting (paramount in "vision quests"), singing and drumming, and the smoking of natural tobacco in a pipe. A practitioner of Native American spiritualities and religions may incorporate all, some, or none of these into their personal or tribal rituals.
Another significant religious body among Native peoples is known as the Native American Church. It is a syncretistic church incorporating elements of native spiritual practice from a number of different tribes as well as symbolic elements from Christianity. Its main rite is the peyote ceremony. Prior to 1890, traditional religious beliefs included Wakan Tanka. In the American Southwest, especially New Mexico, a syncretism between the Catholicism brought by Spanish missionaries and the native religion is common; the religious drums, chants, and dances of the Pueblo people are regularly part of Masses at Santa Fe's Saint Francis Cathedral. Native American-Catholic syncretism is also found elsewhere in the United States (e.g., the National Kateri Tekakwitha Shrine in Fonda, New York, and the National Shrine of the North American Martyrs in Auriesville, New York).
Native Americans are the only known ethnic group in the United States requiring a federal permit to practice their religion. The eagle feather law, (Title 50 Part 22 of the Code of Federal Regulations), stipulates that only individuals of certifiable Native American ancestry enrolled in a federally recognized tribe are legally authorized to obtain eagle feathers for religious or spiritual use. Native Americans and non-Native Americans frequently contest the value and validity of the eagle feather law, charging that the law is laden with discriminatory racial preferences and infringes on tribal sovereignty. The law does not allow Native Americans to give eagle feathers to non-Native Americans, a common modern and traditional practice. Many non-Native Americans have been adopted into Native American families, made tribal members and given eagle feathers.
Most Native American tribes had traditional gender roles. In some tribes, such as the Iroquois nation, social and clan relationships were matrilineal and/or matriarchal, although several different systems were in use. One example is the Cherokee custom of wives owning the family property. Men hunted, traded and made war, while women gathered plants, cared for the young and the elderly, fashioned clothing and instruments and cured meat. The cradleboard was used by mothers to carry their baby while working or traveling. However, in some (but not all) tribes a kind of transgender was permitted; see Two-Spirit.
At least several dozen tribes allowed sororal polygyny (a man marrying sisters), with procedural and economic limits.
Apart from keeping the home, women had many tasks that were essential for the survival of the tribes. They made weapons and tools, took care of the roofs of their homes, and often helped their men hunt buffalo. In some of the Plains Indian tribes there reportedly were medicine women who gathered herbs and cured the ill.
In some of these tribes, such as the Sioux, girls were also encouraged to learn to ride, hunt, and fight. Though fighting was mostly left to the boys and men, there were cases of women fighting alongside them, especially when the existence of the tribe was threatened.
Native American leisure time led to competitive individual and team sports. Early accounts include team games played between tribes with hundreds of players on the field at once. Jim Thorpe, Notah Begay III, and Billy Mills are well known professional athletes.
Native American ball sports, sometimes referred to as lacrosse, stickball, or baggataway, were often used to settle disputes rather than going to war, a civil way to settle potential conflict. The Choctaw called the game isitoboli ("little brother of war"); the Onondaga name was dehuntshigwa'es ("men hit a rounded object"). There are three basic versions, classified as Great Lakes, Iroquoian, and Southern. The game is played with one or two rackets or sticks and one ball. The object of the game is to land the ball on the opposing team's goal (either a single post or a net) to score, and to prevent the opposing team from scoring on your goal. The game involves as few as twenty or as many as 300 players, with no height or weight restrictions and no protective gear. The goals could be from a few hundred feet to a few miles apart; in modern lacrosse the field is 110 yards long. A Jesuit priest referenced stickball in 1729, and George Catlin painted the subject.
Chunke was a game played with a disk-shaped stone about 1–2 inches across. The disk was rolled down a corridor at great speed past the players, who threw wooden shafts at it as it moved. The object of the game was to strike the disk or to prevent your opponents from hitting it.
Billy Mills, a Lakota and a USMC officer, trained to compete in the 1964 Olympics. Mills was a virtual unknown who had finished second in the U.S. Olympic trials, and his time in the preliminaries was a full minute slower than that of the favorite, Australia's Ron Clarke; nevertheless, Mills won the gold medal in the 10,000 meter run.
Jim Thorpe, a Sauk and Fox Native American, was an all-round athlete playing football and baseball. Future President Dwight Eisenhower injured his knee while trying to tackle Thorpe. Eisenhower recalled of Thorpe in a 1961 speech, "Here and there, there are some people who are supremely endowed. My memory goes back to Jim Thorpe. He never practiced in his life, and he could do anything better than any other football player I ever saw." In the Olympics, Thorpe could run the 100-yard dash in 10 seconds flat, the 220 in 21.8 seconds, the 440 in 51.8 seconds, the 880 in 1:57, the mile in 4:35, the 120-yard high hurdles in 15 seconds, and the 220-yard low hurdles in 24 seconds. He could long jump 23 ft 6 in and high-jump 6 ft 5 in. He could pole vault 11 feet, put the shot 47 ft 9 in, throw the javelin 163 feet, and throw the discus 136 feet. Thorpe entered the U.S. Olympic trials for both the pentathlon and the decathlon.
Traditional Native American music is almost entirely monophonic, but there are notable exceptions. Native American music often includes drumming and/or the playing of rattles or other percussion instruments but little other instrumentation. Flutes and whistles made of wood, cane, or bone are also played, generally by individuals, but in former times also by large ensembles (as noted by Spanish conquistador de Soto). The tuning of these flutes is not precise and depends on the length of the wood used and the hand span of the intended player, but the finger holes are most often around a whole step apart and, at least in Northern California, a flute was not used if it turned out to have an interval close to a half step.
Performers with Native American parentage have occasionally appeared in American popular music, such as Tina Turner, Rita Coolidge, Wayne Newton, Gene Clark, Blackfoot, Tori Amos, and Redbone. Some, such as John Trudell, have used music to comment on life in Native America, and others, such as R. Carlos Nakai, integrate traditional sounds with modern sounds in instrumental recordings. A variety of small and medium-sized recording companies offer an abundance of recent music by Native American performers young and old, ranging from pow-wow drum music to hard-driving rock-and-roll and rap.
The most widely practiced public musical form among Native Americans in the United States is that of the pow-wow. At pow-wows, such as the annual Gathering of Nations in Albuquerque, New Mexico, members of drum groups sit in a circle around a large drum. Drum groups play in unison while they sing in a native language and dancers in colorful regalia dance clockwise around the drum groups in the center. Familiar pow-wow songs include honor songs, intertribal songs, crow-hops, sneak-up songs, grass-dances, two-steps, welcome songs, going-home songs, and war songs. Most indigenous communities in the United States also maintain traditional songs and ceremonies, some of which are shared and practiced exclusively within the community.
Native American art comprises a major category in world art. Native American contributions include pottery, paintings, jewelry, weavings, sculptures, basketry, and carvings.
The integrity of certain Native American artworks is now protected by an act of Congress that prohibits representation of art as Native American when it is not the product of an enrolled Native American artist.
The Inuit, or Eskimo, prepared and buried large amounts of dried meat and fish. Pacific Northwest tribes crafted seafaring dugouts 40–50 feet long for fishing. Farmers in the Eastern Woodlands tended fields of maize with hoes and digging sticks, while their neighbors in the Southeast grew tobacco as well as food crops. On the Plains, some tribes engaged in agriculture but also planned buffalo hunts in which herds were driven over bluffs. Dwellers of the Southwest deserts hunted small animals and gathered acorns to grind into flour with which they baked wafer-thin bread on top of heated stones. Some groups on the region's mesas developed irrigation techniques, and filled storehouses with grain as protection against the area's frequent droughts.
In the early years, as these native peoples encountered European explorers and settlers and engaged in trade, they exchanged food, crafts, and furs for blankets, iron and steel implements, horses, trinkets, firearms, and alcoholic beverages.
Today, apart from tribes successfully running casinos, many tribes struggle. There are an estimated 2.1 million Native Americans, and they are the most impoverished of all ethnic groups. According to the 2000 Census, an estimated 400,000 Native Americans reside on reservation land. While some tribes have had success with gaming, only 40% of the 562 federally recognized tribes operate casinos. According to a 2007 survey by the U.S. Small Business Administration, only 1 percent of Native Americans own and operate a business. Native Americans rank at the bottom of nearly every social statistic: the highest teen suicide rate of all minorities at 18.5%, the highest rate of teen pregnancy, the highest high school dropout rate at 54%, the lowest per capita income, and unemployment rates between 50% and 90%.
Barriers to economic development on Indian reservations have been analyzed by, among others, Joseph Kalt and Stephen Cornell of Harvard University in their classic report What Can Tribes Do? Strategies and Institutions in American Indian Economic Development.
One of the major barriers to overcoming economic strife is the lack of entrepreneurial knowledge and experience across Indian reservations. "A general lack of education and experience about business is a significant challenge to prospective entrepreneurs," notes a 2004 report on Native American entrepreneurship by the Northwest Area Foundation. "Native American communities that lack entrepreneurial traditions and recent experiences typically do not provide the support that entrepreneurs need to thrive. Consequently, experiential entrepreneurship education needs to be embedded into school curricula and after-school and other community activities. This would allow students to learn the essential elements of entrepreneurship from a young age and encourage them to apply these elements throughout life." One publication devoted to addressing these issues is Rez Biz magazine.
The earliest record of African and Native American contact occurred in April 1502, when the first kidnapped Africans were brought to Hispaniola to serve as slaves. Some escaped, and somewhere inland on Santo Domingo the first communities of mixed African and Native American descent were born. In addition, an example of African slaves escaping from European colonists and being absorbed by American Indians occurred as far back as 1526. In June of that year, Lucas Vasquez de Ayllon established a Spanish colony near the mouth of the Pee Dee River in what is now eastern South Carolina. The Spanish settlement was named San Miguel de Gualdape, and it included 100 enslaved Africans. In 1526, the first African slaves fled the colony and took refuge with local Native Americans.
European colonists created treaties with Native American tribes requesting the return of any runaway slaves. For example, in 1726, the British governor of New York exacted a promise from the Iroquois to return all runaway slaves who had joined them. The same promise was extracted from the Huron in 1764 and from the Delaware in 1765. Numerous advertisements requested the return of African Americans who had married Native Americans or who spoke a Native American language. Individuals in some tribes, especially the Cherokee, owned African slaves; however, other tribes incorporated African Americans, slave or free, into the tribe. Interracial relations between Native Americans and African Americans are a part of American history that has been neglected.
In 2006, the U.S. Census Bureau estimated that about 1.0 percent of the U.S. population was of American Indian or Alaska Native descent. This population is unevenly distributed across the 50 states, the District of Columbia, and Puerto Rico.
In 2006, the U.S. Census Bureau estimated that less than 1.0 percent of the U.S. population was of Native Hawaiian or Pacific Islander descent. This population is also unevenly distributed; based on 2006 estimates, only 26 states had at least 0.1 percent of residents citing Native Hawaiian or Pacific Islander ancestry.
Germany
Medieval Germany, lying on the open Central European Plain, was divided into hundreds of contending kingdoms, principalities, dukedoms, bishoprics, and free cities. Economic survival in that environment, like political or even physical survival, did not mean expanding across unlimited terrain, as in the United States. It meant a constant struggle that required collaboration with some, competition with others, and an intimate understanding among government, commerce, and production. A desire to save was also born in the German experience of political, military, and economic uncertainty.
Even under these difficult conditions, Germany had already developed a strong economy during the Middle Ages. It was based on guild and craft production, but with elements of merchant capitalism and mercantilism. The trade conducted by its cities ranged far and wide throughout Europe in all directions, and Germany as a whole often had trade surpluses with neighboring states. One reason for these exports was the sheer necessity for the small states to sell abroad in order to buy the many things they could not produce at home.
The German guilds of the Middle Ages established the German tradition of creating products known for quality and durability. A craftsman was not permitted to pursue a trade until he could demonstrate the ability to make high-quality products. Out of that same tradition came an equally strong passion for education and vocational training, for no craftsman was recognized until he had thoroughly learned a trade, passed a test, and been certified.
The Industrial Revolution reached Germany long after it had flowered in Britain, and the governments of the German states supported local industry because they did not want to be left behind. Many enterprises were government initiated, government financed, government managed, or government subsidized. As industry grew and prospered in the nineteenth century, Prussia and other German states consciously supported all economic development and especially transportation and industry.
The north German states were for the most part richer in natural resources than the southern states. They had vast agricultural tracts from Schleswig-Holstein in the west through Prussia in the east. They also had coal and iron in the Ruhr Valley. Through the practice of primogeniture, widely followed in northern Germany, large estates and fortunes grew. So did close relations between their owners and local as well as national governments.
The south German states were relatively poor in natural resources except for their people, and those Germans therefore engaged more often in small economic enterprises. They also had no primogeniture rule but subdivided the land among several offspring, leading those offspring to remain in their native towns but not fully able to support themselves from their small parcels of land. The south German states, therefore, fostered cottage industries, crafts, and a more independent and self-reliant spirit less closely linked to the government.
German banks played central roles in financing German industry. They also shaped industrywide producer cooperatives, known as cartels. Different banks formed cartels in different industries. Cartel contracts were accepted as legal and binding by German courts although they were held to be illegal in Britain and the United States.
The first German cartel was a salt cartel, the Neckar Salt Union of 1828, formed in Württemberg and Baden. The process of cartelization began slowly, but the cartel movement took hold after 1873 in the economic depression that followed the postunification speculative bubble. It began in heavy industry and spread throughout other industries. By 1900 there were 275 cartels in operation; by 1908, over 500. By some estimates, different cartel arrangements may have numbered in the thousands at different times, but many German companies stayed outside the cartels because they did not welcome the restrictions that membership imposed.
The government played a powerful role in the industrialization of the German Empire founded by Otto von Bismarck in 1871 (see Bismarck and Unification, ch. 1). It supported not only heavy industry but also crafts and trades because it wanted to maintain prosperity in all parts of the empire. Even where the national government did not act, the highly autonomous regional and local governments supported their own industries. Each state tried to be as self-sufficient as possible.
Despite the several ups and downs of prosperity and depression that marked the first decades of the German Empire, the ultimate wealth of the empire proved immense. German aristocrats, landowners, bankers, and producers created what might be termed the first German economic miracle, the turn-of-the-century surge in German industry and commerce during which bankers, industrialists, mercantilists, the military, and the monarchy joined forces.
The German Empire also established, under Bismarck's direction, the social compact under which the German laboring classes supported the national ambitions of the newly united German state in exchange for a system of social welfare that would make them, if not full participants in the system, at least its beneficiaries and pensioners. Bismarck was not a socialist, but he believed that it was necessary to accept portions of the socialist platform to sustain prosperity and social cohesion.
From the prosperity of the empire during the Wilhelmine era (1890-1914), Germany plunged into World War I, a war it was to lose and one that spawned many of the economic crises that would destroy the successor Weimar Republic (see The Weimar Republic, 1918-33, ch. 1). Even the British economist John Maynard Keynes denounced the 1919 Treaty of Versailles as ruinous to German and global prosperity. The war and the treaty were followed by the Great Inflation of the early 1920s that wreaked havoc on Germany's social structure and political stability. During that inflation, the value of the nation's currency, the mark, collapsed from 8.9 per US$1 in 1918 to 4.2 trillion per US$1 by November 1923. Then, after a brief period of prosperity during the mid-1920s, came the Great Depression, which destroyed what remained of the German middle class and paved the way for the dictatorship of Adolf Hitler. During the Hitler era (1933-45), the economy developed a hothouse prosperity, supported with high government subsidies to those sectors that Hitler favored because they gave Germany military power and economic autarchy, that is, economic independence from the global economy. Finally, the entire enterprise collapsed in the Stunde Null (Zero Hour), when Germany lay in ruins at the end of World War II in May 1945 and when every German knew that he or she had to begin life all over again.
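For a rough sense of the scale of the inflation figures quoted above, the implied overall depreciation factor and an average compound monthly rate can be computed directly. The sketch below (in Python) assumes a span of roughly 59 months between the two quoted exchange rates; it is a back-of-the-envelope illustration, not a statement about the actual month-by-month course of the inflation.

    # Back-of-the-envelope arithmetic for the 1918-1923 German inflation figures quoted above.
    start_rate = 8.9      # marks per US$1 in 1918
    end_rate = 4.2e12     # marks per US$1 by November 1923
    months = 59           # assumed span, roughly January 1919 to November 1923

    factor = end_rate / start_rate        # overall depreciation factor, about 4.7e11
    monthly = factor ** (1 / months) - 1  # average compound monthly depreciation, about 58%

    print(f"Overall depreciation factor: {factor:.3e}")
    print(f"Average monthly depreciation: {monthly:.1%}")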
The first several years after World War II were years of bitter penury for the Germans. Their land, their homes, and their property lay in ruin. Millions were forced to flee with nothing but the clothes on their backs. Tens of millions did not have enough to eat or to wear. Inflation raged. Parker pens, nylon stockings, and Camel cigarettes represented the accepted, if not the legal, tender of the time. Occupation projections showed that the average German would be able to purchase a plate every five years, a pair of shoes every twelve years, and a suit every fifty years.
As Germany's postwar economic and political leaders shaped their plans for the future German economy, they saw in ruin a new beginning, an opportunity to position Germany on a new and totally different path. The economy was to be an instrument for prosperity, but it was also to safeguard democracy and to help maintain a stable society. The new German leaders wanted social peace as well as economic prosperity. They wanted an economic system that would give all an equal opportunity in order to avoid creating underprivileged social groups whose bitter frustration would erupt into revolution and--in turn--repression.
The man who took full advantage of Germany's postwar opportunity was Ludwig Erhard, who was determined to shape a new and different kind of German economy. He was given his chance by United States officials, who found him working in Nuremberg and who saw that many of his ideas coincided with their own.
Erhard's first step was currency reform: the abolition of the Reichsmark and the creation of a new currency, the deutsche mark. He carried out that reform on June 20, 1948, installing the new currency with the concurrence of the Western Allies but also taking advantage of the opportunity to abolish most Nazi and occupation rules and regulations in order to establish the genesis of a free economy. The currency reform, whose purpose was to provide a respected store of value and a widely accepted legal tender, succeeded brilliantly. It established the foundations of the West German economy and of the West German state.
Source: U.S. Library of Congress | http://countrystudies.us/germany/135.htm | 13 |
History of Poland during the Jagiellon dynasty
History of Poland during the Jagiellon dynasty is the period in the history of Poland that spans the late Middle Ages and early Modern Era. Beginning with the Lithuanian Grand Duke Jogaila (Władysław II Jagiełło), the Jagiellon dynasty (1386–1572) formed the Polish–Lithuanian union. The partnership brought vast Lithuania-controlled Rus' areas into Poland's sphere of influence and proved beneficial for the Poles and Lithuanians, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries.
In the Baltic Sea region Poland's struggle with the Teutonic Knights continued and included the Battle of Grunwald (1410) and in 1466 the milestone Peace of Thorn under King Casimir IV Jagiellon; the treaty created the future Duchy of Prussia. In the south Poland confronted the Ottoman Empire and the Crimean Tatars, and in the east helped Lithuania fight the Grand Duchy of Moscow. Poland's and Lithuania's territorial expansion included the far north region of Livonia.
Poland was developing as a feudal state, with a predominantly agricultural economy and an increasingly dominant landed nobility. The Nihil novi act adopted by the Polish Sejm (parliament) in 1505 transferred most of the legislative power from the monarch to the Sejm. This event marked the beginning of the period known as "Golden Liberty", when the state was ruled by the "free and equal" Polish nobility.
Protestant Reformation movements made deep inroads into Polish Christianity, which resulted in policies of religious tolerance that were unique in Europe at that time. European Renaissance currents evoked an immense cultural flowering in late Jagiellon Poland, under kings Sigismund I the Old and Sigismund II Augustus.
Late Middle Ages (14th–15th century)
Jagiellon monarchy
In 1385 the Union of Krewo was signed between Queen Jadwiga and Jogaila, the Grand Duke of Lithuania, the last pagan state in Europe. The act arranged for Jogaila's baptism (after which Jogaila was known in Poland by his baptismal name, Władysław, and the Polish version of his Lithuanian name, Jagiełło) and for the couple's marriage and constituted the beginning of the Polish–Lithuanian union. The Union strengthened both nations in their shared opposition to the Teutonic Knights and the growing threat of the Grand Duchy of Moscow.
Vast expanses of Rus' lands, including the Dnieper River basin and extending south to the Black Sea, were at that time under Lithuanian control. Lithuania had fought the invading Mongols and had taken advantage of the power vacuum in the south and east resulting from the Mongol destruction of Kievan Rus'. The population of the Grand Duchy's enlarged territory was accordingly heavily Ruthenian and Eastern Orthodox. The territorial expansion brought Lithuania into confrontation with the Grand Duchy of Moscow, which was itself expanding after emerging from Tatar rule.
Uniquely in Europe, the union connected two states geographically located on the opposite sides of the great civilizational divide between the Western or Latin, and the Eastern or Byzantine worlds. The consequences of this fact would be felt throughout the history of the region that, at the time of the Union of Krewo, comprised Poland and Lithuania.
The Union's intention was to create a common state under King Władysław Jagiełło, but the Polish ruling oligarchy's idea of incorporating Lithuania into Poland turned out to be unrealistic. Territorial disputes and warfare between Poland and Lithuania or Lithuanian factions followed; the Lithuanians at times even found it expedient to conspire with the Teutonic Knights against the Poles. Geographic consequences of the dynastic union and the preferences of the Jagiellon kings accelerated the reorientation of Polish territorial priorities to the east.
Between 1386 and 1572 Poland and Lithuania, joined until 1569 by a personal union, were ruled by a succession of constitutional monarchs of the Jagiellon dynasty. The political influence of the Jagiellon kings diminished during this period, accompanied by the ever-increasing role of the landed nobility in central government and national affairs. The royal dynasty, however, had a stabilizing effect on Poland's politics. The Jagiellon era is often regarded as a period of maximum political power, great prosperity, and, in its later stage, the Golden Age of Polish culture.
Social and economic developments
The 13th and 14th century feudal rent system, under which each estate had well-defined rights and obligations, degenerated around the 15th century as the nobility tightened their control of production, trade, and other economic activities, created many directly owned agricultural enterprises known as folwarks (feudal rent payments being replaced with forced labor on the lord's land), limited the rights of the cities, and pushed most of the peasants into serfdom. Such practices were increasingly sanctioned by law. For example, the Piotrków Privilege of 1496, granted by King Jan Olbracht, banned rural land purchases by townspeople and severely limited the ability of peasant farmers to leave their villages. Polish towns, lacking national representation protecting their class interests, preserved some degree of self-government (city councils and jury courts), and the trades were able to organize and form guilds. The nobility soon excused themselves from their principal duty, mandatory military service in case of war (pospolite ruszenie). The nobility's split into two main layers was institutionalized (though never legally formalized) in the Nihil novi "constitution" of 1505, which required the king to consult the general sejm, that is, the Senate (the highest-level officials) as well as the lower chamber of regional deputies, the Sejm proper, before enacting any changes. The masses of ordinary szlachta competed, or tried to compete, against the uppermost rank of their class, the magnates, for the duration of Poland's independent existence.
Poland and Lithuania in personal union under Jagiełło
The first king of the new dynasty was the Grand Duke of Lithuania Jogaila, or Władysław II Jagiełło as the King of Poland. He was elected a king of Poland in 1386, after becoming a Catholic Christian and marrying Jadwiga of Anjou, daughter of Louis I, who was Queen of Poland in her own right. Latin Rite Christianization of Lithuania followed. Jogaila's rivalry in Lithuania with his cousin Vytautas, opposed to Lithuania's domination by Poland, was settled in 1392 and in 1401 in the Union of Vilnius and Radom: Vytautas became the Grand Duke of Lithuania for life under Jogaila's nominal supremacy. The agreement made possible close cooperation between the two nations, necessary to succeed in the upcoming struggle with the Teutonic Order. The Union of Horodło (1413) specified the relationship further and had granted privileges to the Roman Catholic (as opposed to Eastern Orthodox) portion of Lithuanian nobility.
Struggle with the Teutonic Knights
The Great War of 1409–1411, precipitated by the Lithuanian uprising in the Order-controlled Samogitia, included the Battle of Grunwald (Tannenberg), where the Polish and Lithuanian-Rus' armies completely defeated the Teutonic Knights. The offensive that followed lost its impact with the ineffective siege of Malbork (Marienburg). The failure to take the fortress and eliminate the Teutonic (later Prussian) state had dire historic consequences for Poland in the 18th, 19th and 20th centuries. The Peace of Thorn (1411) gave Poland and Lithuania rather modest territorial adjustments, including Samogitia. Afterwards there were negotiations, peace deals that did not hold, more military campaigns, and arbitrations. One attempted, unresolved arbitration took place at the Council of Constance. There in 1415, Paulus Vladimiri, rector of the Kraków Academy, presented his Treatise on the Power of the Pope and the Emperor in respect to Infidels, in which he advocated tolerance, criticized the violent conversion methods of the Teutonic Knights, and postulated that pagans have the right to peaceful coexistence with Christians and to political independence. This stage of the Polish-Lithuanian conflict with the Teutonic Order ended with the Treaty of Melno in 1422. Another war (see Battle of Pabaiskas) was concluded by the Peace of Brześć Kujawski in 1435.
Hussite movement; Polish-Hungarian union
During the Hussite Wars (1420–1434), Jagiełło, Vytautas and Sigismund Korybut were involved in political and military maneuvering concerning the Czech crown, offered by the Hussites first to Jagiełło in 1420. Zbigniew Oleśnicki became known as the leading opponent of a union with the Hussite Czech state.
The Jagiellon dynasty was not entitled to automatic hereditary succession, as each new king had to be approved by nobility consensus. Władysław Jagiełło had two sons late in his life, from his last marriage. In 1430 the nobility agreed to the succession of the future Władysław III, only after the King gave in and guaranteed the satisfaction of their new demands. In 1434 the old monarch died and his minor son Władysław was crowned; the Royal Council led by Bishop Oleśnicki undertook the regency duties.
In 1438 the Czech anti-Habsburg opposition, mainly Hussite factions, offered the Czech crown to Jagiełło's younger son Casimir. The idea, accepted in Poland over Oleśnicki's objections, resulted in two unsuccessful Polish military expeditions to Bohemia.
After Vytautas' death in 1430 Lithuania became embroiled in internal wars and conflicts with Poland. Casimir, sent as a boy by King Władysław on a mission there in 1440, was surprisingly proclaimed by the Lithuanians a Grand Duke of Lithuania, and stayed in Lithuania.
Oleśnicki gained the upper hand again and pursued his long-term objective of Poland's union with Hungary. At that time Turkey embarked on a new round of European conquests and threatened Hungary, which needed the powerful Polish-Lithuanian ally. Władysław III in 1440 assumed also the Hungarian throne. Influenced by Julian Cesarini, the young king led the Hungarian army against the Ottoman Empire in 1443 and again in 1444. Like his mentor, Władysław Warneńczyk was killed at the Battle of Varna.
Beginning toward the end of Jagiełło's life, Poland was practically governed by a magnate oligarchy led by Oleśnicki. The rule of the dignitaries was actively opposed by various szlachta groups. Their leader Spytek of Melsztyn was killed during an armed confrontation in 1439, which allowed Oleśnicki to purge Poland of the remaining Hussite sympathizers and pursue his other objectives without significant opposition.
Casimir IV Jagiellon
In 1445 Casimir, the Grand Duke of Lithuania, was asked to assume the Polish throne vacated by the death of his brother Władysław. Casimir was a tough negotiator and did not accept the Polish nobility's conditions for his election. He finally arrived in Poland and was crowned in 1447 on his own terms. On becoming King of Poland, Casimir also freed himself from the control the Lithuanian oligarchy had imposed on him; in the Vilnius Privilege of 1447 he declared the Lithuanian nobility to have equal rights with the Polish szlachta. In time Kazimierz Jagiellończyk was able to remove Cardinal Oleśnicki and his group from power, basing his own power on the younger middle nobility camp instead. Casimir also resolved in his favor a conflict with the pope and the local Church hierarchy over the right to fill vacant bishoprics.
War with the Teutonic Order and its resolution
In 1454 the Prussian Confederation, an alliance of Prussian cities and nobility opposed to the increasingly oppressive rule of the Teutonic Knights, asked King Casimir to take over Prussia and stirred up an armed uprising against the Knights. Casimir declared war on the Order and a formal incorporation of Prussia into the Polish Crown; these events led to the Thirteen Years' War. The weakness of the pospolite ruszenie (the szlachta would not cooperate without new across-the-board concessions from Casimir) prevented a takeover of all of Prussia, but in the Second Peace of Thorn (1466) the Knights had to surrender the western half of their territory to the Polish Crown (the areas known afterwards as Royal Prussia, a semi-autonomous entity) and to accept Polish-Lithuanian suzerainty over the remainder (the later Ducal Prussia). Poland regained Pomerelia and with it the all-important access to the Baltic Sea, as well as Warmia. In addition to land warfare, naval battles took place, in which ships provided by the City of Danzig (Gdańsk) successfully fought Danish and Teutonic fleets.
Other 15th century Polish territorial gains, or rather revindications, included the Duchy of Oświęcim and Duchy of Zator on Silesia's border with Lesser Poland, and there was notable progress regarding the incorporation of the Piast Masovian duchies into the Crown.
Turkish and Tatar wars
The influence of the Jagiellon dynasty in Central Europe had been on the rise. In 1471 Casimir's son Władysław became king of Bohemia, and in 1490 also of Hungary. The southern and eastern outskirts of Poland and Lithuania became threatened by Turkish invasions beginning in the late 15th century. Moldavia's involvement with Poland goes back to 1387, when Petru I, Hospodar of Moldavia, seeking protection against the Hungarians, paid Jagiełło homage in Lviv, which gave Poland access to the Black Sea ports. In 1485 King Casimir undertook an expedition into Moldavia after its seaports were overtaken by the Ottoman Turks. The Turkish-controlled Crimean Tatars raided the eastern territories in 1482 and 1487, until they were confronted by King Jan Olbracht (John Albert), Casimir's son and successor. Poland was attacked in 1487–1491 by remnants of the Golden Horde, who invaded Poland as far as Lublin before being beaten at Zaslavl. King John Albert in 1497 made an attempt to resolve the Turkish problem militarily, but his efforts were unsuccessful, as he was unable to secure effective participation in the war by his brothers, King Ladislaus II of Bohemia and Hungary and Alexander, the Grand Duke of Lithuania, and because of the resistance of Stephen the Great, the ruler of Moldavia. More destructive Tatar raids, instigated by the Ottoman Empire, took place in 1498, 1499 and 1500. John Albert's diplomatic peace efforts that followed were finalized after the king's death in 1503, resulting in a territorial compromise and an unstable truce.
Moscow's threat to Lithuania; Sigismund I
Lithuania was increasingly threatened by the growing power of the Grand Duchy of Moscow. Through the campaigns of 1471, 1492 and 1500 Moscow took over much of Lithuania's eastern possessions. The Grand Duke Alexander was elected King of Poland in 1501, after the death of John Albert. In 1506 he was succeeded by Sigismund I the Old (Zygmunt I Stary) in both Poland and Lithuania, as the political realities were drawing the two states closer together. Prior to that Sigismund had been a Duke of Silesia by the authority of his brother Ladislaus II of Bohemia, but like other Jagiellon rulers before him, he had not pursued the Polish Crown's claim to Silesia.
Culture in the Late Middle Ages
The culture of 15th-century Poland was mostly medieval. Under favorable social and economic conditions, the crafts and industries already in existence in the preceding centuries became more highly developed, and their products were much more widespread. Paper production was one of the new industries, and printing developed during the last quarter of the century. In 1473 Kasper Straube produced the first Latin print in Kraków; in 1475 Kasper Elyan printed in Polish for the first time in Wrocław (Breslau); and after 1490 the world's oldest prints in Cyrillic, Old Church Slavonic religious texts, came from Schweipolt Fiol's shop in Kraków.
Luxury items were in high demand among the increasingly prosperous nobility, and to a lesser degree among the wealthy town merchants. Brick and stone residential buildings became common, but only in cities. The mature Gothic style was represented not only in architecture, but also prominently in sacral wooden sculpture. The altar of Veit Stoss in St. Mary's Church in Kraków is one of the most magnificent works of art of its kind in Europe.
The Kraków University, which stopped functioning after the death of Casimir the Great, was renewed and rejuvenated around 1400. Augmented by a theology department, the "academy" was supported and protected by Queen Jadwiga and the Jagiellon dynasty members, which is reflected in its present name. Europe's oldest department of mathematics and astronomy was established in 1405. Among the university's prominent scholars were Stanisław of Skarbimierz, Paulus Vladimiri and Albert of Brudzewo, Copernicus' teacher.
The precursors of Polish humanism, John of Ludzisko and Gregory of Sanok, were professors at the university. Gregory's court was the site of an early literary society at Lwów (Lviv), after he had become the archbishop there. Scholarly thought elsewhere was represented by Jan Ostroróg, a political publicist and reformist, and Jan Długosz, a historian, whose Annals is the largest historical work of his time in Europe and a fundamental source for the history of medieval Poland. Distinguished and influential foreign humanists were also active in Poland. Filippo Buonaccorsi, a poet and diplomat who arrived from Italy in 1468 and stayed in Poland until his death in 1496, established another literary society in Kraków. Known as Kallimach, he wrote the lives of Gregory of Sanok, Zbigniew Oleśnicki, and very likely that of Jan Długosz. He tutored and mentored the sons of Casimir IV and postulated unrestrained royal power. Conrad Celtes, a German humanist, organized in Kraków the first humanist literary and scholarly association in this part of Europe, the Sodalitas Litterarum Vistulana.
Early Modern Era (16th century)
Agriculture-based economic expansion
The folwark, a serfdom-based large-scale farm and agricultural business, was a dominant feature of Poland's economic landscape beginning in the late 15th century and for the next 300 years. This dependence on nobility-controlled agriculture set central-eastern Europe on a path diverging from that of the western part of the continent, where elements of capitalism and industrialization were developing to a much greater extent, with the attendant growth of the bourgeoisie and its political influence. The combination of the 16th-century agricultural trade boom in Europe with the free or cheap peasant labor available made the folwark economy very profitable during that period.
The 16th century also saw further development of mining and metallurgy, and technical progress took place in various commercial applications. Great quantities of agricultural and forest products, floated down the rivers or transported by land routes for export, resulted in a positive trade balance for Poland throughout the 16th century. Imports from the West included industrial and luxury products and fabrics.
Most of the exported grain left Poland through Danzig (Gdańsk), which, because of its location at the terminal point of the waterway formed by the Vistula and its tributaries and its role in Baltic seaborne trade, became the wealthiest, most highly developed, and most autonomous of the Polish cities. It was also by far the largest center of crafts and manufacturing. Other towns were negatively affected by Danzig's near-monopoly in foreign trade, but participated profitably in transit and export activities. The largest of them were Kraków (Cracow), Poznań, Lwów (Lviv), and Warszawa (Warsaw), and, outside of the Crown, Breslau (Wrocław). After Danzig, Thorn (Toruń) and Elbing (Elbląg) were the main cities in Royal Prussia.
Burghers and nobles
During the 16th century, prosperous patrician families of merchants, bankers, or industrial investors, many of German origin, still conducted large-scale business operations in Europe or lent money to Polish noble interests, including the royal court. Some regions were relatively highly urbanized, for example in Greater Poland and Lesser Poland at the end of the 16th century 30% of the population lived in cities. 256 towns were founded, most in Red Ruthenia.[b] The townspeople's upper layer was ethnically multinational and tended to be well-educated. Numerous burgher sons studied at the Academy of Kraków and at foreign universities; members of their group are among the finest contributors to the culture of Polish Renaissance. Unable to form their own nationwide political class, many, despite the legal obstacles, melted into the nobility.
The nobility or szlachta in Poland constituted a greater proportion (up to 10%) of the population than in other European countries. In principle they were all equal and politically empowered, but some had no property and were not allowed to hold offices or participate in sejms or sejmiks, the legislative bodies. Of the "landed" nobility, some possessed a small patch of land which they tended themselves and lived like peasant families (mixed marriages gave some peasants one of the few possible paths to nobility), while the magnates owned dukedom-like networks of estates with several hundred towns and villages and many thousands of subjects. Sixteenth-century Poland was a "republic of nobles", and it was the nobility's "middle class" that formed the leading component during the later Jagiellon period and afterwards, but the magnates held the highest state and church offices. At that time the szlachta in Poland and Lithuania was ethnically diversified and belonged to various religious denominations. During this period of tolerance such factors had little bearing on one's economic status or career potential. Jealous of their class privilege ("freedoms"), the Renaissance szlachta developed a sense of public service duties, educated their youth, took a keen interest in current trends and affairs, and traveled widely. While the Golden Age of Polish Culture adopted western humanism and Renaissance patterns, the style of the nobles beginning in the second half of the century acquired a distinctly eastern flavor. Visiting foreigners often remarked on the splendor of the residences and the consumption-oriented lifestyle of wealthy Polish nobles.
In a situation analogous to that of other European countries, the progressive internal decay of the Polish Church created conditions favorable for the dissemination of Reformation ideas and currents. For example, there was a chasm between the lower clergy and the nobility-based Church hierarchy, which was quite secularized, often corrupt, and preoccupied with temporal issues such as power and wealth. The middle nobility, which had already been exposed to the Hussite reformist persuasion, increasingly looked at the Church's many privileges with envy and hostility.
The teachings of Martin Luther were accepted most readily in the regions with strong German connections: Silesia, Greater Poland, Pomerania and Prussia. In Danzig (Gdańsk) in 1525 a lower-class Lutheran social uprising took place, bloodily subdued by Sigismund I; after the reckoning he established a representation for the plebeian interests as a segment of the city government. Königsberg and the Duchy of Prussia under Albrecht Hohenzollern became a strong center of Protestant propaganda dissemination affecting all of northern Poland and Lithuania. Sigismund I quickly reacted against the "religious novelties", issuing his first related edict in 1520, banning any promotion of the Lutheran ideology, or even foreign trips to the Lutheran centers. Such attempted (poorly enforced) prohibitions continued until 1543.
Sigismund's son Sigismund II Augustus (Zygmunt II August), a monarch of a much more tolerant attitude, guaranteed the freedom of the Lutheran religion practice in all of Royal Prussia by 1559. Besides Lutheranism, which, within the Polish Crown, ultimately found substantial following mainly in the cities of Royal Prussia and western Greater Poland, the teachings of the persecuted Anabaptists and Unitarians, and in Greater Poland the Czech Brothers, were met, at least among the szlachta, with a more sporadic response.
In Royal Prussia, 41% of the parishes were counted as Lutheran in the second half of the 16th century, but that percentage kept increasing. According to Kasper Cichocki, who wrote in the early 17th century, only remnants of Catholicism were left there in his time. Lutheranism was strongly dominant in Royal Prussia throughout the 17th century, with the exception of Warmia (Ermland).
Around 1570, of the at least 700 Protestant congregations in Poland-Lithuania, over 420 were Calvinist and over 140 Lutheran, the latter including 30–40 ethnically Polish congregations. Protestants encompassed approximately half of the magnate class, a quarter of the other nobility and townspeople, and one-twentieth of the non-Orthodox peasantry. The bulk of the Polish-speaking population remained Catholic, but the proportion of Catholics diminished significantly within the upper social ranks.
Calvinism, on the other hand, gained many followers in the mid-16th century among both the szlachta and the magnates, especially in Lesser Poland and Lithuania. The Calvinists, who, led by Jan Łaski, were working on the unification of the Protestant churches, proposed the establishment of a Polish national church, under which all Christian denominations, including the Eastern Orthodox (very numerous in the Grand Duchy of Lithuania and Ukraine), would be united. After 1555 Sigismund II, who accepted their ideas, sent an envoy to the pope, but the papacy rejected the various Calvinist postulates. In 1563 Łaski and several other Calvinist scholars published the Bible of Brest, a complete Polish Bible translation from the original languages, an undertaking financed by Mikołaj Radziwiłł the Black. After 1563–1565, when state enforcement of Church jurisdiction was abolished, full religious tolerance became the norm. The Polish Catholic Church emerged from this critical period weakened but not badly damaged (the bulk of the Church property was preserved), which facilitated the later success of the Counter-Reformation.
Among the Calvinists, who also included the lower classes and their leaders, ministers of common background, disagreements soon developed, based on different views in the areas of religious and social doctrines. The official split took place in 1562, when two separate churches were officially established, the mainstream Calvinist, and the smaller, more reformist, known as the Polish Brethren or Arians. The adherents of the radical wing of the Polish Brethren promoted, often by way of personal example, the ideas of social justice. Many Arians (Piotr of Goniądz, Jan Niemojewski) were pacifists opposed to private property, serfdom, state authority and military service; through communal living some had implemented the ideas of shared usage of the land and other property. A major Polish Brethren congregation and center of activities was established in 1569 in Raków near Kielce, and lasted until 1638, when Counter-Reformation had it closed. The notable Sandomierz Agreement of 1570, an act of compromise and cooperation among several Polish Protestant denominations, excluded the Arians, whose more moderate, larger faction toward the end of the century gained the upper hand within the movement.
The act of the Warsaw Confederation, which took place during the convocation sejm of 1573, provided guarantees, at least for the nobility, of religious freedom and peace. It gave the Protestant denominations, including the Polish Brethren, formal rights for many decades to come. Uniquely in 16th century Europe, it turned the Commonwealth, in the words of Cardinal Stanislaus Hosius, a Catholic reformer, into a "safe haven for heretics".
Culture of Polish Renaissance
Golden Age of Polish culture
The Polish "Golden Age", the period of the reigns of Sigismund I and Sigismund II, the last two Jagiellon kings, or more generally the 16th century, is most often identified with the rise of the culture of Polish Renaissance. The cultural flowering had its material base in the prosperity of the elites, both the landed nobility and urban patriciate at such centers as Cracow and Danzig. As was the case with other European nations, the Renaissance inspiration came in the first place from Italy, a process accelerated to some degree by Sigismund I's marriage to Bona Sforza. Many Poles traveled to Italy to study and to learn its culture. As imitating Italian ways became very trendy (the royal courts of the last two Jagiellon kings provided the leadership and example for everybody else), many Italian artists and thinkers were coming to Poland, some settling and working there for many years. While the pioneering Polish humanists, greatly influenced by Erasmus of Rotterdam, accomplished the preliminary assimilation of the antiquity culture, the generation that followed was able to put greater emphasis on the development of native elements, and because of its social diversity, advanced the process of national integration.
Literacy, education and patronage of intellectual endeavors
Beginning in 1473 in Cracow (Kraków), the printing business kept growing. By the turn of the 16th/17th century there were about 20 printing houses within the Commonwealth, 8 in Cracow, the rest mostly in Danzig (Gdańsk), Thorn (Toruń) and Zamość. The Academy of Kraków and Sigismund II possessed well-stocked libraries; smaller collections were increasingly common at noble courts, schools and townspeople's households. Illiteracy levels were falling, as by the end of the 16th century almost every parish ran a school.
The Lubrański Academy, an institution of higher learning, was established in Poznań in 1519. The Reformation resulted in the establishment of a number of gymnasiums, academically oriented secondary schools, some of international renown, as the Protestant denominations wanted to attract supporters by offering high quality education. The Catholic reaction was the creation of Jesuit colleges of comparable quality. The Kraków University in turn responded with humanist program gymnasiums of its own.
The university itself experienced a period of prominence at the turn of the 15th/16th century, when especially the mathematics, astronomy and geography faculties attracted numerous students from abroad. Latin, Greek, Hebrew and their literatures were likewise popular. By the mid 16th century the institution entered a crisis stage, and by the early 17th century regressed into Counter-reformational conformism. The Jesuits took advantage of the infighting and established in 1579 a university college in Vilnius, but their efforts aimed at taking over the Academy of Kraków were unsuccessful. Under the circumstances many elected to pursue their studies abroad.
Zygmunt I Stary, who built the presently existing Wawel Renaissance castle, and his son Sigismund II Augustus, supported intellectual and artistic activities and surrounded themselves with the creative elite. Their patronage example was followed by ecclesiastic and lay feudal lords, and by patricians in major towns.
Polish science reached its culmination in the first half of the 16th century. The medieval point of view was criticized, and more rational explanations were attempted. Copernicus' De revolutionibus orbium coelestium, published in Nuremberg in 1543, shook up the traditional value system as it extended to an understanding of the physical universe, doing away with the Ptolemaic geocentric model adopted by Christianity and opening the way for an explosion of scientific inquiry. The prominent scientists of the period generally resided in many different regions of the country, and, increasingly, the majority were of urban rather than noble origin.
Nicolaus Copernicus, the son of a Toruń merchant whose family came from Kraków, made many contributions to science and the arts. His scientific creativity was inspired at the University of Kraków, then at its height; he later also studied at Italian universities. Copernicus wrote Latin poetry, developed an economic theory, served as a cleric-administrator and political activist in Prussian sejmiks, and led the defense of Olsztyn against the forces of Albrecht Hohenzollern. As an astronomer, he worked on his scientific theory for many years at Frombork, where he died.
Josephus Struthius became famous as a physician and medical researcher. Bernard Wapowski was a pioneer of Polish cartography. Maciej Miechowita, a rector at the Cracow Academy, published in 1517 Tractatus de duabus Sarmatiis, a treatise on the geography of the East, an area in which Polish investigators provided first-hand expertise for the rest of Europe.
Andrzej Frycz Modrzewski was one of the greatest theorists of political thought in Renaissance Europe. His most famous work, On the Improvement of the Commonwealth, was published in Kraków in 1551. Modrzewski criticized feudal social relations and proposed broad realistic reforms. He postulated that all social classes should be subjected to the law to the same degree, and wanted to moderate the existing inequities. Modrzewski, an influential and often translated author, was a passionate proponent of peaceful resolution of international conflicts. Bishop Wawrzyniec Goślicki (Goslicius), who wrote and published in 1568 a study entitled De optimo senatore (The Counsellor in the 1598 English translation), was another political thinker who was popular and influential in the West.
Historian Marcin Kromer wrote De origine et rebus gestis Polonorum (On the origin and deeds of Poles) in 1555 and in 1577 Polonia, a treatise highly regarded in Europe. Marcin Bielski's Chronicle of the Whole World, a universal history, was written ca. 1550. The chronicle of Maciej Stryjkowski (1582) covered the history of Eastern Europe.
Modern Polish literature begins in the 16th century. At that time the Polish language, common to all educated groups, matured and penetrated all areas of public life, including municipal institutions, the legal code, the Church and other official uses, coexisting for a while with Latin. Klemens Janicki, one of the Renaissance Latin language poets, a laureate of a papal distinction, was of peasant origin. Another plebeian author, Biernat of Lublin, wrote his own version of Aesop's fables in Polish, permeated with his socially radical views.
A breakthrough for literary Polish came under the influence of the Reformation with the writings of Mikołaj Rej. In his Brief Discourse, a satire published in 1543, he defends a serf against a priest and a noble, but in his later works he often celebrates the joys of the peaceful but privileged life of a country gentleman. Rej, whose legacy is his unabashed promotion of the Polish language, left a great variety of literary pieces. Łukasz Górnicki, an author and translator, perfected the Polish prose of the period. His contemporary and friend Jan Kochanowski became one of the greatest Polish poets of all time.
Kochanowski was born in 1530 into a prosperous noble family. In his youth he studied at the universities of Kraków, Königsberg and Padua and traveled extensively in Europe. He worked for a time as a royal secretary, and then settled in the village of Czarnolas, a part of his family inheritance. Kochanowski's multifaceted creative output is remarkable both for the depth of thoughts and feelings that he shares with the reader and for its beauty and classic perfection of form. Among Kochanowski's best-known works are the bucolic Fraszki (Trifles), epic poetry, religious lyrics, the tragedy The Dismissal of the Greek Envoys, and the most highly regarded Threnodies (Laments), written after the death of his young daughter.
Following European, and in particular Italian, musical trends, Renaissance music developed in Poland, centered on royal court patronage and branching out from there. From 1543 Sigismund I kept a permanent choir at the Wawel castle, while the Reformation brought large-scale group church singing in Polish during services. Jan of Lublin wrote a comprehensive tablature for the organ and other keyboard instruments. Among the composers, who often permeated their music with national and folk elements, were Wacław of Szamotuły, Mikołaj Gomółka, who set to music the psalms translated by Kochanowski, and Mikołaj Zieleński, who enriched Polish music by adopting the polyphonic style of the Venetian School.
Architecture, sculpture and painting
Architecture, sculpture and painting also developed under Italian influence from the beginning of the 16th century. A number of professionals from Tuscany arrived and worked as royal artists in Kraków. Francesco Fiorentino worked on the tomb of Jan Olbracht from as early as 1502, and then, together with Bartolommeo Berrecci and Benedykt of Sandomierz, rebuilt the royal castle, a project accomplished between 1507 and 1536. Berrecci also built Sigismund's Chapel at Wawel Cathedral. Polish magnates, Silesian Piast princes in Brzeg, and even Kraków merchants (a class that by the mid-16th century had gained economic strength nationwide) built or rebuilt their residences to resemble the Wawel Castle. Kraków's Sukiennice and Poznań City Hall are among numerous buildings rebuilt in the Renaissance manner, but Gothic construction continued alongside for a number of decades.
Between 1580 and 1600 Jan Zamoyski commissioned the Venetian architect Bernardo Morando to build the city of Zamość. The town and its fortifications were designed to consistently implement the Renaissance and Mannerism aesthetic paradigms.
Tombstone sculpture, often inside churches, is richly represented on graves of clergy and lay dignitaries and other wealthy individuals. Jan Maria Padovano and Jan Michałowicz of Urzędów count among the prominent artists.
Painted illuminations in Balthasar Behem Codex are of exceptional quality, but draw their inspiration largely from Gothic art. Stanisław Samostrzelnik, a monk in the Cistercian monastery in Mogiła near Kraków, painted miniatures and polychromed wall frescos.
Republic of middle nobility; execution movement
During the reign of Sigismund I, the szlachta in the lower chamber of the general sejm (from 1493 a bicameral legislative body), initially decidedly outnumbered by their more privileged colleagues in the senate (as the prelates and barons of the royal council, appointed for life, were now called), acquired a more numerous and fully elected representation. Sigismund, however, preferred to rule with the help of the magnates, pushing the szlachta into the "opposition".
After the Nihil novi act of 1505, a collection of laws known as Łaski's Statutes was published in 1506 and distributed to Polish courts. The legal pronouncements, intended to facilitate the functioning of a uniform and centralized state, with ordinary szlachta privileges strongly protected, were frequently ignored by the kings, beginning with Sigismund I, and the upper nobility or church interests. This situation became the basis for the formation around 1520 of the szlachta's execution movement, for the complete codification and execution, or enforcement, of the laws.
In 1518 Sigismund I married Bona Sforza d'Aragona, a young, strong-minded Italian princess. Bona's sway over the king and the magnates, her efforts to strengthen the monarch's political position, financial situation, and especially the measures she took to advance her personal and dynastic interests, including the forced royal election of the minor Sigismund Augustus in 1529 and his premature coronation in 1530, increased the discontent among szlachta activists.
The opposition middle szlachta movement came up with a constructive reform program during the Kraków sejm of 1538/1539. Among the movement's demands were termination of the kings' practice of alienation of the royal domain (giving or selling land estates to great lords at the monarch's discretion), and a ban on concurrent holding of multiple state offices by the same person, both legislated initially in 1504. Sigismund I's unwillingness to move toward the implementation of the reformers' goals negatively affected the country's financial and defensive capabilities.
The relationship with the szlachta only worsened during the early years of the reign of Sigismund II Augustus and remained bad until 1562. Sigismund Augustus' secret marriage to Barbara Radziwiłł in 1547, before his accession to the throne, was strongly opposed by his mother Bona and by the magnates of the Crown. Sigismund, who took over the reign after his father's death in 1548, overcame the resistance and had Barbara crowned in 1550; a few months later the new queen died. Bona, estranged from her son, returned to Italy in 1556, where she died soon afterwards.
The Sejm, until 1573 summoned by the king at his discretion (for example, when he needed funds to wage a war) and composed of two chambers presided over by the monarch, became in the course of the 16th century the main organ of state power. The reform-minded execution movement had its chance to take on the magnates and the church hierarchy (and take steps to restrain their abuse of power and wealth) when Sigismund Augustus switched sides and lent them his support at the sejm of 1562. During this and several more sessions of parliament over the next decade or so, the Reformation-inspired szlachta was able to push through a variety of reforms, which resulted in a fiscally more sound, better governed, more centralized and territorially unified Polish state. Some of the changes were too modest, and others were never completely implemented (e.g., recovery of the usurped Crown lands), but for the time being the middle szlachta movement was victorious.
Resources and strategic objectives
Despite the favorable economic development, the military potential of 16th-century Poland was modest in relation to the challenges and threats coming from several directions, which included the Ottoman Empire, the Teutonic state, the Habsburgs, and Muscovy. Given the declining military value and willingness of pospolite ruszenie, the bulk of the forces available consisted of professional and mercenary soldiers. Their number and provision depended on szlachta-approved funding (self-imposed taxation and other sources) and tended to be insufficient for any combination of adversaries. The quality of the forces and their command was good, as demonstrated by victories against a seemingly overwhelming enemy. The attainment of strategic objectives was supported by a well-developed service of knowledgeable diplomats and emissaries. Because of the limited resources at the state's disposal, Jagiellon Poland had to concentrate on the area most crucial for its security and economic interests: the strengthening of its position along the Baltic coast.
Prussia; struggle for Baltic area domination
The Peace of Thorn of 1466 had reduced the Teutonic Knights, but brought no lasting solution to the problem they presented for Poland, and their state avoided paying the prescribed tribute. The chronically difficult relations worsened after the 1511 election of Albrecht as Grand Master of the Order. Faced with Albrecht's rearmament and hostile alliances, Poland waged a war in 1519; it ended in 1521, when mediation by Charles V resulted in a truce. As a compromise move, Albrecht, persuaded by Martin Luther, initiated a process of secularization of the Order and the establishment of a lay duchy of Prussia as Poland's dependency, ruled by Albrecht and afterwards by his descendants. The terms of the proposed pact immediately improved Poland's situation in the Baltic region, and at that time also appeared to protect the country's long-term interests. The treaty was concluded in 1525 in Kraków; the remaining state of the Teutonic Knights (East Prussia, centered on Königsberg) was converted into the Protestant (Lutheran) Duchy of Prussia under the King of Poland, and the homage act of the new Prussian duke in Kraków followed.
In reality the House of Hohenzollern, of which Albrecht was a member and which was the ruling family of the Margraviate of Brandenburg, had been actively expanding its territorial influence, already reaching in the 16th century into Farther Pomerania and Silesia. Motivated by current political expediency, Sigismund Augustus in 1563 allowed the Brandenburg elector branch of the Hohenzollerns, excluded under the 1525 agreement, to inherit the Prussian fief rule. The decision, confirmed by the 1569 sejm, made the future union of Prussia with Brandenburg possible. Sigismund II, unlike his successors, was however careful to assert his supremacy. The Polish–Lithuanian Commonwealth, ruled after 1572 by elective kings, was even less able to counteract the growing importance of the dynastically active Hohenzollerns.
In 1568 Sigismund Augustus, who had already embarked on a war fleet enlargement program, established the Maritime Commission. A conflict with the City of Gdańsk (Danzig), which felt that its monopolistic trade position was threatened, ensued. In 1569 Royal Prussia had its legal autonomy largely taken away, and in 1570 Poland's supremacy over Danzig and the Polish King's authority over the Baltic shipping trade were regulated and received statutory recognition (Karnkowski's Statutes).
Wars with Moscow
In the 16th century the Grand Duchy of Moscow continued activities aimed at unifying the old Rus' lands still under Lithuanian rule. The Grand Duchy of Lithuania had insufficient resources to counter Moscow's advances, already having to control the Rus' population within its borders and not being able to count on loyalty of Rus' feudal lords. As a result of the protracted war at the turn of the 15th and 16th centuries, Moscow acquired large tracts of territory east of the Dnieper River. Polish assistance and involvement were increasingly becoming a necessary component of the balance of power in the eastern reaches of the Lithuanian domain.
Under Vasili III, Moscow fought a war with Lithuania and Poland between 1512 and 1522, during which the Russians took Smolensk in 1514. That same year the Polish-Lithuanian rescue expedition under Hetman Konstanty Ostrogski won the Battle of Orsha and stopped the Duchy of Moscow's further advances. An armistice implemented in 1522 left the Smolensk land and Severia in Russian hands. Another round of fighting took place during 1534–1537, when Polish aid led by Hetman Jan Tarnowski made possible the taking of Gomel and Starodub. A new truce (Lithuania kept only Gomel), stabilization of the border, and over two decades of peace followed.
The Jagiellons and the Habsburgs; Ottoman Empire expansion
In 1515, during a congress in Vienna, a dynastic succession arrangement was agreed to between Maximilian I, Holy Roman Emperor and the Jagiellon brothers, Vladislas II of Bohemia and Hungary and Sigismund I of Poland and Lithuania. It was supposed to end the Emperor's support for Poland's enemies, the Teutonic and Russian states, but after the election of Charles V, Maximilian's successor in 1519, the relations with Sigismund had worsened.
The Jagiellon rivalry with the House of Habsburg in central Europe was ultimately resolved to the Habsburgs' advantage. The decisive factor that damaged or weakened the monarchies of the last Jagiellons was the Ottoman Empire's Turkish expansion. Hungary's vulnerability greatly increased after Suleiman the Magnificent took the Belgrade fortress in 1521. To prevent Poland from extending military aid to Hungary, Suleiman had a Tatar-Turkish force raid southeastern Poland-Lithuania in 1524. The Hungarian army was defeated in 1526 at the Battle of Mohács, where the young Louis II Jagiellon, son of Vladislas II, was killed. Subsequently, after a period of internal strife and external intervention, Hungary was partitioned between the Habsburgs and the Ottomans.
The 1526 death of Janusz III of Masovia, the last of the Masovian Piast dukes line (a remnant of the fragmentation period divisions), enabled Sigismund I to finalize the incorporation of Masovia into the Polish Crown in 1529.
From the early 16th century the Pokuttya border region was contested by Poland and Moldavia (see Battle of Obertyn). A peace with Moldavia took effect in 1538 and Pokuttya remained Polish. An "eternal peace" with the Ottoman Empire was negotiated by Poland in 1533 to secure frontier areas. Moldavia had fallen under Turkish domination, but Polish-Lithuanian magnates remained actively involved there. Sigismund II Augustus even claimed "jurisdiction" and in 1569 accepted a formal, short-lived suzerainty over Moldavia.
Livonia; struggle for Baltic area domination
Because of its desire to control Livonian Baltic seaports, especially Riga, and other economic reasons, in the 16th century the Grand Duchy of Lithuania was becoming increasingly interested in extending its territorial rule to Livonia, a country, by the 1550s largely Lutheran, traditionally ruled by the Brothers of the Sword knightly order. This put Poland and Lithuania on a collision course with Moscow and other powers, which had also attempted expansion in that area.
Soon after the 1525 Kraków (Cracow) treaty, Albrecht (Albert) of Hohenzollern, seeking a dominant position for his brother Wilhelm, the Archbishop of Riga, planned a Polish-Lithuanian fief in Livonia. What happened instead was the establishment of a Livonian pro-Polish-Lithuanian party or faction. Internal fighting in Livonia took place when the Grand Master of the Brothers concluded a treaty with Moscow in 1554, declaring his state's neutrality regarding the Russian-Lithuanian conflict. Supported by Albrecht and the magnates, Sigismund II declared war on the Order. Grand Master Wilhelm von Fürstenberg accepted the Polish-Lithuanian conditions without a fight, and according to the 1557 Poswol treaty, a military alliance obliged the Livonian state to support Lithuania against Moscow.
Other powers aspiring to access the Baltic through Livonia responded by partitioning the Livonian state, which triggered the lengthy Livonian War, fought between 1558 and 1583. Ivan IV of Russia took Dorpat (Tartu) and Narva in 1558, and soon the Danes and Swedes had occupied other parts of the country. To protect the integrity of their country, the Livonians now sought a union with the Polish-Lithuanian state. Gotthard Kettler, the new Grand Master, met in Vilnius (Vilna, Wilno) with Sigismund Augustus in 1561 and declared Livonia a vassal state under the Polish King. The agreement of November 28 called for secularization of the Brothers of the Sword Order and incorporation of the newly established Duchy of Livonia into the Rzeczpospolita ("Republic") as an autonomous entity. Under the Union of Vilnius the Duchy of Courland and Semigallia was also created as a separate fief, to be ruled by Kettler. Sigismund II obliged himself to recover the parts of Livonia lost to Moscow and the Baltic powers, which led to grueling wars with Russia (1558–1570 and 1577–1582) and heavy struggles over the fundamental issues of control of the Baltic trade and freedom of navigation.
The Baltic region policies of the last Jagiellon king and his advisors were the most mature of the 16th century Poland's strategic programs. The outcome of the efforts in that area was to a considerable extent successful for the Commonwealth. The conclusion of the above wars took place during the reign of King Stephen Báthory.
Poland and Lithuania in real union under Sigismund II
Sigismund II's childlessness added urgency to the idea of turning the personal union between Poland and the Grand Duchy of Lithuania into a more permanent and tighter relationship; it was also a priority for the execution movement. Lithuania's laws were codified and reforms enacted in 1529, 1557, 1565–1566 and 1588, gradually making its social, legal and economic system similar to that of Poland, with the expanding role of the middle and lower nobility. Fighting wars with Moscow under Ivan IV and the threat perceived from that direction provided additional motivation for the real union for both Poland and Lithuania.
The process of negotiating the actual arrangements turned out to be difficult and lasted from 1563 to 1569, with the Lithuanian magnates, worried about losing their dominant position, at times uncooperative. It took Sigismund II's unilateral declaration of the incorporation into the Polish Crown of substantial disputed border regions, including most of Lithuanian Ukraine, to make the Lithuanian magnates rejoin the process and participate in the swearing of the act of the Union of Lublin on July 1, 1569. For the near future Lithuania became more secure on its eastern front. Its increasingly Polonized nobility made great contributions to the Commonwealth's culture in the coming centuries, but at the cost of Lithuanian national development.
The Lithuanian language survived as a peasant vernacular and also as a written language in religious use from the publication of the Lithuanian Catechism by Martynas Mažvydas in 1547. The Ruthenian language remained in the Grand Duchy's official use even after the Union, until it was eventually superseded by Polish.
The Commonwealth: multicultural, magnate dominated
By the Union of Lublin a unified Polish–Lithuanian Commonwealth (Rzeczpospolita) was created, stretching from the Baltic Sea and the Carpathian mountains to present-day Belarus and western and central Ukraine (which earlier had been Kievan Rus' principalities). Within the new federation some degree of formal separateness of Poland and Lithuania was retained (distinct state offices, armies, treasuries and judicial systems), but the union became a multinational entity with a common monarch, parliament, monetary system and foreign-military policy, in which only the nobility enjoyed full citizenship rights. Moreover, the nobility's uppermost stratum was about to assume the dominant role in the Commonwealth, as magnate factions were acquiring the ability to manipulate and control the rest of szlachta to their clique's private advantage. This trend, facilitated further by the liberal settlement and land acquisition consequences of the union, was becoming apparent at the time of, or soon after the 1572 death of Sigismund Augustus, the last monarch of the Jagiellon dynasty.
One of the most salient characteristics of the newly established Commonwealth was its multiethnicity and, accordingly, its diversity of religious faiths and denominations. Among the peoples represented were Poles (about 50% or less of the total population), Lithuanians, Latvians, Rus' people (corresponding to today's Belarusians, Ukrainians, Russians or their East Slavic ancestors), Germans, Estonians, Jews, Armenians, Tatars and Czechs, as well as smaller West European groups. As for the main social segments in the early 17th century, nearly 70% of the Commonwealth's population were peasants, over 20% residents of towns, and less than 10% nobles and clergy combined. The total population, estimated at 8–10 million, kept growing dynamically until the middle of the century. The Slavic populations of the eastern lands, Rus' or Ruthenia, were, except for the Polish colonizing nobility (and Polonized elements of the local nobility), solidly Eastern Orthodox, which portended future trouble for the Commonwealth.
Jewish settlement
Poland had become the home to Europe's largest Jewish population, as royal edicts guaranteeing Jewish safety and religious freedom, issued during the 13th century (Bolesław the Pious, Statute of Kalisz of 1264), contrasted with bouts of persecution in Western Europe. This persecution intensified following the Black Death of 1348–1349, when some in the West blamed the outbreak of the plague on the Jews. As scapegoats were sought, pogroms and mass killings took place in a number of German cities, which caused an exodus of survivors heading east. Much of Poland was spared from the disease, and Jewish immigrants brought valuable contributions and abilities to the rising state. The number of Jews in Poland kept increasing throughout the Middle Ages; the population reached about 30,000 toward the end of the 15th century and, as refugees escaping further persecution elsewhere kept streaming in, about 150,000 in the 16th century. A royal privilege issued in 1532 granted the Jews freedom to trade anywhere within the kingdom. Massacres and expulsions from many German states continued until 1552–1553. By the mid-16th century, 80% of the world's Jews lived and flourished in Poland and Lithuania; most of western and central Europe was by that time closed to Jews. In Poland-Lithuania the Jews increasingly found employment as managers and intermediaries, facilitating the functioning of, and collecting revenue in, huge magnate-owned land estates, especially in the eastern borderlands, and developed into an indispensable mercantile and administrative class. Despite the partial resettlement of Jews in Western Europe following the Thirty Years' War (1618–1648), a great majority of world Jewry lived in Eastern Europe (in the Commonwealth and in the regions further east and south, where many migrated) until the 1940s.
See also
- History of Poland during the Piast dynasty
- History of the Polish–Lithuanian Commonwealth (1569–1648)
a.^ This is true especially regarding legislative matters and the legal framework. Despite the restrictions the nobility imposed on the monarchs, the Polish kings never became figureheads. In practice they wielded considerable executive power, up to and including the last king, Stanisław August Poniatowski. Some were at times even accused of absolutist tendencies, and it may have been for lack of sufficiently strong personalities or favorable circumstances that none of the kings succeeded in significantly and lastingly strengthening the monarchy.
- Wyrozumski 1986
- Gierowski 1986
- Wyrozumski 1986, pp. 178–180
- Davies 1998, pp. 392, 461–463
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 55; Fogra, Kraków 1999, ISBN 83-85719-40-7
- A Traveller's History of Poland, by John Radzilowski; Northampton, Massachusetts: Interlink Books, 2007, ISBN 1-56656-655-X, p. 63-65
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki. Cambridge: Cambridge University Press, 2nd edition 2006, ISBN 0-521-61857-6, p. 68-69
- Wyrozumski 1986, pp. 180–190
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 41
- Stopka 1999, p. 91
- Wyrozumski 1986, pp. 190–195
- Wyrozumski 1986, pp. 195–198, 201–203
- Wyrozumski 1986, pp. 198–206
- Wyrozumski 1986, pp. 206–207
- Wyrozumski 1986, pp. 207–213
- Stopka 1999, p. 86
- "Russian Interaction with Foreign Lands". Strangelove.net. 2007-10-06. Retrieved 2009-09-19.
- "List of Wars of the Crimean Tatars". Zum.de. Retrieved 2009-09-19.
- Wyrozumski 1986, pp. 213–215
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 302
- Wyrozumski 1986, pp. 215–221
- Wyrozumski 1986, pp. 221–225
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 73
- Gierowski 1986, pp. 24–38
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 65, 68
- Gierowski 1986, pp. 38–53
- Gierowski 1986, pp. 53–64
- Wacław Urban, Epizod reformacyjny (The Reformation episode), p.30. Krajowa Agencja Wydawnicza, Kraków 1988, ISBN 83-03-02501-5.
- Various authors, ed. Marek Derwich and Adam Żurek, Monarchia Jagiellonów, 1399–1586 (The Jagiellon Monarchy: 1399–1586), p. 131-132, Urszula Augustyniak. Wydawnictwo Dolnośląskie, Wrocław 2003, ISBN 83-7384-018-4.
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 104
- Davies 2005, pp. 118
- Gierowski 1986, pp. 67–71
- Gierowski 1986, pp. 71–74
- Gierowski 1986, pp. 74–79
- Stanisław Grzybowski – Dzieje Polski i Litwy (1506-1648) (History of Poland and Lithuania (1506-1648)), p. 206, Fogra, Kraków 2000, ISBN 83-85719-48-2
- Gierowski 1986, pp. 79–84
- Anita J. Prażmowska – A History of Poland, 2004 Palgrave Macmillan, ISBN 0-333-97253-8, p. 84
- Gierowski 1986, pp. 84–85
- Gierowski 1986, pp. 85–88
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 61
- Gierowski 1986, pp. 92–105
- Basista 1999, p. 104
- Gierowski 1986, pp. 116–118
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 48, 50
- Gierowski 1986, pp. 119–121
- Basista 1999, p. 109
- Gierowski 1986, pp. 104–105
- Gierowski 1986, pp. 121–122
- Andrzej Romanowski, Zaszczuć osobnika Jasienicę (Harass the Jasienica individual). Gazeta Wyborcza newspaper wyborcza.pl, 2010-03-12
- Gierowski 1986, pp. 122–125, 151
- Basista 1999, pp. 109–110
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 58
- Gierowski 1986, pp. 125–130
- Basista 1999, pp. 115, 117
- Gierowski 1986, pp. 105–109
- Davies 1998, p. 228
- Davies 1998, pp. 392
- A Concise History of Poland, by Jerzy Lukowski and Hubert Zawadzki, p. 81
- Gierowski 1986, pp. 38–39
- Various authors, ed. Marek Derwich, Adam Żurek, Monarchia Jagiellonów (1399–1586) (Jagiellon monarchy (1399–1586)), p. 160-161, Krzysztof Mikulski. Wydawnictwo Dolnośląskie, Wrocław 2003, ISBN 83-7384-018-4.
- Ilustrowane dzieje Polski (Illustrated History of Poland) by Dariusz Banaszak, Tomasz Biber, Maciej Leszczyński, p. 40. 1996 Podsiedlik-Raniowski i Spółka, ISBN 83-7212-020-X.
- A Traveller's History of Poland, by John Radzilowski, p. 44-45
- Davies 1998, pp. 409–412
- Krzysztof Baczkowski – Dzieje Polski późnośredniowiecznej (1370–1506) (History of Late Medieval Poland (1370–1506)), p. 274-276
- Gierowski 1986, p. 46
- Richard Overy (2010), The Times Complete History of the World, Eighth Edition, p. 116-117. London: Times Books. ISBN 978-0-00-788089-8.
- "European Jewish Congress – Poland". Eurojewcong.org. Retrieved 2009-09-19.
- A Traveller's History of Poland, by John Radzilowski, p. 100, 113
- Gierowski 1986, pp. 144–146, 258–261
- A. Janeczek. "Town and country in the Polish Commonwealth, 1350-1650." In: S. R. Epstein. Town and Country in Europe, 1300-1800. Cambridge University Press. 2004. p. 164.
- Gierowski, Józef Andrzej (1986). Historia Polski 1505–1764 (History of Poland 1505–1764). Warszawa: Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN). ISBN 83-01-03732-6.
- Wyrozumski, Jerzy (1986). Historia Polski do roku 1505 (History of Poland until 1505). Warszawa: Państwowe Wydawnictwo Naukowe (Polish Scientific Publishers PWN). ISBN 83-01-03732-6.
- Stopka, Krzysztof (1999). In Andrzej Chwalba (ed.), Kalendarium dziejów Polski (Chronology of Polish History). Kraków: Wydawnictwo Literackie. ISBN 83-08-02855-1.
- Basista, Jakub (1999). In Andrzej Chwalba (ed.), Kalendarium dziejów Polski (Chronology of Polish History). Kraków: Wydawnictwo Literackie. ISBN 83-08-02855-1.
- Davies, Norman (1998). Europe: A History. New York: HarperPerennial. ISBN 0-06-097468-0.
- Davies, Norman (2005). God's Playground: A History of Poland, Volume I. New York: Columbia University Press. ISBN 978-0-231-12817-9.
Further reading
- The Cambridge History of Poland (two vols., 1941–1950) online edition vol 1 to 1696
- Butterwick, Richard, ed. The Polish-Lithuanian Monarchy in European Context, c. 1500-1795. Palgrave, 2001. 249 pp. online edition
- Davies, Norman. Heart of Europe: A Short History of Poland. Oxford University Press, 1984.
- Davies, Norman. God's Playground: A History of Poland. 2 vol. Columbia U. Press, 1982.
- Pogonowski, Iwo Cyprian. Poland: A Historical Atlas. Hippocrene, 1987. 321 pp.
- Sanford, George. Historical Dictionary of Poland. Scarecrow Press, 2003. 291 pp.
- Stone, Daniel. The Polish-Lithuanian State, 1386-1795. U. of Washington Press, 2001.
- Zamoyski, Adam. The Polish Way. Hippocrene Books, 1987. 397 pp. | http://en.wikipedia.org/wiki/History_of_Poland_(1385-1569) | 13 |
41 | monetary policy
The usual goals of monetary policy are to achieve or maintain full employment, to achieve or maintain a high rate of economic growth, and to stabilize prices and wages. Until the early 20th century, monetary policy was thought by most experts to be of little use in influencing the economy. Inflationary trends after World War II, however, caused governments to adopt measures that reduced inflation by restricting growth in the money supply.
Monetary policy is the domain of a nation’s central bank. The Federal Reserve System (commonly called the Fed) in the United States and the Bank of England of Great Britain are two of the largest such “banks” in the world. Although there are some differences between them, the fundamentals of their operations are almost identical and are useful for highlighting the various measures that can constitute monetary policy.
The Fed uses three main instruments in regulating the money supply: open-market operations, the discount rate, and reserve requirements. The first is by far the most important. By buying or selling government securities (usually bonds), the Fed—or a central bank—affects the money supply and interest rates. If, for example, the Fed buys government securities, it pays with a check drawn on itself. This action creates money in the form of additional deposits from the sale of the securities by commercial banks. By adding to the cash reserves of the commercial banks, then, the Fed enables those banks to increase their lending capacity. Consequently, the additional demand for government bonds bids up their price and thus reduces their yield (i.e., interest rates). The purpose of this operation is to ease the availability of credit and to reduce interest rates, which thereby encourages businesses to invest more and consumers to spend more. The selling of government securities by the Fed achieves the opposite effect of contracting the money supply and increasing interest rates.
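As a rough illustration of the deposit-creation mechanism described above, the sketch below assumes a simplified textbook model: a single fixed reserve ratio, banks that lend out everything above required reserves, and borrowers who redeposit all loan proceeds. The $100 purchase and 10% ratio are hypothetical figures for illustration, not actual central-bank data.

    # Deposit expansion following an open-market purchase, in a simplified
    # textbook model (fixed reserve ratio, no cash leakage). Figures are
    # illustrative only.

    def deposit_expansion(initial_purchase: float, reserve_ratio: float, rounds: int = 50) -> float:
        """Total new deposits created over successive rounds of lending."""
        total = 0.0
        new_deposit = initial_purchase          # the central bank's check becomes a deposit
        for _ in range(rounds):
            total += new_deposit
            new_deposit *= (1 - reserve_ratio)  # the lendable part is re-lent and redeposited
        return total

    print(deposit_expansion(100.0, 0.10))       # approaches 100 / 0.10 = 1000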
The second tool is the discount rate, which is the interest rate at which the Fed (or a central bank) lends to commercial banks. An increase in the discount rate reduces the amount of lending made by banks. In most countries the discount rate is used as a signal, in that a change in the discount rate will typically be followed by a similar change in the interest rates charged by commercial banks.
The third tool concerns changes in reserve requirements. Commercial banks by law hold a specific percentage of their deposits as required reserves with the Fed (or a central bank). These are held either in the form of non-interest-bearing reserves or as cash. This reserve requirement acts as a brake on the lending operations of the commercial banks: by increasing or decreasing this reserve-ratio requirement, the Fed can influence the amount of money available for lending and hence the money supply. This tool is rarely used, however, because it is so blunt. The Bank of England and most other central banks also employ a number of other tools, such as "treasury directive" regulation of installment purchasing and "special deposits."
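Under the same simplified assumptions as above, the maximum deposits supported by a given amount of reserves equal the reserves divided by the reserve ratio (a "money multiplier" of 1 / reserve ratio), so raising or lowering the requirement changes lending capacity directly. The ratios below are hypothetical.

    # Hypothetical comparison of lending capacity under different reserve ratios.
    def max_deposits(reserves: float, reserve_ratio: float) -> float:
        return reserves / reserve_ratio         # money multiplier = 1 / reserve_ratio

    for ratio in (0.05, 0.10, 0.20):
        print(f"reserve ratio {ratio:.0%}: ${max_deposits(100.0, ratio):,.0f} "
              "of deposits can be supported by $100 of reserves")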
Historically, under the gold standard of currency valuation, the primary goal of monetary policy was to protect the central banks’ gold reserves. When a nation’s balance of payments was in deficit, an outflow of gold to other nations would result. In order to stem this drain, the central bank would raise the discount rate and then undertake open-market operations to reduce the total quantity of money in the country. This would lead to a fall in prices, income, and employment and reduce the demand for imports and thus would correct the trade imbalance. The reverse process was used to correct a balance of payments surplus.
The inflationary conditions of the late 1960s and ’70s, when inflation in the Western world rose to a level three times the 1950–70 average, revived interest in monetary policy. Monetarists such as Harry G. Johnson, Milton Friedman, and Friedrich Hayek explored the links between the growth in money supply and the acceleration of inflation. They argued that tight control of money-supply growth was a far more effective way of squeezing inflation out of the system than were demand-management policies. Monetary policy is still used as a means of controlling a national economy’s cyclical fluctuations.
| http://www.britannica.com/EBchecked/topic/389158/monetary-policy | 13 |
15 | In 1540, on this day De Soto Discovers Gold North of Florida. Conquistador Hernando de Soto had been born to a poverty-stricken area of Spain and left to seek his fortune, which he did in the New World. He sailed to Panama in 1514 and accompanied Pizarro on the expedition to conquer the Inca in 1532.
De Soto, who had proven himself as an able, cunning, and ruthless commander, returned to Spain in 1534 with vast wealth from his share of the plunder. He married and petitioned the king to return to the New World as governor of Guatemala so he could explore further into the Pacific Ocean, but Charles V awarded him Cuba instead with an order to colonize Florida to the north. Ponce de Leon had discovered the vast lands to the north in 1521, but attempts to colonize up the coast over the next decade had all failed due to disease, lack of supplies, and hostile natives.
In 1539, de Soto put together a 600-man expedition with ample provisions and livestock for an extended search for gold. He studied the stories of Cabeza de Vaca, one of the four survivors of the ill-fated Narváez expedition into North America in 1527, which suffered endless attacks from natives, shipwreck, and enslavement before its survivors finally won fame among the natives for their healing techniques. Upon their arrival in Florida, the de Soto expedition came upon Juan Ortiz, who had been dispatched years before to find the lost Narváez and had been captured by locals. De Soto took on Ortiz as a guide and friend to the local Indians, which served the expedition far better than the natives Narváez had captured and forced to be guides, who had led his men in circles through the roughest territories possible with ample ground for ambushes.
After months of exploring up the Florida peninsula, the expedition wintered in Anhaica, the greatest city of the Apalachee people, whom Narváez had been falsely told were wealthy with gold. Rumors now said there was gold "toward the sun's rising". They traveled inland through the spring, northeasterly across a number of rivers and through several realms of native peoples. Finally, among the Cofitachequi, they met "The Lady of the Cofitachequi", their queen. She treated the well-armed men kindly with gifts of pearls, food, and, at last, gold. Rather than being native gold, however, the men recognized the items as Spanish, most likely abandoned from the nearby failed settlement by Lucas Vézquez de Ayllón that lasted only three months in 1526. Disturbed by the bad luck with gold, the expedition departed, bringing the Lady with them as an involuntary escort as they came through the lands of the Joara, which she considered her western province. There they found the "Chelaque", who were described in the later annals translated by Londoner Richard Hakluyt as eating "roots and herbs, which they seek in the fields, and upon wild beasts, which they kill with their bows and arrows, and are a very gentle people. All of them go naked and are very lean". The civilization was rudimentary at best, "the poorest country of maize that was seen in Florida". De Soto wanted to go further into the mountains and rest his horses there, but he determined to rest first using supplies ransomed for the Lady. During the month-long rest, many of his soldiers searched ahead for gold, while at least one stayed and taught agricultural techniques to the locals.
During a plowing session using a horse, which the natives had never seen before, they struck a large yellow rock. The natives worked to free it and throw it away, but the conquistador recognized it as a 17-pound gold nugget. De Soto was shocked by the find, as were the natives, who had never considered the inedible metal worth anything. He immediately built a fort and dispatched men back to Cuba for reinforcements. Meanwhile, de Soto and the bulk of his force captured the Lady of the Cofitachequi again and seized her kingdom. The Spanish built a settlement at the mouth of the Santee River called Port Carlos (for Charles V) as well as another farther inland, where mining of the placer deposits of gold began. Other deposits of gold were discovered in the region, spurring a gold rush to the area. A short-lived war broke out with King Tuscaloosa in the west, but the area was quickly depopulated of natives due to disease from the Columbian Exchange.
De Soto's gold fields proved to be shallower than he hoped, but the Spanish presence in Florida was affirmed. Plantations grew up as planters experimented with what grew best, eventually settling on tobacco as a cash crop. With the seventeenth century, the English began to block the spread of Spanish influence with colonies in Virginia and Plymouth, eventually assigning a border along the James River. The French challenged Spanish control over the Mississippi River and dominated much of Canada until the Seven Years' War caused Britain to annex Canada and force France to cede Louisiana to the Spanish, dividing North America between the Spanish and British Empires.
Heavy taxation following the war, combined with Enlightenment ideals, caused many in the American Colonies to call for resistance and even independence. However, with a strong Spanish bastion just to the south, the outcry never spread beyond the Boston Insurrection. Instead, the American Union would gain marginal self-rule, which would be successfully tested with the Slavery Abolition Act of 1833. The expansive state of Florida, meanwhile, would undergo a bloody fifteen-year war of independence from Spain. | http://www.todayinah.co.uk/index.php?story=39602-S&userid=guest@todayinah.co.uk | 13 |
112 | 2008/9 Schools Wikipedia Selection. Related subjects: Mathematics
In calculus, a branch of mathematics, the derivative is a measurement of how a function changes when the values of its inputs change. Loosely speaking, a derivative can be thought of as how much a quantity is changing at some given point. For example, the derivative of the position or distance of a car at some point in time is the instantaneous velocity, or instantaneous speed (respectively), at which that car is traveling (conversely the integral of the velocity is the car's position).
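To make the opening example concrete, here is a small sketch in Python that approximates a car's instantaneous velocity as the derivative of its position; the position function is invented purely for illustration.

    # Approximate the instantaneous velocity as the derivative of position.
    # The position function s(t) = 5 t^2 (meters after t seconds) is a
    # made-up example, not data from the article.

    def s(t: float) -> float:
        return 5.0 * t ** 2

    def velocity(t: float, dt: float = 1e-6) -> float:
        return (s(t + dt) - s(t)) / dt          # difference quotient with a tiny dt

    print(velocity(2.0))                        # ≈ 20.0 m/s, since s'(t) = 10 t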
A closely-related notion is the differential of a function.
The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization.
Differentiation and the derivative
Differentiation is a method to compute the rate at which a quantity, y, changes with respect to the change in another quantity, x, upon which it is dependent. This rate of change is called the derivative of y with respect to x. In more precise language, the dependency of y on x means that y is a function of x. If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point. This functional relationship is often denoted y = f(x), where f denotes the function.
The simplest case is when y is a linear function of x, meaning that the graph of y against x is a straight line. In this case, y = f(x) = m x + c, for real numbers m and c, and the slope m is given by
- m = Δy / Δx = (change in y) / (change in x),
where the symbol Δ (the uppercase form of the Greek letter Delta) is an abbreviation for "change in." This formula is true because
- y + Δy = f(x+ Δx) = m (x + Δx) + c = m x + c + m Δx = y + mΔx.
It follows that Δy = m Δx.
This gives an exact value for the slope of a straight line. If the function f is not linear (i.e. its graph is not a straight line), however, then the change in y divided by the change in x varies: differentiation is a method to find an exact value for this rate of change at any given value of x.
The idea, illustrated by Figures 1-3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
In Leibniz's notation, such an infinitesimal change in x is denoted by dx, and the derivative of y with respect to x is written
- dy/dx,
suggesting the ratio of two infinitesimal quantities. (The above expression is pronounced in various ways such as "d y by d x" or "d y over d x". The oral form "d y d x" is often used conversationally, although it may lead to confusion.)
The most common approach to turn this intuitive idea into a precise definition uses limits, but there are other methods, such as non-standard analysis.
Definition via difference quotients
Let y=f(x) be a function of x. In classical geometry, the tangent line at a real number a was the unique line through the point (a, f(a)) which did not meet the graph of f transversally, meaning that the line did not pass straight through the graph. The derivative of y with respect to x at a is, geometrically, the slope of the tangent line to the graph of f at a. The slope of the tangent line is very close to the slope of the line through (a, f(a)) and a nearby point on the graph, for example (a + h, f(a + h)). These lines are called secant lines. A value of h close to zero will give a good approximation to the slope of the tangent line, and smaller values (in absolute value) of h will, in general, give better approximations. The slope of the secant line is the difference between the y values of these points divided by the difference between the x values, that is,
- [f(a + h) − f(a)] / [(a + h) − a] = [f(a + h) − f(a)] / h.
This expression is Newton's difference quotient. The derivative is the value of the difference quotient as the secant lines get closer and closer to the tangent line. Formally, the derivative of the function f at a is the limit
- f′(a) = lim (h → 0) [f(a + h) − f(a)] / h
of the difference quotient as h approaches zero, if this limit exists. If the limit exists, then f is differentiable at a. Here f′ (a) is one of several common notations for the derivative ( see below).
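As a minimal illustration not drawn from the article itself, the limit definition can be explored numerically: the following Python sketch evaluates the difference quotient of the squaring function at a = 3 for progressively smaller h, and the values settle toward 6, the answer derived in the worked example below. The function names are chosen here for illustration only.

```python
def difference_quotient(f, a, h):
    """Slope of the secant line through (a, f(a)) and (a + h, f(a + h))."""
    return (f(a + h) - f(a)) / h

def square(x):
    return x * x

for h in [1.0, 0.1, 0.01, 0.001]:
    # The printed slopes approach 6, the derivative of x**2 at x = 3.
    print(h, difference_quotient(square, 3.0, h))
```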
Equivalently, the derivative satisfies the property that
- lim (h → 0) [f(a + h) − f(a) − f′(a)h] / h = 0,
which has the intuitive interpretation (see Figure 1) that the tangent line to f at a gives the best linear approximation
- f(a + h) ≈ f(a) + f′(a)h
to f near a (i.e., for small h). This interpretation is the easiest to generalize to other settings (see below).
Substituting 0 for h in the difference quotient causes division by zero, so the slope of the tangent line cannot be found directly. Instead, define Q(h) to be the difference quotient as a function of h:
- Q(h) = [f(a + h) − f(a)] / h.
Q(h) is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). If f is a continuous function, meaning that its graph is an unbroken curve with no gaps, then Q is a continuous function away from the point h = 0. If the limit exists, meaning that there is a way of choosing a value for Q(0) which makes the graph of Q a continuous function, then the function f is differentiable at the point a, and its derivative at a equals Q(0).
In practice, the continuity of the difference quotient Q(h) at h = 0 is shown by modifying the numerator to cancel h in the denominator. This process can be long and tedious for complicated functions, and many short cuts are commonly used to simplify the process.
The squaring function f(x) = x² is differentiable at x = 3, and its derivative there is 6. This is proven by writing the difference quotient as follows:
- [(3 + h)² − 3²] / h = (9 + 6h + h² − 9) / h = (6h + h²) / h = 6 + h.
Then we get the simplified function in the limit:
- lim (h → 0) (6 + h) = 6.
The last expression shows that the difference quotient equals 6 + h when h is not zero and is undefined when h is zero. (Remember that because of the definition of the difference quotient, the difference quotient is always undefined when h is zero.) However, there is a natural way of filling in a value for the difference quotient at zero, namely 6. Hence the slope of the graph of the squaring function at the point (3, 9) is 6, and so its derivative at x = 3 is f '(3) = 6.
More generally, a similar computation shows that the derivative of the squaring function at x = a is f '(a) = 2a.
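For readers who want to check this general result symbolically, here is a minimal sketch assuming the SymPy library is available (the library choice is an assumption, not something the article prescribes): it takes the limit of Newton's difference quotient for x² and compares it with SymPy's own derivative.

```python
import sympy as sp

x, h = sp.symbols('x h')
quotient = ((x + h)**2 - x**2) / h   # Newton's difference quotient for x**2
print(sp.limit(quotient, h, 0))      # prints 2*x
print(sp.diff(x**2, x))              # prints 2*x as well
```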
Continuity and differentiability
If y = f(x) is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function which returns a value, say 1, for all x less than a, and returns a different value, say 10, for all x greater than or equal to a. f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h will be very steep, and as h tends to zero the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h will have slope zero. Consequently the secant lines do not approach any single slope, so the limit of the difference quotient does not exist.
However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function y = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one, whereas if h is negative, then the slope of the secant line from 0 to h is negative one. This can be seen graphically as a "kink" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: for instance, the cube root function y = ∛x is not differentiable at x = 0.
Most functions which occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions which have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
The derivative as a function
Let f be a function that has a derivative at every point a in the domain of f. Because every point a has a derivative, there is a function which sends the point a to the derivative of f at a. This function is written f′(x) and is called the derivative function or the derivative of f. The derivative of f collects all the derivatives of f at all the points in the domain of f.
Sometimes f has a derivative at most, but not all, points of its domain. The function whose value at a equals f′(a) whenever f′(a) is defined and is undefined elsewhere is also called the derivative of f. It is still a function, but its domain is strictly smaller than the domain of f.
Using this idea, differentiation becomes a function of functions: The derivative is an operator whose domain is the set of all functions which have derivatives at every point of their domain and whose range is a set of functions. If we denote this operator by D, then D(f) is the function f′(x). Since D(f) is a function, it can be evaluated at a point a. By the definition of the derivative function, D(f)(a) = f′(a).
For comparison, consider the doubling function f(x) = 2x; f is a real-valued function of a real number, meaning that it takes numbers as inputs and has numbers as outputs:
- 1 ↦ 2, 2 ↦ 4, 3 ↦ 6, and so on.
The operator D, however, is not defined on individual numbers. It is only defined on functions:
Because the output of D is a function, the output of D can be evaluated at a point. For instance, when D is applied to the squaring function,
D outputs the doubling function,
which we named f(x). This output function can then be evaluated to get f(1) = 2, f(2) = 4, and so on.
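The operator point of view can be sketched in code. The following Python fragment (an illustration only, with a small fixed step h chosen as an assumption) builds a numerical stand-in for D: it takes a function and returns another function, so applying it to the squaring function yields an approximation of the doubling function.

```python
def D(f, h=1e-6):
    """Return a numerical approximation of the derivative function of f."""
    def derivative(x):
        return (f(x + h) - f(x - h)) / (2 * h)  # central difference quotient
    return derivative

def square(x):
    return x * x

double = D(square)            # D applied to the squaring function
print(double(1), double(2))   # approximately 2 and 4, i.e. f(1) = 2, f(2) = 4
```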
Let f be a differentiable function, and let f′(x) be its derivative. The derivative of f′(x) (if it has one) is written f′′(x) and is called the second derivative of f. Similarly, the derivative of a second derivative, if it exists, is written f′′′(x) and is called the third derivative of f. These repeated derivatives are called higher-order derivatives.
A function f need not have a derivative, for example, if it is not continuous. Similarly, even if f does have a derivative, it may not have a second derivative. For example, let
- f(x) = x² if x ≥ 0, and f(x) = −x² if x < 0.
An elementary calculation shows that f is a differentiable function whose derivative is
- f′(x) = 2x if x ≥ 0, and f′(x) = −2x if x < 0.
f′(x) is twice the absolute value function, and it does not have a derivative at zero. Similar examples show that a function can have k derivatives for any non-negative integer k but no (k + 1)-order derivative. A function that has k successive derivatives is called k times differentiable. If in addition the kth derivative is continuous, then the function is said to be of differentiability class Ck. (This is a stronger condition than having k derivatives. For an example, see differentiability class.) A function that has infinitely many derivatives is called infinitely differentiable or smooth.
On the real line, every polynomial function is infinitely differentiable. By standard differentiation rules, if a polynomial of degree n is differentiated n times, then it becomes a constant function. All of its subsequent derivatives are identically zero. In particular, they exist, so polynomials are smooth functions.
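A short symbolic check of this claim, again assuming SymPy is available (an assumption made purely for illustration): differentiating a degree-3 polynomial three times gives a constant, and a fourth differentiation gives zero.

```python
import sympy as sp

x = sp.symbols('x')
p = 2*x**3 - x + 5              # a polynomial of degree 3
for k in range(1, 5):
    print(k, sp.diff(p, x, k))  # 6*x**2 - 1, then 12*x, then 12, then 0
```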
The derivatives of a function f at a point x provide polynomial approximations to that function near x. For example, if f is twice differentiable, then
- f(x + h) ≈ f(x) + f′(x)h + (1/2) f′′(x)h²
in the sense that
- lim (h → 0) [f(x + h) − f(x) − f′(x)h − (1/2) f′′(x)h²] / h² = 0.
If f is infinitely differentiable, then this is the beginning of the Taylor series for f.
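As a small illustration of that statement (SymPy assumed, purely for demonstration), the first terms of the Taylor series of exp(x) about 0 can be generated directly from its successive derivatives:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.exp(x), x, 0, 4))   # 1 + x + x**2/2 + x**3/6 + O(x**4)
```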
Notations for differentiation
The notation for derivatives introduced by Gottfried Leibniz is one of the earliest. It is still commonly used when the equation y=f(x) is viewed as a functional relationship between dependent and independent variables. Then the first derivative is denoted by
- dy/dx.
Higher derivatives are expressed using the notation
- dny/dxn
for the nth derivative of y = f(x) (with respect to x).
With Leibniz's notation, we can write the derivative of y at the point x = a in two different ways:
- dy/dx |x = a = (dy/dx)(a).
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially relevant for partial differentiation. It also makes the chain rule easy to remember:
- dy/dx = (dy/du) · (du/dx).
One of the most common modern notations for differentiation is due to Joseph Louis Lagrange and uses the prime mark, so that the derivative of a function f(x) is denoted f′(x) or simply f′. Similarly, the second and third derivatives are denoted
- f′′ and f′′′.
Beyond this point, some authors use Roman numerals such as
- f iv
for the fourth derivative, whereas other authors place the number of derivatives in parentheses:
- f (4).
The latter notation generalizes to yield the notation f (n) for the nth derivative of f — this notation is most useful when we wish to talk about the derivative as being a function itself, as in this case the Leibniz notation can become cumbersome.
Newton's notation for differentiation, also called the dot notation, places a dot over the function name to represent a derivative. If y = f(t), then
- ẏ and ÿ
denote, respectively, the first and second derivatives of y with respect to t. This notation is used almost exclusively for time derivatives, meaning that the independent variable of the function represents time. It is very common in physics and in mathematical disciplines connected with physics such as differential equations. While the notation becomes unmanageable for high-order derivatives, in practice only very few derivatives are needed.
Euler's notation uses a differential operator D, which is applied to a function f to give the first derivative Df. The second derivative is denoted D2f, and the nth derivative is denoted Dnf.
If y = f(x) is a dependent variable, then often the subscript x is attached to the D to clarify the independent variable x. Euler's notation is then written
- Dxy or Dxf(x),
although this subscript is often omitted when the variable x is understood, for instance when this is the only variable present in the expression.
Euler's notation is useful for stating and solving linear differential equations.
Computing the derivative
The derivative of a function can, in principle, be computed from the definition by considering the difference quotient, and computing its limit. For some examples, see Derivative (examples). In practice, once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones.
Derivatives of elementary functions
In addition, the derivatives of some common functions are useful to know.
- Derivatives of powers: if
- f(x) = x^r,
where r is any real number, then
- f′(x) = r x^(r − 1),
wherever this function is defined. For example, if r = 1/2, then
- f′(x) = (1/2) x^(−1/2),
and the function is defined only for non-negative x. When r = 0, this rule recovers the constant rule.
- Inverse trigonometric functions:
- d/dx arcsin(x) = 1/√(1 − x²), d/dx arccos(x) = −1/√(1 − x²), d/dx arctan(x) = 1/(1 + x²).
Rules for finding the derivative
In many cases, complicated limit calculations by direct application of Newton's difference quotient can be avoided using differentiation rules. Some of the most basic rules are the following.
- Constant rule: if f(x) is constant, then
- f′(x) = 0.
- Sum rule:
- (af + bg)′ = af′ + bg′
- for all functions f and g and all real numbers a and b.
- Product rule:
- (fg)′ = f′g + fg′
- for all functions f and g.
- Quotient rule:
- (f/g)′ = (f′g − fg′) / g² wherever g is nonzero.
- Chain rule: If f(x) = h(g(x)), then
- f′(x) = h′(g(x)) · g′(x).
The derivative of
- f(x) = x^4 + sin(x²) − ln(x)·exp(x) + 7
is
- f′(x) = 4x³ + 2x·cos(x²) − (1/x)·exp(x) − ln(x)·exp(x).
Here the second term was computed using the chain rule and third using the product rule: the known derivatives of the elementary functions x², x4, sin(x), ln(x) and exp(x) = ex were also used.
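The rules can also be verified mechanically. The following hedged sketch (SymPy assumed; the sample functions sin(x) and x² are chosen here for illustration, not taken from the article) checks that the product rule and the chain rule each leave a zero remainder.

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(x), x**2

# Product rule: (f*g)' minus (f'*g + f*g') should simplify to 0.
product_check = sp.simplify(sp.diff(f*g, x) - (sp.diff(f, x)*g + f*sp.diff(g, x)))

# Chain rule: d/dx sin(x**2) should equal cos(x**2) * 2x.
chain_check = sp.simplify(sp.diff(sp.sin(x**2), x) - 2*x*sp.cos(x**2))

print(product_check, chain_check)   # 0 0
```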
Derivatives in higher dimensions
Derivatives of vector valued functions
A vector-valued function y(t) of a real variable is a function which sends real numbers to vectors in some vector space Rn. A vector-valued function can be split up into its coordinate functions y1(t), y2(t), …, yn(t), meaning that y(t) = (y1(t), ..., yn(t)). This includes, for example, parametric curves in R2 or R3. The coordinate functions are real valued functions, so the above definition of derivative applies to them. The derivative of y(t) is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
- y′(t) = (y1′(t), …, yn′(t)) = lim (h → 0) [y(t + h) − y(t)] / h,
if the limit exists. The subtraction in the numerator is subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y′ is another vector valued function.
If e1, …, en is the standard basis for Rn, then y(t) can also be written as y1(t)e1 + … + yn(t)en. If we assume that the derivative of a vector-valued function retains the linearity property, then the derivative of y(t) must be
- y′(t) = y1′(t)e1 + … + yn′(t)en,
because each of the basis vectors is a constant.
This generalization is useful, for example, if y(t) is the position vector of a particle at time t; then the derivative y′(t) is the velocity vector of the particle at time t.
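A minimal numerical sketch of this idea (the parametric circle is an assumed example, not one from the article): the coordinate-wise difference quotient of y(t) = (cos t, sin t) at t = 0 approximates the velocity vector (−sin 0, cos 0) = (0, 1).

```python
import math

def velocity(y, t, h=1e-6):
    """Coordinate-wise difference quotient approximating the tangent vector."""
    return [(ahead - now) / h for ahead, now in zip(y(t + h), y(t))]

def circle(t):
    return (math.cos(t), math.sin(t))

print(velocity(circle, 0.0))   # approximately [0.0, 1.0]
```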
Suppose that f is a function that depends on more than one variable. For instance,
- f(x, y) = x² + xy + y².
f can be reinterpreted as a family of functions of one variable indexed by the other variables:
- f(x, y) = fx(y) = x² + xy + y².
In other words, every value of x chooses a function, denoted fx, which is a function of one real number. That is,
- x ↦ fx, where fx(y) = x² + xy + y².
Once a value of x is chosen, say a, then f(x,y) determines a function fa which sends y to a² + ay + y²:
- fa(y) = a² + ay + y².
In this expression, a is a constant, not a variable, so fa is a function of only one real variable. Consequently the definition of the derivative for a function of one variable applies:
- fa′(y) = a + 2y.
The above procedure can be performed for any choice of a. Assembling the derivatives together into a function gives a function which describes the variation of f in the y direction:
- ∂f/∂y (x, y) = x + 2y.
This is the partial derivative of f with respect to y. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee".
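The same computation can be sketched numerically for the example f(x, y) = x² + xy + y² used above (the step size and the sample point are assumptions made for illustration): freezing x and differencing in y recovers x + 2y.

```python
def f(x, y):
    return x**2 + x*y + y**2

def partial_y(f, x, y, h=1e-6):
    """Freeze x and apply a central difference quotient in y."""
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

print(partial_y(f, 2.0, 3.0))   # approximately x + 2*y = 8
```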
In general, the partial derivative of a function f(x1, …, xn) in the direction xi at the point (a1, …, an) is defined to be:
- ∂f/∂xi (a1, …, an) = lim (h → 0) [f(a1, …, ai + h, …, an) − f(a1, …, ai, …, an)] / h.
In the above difference quotient, all the variables except xi are held fixed. That choice of fixed values determines a function of one variable
- xi ↦ f(a1, …, ai−1, xi, ai+1, …, an),
and, by definition, the derivative of this one-variable function at ai is the partial derivative ∂f/∂xi (a1, …, an).
In other words, the different choices of a index a family of one-variable functions just as in the example above. This expression also shows that the computation of partial derivatives reduces to the computation of one-variable derivatives.
An important example of a function of several variables is the case of a scalar-valued function f(x1,...xn) on a domain in Euclidean space Rn (e.g., on R² or R³). In this case f has a partial derivative ∂f/∂xj with respect to each variable xj. At the point a, these partial derivatives define the vector
- ∇f(a) = (∂f/∂x1(a), …, ∂f/∂xn(a)).
This vector is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f which takes the point a to the vector ∇f(a). Consequently the gradient determines a vector field.
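A hedged sketch of the gradient for the same example function (the sample point is chosen arbitrarily for illustration): both partial derivatives are approximated numerically and assembled into the vector ∇f(a).

```python
def f(x, y):
    return x**2 + x*y + y**2

def gradient(f, x, y, h=1e-6):
    """Assemble the gradient from central-difference partial derivatives."""
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

print(gradient(f, 1.0, 2.0))   # approximately (2x + y, x + 2y) = (4, 5)
```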
If f is a real-valued function on Rn, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x direction and the y direction. They do not, however, directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Choose a vector
- v = (v1, …, vn).
The directional derivative of f in the direction of v at the point x is the limit
- Dvf(x) = lim (h → 0) [f(x + hv) − f(x)] / h.
Let λ be a scalar. The substitution of h/λ for h changes the λv direction's difference quotient into λ times the v direction's difference quotient. Consequently, the directional derivative in the λv direction is λ times the directional derivative in the v direction. Because of this, directional derivatives are often considered only for unit vectors v.
If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula:
- Dvf(x) = v1 ∂f/∂x1(x) + … + vn ∂f/∂xn(x) = ∇f(x) · v.
This is a consequence of the definition of the total derivative. It follows that the directional derivative is linear in v.
The same definition also works when f is a function with values in Rm. We just use the above definition in each component of the vectors. In this case, the directional derivative is a vector in Rm.
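The formula can be checked on the running example (the diagonal unit vector and the sample point are assumptions made for illustration): the difference quotient along v agrees with the dot product of the gradient with v.

```python
import math

def f(x, y):
    return x**2 + x*y + y**2

def directional(f, point, v, h=1e-6):
    """Difference quotient of f along the direction v at the given point."""
    (x, y), (vx, vy) = point, v
    return (f(x + h*vx, y + h*vy) - f(x, y)) / h

v = (1/math.sqrt(2), 1/math.sqrt(2))    # unit vector along the diagonal y = x
print(directional(f, (1.0, 2.0), v))    # approximately (4 + 5)/sqrt(2) ≈ 6.36
```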
The total derivative, the total differential and the Jacobian
Let f be a function from a domain in R to R. The derivative of f at a point a in its domain is the best linear approximation to f at that point. As above, this is a number. Geometrically, if v is a unit vector starting at a, then f′ (a) , the best linear approximation to f at a, should be the length of the vector found by moving v to the target space using f. (This vector is called the pushforward of v by f and is usually written f * v.) In other words, if v is measured in terms of distances on the target, then, because v can only be measured through f, v no longer appears to be a unit vector because f does not preserve unit vectors. Instead v appears to have length f′ (a). If m is greater than one, then by writing f using coordinate functions, the length of v in each of the coordinate directions can be measured separately.
Suppose now that f is a function from a domain in Rn to Rm and that a is a point in the domain of f. The derivative of f at a should still be the best linear approximation to f at a. In other words, if v is a vector on Rn, then f′ (a) should be the linear transformation that best approximates f. The linear transformation should contain all the information about how f transforms vectors at a to vectors at f(a), and in symbols, this means it should be the linear transformation f′ (a) such that
- lim (h → 0) ||f(a + h) − f(a) − f′(a)h|| / ||h|| = 0.
Here h is a vector in Rn, so the norm in the denominator is the standard length on Rn. However, f′ (a)h is a vector in Rm, and the norm in the numerator is the standard length on Rm. The linear transformation f′ (a), if it exists, is called the total derivative of f at a or the (total) differential of f at a.
If the total derivative exists at a, then all the partial derivatives of f exist at a. If we write f using coordinate functions, so that f = (f1, f2, ..., fm), then the total derivative can be expressed as a matrix called the Jacobian matrix of f at a:
- Jaca(f) = [∂fi/∂xj(a)], the m × n matrix whose entry in row i and column j is the partial derivative of fi with respect to xj, evaluated at a.
The existence of the Jacobian is strictly stronger than existence of all the partial derivatives, but if the partial derivatives exist and satisfy mild smoothness conditions, then the total derivative exists and is given by the Jacobian.
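To make the matrix concrete, here is a minimal numerical sketch for an assumed map f(x, y) = (x²y, x + sin y) from R² to R² (the map and the evaluation point are illustrative choices, not taken from the article): each column of the Jacobian is a vector of partial derivatives of the coordinate functions.

```python
import math

def f(x, y):
    """Coordinate functions of an assumed map from R^2 to R^2."""
    return (x**2 * y, x + math.sin(y))

def jacobian(f, x, y, h=1e-6):
    col_x = [(a - b) / (2 * h) for a, b in zip(f(x + h, y), f(x - h, y))]
    col_y = [(a - b) / (2 * h) for a, b in zip(f(x, y + h), f(x, y - h))]
    # Entry (i, j) is the partial derivative of f_i with respect to variable j.
    return [[col_x[0], col_y[0]],
            [col_x[1], col_y[1]]]

print(jacobian(f, 1.0, 0.0))   # approximately [[0, 1], [1, 1]]
```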
The definition of the total derivative subsumes the definition of the derivative in one variable. In this case, the total derivative exists if and only if the usual derivative exists. The Jacobian matrix reduces to a 1×1 matrix whose only entry is the derivative f′ (x). This 1×1 matrix satisfies the property that f(a + h) − f(a) − f′(a)h is approximately zero, in other words that
- lim (h → 0) |f(a + h) − f(a) − f′(a)h| / |h| = 0.
Up to changing variables, this is the statement that the function x ↦ f(a) + f′(a)(x − a) is the best linear approximation to f at a.
The total derivative of a function does not give another function in the same way that the one-variable case does. This is because the total derivative of a multivariable function has to record much more information than the derivative of a single-variable function. Instead, the total derivative gives a function from the tangent bundle of the source to the tangent bundle of the target.
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.
- An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. However, this innocent definition hides some very deep properties. If C is identified with R² by writing a complex number z as x + i y, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy Riemann equations — see holomorphic functions.
- Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M is a space which can be approximated near each point x by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is fundamental in differential geometry and has many uses — see pushforward (differential) and pullback (differential geometry).
- Differentiation can also be defined for maps between infinite dimensional vector spaces such as Banach spaces and Fréchet spaces. There is a generalization both of the directional derivative, called the Gâteaux derivative, and of the differential, called the Fréchet derivative.
- One deficiency of the classical derivative is that not very many functions are differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".
- The properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology — see, for example, differential algebra. | http://www.pustakalaya.org/wiki/wp/d/Derivative.htm | 13 |
16 | What is Copyright?
Copyright is the affirmation of the rights of authors, inventors, creators, et cetera of original works. It is intended to promote authorship, invention, and creation by securing certain rights. The basis for modern copyright law (U.S. Code Title 17) is found in the U.S. Constitution (Article I, Section 8, Clause 8):
"...to promote the Progress of Science and useful Arts, by securing for a limited Time to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."
The exclusive rights of the creators of original works include copying, distribution, displaying and performing. Creations that can be copyright protected include (but are not limited to): books, plays, journals, music, motion pictures, photographs, paintings, sculptures, digital files, sound recordings, computer programs, websites, dance choreography, architecture, and vessel hull designs. The copyright on creations also extends to the copying, distribution, displaying, and performance of derivative works. Copyright also covers unpublished works.
Where does this leave educators, students, and researchers? Read on.
What is Fair Use?
Copyright not only protects creators and their creations, it also legally establishes the defensible position of the public to access and use copyright-protected works for educational and research purposes.
In Section 107 of Chapter 1 of Title 17 of the United States Code, fair use is explained as a limitation to the exclusive rights of copyright holders. The section reads:
[...] the fair use of a copyrighted work [...] for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include -
(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
(2) the nature of the copyrighted work;
(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
(4) the effect of the use upon the potential market for or value of the copyrighted work.
The four factors seem ambiguous because they are meant to be guidelines and not firm restrictions. The determination of fair use vs. copyright infringement is often made on a case-by-case basis. Often, questions help the four factors make more sense:
(1) the purpose and character of the use
Is this for educational or research purposes? More Fair.
Or is it for commercial or for-profit purposes? Less Fair.
(2) the nature of the copyrighted work
Is this work factual and published, like a journal article? More Fair.
Or is it a creative or artistic work, like a novel or an artwork? Less Fair.
(3) the amount and substantiality used
Will only a small portion be used? More Fair.
Or will a large portion, all of it, or the most important part of large work be used? Less Fair.
(4) the effect of the use upon the potential market for or value of the copyrighted work
Will this not reduce sales or make the work more widely available than it already is? More Fair.
Or will using the work stop others from purchasing the entire work or make the important parts available to many for free? Less Fair.
A helpful resource for understanding the four factors better is Using the Four Factor Fair Use Test (UT Austin).
The Copyright and Fair Use information above is from Swisher Library's Copyright & Fair Use page. Please visit for more information and frequently asked questions on copyright, fair use, and the library.
Intellectual Property in the University community
Read the JU Intellectual Property Policy.
The United States Government has protected materials considered Intellectual Property since its inception. There are three practical reasons for this:
- To protect the ideas of creative people so that they are motivated to keep creating. That motivation usually comes in the form of financial benefits for the creator.
- To make sure the country has citizens that promote the highest degree of excellence in scientific and artistic endeavors.
- To encourage people to purchase innovative, and eventually improved, versions of those creative items as it benefits the national economy.
In 1999, it was deemed that materials placed in an online course substitute for materials that would be presented in class, and that copyright remains with the professor. The TEACH (Technology, Education and Copyright Harmonization) Act was established as part of a series of amendments to the Digital Millennium Copyright Act of 1998 to ensure this point. According to Audrey Latourette, "The TEACH Act, in essence, applies the teacher exemption and fair use defense to online education, but only to the extent that online delivery is a comparable replacement for the type of, and amount of, performance or display of materials that occurs in the classroom and that transmission be limited to students enrolled in the course,” though she does go on to comment that material under copyright (photographs, music and video clips) could be infringed upon easily simply because so many could have access to the material.
The University of California system has a succinct list that is helpful in summarizing the TEACH Act. It states that, to be compliant, course materials can be utilized by instructors in the following ways:
- Display (showing of a copy) of any work in an amount analogous to a physical classroom setting.
- Performance of nondramatic literary works.
- Performance of nondramatic musical works.
- Performance of "reasonable and limited" portions of other types of work (other than nondramatic literary or musical work) EXCEPT digital educational works.
- Distance-education students may receive transmissions at any location.
- Retention of content and distant student access for the length of a "class session."
- Copying and storage for a limited time or necessary for digital transmission to students.
- Digitization of portions of analog works if no digital version is available or if digital version is not in an accessible form.
One particularly notable point in this list is item 4. Note that dramatic works, such as a commercial film, fit into this category. This means that you cannot put a complete film online for student viewing UNLESS it is a film created for educational intent. Thus, showing a film from a series on American history with funding from the National Endowment for the Arts would likely fit in this category, but a showing of the commercial film Amadeus would not.
In the case Vanderhurst v. Colorado Mountain College District, 16 F. Supp. 2d 1297 (D.Colo. 1998), a veterinary professor designed an outline for a course he was teaching. The professor claimed to be the owner of the course design; not only did he want to be able to use it in the future, he also wanted to prevent the university from using it. The court ruled in favor of the university, as it was within the professor's duties as a faculty member to develop the course.
When many people think about copyright, they are consumed with how one person can be taken advantage of so that someone else can claim credit for an original idea. But I have found through my experience in the music industry that if you think in terms of financial benefit first, the whole idea of copyright becomes clearer.
How does the Jacksonville University Intellectual Property Policy apply to me?
Faculty as instructor
Any of the materials created for a course offered at Jacksonville University, as long as the materials are not connected to external funding such as a grant, belong to the faculty member. That is because faculty members paid for the degrees that shaped their original thinking. If they leave JU, their thoughts go with them.
Since the materials are created by the instructor for their own use, the faculty member is not expected to get additional compensation from the university. This would include lecture materials, such as a PowerPoint or Blackboard design. Further, since the university is not paying the faculty member for a specific design, the faculty member can take this with them if they leave.
Since the course materials are used at Jacksonville University, the university can use the materials for other instructional or administrative purposes. A good example of the instructional use is a syllabus adapted for another class. An instance of the administrative use is a syllabus or course description provided to SACS under our accreditation review.
Faculty can use any type of media that features a student's picture or work if it is to create a website, video or other course materials for the course the student is in, and the development of that course. An example of this is that an instructor can create a website for a nursing course showing pictures of former nursing students working in the field. They cannot use the student's picture or work for other purposes unless the student signs a release granting this privilege.
A faculty member can use portions of materials created by others for instructional purposes (aka “fair use”). For example, a faculty member can Xerox a few pages out of a book and give them to their class as a handout to illustrate a specific point. But you certainly can’t give them the book for free, or even the whole chapter, because it restricts the author and publisher’s ability to make money. A whole chapter could be in a book of collective essays, written by different authors. One essay could be the most groundbreaking study in their discipline, and sales of the book depend on that essay.
Often people ask me, "how much can I use?" My generic answer is, "if the work of someone else enhances a small portion of your own offering, then you are likely safe." But if the work of someone else is used and you have the potential to hinder their ability to make money, then you need to ask for permission. Often, the copyright holder will grant permission if it is just a small amount. For example, if I quote lyrics from a Van Halen song in my book, no one would expect that it would hurt record sales. But I still have to ask Warner Brothers Publishing for permission.
-Dr. Thomas Harrison, Associate Professor, Music Business and Recording
How do I get permission? The best way is to have the author grant it to you, but there is the Copyright Clearance Center that can handle that for you. The copyright holder pays them a fee.
Faculty as creator and researcher
Though Jacksonville University isn’t what many consider a “Type I” research institution, there is a considerable amount of intellectual activity on campus. The basic idea is that if an institution has provided financial support for a creative project or invention, then they are entitled to a portion of the profits that the invention might generate. For example, if a scientist has used a JU lab to develop a vaccine for cancer, sold all over the world, JU should get a portion of this since they bought the lab equipment, and paid the electric bill for the research and development.
If there are funds generated by licensing the invention to an outside company, the income is divided this way:
- Inventor/Author (and their heirs) personal share - 50%
- Inventor/Author's Department - 20%
- University share - 30%
Faculty members have a primary responsibility to Jacksonville University first, and any other financial opportunities that emerge second. It is possible that someone could invent something on campus, and then join the licensing business to exploit the new product. In order to avoid any 'conflict of interest,' the faculty member is obligated to inform the university in writing that he or she is planning on working with the outside company, with the complete understanding that the activity is not to impede the usual obligations (research, service and instruction) of a faculty member.
As long as digital media is around, there will be copyright infringement. Important economic models such as the music industry can be crippled if people can use copyrighted material without financial compensation.
Like many members of society, students will want things for free. Students consider music and software particularly valuable. Swapping songs, movies, software and even pictures is illegal. It hurts the industry that created the material in the first place. As a result, Jacksonville University maintains a policy that restricts the use of known websites that distribute illegal materials to the best of its ability, so that anyone who wants to violate the law has to use their own mechanism to do so.
An essential component in education is to cultivate thought. That means that simply using the thoughts of others without giving them credit is wrong. It is best to explain why others are wrong in the context of your overall argument. Plagiarism is something that academics at all levels have had to watch out for over a long period of time. As you write, ask yourself, "is this really my idea?" Then ask, "can I use the ideas of others to help formulate my own ideas?" Your degree is earned in large part by how you have learned how to think. If you think economically, an employer hires you in accordance with your accomplishments. Your degree represents an accomplishment that implies you know how to think at a certain level.
Though staff members are not usually thought of as part of the creative community, they could implement a design for software or a building; in doing so, they are creating a work 'for hire,' in that they are paid to complete a specific task for an employer.
Like students and faculty, staff members are not to use the creations of others without seeking the appropriate permissions if applicable. The general rule of thumb is that "if the person or agency who created the original is losing money because of your use, especially if you should pay for it and are not, then you are likely violating copyright."
When the university employs a photographer, the photos taken by the photographer for the university revert to the university since the photos are considered a "work for hire." A work for hire is an assignment of copyright to an organization (in our case, the University) that has hired or commissioned a staff member or outside employee to prepare or create a work for which they would usually be able to retain copyright. This has been supported by the cases Manning v. Board of Trustees of Community College District No. 505 (Parkland College), 109 F. Supp. 2d 976 (C.D. Ill. 2000) and Foraste v. Brown University, 248 F. Supp. 2d 71 (D.R.I. 2003).
Portions of movies can be shown in a lecture or online. The copyright law defines this as "limited and reasonable," and it should be understood that this is a portion to illustrate a point. Full movies can be shown if they are created for educational settings, such as a documentary in a Humanities class.
On occasion, the university will hire consultants and independent contractors. The outside consultant retains any creative right unless there is a written exception included in the agreement. Examples of the exception could be the design of a logo, the design of a building or software used to catalog library materials. Essentially the work is ‘for hire,’ as discussed in item 3 in the staff section above.
Links to resources.
Audrey W. Latourette, "Copyright Implications for Online Distance Education," (2006), 32 J.C. & U.L. 613 at 624.
Why Students Plagiarize
Why do students plagiarize? Exploring what students are thinking when they copy work belonging to others, this session shares insights gained from listening to students’ voices and delving into their behaviors and motivations.
In this free 30-min on-demand webcast, "Why Students Plagiarize," educational psychologist and author, Jason Stephens explores why students cheat even when they believe it is morally wrong. He talks about three common motivational patterns that drive academic dishonesty:
- Under Pressure: High performance goals or high extrinsic motivation
- Under-interested: Low mastery goals or low intrinsic motivation
- Unable: Low efficacy or low perceived sense of ability | http://ju.edu/ctl/Pages/Intellectual-Property.aspx | 13 |
31 | Colonies and Empires
French Colonial Expansion and Franco-Amerindian Alliances
Over the course of the 240 years that separated Giovanni da Verrazano’s voyage of exploration in 1524 and the dismantling of New France in 1763, the French left their mark on the North American territory in a variety of ways. Through their manner of clearing and dividing the land, establishing villages and cities, building a network of roads and trails, and developing the territory with varied types of construction, they transformed and adapted environments according to their needs.
Setting out from a few small colonized areas mainly found along the St. Lawrence River in the 17th century, the French journeyed through a large part of North America. On the eve of the British conquest, New France extended from Hudson Bay in the north to the mouth of the Mississippi in the south, and from Acadia in the east to the foothills of the Rockies in the west.
Pushing ever further west and south to find a constant supply of fur, the French deployed a whole network of alliances with the most influential Aboriginal tribes, economic and military alliances that enabled them not only to contain the English on the Atlantic seaboard for over 150 years, but also to ensure the survival of an under-populated New France whose borders would have been difficult to defend otherwise.
After the Spanish discovery of America in 1492, the other major European powers did not want to be outdone. In June 1497, the explorer John Cabot, sailing for the English crown, reached Newfoundland, while the following year, Joao Fernandes explored the north-east of the continent for Portugal. In the early 16th century, Breton and Norman fishermen came seeking cod on the Grand Banks of Newfoundland, but it was not until 1523 that the first French expedition took place. By order of François I, Giovanni da Verrazano of Florence left Dieppe to discover a new route to China. The following year, aboard the Dauphine, he surveyed the coasts of North America, sailed along the shores of the Carolinas and Virginia, then made his way up toward Cape Breton Island.
Ten years later, François I called on the mariner of Saint-Malo, Jacques Cartier, and commissioned him to go beyond the “new lands.” On his first voyage in 1534, Cartier got as far as the Gulf of St. Lawrence and made contact with the St. Lawrence Iroquoians, who spoke of a “kingdom of the Saguenay” with fabulous riches. On his second expedition the following year, he made his way up the river as far as Hochelaga, an Aboriginal village erected on the site of the future city of Montréal.
On his return to France, Cartier persuaded François I of the benefits the Crown could hope to derive from colonizing this territory, which was called "New France." In 1541, Cartier served as second-in-command to Jean-François de La Roque de Roberval, who was commissioned to found a colony on the north shore of the St. Lawrence, to continue seeking a passage to Asia and to bring back precious metals. Cartier left Saint-Malo in 1541, but Roberval, who was unable to raise all of the required funds, set sail only the following spring. Having sustained a number of human losses and faced with the threat of conflict with the Iroquoians of Stadacona (Québec), Cartier and his crew abandoned Cap-Rouge in the spring of 1542. In Newfoundland, they met Roberval who ordered them to return with him to Cap-Rouge; they refused and then fled during the night. Roberval nevertheless continued his journey to Cap-Rouge, which he had to abandon in the spring of 1543 because of worsening conflicts with the Iroquoians.
The civil war between Catholics and Protestants in France curtailed overseas ambitions. After the failure of the Huguenots in Brazil in 1555, and the establishment of the French settlement at Fort Caroline in 1564–1565, it was not until the early 17th century that the French royal court decided to send ships to the North American continent again. The signing of the Edict of Nantes in April 1598, which ended the wars of religion, allowed King Henri IV to turn his attention once again to America. Pierre Du Gua de Monts, who secured a trade monopoly in 1603 and in 1609, launched several expeditions to Canada. In 1603, François Gravé Du Pont led a trade campaign and travelled as far as Hochelaga (now Montréal), following the same route as Cartier. The following year, Du Gua de Monts led an expedition and explored the coasts of Acadia and New England to find a suitable site for establishing a colony. After a failed attempt at settlement on Île Sainte-Croix, the entire crew moved to Port-Royal, which would also be abandoned in 1607.
Samuel de Champlain, a captain of the royal navy who took part in the first voyages to the Americas as a navigator, explorer and cartographer, left again for the New World in April 1608. Appointed to serve as the lieutenant of Du Gua de Monts, he sailed up the St. Lawrence in the company of François Gravé Du Pont and his crew, and built an “abitation” on the rocky plateau that overlooked the river. Following the teachings of his mentor Gravé Du Pont, Champlain called it Québec, which in the Montagnais language means “the place where the river narrows.” The capital of New France was born.
As soon as the city of Québec was founded, Champlain set out to discover the interior of the continent. In 1609, he explored Iroquois country and went as far as the lake that he named after himself. Four years later, he began exploring the “Pays d’en haut,” or the upper country, which refers to the Great Lakes region. In the course of his various voyages, he made his way up the Ottawa River in 1613 and reached Lake Huron, which he would later call the “freshwater sea.” After Champlain, many other explorers led more or less official expeditions. Thus, around 1621–1623, Étienne Brûlé reached Lake Superior while looking for copper mines. In 1626, a Recollet missionary, Father Joseph de la Roche Daillon, journeyed among the Neutral tribe, which lived near the shores of Lake Erie.
The Search for a Passage to the “Western sea”
Until the end of the 18th century, one of France’s objectives was to reach the Pacific Ocean, also called the “Western sea” and the “sea of China.” The French wanted access to the riches of the Orient without having to sail around Cape Horn. As early as 1524, Verrazano was commissioned by François I to discover the passage to the Orient. Cartier and Champlain had also thought they had discovered this passage on exploring the St. Lawrence. Beginning in the 1630s, the principal agents of discovery and exploration of the continent were Jesuit missionaries, who also hoped to evangelize the Aboriginal peoples. In 1634, Father Jean Nicollet travelled as far as Lake Michigan and reached the Baie des Puants (today Green Bay in Wisconsin). After exploring the surrounding region, he stopped seeking a passage to the “Western sea” and returned to Québec. In July 1647, Father Jean de Quen made his way up the Saguenay as far as Lac Saint-Jean and, in 1661, Fathers Claude Dablon and Gabriel Druillettes took the same route with Montagnais guides and explored the region, also hoping to find the much sought-after passage. After travelling through Iroquois country, to try to establish a mission, the Jesuits concentrated on the Great Lakes region. With a view to founding a mission there, Father Claude Allouez explored the shores of Lake Superior from Sault-Sainte-Marie to Lake Nipigon between 1665 and 1667. Two years later, he made his way to the Baie des Puants (the bay of stinking waters) and explored the area as far as the Fox River.
At the same time, in the 1660s, a vast westward movement of people set out from Montréal. The scattering of the Hurons, allies and intermediaries of the French, pushed merchants from this city to venture out and secure supplies of furs from the Aboriginal peoples of the Great Lakes region, now the area that had the most abundant fur-bearing animal resources. Many “coureurs de bois” set out en route to the “Pays d’en haut” and established trading posts there.
The colonial authorities also decided to support new exploratory voyages in the interior of the continent, despite the misgivings of Louis XIV and Jean-Baptiste Colbert, councillor of state and intendant of finances and the navy. In 1665, the Intendant of Canada, Jean Talon, received instructions to organize expeditions that once again aimed to discover a navigable route to the “Western sea.” In 1670, he sent Simon-François Daumont de Saint-Lusson to explore the Lake Superior area. On June 14, 1671, he was in Sault-Sainte-Marie. In front of an assembly of fourteen Amerindian nations, he officially took possession of the area “of the said place […] and also lakes Huron and Superior, […] and all the other contiguous and adjacent countries, rivers, lakes and streams, here, both discovered and yet to be discovered, which are bordered on one side by the Northern and Western seas, and on the other, by the Southern sea….”
In 1672, Talon put a merchant of Québec, Louis Jolliet, in charge of surveying the “great river” that Aboriginal peoples “call the Michissipi believed to flow into the sea of California.” In December, Jolliet reached the mission of Saint-Ignace, located on the north shore of the Strait of Michilimackinac. He was greeted there by Father Jacques Marquette, a Jesuit with whom he continued his voyage. Together, the two men reached Wisconsin and then discovered the Mississippi, which they called the “Colbert River.” By canoe, they went as far as the river’s confluence with the Arkansas River where they realized that the river flowed to the south. They concluded that it probably flowed into the Gulf of Mexico, and not the sea of China, as they had hoped. They then backtracked and returned to the Saint-Ignace mission.
In 1673, a native of Rouen sponsored by Governor Frontenac, René-Robert Cavelier de La Salle, built Fort Frontenac on the eastern end of Lake Ontario. This post would serve as a point of departure for several expeditions aiming to explore the shores of the Mississippi.
After several fruitless attempts, Cavelier de La Salle reached the river’s delta in 1682. On April 9, he had a cross erected and a column adorned with the arms of France, and he took solemn possession of the country he named Louisiana in honour of Louis XIV. On returning to France, Cavelier de La Salle was given a new commission to find the mouth of the Mississippi by sea. He ran aground on the coast of the current State of Texas, where he was murdered by one of his companions in 1687.
“I have taken and do now take possession […] of this country of Louisiana, the seas, harbours, ports, bays, adjacent straits; and all the nations, people, provinces, cities, towns, villages, mines, minerals, […], fisheries, streams and rivers comprised in the extent of Louisiana, […] and this with the consent of the […] Motantees, Illinois, Mesigameas (Metchigamias), Akansas, Natches, and Koroas, which are the most considerable nations dwelling therein, with whom also we have made alliance.”
Archives nationales de Paris, Colonies, C13C 3, f.° 28v. Jacques Métairie, April 9,1682, “Procès-Verbal de la prise de possession de Louisiane.” (Memoir of taking possession of the country of Louisiana)
During the decade of the 1690s, France became aware of the importance of Cavelier de La Salle’s discovery. They realized that the British – with whom they were in virtually permanent conflict – could endanger the Canadian colony by attacking through the Mississippi. Louisiana then appeared to be a zone that could serve as a barrier against English territorial and commercial expansion. In February 1699, an expedition led by the Canadian Pierre Le Moyne d’Iberville (1661–1706) reached a place that was given the name Biloxi, referring to the First Nation who lived nearby (today, in Mississippi). The French then launched many explorations with a variety of objectives: to establish alliances with the Aboriginal nations to ensure the security of the emerging colony; to seek natural resources; and to conduct trade relations with the Spanish colonies of New Mexico. The search for a passage to the “Western Sea” continued to be a goal.
In 1700, Pierre Le Sueur left Biloxi with a dozen men and made his way up the Mississippi as far as the Saint-Pierre River. In 1714, Louis Juchereau de Saint-Denis followed the Red River as far as Natchitoches country where he founded a post. He continued on, crossing Texas, until he reached San Juan Bautista (today, Piedras Negras, Mexico). However, this route was not much used, despite the desire of Louisiana authorities to establish trade relations with the Spanish of New Spain. Five years later, Jean-Baptiste Bénard de La Harpe reached the Arkansas River, and in 1723–1724, Étienne Veniard de Bourgmont explored the Missouri and surveyed the mouth of the Platte River. In 1739, two Canadian traders and brothers, Pierre-Antoine and Paul Mallet, set out from Fort de Chartres, located in Illinois country, and travelled as far as Santa Fe via the Great Plains.
Finally, a family of explorers, the La Vérendrye family, continued to look for the Western sea. In 1728, Pierre Gaultier de Varennes et de La Vérendrye was appointed commander of Fort Kaministiquia, at the head of Lake Superior, established in 1717 by Zacharie Robutel de La Noue.
From 1731, the fort was used as a rear base for exploring the region around lakes Winnipeg and Manitoba as well as the upper Missouri. La Vérendrye’s sons, Louis-Joseph and François, would push exploration as far as Big Horn, today in the State of Wyoming.
On the Importance of Discovering a Passage to the Western sea:
“If the Western sea is discovered, France and the Colony could derive great benefits in trade, because Lake Superior flows into Lake Huron, which flows into Lake Erie, which in turn flows into Lake Ontario, whose waters form the St. Lawrence River; in each of these lakes, which are navigable, we could have barques to transport merchandise that from the Western sea to Lake Superior could be transported in canoes, since the rivers are very navigable.
The navigation would be brief, compared with European vessels and subject to far fewer risks and costs, which would provide such great benefit over the trade of that country that no European nation could compete with us.”
Decreed by the Council of the Marine, February 3, 1717.
–L.A. de Bourbon, Maréchal d’Estrées (cited in Pierre Margry, Découvertes et établissements des Français dans l’ouest et dans le sud de l’Amérique septentrionale, 1614–1754, Paris: Maisonneuve, 1879–1888, vol. 6, pp. 501–503).
The same model of colonization applied to Canada and Louisiana. After having financed the first expeditions, France neglected the newly discovered territories and abandoned them to individuals or private companies before renewing its interest, either willingly or because it was compelled to do so. There were two reasons for the lack of interest. First, hopes of quick riches, as the Spanish had enjoyed in South America, were soon dashed. No gold mines were discovered in Canada or in Louisiana. Second, France feared the depopulation of its kingdom and thereby the weakening of its power in Europe. In 1666, Colbert expressed this fear to Intendant Talon: “It would not be prudent to depopulate his Kingdom as he would have to do to populate Canada.” Thus few colonists were sent to settle in the new territories.
Monarchical power was satisfied with encouraging trade companies that tried their luck overseas. This was the case in Canada, where colonization began in the early 17th century through the fish and fur trade. New France’s trading company, the Company of One Hundred Associates, founded in 1627 and managed by noblemen and entrepreneurs, was granted a monopoly to exploit and administer the colony. It was under its control that colonization actually got underway, particularly in three centres, Québec, Trois-Rivières and Montréal, founded respectively in 1608, 1634 and 1642.
It was not until Colbert took power that the French Royal Court at Versailles decided to get more involved in the development of Canada. In 1663, an institutional regime was established that made Canada a French province like the others, but that depended on the Secretary of State for the Navy and the Colonies, Jean-Baptiste Colbert. This was the era of the colony’s most significant growth. From 1663 to 1673, nearly 800 young women—“les Filles du Roi” (the King’s daughters)—were sent to the colony to marry the many available bachelors. In 1665, the Secretary of State for the Navy sent the Carignan-Salières regiment to pacify the Iroquois nations. Among the thousand soldiers deployed there, over four hundred decided to become inhabitants following the signing of the peace in 1667.
Having long been neglected because of the War of the Spanish Succession (1701–1713), Louisiana was ceded in 1712 to a company managed by a wealthy merchant named Antoine Crozat. In 1717, it was transferred to the Company of the Indies (which had succeeded the Company of One Hundred Associates in 1664 to compete with the large Dutch and English trading companies), founded by the Scottish banker John Law. The banker succeeded in sending over 5,000 people to the colony, but his company suffered a spectacular bankruptcy in 1720. The French Crown was forced to liquidate the debts, restructure the company and then take back the entire colony in 1731, following the massacre of colonists at the Natchez post. The new arrivals in Louisiana settled in Mobile, founded in 1701 by Le Moyne d’Iberville, and then in New Orleans, established in 1718 by his brother Jean-Baptiste Le Moyne de Bienville.
In 1615, three Recollet priests, Fathers Denis Jamet, Jean Dolbeau and Joseph Le Caron, arrived in New France. Dolbeau was ordered to convert the Montagnais of the lower St. Lawrence while Le Caron was sent to Huronia, a territory located in what is now southwestern Ontario. He stayed there for one winter and then returned to Québec. Experiencing many difficulties, the Recollets would not return among the Hurons until 1623. That year, Fathers Le Caron and Nicolas Viel as well as Brother Gabriel Sagard established a mission in the village of Quienonascaran. The following year, Father Viel found himself alone in converting the Hurons. The Hurons were not, however, so easily convinced. Only a few newborn infants and dying elders were baptized. In 1625 unsatisfactory results led the Recollets to call in the Jesuits, renowned for their evangelizing missions, particularly in Asia.
Shortly after the first Jesuit missionaries—Fathers Énemond Massé, Charles Lalemant and Jean de Brébeuf—arrived, the Jesuits succeeded in supplanting the Recollets. By 1632, they alone were responsible for the pastoral and missionary work of the Roman Catholic Church in Canada. The same year saw the publication of the first volume of their Jesuit Relations, an exchange of letters between the missionaries and their home base in Paris.
Starting in 1634, they established four missions in the main Huron villages, and five years later they founded the main mission of Sainte-Marie on the Wye River, near Midland Bay. The mission was surrounded by a palisade that protected the living quarters, a chapel, a hospital, a forge, a mill and a rest area for missionaries returning from Native villages. In 1647, 18 priests assisted by 24 laymen were serving missions in Huron territory. They took their evangelizing work to the neighbouring Petuns, the Neutrals, the Saulteux and the Ottawas. The results were disappointing: only a few hundred men and infants were converted to Christianity. During the conflict between the Hurons and the Iroquois, eight priests were martyred by the Iroquois. This war ended with the scattering of the Hurons in 1649 and the abandonment of the Jesuit missions in this territory.
With western expansion, the Jesuits hoped for greater success by settling in the Great Lakes region. In 1665, Father Claude Allouez reached Chequamegon Bay on the shores of Lake Superior, where he founded La Pointe mission. For 30 years, the Jesuits established new missions, in particular at Baie des Puants, Sault-Sainte-Marie and Kaskaskia. In 1671, Marquette moved the La Pointe mission to Saint-Ignace (or Michilimackinac). It would later be used for all westward missions. According to the baron de Lahontan, “it is like their headquarters in this country, & all the missions spread among the other savage nations depend on this residence.”
The first missions in Louisiana were founded by Fathers François de Montigny and Albert Davion, two priests from the Seminary of Québec who settled among the Yasous and the Natchez. Others were later established among the Arkansas, the Choctaws and the Alibamons, but the Natchez uprising of 1729 led to the destruction of most of them.
The Jesuits had a small House [in Michilimackinac] beside a kind of Church inside a palisade that separated them from the Huron Village. These good Fathers employ their Theology & their patience in vain to convert the unbelieving ignorant. It is true that they often baptize dying infants, & a few old people, who agree to receive Baptism when they are at death’s door.
Lahontan (Louis Armand de Lom D’Arce, baron de), Nouveau voyage de M. le baron de Lahontan dans l’Amérique septentrionale, La Haye, 1703, p. 115.
The Founding of Western Posts
The colonial centres of Québec, Montréal, Mobile and New Orleans served as starting points for new explorations. As the explorers penetrated the interior of the continent, they marked out their paths with small forts that served as trading posts or shelters as needed. They could become permanent posts, but many were abandoned as soon as the expedition concluded.
This was also the case for trading posts. Based on the bartering system, the fur trade emerged with the first voyages of the 16th century. French and Native people became accustomed to trading manufactured goods for pelts that were sent to the mother country to supply the hat industry. Fur gradually became the primary resource of New France, despite the efforts of Colbert and Talon to diversify the economy. In the 17th century, Native people were still bringing their furs to the colony’s principal centres, in particular the great market held at Montréal every year; but merchants were gradually forced to travel further afield to obtain the precious merchandise and ensure it would not be sold to British competitors. As a result, a number of trading posts were built in the Great Lakes region. Some were simply houses built in Native villages; others were used both as a point of sale and a warehouse for trade goods. The stored merchandise was later redistributed to all the smaller posts in the region. Michilimackinac, located at the mouth of Lake Michigan, Fort de Chartres and Niagara are examples.
The French authorities did not take a favourable view of the establishment of new permanent settlements in the Pays d’en haut. The French royal court felt that these posts weakened the growing colony of Canada. Colbert thought it necessary to concentrate the settlements around Québec, Trois-Rivières and Montréal to provide better security, in particular with regard to the Iroquois nations. Yet in the colony, the authorities allowed certain young Canadians to set out for the Great Lakes region to conduct the fur trade. Some were given a permit authorizing them to do so. Most were not, and became known as the “coureurs de bois” (literally, “runners of the woods”). On the routes leading to the Pays d’en haut or on arriving there, they built forts. Thus, Cavelier de La Salle, who dominated trade in the region south of the Great Lakes even as he conducted explorations, established forts in Niagara in 1676, in Saint-Joseph des Illinois in 1679, and on the Illinois and Mississippi rivers in 1680 (Fort Crèvecœur and Fort Prud’homme). Some were later abandoned, others occupied by the soldiers of the Troupes de la Marine, who were responsible both for protecting the interests of the traders and for keeping them under surveillance.
“It would be better to limit ourselves to a territory that the Colony will be in a position to maintain, than to embrace too vast an area a part of which we might one day be forced to abandon with some diminishment of the reputation of his Ma[jes]ty, & and this Crown.”
Letter from Colbert to Talon, Versailles, January 5, 1666.
In 1696, a serious economic crisis hit the fur sector. The price of pelts collapsed because of surplus production. The French royal court then decided to close all the western posts. Five years later it reversed its decision, no doubt because of d’Iberville’s colonization attempts and the fact that France had taken possession of Louisiana. The geostrategic importance of the Mississippi Valley led the Minister of Marine, Louis Phélypeaux, comte de Pontchartrain, to implement a policy that historians have called defensive imperialism. It consisted of creating small colonies at strategic crossing points. The first was Detroit, founded in 1701 by Antoine de Lamothe-Cadillac. He wanted to establish a true “agricultural colony,” attract as many people as possible and acquire wealth as quickly as he could. This was how a chain of forts from the Great Lakes to the Gulf of Mexico was gradually established.
In 1709, Fort Saint-Louis was built at Mobile, in Louisiana, at the entry to the Alabama River. Another fort was built in 1716 among the Natchez, and the following year Fort Toulouse was constructed at the junction of the Coosa and Tallapoosa rivers. Fort de Chartres was established in 1720 and Fort Arkansas the following year. Not all of these forts were the same size. Some were protected by a simple palisade of logs whereas others were built of stone and equipped with cannons. Garrisons sometimes consisted of a handful of men, and sometimes of one or several companies of soldiers.
Some forts expanded because of the importance they acquired in the region, others as the result of a colonization policy that aimed to encourage soldiers to remain in the country and take land. This policy had been implemented in the 1660s with the granting of bonuses to the men of the Carignan-Salières regiment who wished to remain in America. At the request of Governor Jacques-René de Brisay, Marquis de Denonville, the policy was resumed in another form in 1686. Every year, two soldiers per company were authorized to leave service on condition that they commit to developing the land granted to them. In the same year, 1686, 20 demobilized soldiers nevertheless became inhabitants at Montréal. In 1701, Lamothe-Cadillac wished to populate “his” colony of Detroit in the same way. Eight years later, the post had 174 inhabitants.
Throughout the 18th century, various measures encouraged soldiers to settle at the site of their garrison. In the Pays d’en haut, where the garrison soldiers at a post were usually replaced every three years, soldiers were allowed to remain longer to become accustomed to the location. In 1717, the French royal court asked the Governor of Louisiana, Le Moyne de Bienville, to recommend “to the officers to ensure that the soldiers work (…) at making gardens for each of their barrack-rooms and even allow soldiers who are hard-working to start building small living quarters on their own.” It was hoped that in this way, soldiers would be encouraged to stay on permanently in the colony. In addition, these future colonists would be entitled to a settlement bonus that included supplies, clothing and full pay for three years. In September 1734, an edict established the conditions for soldiers to obtain leave: except for those settling in Louisiana to farm, soldiers were required to practise a useful trade, as masons, carpenters, cabinetmakers, barrelmakers, roofers, toolmakers, locksmiths or gunsmiths. Later, soldiers were also encouraged to marry so that they would be less tempted to become coureurs de bois. In 1750, these measures were supplemented with the shipment of additional troops to replace departing soldier-colonists.
According to the U.S. historian Carl Brasseaux, 1,500 soldiers settled in Louisiana after leaving service between 1731 and 1762. In Canadian historian Allan Greer’s view, the soldiers were the largest source of colonists in Canada. In fact, military colonization allowed 30 soldiers per year to settle in this colony from 1713 to 1756, or 1,300 over 43 years. Some posts in the interior of the continent expanded because of this influx of colonists. This was true in particular of Detroit and the Illinois region, which was attached to Louisiana starting in 1718. Many soldiers acquired plots of land near Fort de Chartres on leaving service. Some cultivated the land or raised livestock whereas others took up the fur trade. However, most opted for the colonial centres of Montréal, Québec and New Orleans. As a result, the forts remained primarily warehouses and trading posts and only seldom became permanent settlements. That is why historians frequently talk about “colonisation sans peuplement,” or “colonization without settlement” in connection with French policy.
Trade in the Colonies
Throughout the entire French regime (which prevailed in New France from 1604 to 1763, from the founding of Île Sainte-Croix to the signing of the Treaty of Paris that ceded the colony to Great Britain), the fur trade was the principal driver of the Canadian economy. From 1720 to 1740, 200,000 to 400,000 pelts were exported every year to France from Montréal or Québec. Their value represented nearly 70% of exports. In Louisiana, exports were less significant—about 50,000 pelts, 100,000 during the best years—but the fur trade nevertheless played an important role in the economy of the colony.
The regulation of trade evolved over the 17th and 18th centuries. In 1654, Governor Jean de Lauson established “permits,” which, by 1681, were limited to 25 per year. The system was abolished by the edict of May 21, 1696, and reinstated 20 years later through the declaration of April 28, 1716, to assist noble families in need. Following a suspension from 1723 to 1728, permits were later sold directly to merchants and occasionally to military officers. In fact, fur commerce was in the hands of a few merchants in Montréal. These merchants supplied trade goods to “merchant-outfitters” who organized expeditions to the Pays d’en haut.
In Louisiana, companies held an exclusive monopoly on the fur trade in exchange for an annual royalty to the king. They had to maintain a clerk in every fort who was responsible for merchandise. In 1731, when the Crown reasserted control over the colony, it ordered that trade with the Native communities should be open to everyone. During the War of the Austrian Succession (1740–1748), the Governor implemented a new policy whereby the commander of each fort had to supervise stocks and oversee the granting of trade permits.
As in the Pays d’en haut, trade was rapidly dominated by a small group of families belonging to the colony’s elite. Officers stationed in forts in the hinterland and the merchants of Montréal and New Orleans formed partnerships, which sometimes coincided with matrimonial alliances.
In Canada, the most widely hunted animal was beaver. Traders bought prepared beaver pelts. There were two types of furs: the sun-dried beaver pelt, of little value, and castor gras, a pelt worn by the hunter for two or three years, which gained a sheen from his sweat. Of superior quality, castor gras was the most expensive and the most sought after. The French also bought otter, marten and fox pelts. In Louisiana, local grey beaver was clearly less in demand than the brown beaver of the Pays d’en haut. The French preferred bear, bison and, above all, white-tailed deer, the most hunted animal in the Mississippi Valley.
In exchange for furs, Native trappers accepted all kinds of goods, including utilitarian objects (copper or iron cauldrons, knives, axes, pickaxes, thimbles, needles, pins, etc.); arms (guns and gunpowder, musket balls and gunflints); and clothing and personal items (shirts, hats, sheets, blankets, knick-knacks, vermilion, bells, porcelain beads). Guns were particularly valued.
As colonization advanced in Louisiana, indigo and tobacco growing occupied an increasingly important place in the colony’s economy. The French court encouraged these two crops, because it saw a way of saving money on imports to the mother country. Indeed, every year, France spent over six million pounds to buy tobacco from England. As a result, Versailles placed a great deal of hope in Louisiana and its plantations, particularly those of the Natchez whose land was considered most suitable for tobacco growing. Indigo crops, which produced a highly prized blue dye in France, continued to be the most lucrative undertaking.
Exploring North America and settling in a few strategic locations would have been impossible without alliances with the Aboriginal nations. In 1603, Champlain was able to sail up the St. Lawrence because Gravé Du Pont, in charge of the expedition, concluded an initial alliance with the Montagnais Chief Anadabijou at Pointe Saint-Mathieu (today, Baie-Sainte-Catherine). From this moment on, the French would make several oral or written commitments to Aboriginal First Nations.
They soon understood that cordial relations would be essential if they wanted to establish permanent settlements in North America. A small population and limited troops compelled them to do so. Such alliances were also fundamental for successfully exploring the continent. Without agreement from a First Nation, an explorer could not travel through their territory nor make further discoveries. Alliances were also required for trade. The alliances that Champlain concluded with the Montagnais, the Algonquins and the Etchemins gave him access to a vast trading network stretching from the St. Lawrence Valley to the Pays d’en haut.
Above all, trade depended on good relations with Aboriginal partners. For Aboriginal people, trade could take place only through alliances. At trade meetings, gifts were discussed, not prices. This was quite different from the situation in Europe, where the market economy prevailed over reciprocity. For Native people, exchanging merchandise had a symbolic value: it represented political reciprocity between independent groups, whereas an absence of trade amounted to war.
The Metaphor of the Father
In the Native American tradition, trading goods involves the creation of fictional kinship relationships among participants. For trade to occur, the “Other” – the newcomer or outsider – must join their society. That is why Aboriginal peoples welcomed the Other as a relative. In the first half of the 17th century, the Hurons called the French “brothers,” as the term brother implied a degree of equality between partners. But the defeat of the Huron by the Five Nations Iroquois in 1649 turned the tide. The Governor of Canada became the “father” of the Hurons, who placed themselves (or at least were considered to have placed themselves) under his protection. In the 18th century, “father” was what First Nations called the Governor of Canada and his representatives. The alliance uniting the French and the Aboriginal peoples was symbolized as a large family in which Native people were now the metaphorical children of the French father.
Complicating the situation is the fact that Aboriginal people and the French did not interpret this metaphorical “fatherhood” in the same way. In Native societies, a father had no compelling power and was obliged to be generous to his children, to support them without regard to cost. In the Ancien Régime of France, a father could be an absolute authority figure. Thus it was generally said that the king was the father of the French people.
“The savage does not know what it is to obey: it is more necessary to entreat him than to command him (…). The father does not venture to exercise authority over his son, nor does the chief dare give commands to his soldier; he will entreat him gently, and if anyone is stubborn in regard to the proposed movement, it is necessary to flatter him in order to dissuade him, otherwise, he will go further in his opposition.”
Nicolas Perrot (v.1644–1717), Mémoire sur les mœurs, coustumes et relligion des sauvages de l’Amérique septentrionale, Leipzig and Paris: Franck, Herodd, 1864.
Despite a number of vain attempts on the part of French governors and officers to impose this European concept of fatherhood, the Franco-Amerindian alliance was actually based on the Native understanding of the term. The colonial authorities had no compelling powers over their “Aboriginal children.” In order to maintain their alliances, and the status of “father,” the French took on the responsibility of providing gifts to “their children.” The distribution of gifts was methodically organized. Every year, Native chiefs gathered at Montréal or at Mobile to receive a variety of goods from the governor in person. The distribution of gifts was accompanied by feasts provided by the colonial authorities to renew vows of friendship.
Among Amerindians, a trading alliance inevitably led to a military alliance. As a result, the allied nations demanded as a condition for trade that the French provide them with protection or take part in their wars. This explains the French involvement in the war between the Hurons and the Iroquois in the early 17th century. Champlain was forced to fight alongside the Hurons, the first economic allies of the French, and witnessed their victory at Lake Champlain in 1609. French participation in that battle was the reason for long-standing hostility on the part of the Iroquois.
Alliances among the First Nations were not inviolable, eternal contracts. They had to be endlessly consolidated or renewed, and it was not only trade or French military strength that made them prosper, but myriad diplomatic relations. Diplomacy was based on Native rituals that the French adopted. Thus, during peace negotiations, the first step in the diplomatic process often consisted of an invitation extended by either of the parties. It identified the time and place of the meeting. The first contact between ambassadors and their hosts usually took place on the edge of a forest, near an appropriate Native village. After greeting one another, mourning the dead and exchanging wampum (a pearl or shell necklace) or gifts to express sympathy, the participants would hold the traditional peace pipe ceremony. This ceremony concluded with a great feast where the nations would dance and sing to convey their good intentions toward each other.
In the mid-17th century, following their defeat by the Iroquois and their dispersal throughout the Great Lakes region, the Hurons allied themselves with the “confederation of the three fires,” comprising the Saulteux, the Potawatomi and the Ottawas. These nations, who lived in the Pays d’en haut, became in turn the allies of the French. Later, several other nations joined the ranks of the allies, in particular the Illinois, the Miami and the Sioux, although they conceded the main role to the four founding nations of the “confederation.” This Franco-Aboriginal league was united by two main ties, in addition to trade: a common enemy, the Iroquois, and the fear of British expansion. In Louisiana, the ties uniting France with its Native allies were the same as those in the Pays d’en haut, but the common enemy was the Chickasaw. Armed by the British, this people threatened the Choctaw nation, who became the main ally of the French.
The Iroquois represented a real threat to Canada. In the 17th century, they were able to gather nearly 3,000 warriors armed by the English, far more men than the colony could muster. In the 1660s, the French government sent the Carignan-Salières regiment for protection. Two expeditions attacked Iroquois villages in 1665 and 1666 and forced them to make peace the following year, but they attacked New France again in the early 1680s. They wanted to protect their hunting territory from French expansion. The new governor of Canada, Joseph-Antoine le Febvre de La Barre, persuaded the French court to send him a contingent of 500 Troupes de la Marine to invade the Iroquois territories and destroy them once and for all. The 1684 expedition proved a disaster, just like the one led three years later by La Barre’s successor, the Marquis de Denonville, with twice as many soldiers. The situation became more critical because the Iroquois were now threatening the very heart of the colony. In 1689, 1,500 Iroquois warriors succeeded in attacking and destroying the small town of Lachine, located near Montréal. It was then apparent that despite reinforcements, the colony could not survive without the military support of its Aboriginal allies. The Iroquois would continue to be a threat until 1701, with the signing of the Great Peace of Montréal.
In Louisiana, the massacre of colonists at the Natchez post in 1729 set the entire colony ablaze. Several punitive expeditions were led against the Natchez, a great many of whom sought refuge among the Chickasaw, a nation allied with the English. In 1736 and 1740, Governor Bienville set up two expeditions, which also failed. Peace nevertheless returned shortly thereafter, when the Natchez were expelled by the Chickasaw and continued their eastern exodus.
On the eve of the Seven Years’ War (1756–1763), Canada and Louisiana formed a “colossus with feet of clay.” A gigantic territory that stretched from the St. Lawrence Valley to the Great Lakes region and the Gulf of Mexico, New France was nevertheless a sparsely settled colony. In fact, in 1750 there were only 55,000 colonists in Canada and not quite 9,000 in Louisiana. The city of Québec was home to only 8,000 people, Montréal to 5,000 and New Orleans to 3,200. Beyond these colonial centres, most of the inhabitants of the “French empire” were Aboriginal allies. Even with their assistance, how could Canadians, so few in number, vanquish the British, whose North American colonies contained nearly 1.5 million people?
In 1754, two years before the start of hostilities in Europe, the first signs of tension between French and British colonists emerged in the Ohio region, crisscrossed by traders from Virginia and Pennsylvania. On learning that English soldiers were in the region, the commander of Fort Duquesne, Claude-Pierre Pécaudy de Contrecœur, sent a patrol led by Joseph Coulon de Villiers de Jumonville to order young George Washington and his troops to leave the lands claimed by France. The French soldiers were ambushed and Jumonville was killed along with nine of his men.
Thus began a long war during which France won a few victories: Monongahela in 1755, Oswego in 1756 and Carillon in 1758. However, the capitulation of the citadel of Louisbourg, on July 26, 1758, marked the beginning of the end of the French presence in North America. In turn, the forts at Duquesne, Niagara and Frontenac fell into English hands. In September 1759, General James Wolfe seized Québec, and the following year Montréal surrendered. During the peace negotiations, the French court at Versailles lost interest in its North American colonies. Like Voltaire, the officials of the mother country thought it preferable to preserve the French possessions in the Antilles, which were much more profitable than Canada and Louisiana, which had drained the royal treasury. Thus, on the signing of the Treaty of Paris in 1763, the St. Lawrence valley, the Pays d’en haut and the eastern banks of the Mississippi were ceded to the British whereas the western banks were handed over to the Spanish under the “Pacte de famille”, the family compact concluded by France and Spain in 1761 to ensure the mutual protection of the two kingdoms. This was the end of New France.
Balvay, Arnaud. 2006. L’Épée et la Plume. Amérindiens et soldats des troupes de la marine en Louisiane et au Pays d’en Haut. 1683-1763. Québec: Presses de l’Université Laval.
Berthiaume, Pierre. 1990. L’Aventure américaine au XVIIIe siècle du voyage à l’écriture. Ottawa: Presses Universitaires d’Ottawa.
Charlevoix, Pierre F.X. 1998. Journal d’un voyage fait par ordre du roi dans l’Amérique septentrionale. Montréal: Presses Universitaires de Montréal (Bibliothèque du Nouveau Monde).
Cook, Peter L. 2001. “Vivre comme frères. Le rôle du registre fraternel dans les premières alliances franco-amérindiennes (vers 1580-1650),” in Recherches Amérindiennes au Québec, vol. XXXI, no. 2, pp. 55–65.
Dechêne, Louise. 1974. Habitants et marchands de Montréal au XVIIe siècle. Paris and Montréal: Plon.
Delâge, Denys. 1989. “L’alliance franco-amérindienne 1660-1701,” in Recherches Amérindiennes au Québec, vol. XIX, no. 1, pp. 3–14.
Desbarats, Catherine. 1995. “The Cost of Early Canada’s Native Alliances: Reality and Scarcity’s Rhetoric,” in William and Mary Quarterly, third series, vol. 52, no. 4 (October 1995), pp. 609–630.
Greer, Allan. 1998. Brève histoire des peuples de la Nouvelle France. Montréal: Boréal.
Havard, Gilles. 2003. Empire et métissages: Indiens et Français dans le Pays d’en haut. 1660-1715. Québec: Septentrion.
Havard, Gilles, and Vidal, Cécile. 2008. Histoire de l’Amérique française. Paris: Flammarion (Champs).
Heinrich, Pierre. 1970. La Louisiane sous la Compagnie des Indes (1717-1731). New York: Burt Franklin.
Jaenen, Cornelius J. 1996. “Colonisation compacte et colonisation extensive aux XVIIe et XVIIIe siècles en Nouvelle-France,” in Alain Saussol and Joseph Zitomersky (eds.). Colonies, Territoires, Sociétés. L’enjeu français. Paris: L’Harmattan, pp. 15–22.
Lahontan, Louis Armand de Lom D’Arce, baron de. 1703. Nouveau voyage de M. le baron de Lahontan dans l’Amérique septentrionale. La Haye.
Lamontagne, Roland. 1962. “L’exploration de l’Amérique du Nord à l’époque de Jean Talon,” in Revue d’histoire des sciences et de leurs applications, vol. 15, no. 1, pp. 27–30.
Lemaître, Nicole (ed.). 2009. La Mission et le sauvage. Huguenots et catholiques d’une rive atlantique à l’autre. XVIe-XIXe siècle, Paris: CTHS.
Litalien, Raymonde. 1993. Les Explorateurs de l’Amérique du Nord (1492-1795). Sillery: Septentrion.
Margry, Pierre. (ed.). 1879–1888. Découvertes et établissements des Français dans l’ouest et dans le sud de l’Amérique septentrionale, 1614-1754. 6 vol., Paris: Maisonneuve.
Mathieu, Jacques. 1991. La Nouvelle-France. Les Français en Amérique du Nord, XVI-XVIIIe siècle. Paris: Belin Sup.
Nish, Cameron. 1968. Les Bourgeois-gentilshommes de la Nouvelle-France. 1729–1748. Montréal: Fides.
Surrey, N.M. Miller. 2006. The Commerce of Louisiana During the French Regime, 1699–1763. Tuscaloosa, Alabama: The University of Alabama Press.
Villiers du Terrage, Marc de. 1934. Un explorateur de la Louisiane, J.-B. Bénard de La Harpe. 1683-1728. Rennes: Oberthur.
| http://www.civilization.ca/virtual-museum-of-new-france/colonies-and-empires/colonial-expansion-and-alliances/ | 13 |
15 | As the two-year anniversary of the Deepwater Horizon oil spill in the Gulf of Mexico approaches, a team of scientists led by Dr. Peter Roopnarine of the California Academy of Sciences has detected evidence that pollutants from the oil have entered the ecosystem's food chain. For the past two years, the team has been studying oysters (Crassostrea virginica) collected both before and after the Deepwater Horizon oil reached the coasts of Louisiana, Alabama, and Florida. These animals can incorporate heavy metals and other contaminants from crude oil into their shells and tissue, allowing Roopnarine and his colleagues to measure the impact of the spill on an important food source for both humans and a wide variety of marine predators. The team's preliminary results demonstrate that oysters collected post-spill contain higher concentrations of heavy metals in their shells, gills, and muscle tissue than those collected before the spill. In much the same way that mercury becomes concentrated in large, predatory fish, these harmful compounds may get passed on to the many organisms that feed on the Gulf's oysters.
"While there is still much to be done as we work to evaluate the impact of the Deepwater Horizon spill on the Gulf's marine food web, our preliminary results suggest that heavy metals from the spill have impacted one of the region's most iconic primary consumers and may affect the food chain as a whole," says Roopnarine, Curator of Geology at the California Academy of Sciences.
The research team collected oysters from the coasts of Louisiana, Alabama, and Florida on three separate occasions after the Deepwater Horizon oil had reached land: August 2010, December 2010, and May 2011. For controls, they also examined specimens collected from the same localities in May 2010, prior to the landfall of oil; historic specimens collected from the Gulf in 1947 and 1970; and a geographically distant specimen collected from North Carolina in August 2010.
Oysters continually build their shells, and if contaminants are present in their environment, they can incorporate those compounds into their shells. Roopnarine first discovered that he could study the growth rings in mollusk shells to evaluate the damage caused by oil spills and other pollutants five years ago, when he started surveying the shellfish of San Francisco Bay. His work in California revealed that mollusks from more polluted areas, like the waters around Candlestick Park, had incorporated several heavy metals that are common in crude oil into their shells.
To determine whether or not the Gulf Coast oysters were incorporating heavy metals from the Deepwater Horizon spill into their shells in the same manner, Roopnarine and his colleagues used a method called "laser ablation ICP-MS," or inductively coupled plasma mass spectrometry. First, a laser vaporizes a small bit of shell at different intervals along the shell's growth rings. Then the vaporized sample is superheated in plasma, and transferred into a machine that can identify and quantify the various elements in the sample by mass. Roopnarine and his colleagues measured higher concentrations of three heavy metals common in crude oil--vanadium, cobalt, and chromium--in the post-spill specimens they examined compared to the controls, and this difference was found to be statistically significant.
In a second analysis, the scientists used ICP-MS to analyze gill and muscle tissue in both pre-spill and post-spill specimens. They found higher concentrations of vanadium, cobalt, and lead in the post-spill specimens, again with statistical significance.
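The release describes the pre- versus post-spill comparisons only qualitatively; the raw concentrations and the specific statistical test are not reported here. Purely as an illustration of the kind of comparison described above, the short Python sketch below contrasts invented vanadium concentrations with Welch's t-test; both the numbers and the choice of test are assumptions, not the study's actual data or method.

```python
# Hypothetical illustration only: the concentrations below are invented and
# Welch's t-test is an assumed choice of test; neither comes from the
# Roopnarine et al. study itself.
from scipy import stats

# Invented vanadium concentrations (ppm) in oyster shell samples.
pre_spill = [0.8, 1.1, 0.9, 1.0, 0.7, 1.2]
post_spill = [1.6, 2.1, 1.8, 2.4, 1.9, 2.2]

result = stats.ttest_ind(post_spill, pre_spill, equal_var=False)
print(f"mean pre-spill:  {sum(pre_spill) / len(pre_spill):.2f} ppm")
print(f"mean post-spill: {sum(post_spill) / len(post_spill):.2f} ppm")
print(f"Welch's t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```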
In a final analysis, the team examined oyster gill tissue under the microscope and found evidence of "metaplasia," or transformation of tissues in response to a disturbance, in 89 percent of the post-spill specimens. Cells that were normally columnar (standing up straight) had become stratified (flattened)—a known sign of physical or chemical stress in oysters. Stratified cells have much less surface area available for filter feeding and gas exchange, which are the primary functions of oyster gills. Oysters suffering from this type of metaplasia will likely have trouble reproducing, which will lead to lower population sizes and less available food for oyster predators.
The team presented their data at a poster session at the American Geophysical Union meeting in December 2011, and is preparing their preliminary findings for publication. However, their work is just beginning. In addition to increasing the number of pre- and post-spill oyster specimens in their analysis, the team also plans to repeat their analyses using another bivalve species, the marsh mussel (Geukensia demissa). Roopnarine is also planning to create a mathematical model linking the oyster and mussel to other commercially important species, such as mackerel and crabs, to demonstrate the potential impact of the oil spill on the Gulf food web. Scientists don't currently know how these types of trace metals move through the food web, how long they persist, or how they impact the health of higher-level consumers, including humans—but the construction of a data-driven computer model will provide the framework for tackling these important questions.
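The food-web model mentioned above is still in the planning stage, so its structure is unknown. As a rough sketch of the kind of bookkeeping such a model might involve, the Python fragment below propagates a trace-metal burden from filter feeders up to two hypothetical predators; every species link, diet fraction, and assimilation factor in it is invented for the example.

```python
# Toy illustration of how a trace-metal burden might move through a simplified
# food web. The species links, diet fractions, and assimilation factors are
# all invented; they are not taken from the planned model.
food_web = {
    # consumer: list of (prey, fraction of diet, assimilation factor)
    "crab": [("oyster", 0.6, 0.8), ("marsh mussel", 0.4, 0.8)],
    "mackerel": [("crab", 0.5, 0.7), ("marsh mussel", 0.5, 0.7)],
}

# Invented tissue concentrations (ppm) measured in the primary consumers.
measured = {"oyster": 2.0, "marsh mussel": 1.5}

def estimated_burden(species: str) -> float:
    """Return a measured value if available, else sum over the prey (toy rule)."""
    if species in measured:
        return measured[species]
    return sum(fraction * assimilation * estimated_burden(prey)
               for prey, fraction, assimilation in food_web[species])

for predator in food_web:
    print(f"{predator}: estimated burden {estimated_burden(predator):.2f} ppm")
```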
Roopnarine and his colleagues have faced a number of challenges during the course of their study. Unfortunately, pure crude oil samples from Deepwater Horizon have remained inaccessible, making it impossible for the team to compare the heavy metal ratios they have documented in the oysters to the ratios found in the Deepwater Horizon oil. Additionally, the chemical compositions of artificial dispersants and freshwater that were intentionally spread in the Gulf to alleviate the spill are also unknown—additional variables that could affect the team's research. The team is hopeful that they will eventually be able to analyze these samples, thus shedding more light on their results.
Historical specimens for this study were provided by museum collections at the California Academy of Sciences, the Academy of Natural Sciences in Philadelphia, and the Santa Barbara Museum of Natural History. Collaborators on this project are Deanne Roopnarine from Nova Southeastern University, Florida; David Gillikin from Union College, New York; Laurie Anderson from the South Dakota School of Mines and Technology; and David Goodwin from Denison University, Ohio.
About the California Academy of Sciences
The Academy is an international center for scientific education and research and is at the forefront of efforts to understand and protect the diversity of Earth's living things. The Academy has a staff of over 50 professional educators and Ph.D.-level scientists, supported by more than 100 Research and Field Associates and over 300 Fellows. It conducts research in 12 scientific fields: anthropology, aquatic biology, botany, comparative genomics, entomology, geology, herpetology, ichthyology, invertebrate zoology, mammalogy, microbiology, and ornithology. Visit research.calacademy.org.
| http://www.eurekalert.org/pub_releases/2012-04/caos-sfh041812.php | 13 |
87 | The evolution of verbal behavior in children.
Introduction
Complex language is one of the unique repertoires of the human species. Others include teaching and certain "types of imitation" (Premack, 2004), although these too may be pre- or co-requisites for certain functional uses of language. Over the last 40 years linguists have proposed theories and provided evidence related to their interpretation of the structure of language (Chomsky, 1959; Chomsky & Place, 2000; MacCorquodale, 1970; Pinker, 1999). Neuroscientists have identified neurological correlates associated with some aspects of language (Deacon, 1997; Holden, 2004). Behavior analysts have focused on the source of and controlling variables for the function of language as behavior per se (Catania, Mathews, & Shimoff, 1990; Greer & Ross, 2005; Michael, 1984; Skinner, 1957).
More recently, scholars have come to view human language as a product of evolution; "Linguists and neuroscientists armed with new types of data are moving beyond the non-evolutionary paradigm once suggested by Noam Chomsky and tackling the origins of speech head-on." (Culotta & Brooks-Hanson, 2004, p. 1315). The current work focuses on the evolution of both the non-oral motor and oral components of speech (Deacon, 1997; Holden, 2004), although some arguments are characterized necessarily more by theory than data.
Despite the evidence that primates and pigeons can be taught certain features of verbal behavior (D. Premack & A. Premack, 2003; Savage-Rumbaugh, Rumbaugh, & Boysen, 1978; Epstein, Lanza, & Skinner, 1980), the speaker-as-own listener capability makes complex verbal behavior possible and may represent what is most unique about human verbal functions (Barnes-Holmes, Barnes-Holmes, & Cullinan, 2001; Lodhi & Greer, 1989; Horne & Lowe, 1996). Some suggest that oral communication evolved from clicking sounds to sounds of phonemes, and cite the extant clicking languages as evidence (Pennisi, 2004). It is likely that sign language and gesture predated both vocal forms; but it is the evolution of the spoken and auditory components of language that is seen as critical to the evolution of language. Some of these changes involved the anatomy of the jaw--Homo sapiens have a more flexible jaw than did Neanderthals. Also, the location of the larynx relative to the trachea is different for Homo sapiens, and this anatomical feature made it possible for humans to emit a wider range of speech sounds (Deacon, 1997). The combination of these anatomical changes, together with the identification of separate, but proximate, sites in the brain for speaking, listening, and imitation, seems to be a critical part of what made spoken language possible (Deacon, 1997). The presence of these anatomical and physiological properties made it possible for the evolution of verbal functions through the process of cultural selection (Catania, 2001). The functional effects of speech sounds were acquired through the consequences provided within verbal communities. This latter focus is what constitutes the subject matter of verbal behavior.
The new foci on language, as an evolved anatomical and physiological capacity, do not necessarily suggest the existence of a universal grammar; nor, in fact, do they eliminate the possibility of an evolved universal grammar, as some argue (Pinker, 1999). Some of the linguistic neuropsychological searches for an evolved universal grammar now follow the PET and MRI trails and focus on identifying blood flow associated with the speech and hearing centers in the brain (Holden, 2004). As interesting and important as this work may be, little, if any, of it is devoted to the function of language as behavior per se. Nor is it concerned with the biological or cultural evolution of verbal function in our species or in the lifespan of the individual, although anthropological linguists point to functions as the initial source. Only the research associated with Skinner's (1957) theory of verbal behavior as behavior per se, and expansions of the theory by contemporary behavior analysts, provides the means for analyzing how cultural selection gave rise to the function of language (Greer, 2002; Greer & Ross, in press; Hayes, Barnes-Holmes, & Roche, 2000; Lowe, Horne, Harris, & Randle, 2002). Currently, the linguistic, neuropsychological, and behavior analytic foci remain separate sciences, though they need not remain so (Catania, 1998). While the role of cultural selection in the evolution of verbal behavior for the species remains theoretical, the development of verbal behavior within the ontogeny of the individual is empirically verifiable.
From Theory to Research
For decades after the publication of Skinner's (1957) book on verbal behavior, the majority of the publications on the theory remained theoretical. There is now a significant body of research supporting and expanding Skinner's theory of verbal behavior. We have identified over 100 experiments devoted to testing the theory and utility for educational purposes. There is an additional significant body of related work in relational frame theory that includes at least an equal number of studies (Hayes et al., 2000). In our program of research alone, we have completed at least 48 experiments (25 published papers, several in press, and recent dissertations) and a number of replications. Our particular research program was driven by our efforts to develop schools that provide all of the components of education based solely on teaching and schooling as a scientific endeavor. While the existing work in the entire corpus of behavior analysis provided a strong foundation for a science of schooling, much was still missing. Cognitive psychology offered a plethora of theories and findings, and when they were germane to our efforts, these findings proved to be operationally synonymous to those identified in behavior analysis. However Skinner's (1957) Verbal Behavior showed the way for a research program to fill in much of what was missing in the literature in a manner that allowed us to operationalize complex cognitive repertoires.
In our commitment to a thoroughgoing scientific approach to schooling, we needed functional curricula that identified repertoires of verbal operants or higher order operants, including "generative" or "productive" verbal behavior. Our efforts included using pre-existing conceptual and applied verbal behavior research, identifying the needs of children who were missing certain repertoires, and testing the validity of untested components of Skinner's theory through new experiments done by others and us (Greer, McCorkle, & Williams, 1989; Selinske, Greer, & Lodhi, 1991). Through this process we have been able to meet real educational needs, or at least the most pressing needs--the recognition of which was missing in the existing science of behavior and cognitive psychology. Of course, these educational voids were also apparent in normative practices in education based on pre-scientific approaches that treat teaching as an art. We needed findings that worked in the day-to-day operation of our schools, if we were to educate the "whole child." Along the way, we discovered some interesting aspects of verbal behavior that may prove useful to a behavioral developmental psychology (Baer, 1970; Bijou & Baer, 1978; Gewirtz, Baer, & Roth, 1958). Indeed, the evidence suggests that we have identified what Rosales-Ruiz and Baer (1996) described as "behavioral cusps"--in our case verbal behavior cusps. Rosales-Ruiz and Baer stated that,
A cusp is a change [a change in the capability of the child] that (1) is often difficult, tedious, subtle, or otherwise problematic to accomplish, yet (2) if not made, means little or no further development is possible in its realm (and perhaps in several realms); but (3) once it is made, a significant set of subsequent developments suddenly becomes easy or otherwise highly probable which (4) brings the developing organism into contact with other cusps crucial to further, more complex, or more refined development in a thereby steadily expanding, steadily more interactive realm. (Rosales-Ruiz & Baer, 1996, p. 166). [The italics in brackets were inserted into the quotation.]
Repertoires of Verbal Behavior for Instructional Purposes
First, applications of the research findings in verbal behavior in our CABAS[R] schools led to the categorization of children for instructional purposes according to levels of verbal behavior or verbal capabilities that we extrapolated from Skinner's analysis of the components of verbal behavior (Greer, 2002). (1) Traditional diagnoses or developmental constructs are useful for some inquiries, but they are not very useful for instructional purposes. The identification of the functional verbal capabilities of children that we extrapolated from Skinner's work, however, was very helpful. Skinner described the different verbal repertoires of the speaker and the relation of the speaker and listener in terms of his observations of highly literate individuals. These repertoires seemed to constitute what individuals needed to possess if they were to be verbally competent. Moreover, those verbal functions provided operational descriptions for most of the complex educational goals that had been prescribed by educational departments throughout the western world (Greer & Keohane, 2004; Greer & McCorkle, 2003). For educational purposes, the capabilities or cusps provided us with behavioral functions for a curriculum for listening, speaking, reading, writing, and the combinations that made up complex cognitive functions.
The verbal categorization proved useful in: (a) determining the ratio of instructors to students that would produce the best outcomes for students (Table 1), (b) identifying what existing tactics from the research worked for children with and without particular verbal capabilities (see Greer, 2002, Chapters 5 and 6), (c) isolating the specific repertoires children could be taught given what each child initially brought to the table, and (d) developing a curriculum composed of functional repertoires for complex human behavior. Most importantly, we identified the verbal "developmental cusps" (Rosales-Ruiz & Baer, 1996) or specific verbal capabilities we needed to induce, if we were to make real progress with our children. The categories provided a continuum of instructional sequences and developmental interventions that provided a functional approach to cognitive academic repertoires, and the recasting of state and international educational standards into functional repertoires of operants or higher order operants rather than structural categories alone (Greer, 1987, 2002; Greer & McCorkle, 2003). Each of the major verbal categories also identified levels of learner independence (i.e., operational definitions of autonomy) as well as what we argue are valid measures of socialization. Table 1 lists the broad verbal stages as we have related them to independence and social function.
Much of our work as teacher scientists is devoted to experimentally identifying the prerequisite or co-requisite repertoires needed by each child to progress through the capabilities listed in Table 1. Once these were identified, we used or developed scientifically based tactics for moving children who lacked a particular verbal capability from one level of verbal capability to the next level in the continuum.
When we found it necessary and were able to teach the missing repertoires, the children made logarithmic increases in learning and emergent relations ensued. That is, they acquired what has been characterized in the literature as behavioral cusps. As the evidence accumulated with individual children across numerous experiments, we also began to identify critical subcomponents of the verbal capabilities. As we identified more subcomponents, we worked our way inductively to the identification of the developmental components within the verbal capabilities suggested by Skinner. The quest led serendipitously to increased attention on the listener and speaker-as-own listener repertoires, a focus that began to be evident in the work of others also (Catania, Mathews, & Shimoff, 1990; Hayes et al., 2000; Horne & Lowe, 1996). Table 2 lists the verbal capabilities and the components and prerequisites that we are beginning to identify as well as some of the related research.
It was evident that without the expertise to move children with language delays through a sequence of ever more sophisticated verbal capabilities or cusps, we could make only minimal progress. As we began to identify ways to provide missing capabilities, the children began to make substantial gains. As the magnitude of the differences became apparent in what the children were capable of learning following the attainment of missing repertoires, we came to consider the possibility that these verbal repertoires represented developmental verbal capabilities or verbal behavior cusps.
We have shown that certain environmental experiences evoked the capabilities for our children. However, we are mindful that providing particular prerequisite repertoires that are effective in evoking more sophisticated verbal capabilities in children with language disabilities or language delays does not necessarily demonstrate that the prerequisites are component stages in all children's verbal or cognitive development. While Gilic (2005) demonstrated that typically developing 2-year old children develop naming through the same experiences that produced changes in our children with verbal delays, others can argue effectively that typically developing children do not require specially arranged environmental events to evoke new verbal capabilities. A definitive rejoinder to this criticism awaits further research, as does the theory that incidental experiences are not required. See Pinker (1999) for the argument that such experiences are not necessary.
Milestones of the Development of Verbal Function: Fundamental Speaker and Listener Repertoires
Our rudimentary classifications of children's verbal development adhered to Skinner's (1957) focus on the verbal function of language as distinguished from a structural or linguistic focus. Skinner focused on antecedent and consequent effects of language for an individual as a means of identifying function, as distinguished from structure (Catania, 1998). Eventually, his theory led to a research program devoted to the experimental analyses of verbal behavior with humans. In a recent paper (Greer, & Ross, 2004) and a book in progress (Greer & Ross, in press), we have suggested that this research effort might be best described as verbal behavior analysis, often without distinction between its basic or applied focus. We have incorporated the listener role in our work, in addition to the speaker functions. While Skinner's self-avowed focus was the speaker, a careful reading of Verbal Behavior (Skinner, 1957/92, 1989) suggests much of his work necessarily incorporated the function of listening (e.g., the source of reinforcement for the listener, the speaker as listener). Our research on the role of the listener was necessitated by the problems encountered in teaching children and adolescents with language delays, of both native and environmental origin, to achieve increasingly complex cognitive repertoires of behavior. Without a listener repertoire many of our children could not truly enter the verbal community. We needed to provide the listener roles that were missing, but that were necessary if the repertoires of the speaker were to advance. Skinner made the point that a complete understanding of verbal behavior required the inclusion of the role of the listener (See the appendix to the reprint edition of Verbal Behavior, published by the B. F. Skinner Foundation, 1992, pp.461-470). Moreover, new research and theories based on Skinner's work have led to a more complete theory of verbal behavior that incorporates the role of the listener repertoire. These efforts include, but are not limited to:
* Research done by relational frame theorists (Barnes-Holmes, Barnes-Holmes, & Cullinan, 1999; Hayes, Barnes-Holmes, & Roche, B., 2000),
* Naming research by Horne and Lowe and their colleagues (Horne & Lowe, 1996; Lowe, Horne, Harris, & Randle 2002),
* Research on auditory matching and echoics (Chavez-Brown & Greer, 2004),
* Research on the development of naming (Greer et al., 2005b),
* Research on conversational units and speaker-as-own-listener (Donley & Greer, 1993; Lodhi & Greer, 1989), and
* Research on learn units (Greer & McDonough, 1999).
Our levels of verbal capability incorporate the listener as part of our verbal behavior scheme (Skinner, 1989). The broad categories that we have identified to date are: (a) the pre-listener stage (the child is dependent on visual cues, or, indeed, may not even be under the control of visual stimuli), (b) the listener stage (the child is verbally governed, as in doing as others say), (c) the speaker stage (the child emits mands, tacts, autoclitics, intraverbal operants), (d1) the stage of rotating speaker-listener verbal episodes with others (the child emits conversational units and related components of learn units in interlocking operants between individuals), (d2) the speaker-as-own listener stage (the child engages in self talk, naming, the speaker-as-own-listener editing function, and say-do correspondence), (e) the reader stage (the child emits textual responding, textual responding as a listener and emergent joint stimulus control, and the child is verbally governed by text), (f) the writer stage (the child verbally governs the behavior of a reader for aesthetic and technical effects), (g) the writer-as-own-reader stage (the child reads and revises writing based on a target audience), and (h) the verbal mediation stage (the child solves problems by performing operations from text or speech). Each of these has critical subcomponents, and the subcomponents of the categories that we have identified to date are shown in Table 2.
The Listener Repertoire
In the verbal community a pre-listener is totally dependent on others for her care, nourishment, and very survival. Pre-listeners often learn to respond to a visual and tactile environment; but if they do not come under the control of the auditory properties of speech they remain pre-listeners. For example, in some situations they learn to sit when certain visual cues are present. It is often not the spoken stimuli such as "sit still," "look at me," or "do this" to which they respond, but rather certain instructional sequences or unintentional visual cues given by teachers and caretakers. They do not respond to, or differentiate among, the auditory properties of speech as stimuli that evoke specific responses. When the basic listener repertoire is missing, children cannot progress beyond visual or other non-auditory stimulus control. However, substantial gains accrue when children achieve the listener capability, as we shall describe.
Auditory Matching. It is increasingly apparent that children need to match word sounds with word sounds as a basic step in learning to discriminate between words, and even to distinguish words from non-word sounds. While most infants acquire auditory matching with apparent ease, some children do not acquire this repertoire incidentally. Adults experience similar difficulties in echoing a new language.
Chavez-Brown & Greer (2003) and Chavez-Brown (2005) taught children who could not emit vocal verbal behavior, or whose vocal speech was flawed, to match pictures using BigMack® buttons as a pre-training procedure to teach them to use the apparatus. The teacher touched a single button set before her that had a picture on it and then touched each of the two buttons the students had in front of them (one with the target picture and one with a foil picture). The students then responded by depressing the button in front of them that matched the picture on the button the teacher had touched. Once the children mastered the visual matching task, used as a means to introduce them to the apparatus, we removed the pictures. In the next phase the children were taught to match the sound generated by the teacher's button (the buttons produced individual pre-recorded words or sounds). At this second stage, depressing one of the students' buttons produced a sound while depressing the other produced no sound. Once they mastered matching sounds contrasted with silent buttons, they learned to match words with non-word sounds as foils. Next, they learned to match particular words contrasted with different words. Finally, they learned generalized matching for words produced by pushing the buttons (i.e., they learned to match novel word sets with no errors). Our findings showed that children who had never vocalized before began to approximate or emit echoic responses under mand and tact-establishing operations once they mastered generalized word matching. Moreover, a second set of children, who had only approximations (i.e., faulty articulations), learned full echoics that graduated to independent mands and tacts. This matching repertoire may be an early and necessary step in the acquisition of speaking and may also be key to more advanced listening. See also the correlations between auditory matching and the emission of verbal operants identified by Marion et al. (2003), which suggested the auditory matching research described above.
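The phase sequence of this auditory matching procedure lends itself to a compact summary. The sketch below is a hypothetical reconstruction: the phase names paraphrase the text, and the mastery rule in `advance` (perfect accuracy before moving on) is an assumed placeholder rather than the published criterion.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MatchingPhase:
    name: str
    sample: str        # what the teacher's button presents
    comparisons: str   # what the two student buttons present


PHASES: List[MatchingPhase] = [
    MatchingPhase("visual pre-training", "picture", "matching picture vs. foil picture"),
    MatchingPhase("sound vs. silence", "recorded sound", "same sound vs. silent button"),
    MatchingPhase("word vs. non-word", "recorded word", "same word vs. non-word sound"),
    MatchingPhase("word vs. word", "recorded word", "same word vs. different word"),
    MatchingPhase("generalized matching", "novel word", "novel word vs. novel foil word"),
]


def advance(phase_index: int, session_accuracy: float, criterion: float = 1.0) -> int:
    """Move to the next phase only when the session meets the assumed mastery criterion."""
    return phase_index + 1 if session_accuracy >= criterion else phase_index
```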
The Emersion of Basic Listener Literacy. When children have "auditory word matching" they can be taught the discriminative function needed to become verbally governed. Over the past few years, we found that children without listener repertoires reached a learning plateau and were no longer making progress in instruction beyond extensions of visual matching. We believe that children around the world who have these deficits are not making progress in early and intensive behavioral interventions. These children require inordinate numbers of instructional presentations, or learn units, and still do not make progress in acquiring repertoires that require verbal functions that are the very basic building blocks of learning. In an attempt to help these children become listeners, we developed an intervention that we call listener emersion (Greer et al., 2005a). During listener emersion, we suspend all of the children's instructional programs and provide intensive instruction in responding to the discriminative acoustical properties of speech. This instruction continues until children's listener responses are fluent. (2)
In the listener emersion procedure, children learned to respond to words (i.e., vowel-consonant relations) spoken in person by a variety of individual voices as well as to voices recorded on tapes and other sources. By "fluent," we mean that the children learned to respond to four or more sets composed of five instructions such as "point to--," "match--," "do this," "stand up," and "turn around." The children also learned not to respond to nonsensical, impossible, or non-word vowel-consonant combinations that were inserted into the program as part of each set (e.g., "jump out the window," "blahblahblah"). These sets were presented in a counterbalanced format with the criterion set at 100% accuracy. Next, the children learned to complete the tasks at specified rates of accurate responding ranging from 12 to 30 per minute. Finally, they learned to respond to audiotaped, mobile phone, or computer-generated instructions across a variety of adult voices. Once the children's basic listener literacy emerged (i.e., the children met the listener emersion criteria), we compared the numbers of learn units required by each student to meet major instructional goals before and after listener emersion. The achievement of the objectives for the listener emersion procedure constitutes our empirical definition of basic listener literacy. This step ensures that the student is controlled by the vowel-consonant speech patterns of speakers. After acquiring basic listener literacy, the number of instructional trials or learn units the children required to achieve instructional objectives decreased to between one fourth and one tenth of the number that had been required prior to their attaining basic listener literacy.
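Because the listener emersion criteria are stated quantitatively (100% accuracy on counterbalanced sets, then rates of 12 to 30 accurate responses per minute across voice sources), they can be illustrated with a brief sketch. The record structure and function below are illustrative assumptions of ours, not part of the published protocol.

```python
from dataclasses import dataclass


@dataclass
class ListenerProbe:
    correct: int        # correct listener responses in the probe
    total: int          # instructions presented (including nonsensical/non-word foils)
    minutes: float      # probe duration
    voice_source: str   # e.g., "live", "tape", "mobile phone", "computer"


def meets_listener_emersion_criterion(probe: ListenerProbe,
                                      min_rate: float = 12.0,
                                      max_rate: float = 30.0) -> bool:
    """100% accuracy and a response rate inside the 12-30 per minute band for this probe."""
    if probe.correct != probe.total:
        return False
    rate = probe.correct / probe.minutes
    return min_rate <= rate <= max_rate
```

In practice the criterion would be applied per instructional set and repeated across the rotated voice sources; the single-probe check above is only meant to make the stated thresholds concrete.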
The Speaker Stage
Acquisition of Rudimentary Speaker Operants. In the late eighties, we identified procedures for inducing first instances of vocal speech that proved more effective than the operant shaping of spoken words as linguistic requests (Williams & Greer, 1989). That is, rather than teaching parts of words as vowel-consonant blends, as had been the existing behavioral procedure (Lovaas, 1977), we arranged the basic establishing operations and obtained true mands and tacts using echoic-to-mand and echoic-to-tact procedures (Williams & Greer, 1989). Once true verbal operants were taught, the children used "spontaneous speech." The children came under the relevant establishing operations and antecedent stimuli (Michael, 1982, 1984, 1993) associated with mand and tact operants and related autoclitics, rather than verbal antecedents such as "What do you want?" They did not require intraverbal prompts as a means of teaching pure tacts. In another procedure, Sundberg, Loeb, Hale, and Eighenheer (2000/2001) evoked the emission of impure tacts and impure mands; these are necessary repertoires as well. In still other work, Pistoljevic and Greer (2006) and Schauffler and Greer (2006) demonstrated that intensive tact instruction led to the emission of novel tacts and appropriate audience control.
Children who do not speak can be taught verbal behavior through the use of signs, pictures, or electronic speaking devices. Even so, we submit that speech is simply more useful; speech works in the community at large. When we are unable to teach speech, we too use these substitutes, although as our research has progressed there have been fewer children whom we cannot teach to speak. Our second choice of topography is electronic speaking devices, as such devices preserve the possibility of speaker-as-own-listener responding. The importance of speech becomes apparent when we reach the critical verbal repertoires of speaker-as-own-listener and reader.
Although the use of the above procedures significantly increased the numbers of children we could teach vocal verbal operants, there were some children we still could not teach to speak. While we could teach these children to use substitute topographies for speech, the development of speech is critical for subsequent verbal capabilities. For those children who did not learn to speak using our basic echoic-to-mand and echoic-to-tact procedures (Williams & Greer, 1989), we and others designed and tested several tactics to induce first instances of speech. We taught children who had acquired fluent generalized imitation, but who could not speak, to perform chains of generalized imitation of large and small movement responses at a rate of approximately 30 correct per minute at 100% accuracy. These children were then deprived of preferred items for varying periods of time and were only able to obtain the items contingent on speech under conditions in which they first performed a rapid chain of generalized imitation (moving from large motor movements to fine motor movements related to touching their lips and tongue). As soon as the last motor movement step in the teaching chain was completed, we offered the item under deprivation as we spoke its name. After several presentations as described, the children spoke their first echoic mands. Some of these children were as old as nine years of age, and their first words were not separate phonemes but were mands like "baseball card," "Coke," or "popcorn." Once the echoic-to-mand was induced for a single word or words, other echoic responses were made possible and their independent mand repertoire was expanded--they acquired function. Follow-ups done years after these children spoke their first words showed that they maintained and expanded their mand, and eventually their tact, repertoires extensively (Ross & Greer, 2003). We currently think that the procedure acted to induce joint stimulus control across the two independent behaviors of imitating and echoing (see Skinner, 1957, for the important distinction between imitation and echoic responding). See-do joined a higher order class and a new behavioral cusp was acquired.
In a replication and an extension of this work, Tsiouri & Greer (2003) found that the same procedure could be used to develop tact repertoires, when the establishing operation was deprivation of generalized reinforcers. See Skinner (1957, page 229) for a source for the establishing operations for the tact. Moreover, tacts and mands could be evoked in tandem fashion when emission of the tact operants resulted in an opportunity to mand as a result of using the tandem procedures developed in Williams & Greer (1989) (Tsiouri & Greer, 2003).
The establishing operation is key to the development of these rudimentary operants (Michael, 1982, 1984, 1993). There appear to be three tested establishing operation tactics: (a) the interrupted chain (Sundberg et al., 2001/2002), (b) the incidental teaching procedure in which incidental establishing operation opportunities are captured (Hart & Risley, 1975), and (c) the momentary deprivation procedure (Williams & Greer, 1989). Schwartz (1994) compared the three procedures. She found them equally effective, although the momentary deprivation procedure resulted in slightly greater maintenance and required significantly less time. It is suggested that more powerful results may accrue if each of these establishing operations is taught in a multiple exemplar fashion, providing the child with a range of establishing operations for controlling the emission of rudimentary operants. Still other establishing operation tactics are needed, such as the identification of establishing operations for tacts described in Tsiouri and Greer (2003). Indeed, what are characterized in the literature as "naturalistic language" interventions, derived from Hart and Risley's incidental procedure, are essentially suggestions for capturing establishing operations as they occur in situ (McDuff, Krantz, McDuff, & McClannahan, 1988). The difficulty with relying solely on the capture of incidental establishing operations is that there are simply not enough opportunities to respond. There is now an abundance of tested tactics for evoking establishing operations in instructional sessions that can be used without waiting for an incidental occasion, although it is critical to capture incidental opportunities as well.
From Parroting to Verbal Operants. The stimulus-stimulus pairing procedure of Sundberg et al. (1996) evoked first instances of parroting of words as a source of automatic reinforcement. These investigators paired preferred events, such as tickling the children while the experimenters said words; the children began to parrot the words or sounds. Moreover, the children emitted the words in free play, suggesting that the saying of the words had acquired automatic reinforcement status. Yoon (1998) replicated the Sundberg et al. procedure, and after the parroting was present for her students, used the echoic-to-mand tactic described above (Williams & Greer, 1989), to evoke true echoics that, in turn, became independent mands. Until the parroted words were under the echoic to mand contingencies, the children were simply parroting as defined by Skinner (1957); however, obtaining the parroting as an automatic reinforcer made the development of true echoics possible. The emission of a parroting response may be a crucial first step in developing echoic responses and may be an early higher order verbal operant (3). The children in these studies moved from the listener to the speaker stage as a result of the implementation of extraordinary instructional procedures (See Sundberg & Partington, 1998 for an assessment and curriculum). Once a child has acquired a speaker repertoire the speaker-listener repertoire becomes possible. Speaker capabilities opened up extraordinary new possibilities for these children, as they did for our ancestors in the combined evolution of phylogenic capabilities in the context of capabilities evoked by cultural selection.
Transformation of Establishing Operations across Mand and Tact Functions. Initially, learning one form (e.g., a word or words) in a mand or tact function does not result in usage of the form in the untaught function without direct instruction (Lamarre & Holland, 1985; Twyman, 1996). For example, a child may emit a word as a mand (e.g., "milk") under conditions of deprivation, such that the emission of "milk" results in the delivery of milk. But the child cannot use the same form ("milk") under tact conditions (i.e., the emission of the word in the presence of the milk when the reinforcement is a social or other generalized reinforcement probability). The independence of these two functions has been reliably replicated in young typically and non-typically developing children; however, at some point most children can take forms acquired initially as mands and use the same forms as tacts, or vice versa. Some see this as evidence of something like a neurologically based universal grammar that makes such language phenomena possible (Pinker, 1999). Clearly, neural capacities must be present, just as the acoustic nerve must be intact to hear. But it does not necessarily follow that a universal grammar exists; the source is at least as likely to lie in the contingencies of reinforcement and punishment and the capacity to be affected by these contingencies in the formation of relational frames/higher order operants. One example of the acquisition of this verbal cusp or higher order operant is the acquisition of joint establishing operation control of a form in either mand or tact functions after learning only one function. When this verbal cusp is achieved, a child can use a form in an untaught function without direct instruction.
Nuzzolo-Gomez and Greer (2004) found that children who could not use a form learned in a mand function as a tact, or vice versa, without direct instruction in the alternate function (Lamarre & Holland, 1985; Twyman, 1996a, 1996b), could be taught to do so when they were provided with relevant multiple exemplar experiences across establishing operations for a subset of forms. Greer et al. (2003b) replicated these findings, and we have used the procedure effectively with numerous children in CABAS schools. The new verbal capability doubled both incidental and direct instructional outcomes.
Speaker Immersion. Even after the children we taught had acquired a number of rudimentary speaker operants, some did not use them as frequently as we would have liked. Speaking had emerged; but it was not being used frequently, perhaps because the children had not received an adequate number of opportunities of incidental establishing operations. We designed a procedure for evoking increases in speaker behavior that we called speaker immersion (Ross, Nuzzolo, Stolfi, & Natarelli, 2006). In this procedure we immersed the children for whom the operants had already emerged in instruction devoted to the continuous use of establishing operations requiring speaking responses. All reinforcement was related to speaking and opportunities were provided throughout the day. As a result, the children's use of verbal operants dramatically increased as they learned to maximize gain with minimal effort. The children learned that it was easier and more efficient to get things done by speaking pure tacts and mands than by emitting responses that required the expenditure of more effort, thereby extending Carr and Durand's (1985) findings.
Milestones of Speaker and Listener Episodes: Interlocking Verbal Operants between Individuals
Verbal Episodes between Individuals
Verbal behavior is social, as Skinner proclaimed, and perhaps one cannot be truly social without verbal behavior. A major developmental stage for children is the acquisition of the repertoire of exchanging speaker and listener roles with others--what Skinner (1957) called verbal episodes. A marker and measure of one type of verbal episode is the conversational unit; another type of verbal episode is the learn unit. We developed these measures as indices of interlocking verbal operants. No account of verbal behavior can be complete without the incorporation of interlocking verbal operants.
Epstein, et al. (1980) demonstrated verbal episodes between two pigeons. We argue that they demonstrated a particular kind of interlocking verbal operant that we identify as a learn unit. In that study, after extensive training, the researchers had two pigeons, Jack and Jill, respond as both speaker and listener in exchanges that simulated verbal episodes between individuals. Each pigeon responded as both speaker and listener and they exchanged roles under the relevant discriminative stimuli as well as under the conditions of reinforcement provided by each other's speaker and listener responses (a procedure also used in part by Savage-Rumbaugh, Rumbaugh, & Boysen, 1978). The pigeon that began the episode, the teacher pigeon, controlled the reinforcement in the same way that teachers deliver effective instruction (Greer & McDonough, 1999). That is, the teacher pigeon had to observe the responses of the student pigeon, judge its accuracy, and consequate the student pigeon's response. Premack (2004) argued that the lack of this kind of teaching observation in primates is evidence that this is one of the repertoires unique to humans. In the Epstein et al. study, special contingencies were arranged in adjacent operant chambers to evoke or simulate the teaching repertoire. Note that the pigeon that acted as a student did not emit the reciprocal observation that we argue needs to be present in the verbal episode we characterize as a conversational unit. In a conversational unit both parties must observe, judge, and consequate each other's verbal behavior.
We used the determination of verbal episodes as measures in studies by Becker (1989), Donley & Greer (1993), and Chu (1998) as well as related research by Lodhi and Greer (1989) and Schauffler and Greer (2006). The verbal episodes in these studies were measured in units and included a rotation of initiating episodes between individuals as well as a reciprocal observation accruing from reinforcement received as both a speaker and a listener. We called these episodes conversational units. A conversational unit begins when a speaker responds to the presence of a listener with a speaker operant that is then reinforced by the listener. This first piece of the verbal interaction is what Vargas (1982) identified as a sequelic. Next, the listener assumes a speaker role, under the control of the initial speaker who is now a listener. That is, the listener function results in the extension of sensory experiences from the speaker to the listener as evidenced by the speaker response from the individual who was the initial listener. The initial speaker then functions as a listener who must be reinforced in a listener function (i.e., the initial listener as speaker extends the sensory capacities of the initial speaker as a listener). A new unit begins when either party emits another speaker operant. Interestingly, in the cases of children with diagnoses like autism, we can now teach them a sequelic speaker function in fairly straightforward fashion using procedures described above. However, these children often have little interest in what the speaker has to say. The reinforcement function for listening is absent. We are currently working on procedures to address this problem.
Conversational units are essential markers and measures of social behavior and, we argue, their presence is a critical developmental milestone in the evolution of verbal behavior. By arranging natural establishing operations, Donley and Greer (1993) induced first instances of conversation between several severely delayed adolescents who had never before been known to emit conversation with their peers. Coming under the contingencies of reinforcement related to the exchange of roles of listener and speaker is the basic component of being social. Chu (1998) found that embedding mand operant training within a social skills package led to first instances of, and prolonged use of, conversational units between children with autism and their typically developing peers. Moreover, the use of conversational units resulted in the extinction of assaultive behavior between the siblings thereby extending Carr and Durand's (1985) finding.
Learn units are verbal episodes in which the teacher, or a preprogrammed teaching device (Emurian, Hu, Wang, & Durham, 2000), controls the onset of the interactions, the nature of the interactions, and most of the sources of reinforcement for the student. The teacher bases her responses on the behavior of the student by reinforcing correct responses or correcting incorrect responses. The interactions provided in the Epstein et al. (1980) and Savage-Rumbaugh et al. (1978) studies are learn units rather than conversational units as we described above. (See Greer, 2002, Chapter 2, for a thorough discussion of the learn unit and Greer & McDonough, 1999, for a review of the research.)
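The distinction between the two kinds of verbal episode can be made concrete with a minimal data-structure sketch. The field names below are assumptions introduced for illustration; only the defining features (the teacher-controlled sequence of the learn unit versus the reciprocal observation and reinforcement of the conversational unit) follow the text.

```python
from dataclasses import dataclass


@dataclass
class LearnUnit:
    """Teacher-controlled episode: the teacher presents, observes, judges, and consequates."""
    antecedent: str            # teacher's presentation (instruction, question, model, ...)
    student_response: str
    correct: bool
    consequence: str           # reinforcement if correct, a correction otherwise


@dataclass
class ConversationalUnit:
    """Reciprocal episode: both parties speak, listen, and reinforce one another."""
    initiator_speaker_operant: str          # speaker A initiates in the presence of listener B
    listener_reinforcement: str             # B reinforces A's speaker operant as a listener
    listener_turned_speaker_operant: str    # B then speaks under the control of A, now the listener
    initiator_listener_reinforcement: str   # A, as listener, is reinforced in turn
```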
Milestone of Speaker as Own Listener: Verbal Episodes "Within the Skin"
As Skinner pointed out, the speaker may function as her own listener as in the case of "self-talk." Lodhi and Greer empirically identified speaker as own listener in young typically developing children who engaged in self-talk while playing alone (Lodhi & Greer, 1989). This appears to be an early, if not the first, identification of conversational units in self-talk emitted by individuals under controlled experimental conditions. The developmental literature is replete with research on self-talk and its importance, but until the functional components defining self-talk were identified, self-talk remained essentially a topographical measure because the speaker and listener functions were not identified. It is very likely that speaker as own listener types of learn units are detectable also, although we have not formally tested for them except in our studies on print control that resulted in students acquiring self administration of learn units (Marsico, 1998).
We agree with Horne and Lowe (1996) that a speaker as own listener interchange occurs in the phenomenon that they identified as naming. Naming occurs when an individual hears a speaker emit a tact, and that listener experience allows the individual to emit the tact in a speaker function without direct instruction and further to respond as a listener without direct instruction. Horne and Lowe (1996) identified the phenomenon with typically developing children. Naming is a basic capability that allows children to acquire verbal functions by observation. It is a bi-directional speaker listener episode.
But what if the child does not have the repertoire? For example, matching, pointing to (both listener responses, although the point-to is a pure listener response), tacting, and responding intraverbally to multiple controls for the same stimulus (the speaker response as an impure tact) are commonly independent at early instructional stages. This is the case because, although the stimulus is the same, the behaviors are very different. The child learns to point to red but does not tact (i.e., does not say "red" in the presence of red objects), or tacts but does not respond intraverbally to "What color?" This, of course, is a phenomenon not understood well by linguists because they operate on the assumption that understanding is an automatic given--a human example of generative verbal behavior, if you will. It is a source of many problems in learning for typically developing and non-typically developing children, as well as for college students who demonstrate differences in their responses to multiple-choice questions (selection responding) versus their responses to short answer or essay questions (production responding). At some point children can learn a match or point-to response and can emit a tact or intraverbal response without direct training. This is not, however, automatic for some children. Thus, we asked ourselves this question: If naming were not in a child's repertoire, could it be taught?
Induction of One Component of Naming. Greer et al. (2005a) found that one could isolate experimentally a particular instructional history that led to naming for 2-dimensional stimuli (pictures) in children who did not initially have the repertoire. After demonstrating that the children did not have the repertoire for tacts, we provided a multiple exemplar instructional intervention with a subset of stimuli involving rotating match, point-to, tact, and intraverbal responding to stimuli until the children could accurately emit all of the responses related to the subset. We then returned to the initial set, and a novel set as well, and showed that the untaught speaker and listener repertoires had emerged.
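A schematic of the multiple exemplar rotation used in this kind of intervention may help. In the sketch below, the four response topographies come from the text, while the rotation helper and the probe bookkeeping are illustrative assumptions rather than the exact protocol of the study cited above.

```python
from typing import Iterable, List, Sequence, Tuple

RESPONSE_TOPOGRAPHIES: Sequence[str] = ("match", "point_to", "tact", "intraverbal")


def mei_rotation(stimuli: Iterable[str],
                 topographies: Sequence[str] = RESPONSE_TOPOGRAPHIES) -> List[Tuple[str, str]]:
    """One instructional block: each stimulus is taught across every response topography,
    with the topography order shifted from stimulus to stimulus so that no fixed sequence
    is always paired with a given stimulus."""
    trials: List[Tuple[str, str]] = []
    for i, stimulus in enumerate(stimuli):
        shift = i % len(topographies)
        rotated = list(topographies[shift:]) + list(topographies[:shift])
        trials.extend((stimulus, topo) for topo in rotated)
    return trials


# Instruction proceeds on the training subset until criterion; probes with the original
# and novel stimulus sets then test whether untaught speaker and listener responses
# have emerged (the evidence for naming).
training_block = mei_rotation(["picture_A", "picture_B", "picture_C"])
```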
These data suggested that the acquisition of naming, or one component of naming (i.e., going from listener to speaker), could be induced with multiple exemplar experiences. Naming is a generative verbal repertoire that Catania (1998) has called a "higher order class." Relational Frame Theorists described this particular higher order operant as an instance of transformation of stimulus function (Hayes et al., 2000). Skinner referred to the phenomenon as responding in different media to the same stimulus (i.e., thematic grouping), and Relational Frame Theorists provided feasible environmental sources for this and related phenomena (i.e., multiple exemplar experiences). That is, a particular response to a single stimulus or category of stimuli, when learned either as a listener repertoire or as a speaker repertoire, is immediately available to the individual as a response without direct instruction once the individual has stimulus transformation across speaker and listener functions. We found that the naming repertoire emerged as a function of specific instructional experiences. This represents another case of the emergence of generative verbal behavior that is traceable to environmental circumstances. Fiorile and Greer (2006) replicated this finding. Naming also represents the acquisition of one of the speaker-as-own-listener stages. When children have acquired it they have new verbal capabilities. Other types of generative behavior are traceable to multiple exemplar experiences, as we will discuss later.
Induction of Untaught Irregular and Regular Past Tense Responding. Still another case of speaker-as-own-listener repertoires probably occurs in the emission of verb endings colloquially associated with the cliche "kids say the darnedest things" (Pinker, 1992). We recently found that we could evoke untaught but correct usage of regular past tense forms, and incorrect but "spontaneous" emission of irregular forms (i.e., "he singed last night"), as a result of multiple exemplar instruction with young children with developmental disabilities who could not emit either regular or irregular novel past tense forms without direct instruction (Greer & Yuan, 2004). The children learned to emit novel regular past tense forms without direct instruction, and this abstraction was extended to irregular verbs. That is, they emitted incorrect irregular forms such as "he singed," as do young typically developing children. In a related study, Speckman (2005) found that multiple exemplar experiences also resulted in the emission of untaught suffixes as autoclitic frames for tacts. However, it is important to recognize that Pinker (1999) argues that the fact that children begin to use the correct irregular forms at some point, and stop using the incorrect forms, reflects a more important capability. He argues that there is no direct instruction leading to this revision in verb usage by typically developing children. But just as the initial incorrect usage has been traced to a sufficient set of experiences, it is possible that there are incidental sources of experience that make this change possible. We suspect that multiple exemplar experiences could induce this capability too, although further research remains to be done.
Milestones of Reading, Writing, Self-Editing: Extensions of the Speaker and Listener Repertoires
Reading involves textual responding (seeing a printed word and saying the word) and matching various responses to the text as comprehension (printed stimulus to picture or action, to the spoken sound, and all of the permutations and combinations of these relations) (Sidman, 1994). At first glance, the reader stage appears to be simply an extension of the listener repertoire; however, on closer scrutiny, reading is necessarily an advanced speaker-as-own-listener repertoire because the reader must listen to what is read. Reading consists of speaker-listener relationships under the control of print stimuli, actions, or pictures. Textual responding requires effortless rates of responding to print stimuli in order to "hear" the spoken word. After all, it was only after the Middle Ages that we began to read silently, and many religious and other ancient cultural practices still adhere to ceremonies in which one person reads aloud to an audience while the audience views the text.
The capacity to hear what one reads is important because the acoustical physical properties of sound allow more "bits" to be transmitted by sound than is possible with signs. For example, children who are deaf from birth have extreme difficulty developing reading comprehension beyond Grade 6 (Karchmer & Mitchell, 2003). There are special auditory properties of speech that allow a great deal of information, or bits, to be used for the benefit of the reader (aesthetic or functional), or at least this was the case before computers. Good phonetic instruction results in children textually emitting untaught combinations of morphemes, and if those words are in their listener repertoire they can comprehend (see Becker, 1992, for the relevant research on multiple exemplar instruction and the emission of abstracted textual responses to untaught morphemes). However, even if a child can respond textually and thereby emit an accurate response to printed stimuli, if she does not have listener comprehension, the child "will not understand" what she has read (i.e., the child will be unable to match the sounds to a picture or action). We can textually respond to foreign language print aloud and have no idea what we are saying. Thus, the listener component is key. For example, adolescents with multiple-year delays in their reading achievement may not comprehend because they cannot emit a textual response to a particular word or group of words, but once they hear a spoken version they immediately comprehend, because their listener vocabulary exceeds their textual repertoire. The listener component of reading is as important as the textual speaking component. Thus, a reader must be a reader-as-own-listener, so to speak.
There is still a more basic component of reading that we identify as conditioned reinforcement for observing print and pictures in books. Tsai & Greer (2006) found that when they conditioned books such that 2- and 3-year-old children chose to look at books in free time, with toys as alternate choices, the children required significantly fewer learn units to acquire textual responses. The book stimuli selected out the children's observing responses, and once the children were observing they were already closer to acquiring print stimuli as discriminative stimuli for textual responses. Thus, an early predictor of children's success in textual responding appears to be conditioned reinforcement for observing book stimuli. Conditioned reinforcement for books may constitute a new capability. We currently also believe that pre-listener children who do not orient toward speakers and who are having listening and speaking difficulties may need to have unfamiliar and familiar adult voices acquire conditioned stimulus control for observing (DeCasper & Spence, 1987). This too may be a crucial stage in the acquisition of listener repertoires.
Writing is a separate behavior from reading and, like the repertoire of speaking, represents a movement up the verbal scale. But writing from a functional verbal perspective requires that the writer affect the behavior of the reader; that is, writers must observe the effects of their writing and in turn modify their writing until it affects the behavior of the reader. In the case of technical writing, the writer must provide technical information that affects the reader's behavior, ranging from influencing a shopper through the provision of a shopping list to the provision of an algorithm that affects complex scientific decisions. Writing, as in the case of speaking, needs to be under the control of the relevant establishing operations if the writing is to be truly verbal. In several experiments we provided establishing operations for writing for students whose writing did not affect the behavior of the reader, using a tactic we call writer immersion. In the writer immersion procedure, all communication is done in written form for extended periods throughout the day. Written responses are revised until the reader responds as the writer requires. This procedure resulted in functionally effective writing, measured in effects on the behavior of readers, and in improvements in the structural components of writing for the writer (grammar, syntax, vocabulary, punctuation, spelling) (Greer, Gifaldi, & Pereira, 2003a; Keohane, Greer, & Mariano-Lapidus, 2004; Jadlowski, 2000; Madho, 1997; Reilly-Lawson & Greer, 2006). The experience taught the students to write such that they read as the target readers would read. The editing experience appears to evoke writer-as-own-reader outcomes of self-editing, not unlike speaker-as-own-listener (Jadlowski, 2000). This repertoire then appears to be an advanced speaker-as-own-listener stage--one that requires one to read what one writes from the perspective of the target audience whose behavior the writer seeks to influence. Thus, like the reader function, the writer function builds on the speaker as own listener. Some individuals have difficulties in writing and reading that are probably traceable to missing components of the speaker, listener, or speaker-as-own-listener repertoires.
Complex Verbally Governed and Verbally Governing Behavior
Technical Writing. Another key component of the complex cognitive repertoires of individuals involves reading or being verbally governed by print for technical outcomes. Marsico (1998) found that teaching students to follow scripts under conditions that allowed the investigators to observe the control of the print over the students' responses resulted in students "learning to learn" new concepts in math and more complex reading repertoires by acquiring verbally governed responding from print sources. This repertoire allowed the students to be verbally governed by print. As this repertoire becomes more sophisticated it leads to the more complex repertoire of solving complex problems from algorithms as in the case of the following of decision protocols. Keohane and Greer (2005) showed that teacher scientists could perform complex data decision steps using algorithms based on the verbal behavior of the science, and this new repertoire resulted in significant improvements in the outcomes of the teachers' students. Verbal rules guided measurable responses involving data analysis, complex strategic analyses, and tactical decisions that were implemented with the teachers' students.
Nuzzolo-Gomez (2002) found that teachers who received direct learn units on describing tactics, or observed other teachers receive learn units on accurately describing tactics, required significantly fewer learn units to teach their children to achieve instructional objectives. Observations showed that the teachers' instruction was reliably driven by the verbal descriptions of the tactics they learned by direct or indirect instruction. These studies are analyses of the verbal behavior of scientists and the verbal stimulus control involved in either scientific complex problem solving repertoires suggested by Skinner (1957) and demonstrated in Keohane & Greer (2005), or the control of verbal behavior about the science over teacher performance as identified in Nuzzolo-Gomez (2002). We argue that these studies investigated observable responses that are both verbal and nonverbal and that such responses are directly observed instances of thinking.
While neuroscientists could probably locate electrical activity in the brain associated with our putative thinking responses, it is only the behavior outside the skin that distinguishes the electrical activity as thinking as opposed to some other event that might be correlated with the activity. Verbal stimuli control the complex problem solving, not the electrical activity. The electrical activity, although interesting, may be necessary, and important, but is not thinking per se. One might argue that the electrical activity is light in a black box; although we see within "the black box" we do not see outside of the black box. This is an interesting reversal of the black box puzzle. If the electrical activity were to begin before the relevant contingencies in the environment were to be in place the problem in the environment would not be solved.
One of the key components in writing is the process of spelling. Spelling involves two different and initially independent responses: (1) saying the letters for a dictated word and (2) writing the letters. At some point we do emit an untaught response after learning a single one of these behaviors (see Skinner, 1957/1992, p. 99). How does a single stimulus (i.e., hearing the word) come to control these two very different behavioral topographies of writing and orally saying the letters? Recently we found that for children who initially could not perform the untaught function, providing multiple exemplar instruction for a subset of words across the two responses under a single audited vocal stimulus resulted in these students acquiring the repertoire with novel stimuli (Greer et al., 2004c). Like the transformation of establishing operations for mands and tacts, and the transformation of stimulus functions across speaker and listener in naming, the transformation of writing and saying in the spelling repertoire is still another environmental source for generative verbal behavior as an overarching operant or a higher order operant (Catania, 1998; Hayes et al., 2000). These repertoires consist of learned arbitrary relations between listening, speaking, and writing. It is not too far-fetched to infer that typically developing children acquire this joint stimulus control across independent responses as higher order operants or relational frames through multiple exemplar experiences. Such multiple exemplar experiences, involving the rotation of writing and saying opportunities, may occur incidentally rather than as a result of the programmed experiences we provided our children. Once the child has transformation of stimulus control over written and spoken spelling, only a single response need be taught.
In related research, Gautreaux, Keohane, & Greer (2003) found that multiple exemplar instruction also resulted in transformation of selection and production topographies in geometry. That is, middle school children who could not go from multiple-choice responding to production responding prior to multiple exemplar instruction did so after an instructional history was created by multiple exemplar instruction across a subset of selection and production experiences. This study highlighted the difficulties experienced by some older children that may be due to a lack of prior verbal instructional histories. The replacement of missing verbal capabilities may be the key to solving instructional difficulties experienced later in life by individuals as they encounter more complex subjects. When an individual has difficulty with aspects of reading and writing, it is possible that the remediation of the difficulty only truly occurs when the missing capability is put in place. In effect, such individuals have a missing or inadequate verbal developmental cusp. Inducing that cusp may solve the learning problems.
Aesthetic Writing. In an earlier section we described writing repertoires that were of a technical nature. Aesthetic writing has a different function than technical writing (Skinner, 1957). Aesthetic writing seeks to affect the emotions of the reader. To date, little empirical work has been accomplished with the aesthetic writing repertoire. A critical, if not the most basic component of aesthetic writing, is the writer's use of metaphors as extended tacts. Meincke (2005) and Meincke, Keohane, Gifaldi, and Greer (2003) identified the emergence of novel metaphorical extensions resulting from multiple exemplar instruction. This effort points to the importance of isolating and experimentally analyzing experiential components of aesthetic writing and suggests the role of metaphorical comprehension in reading for aesthetic effects. This also suggests that rather than teaching the aesthetics of reading through literary analysis as an algorithm, a student should have the relevant metaphoric experiences and perhaps these may be pedagogically simulated. It is likely that these metaphoric experiences provide the basis for the aesthetic effects for the reader. In order for the exchange to occur the target audience for the writer must have the repertoires necessary to respond to the emotional effects. Of course, the analysis of aesthetic writing functions is probably more complex than the analysis of technical repertoires, but we believe empirical analyses like the one done by Meincke et al. are becoming increasingly feasible. If so, the aesthetic and functional writer and reader repertoire may be revealed as new stages of verbal behavior.
From Experimental Effects to a Theory of Verbal Development
We believe we have identified several verbal repertoires that are key in children's development of successively complex repertoires of verbal behavior. Providing several of these repertoires to children who did not have them allowed these students to advance in their cognitive, social, technical, and aesthetic capabilities. As a result of this work we were increasingly persuaded that these levels of verbal capabilities did, in fact, represent empirically identifiable developmental cusps.
For our children the capabilities that they acquired were not tied to tautological relationships associated with age (Baer, 1970; Bijou & Baer, 1978; Morris, 2002). Age may simply provide a coincidental relation between experiences that bring about verbal capabilities and the probabilities of increased opportunities for those experiences. Hart and Risley (1996) showed that impoverished children who had no native disabilities, but who had significantly fewer language experiences than their better-off peers, demonstrated significant delays by the time they reached kindergarten. When children with these deficits in experience with language continued in schools that did not or could not compensate for their sparse vocabulary, these children were diagnosed as developmentally disabled by grade 4 (Greenwood, Delquadri, & Hall, 1984). It is not too farfetched to suggest that the absence of the kinds of experiences necessary to evoke the higher order verbal operants or cusps that we have identified may also be part of the reason for these delays. We suggest that incidental multiple exemplar experiences provide the wherewithal for most typically developing children to seamlessly acquire the verbal milestones we described, probably because they have both the environmental experiences and the neural capabilities (Gilic, 2004). For children without native disabilities who lack multiple exemplar experiences (Hart & Risley, 1996), as well as children with native disabilities who lack the necessary verbal capabilities, intensive multiple exemplar instruction has induced missing repertoires (Nuzzolo-Gomez & Greer, 2004). Such experiences probably result in changes in behavior both within and outside of the skin. Indeed, biological evidence suggests that "DNA is both inherited and environmentally responsive" (Robinson, 2004, p. 397; also see Dugatkin, 1996, for research on the influence of the environment on changes in genetically programmed behavior affected by environmental events). What may be an arbitrary isolation of behavior beneath and outside the skin may dissolve with increased research on the environmental effects on both types of behavior.
Our induction of these repertoires in children who did not have them prior to instruction suggests that it is not just age (time) but particular experiences (i.e., environmental contingencies, including contingencies that evoke higher order operants) that make certain types of verbal development possible, at least for the children that we studied. Intensive instruction magnified or exaggerated these experiences and provided our children with the wherewithal (i.e., verbal developmental cusps) to achieve new verbal capabilities. We speculate also that the induction of these verbal capabilities in children who do not have them prior to special experiences creates changes in neural activity. Of course, a test of this is the real challenge facing developmental neuroscience (Pinker, 1999). A joint analysis using the science of verbal behavior combined with the instrumentation of the neurosciences might prove very useful in assisting children. Incidentally, such an analysis might also act to enrich academic debate towards more useful outcomes.
Tables 1 and 2 showed the levels of verbal functions for the pre-listener through the early reader stages in summary form. We described the evidence that has proved useful in our efforts to induce and expand progressively sophisticated verbal functions. The capabilities that we addressed were originally identified based on the responses of individual children; specifically, they were based on our empirical tests for the presence or absence of the repertoires for individual children. In our educational work, when a particular repertoire was missing, we applied the existing research-based tactics to provide the child with the repertoire. When we encountered children for whom the existing tactics were not effective, we researched new tactics or investigated potential prerequisite repertoires and related experiences that appeared to be missing for the child. The searches for possible prerequisite repertoires led to the identification of several subcomponents which, when taught, led to the emergence of verbal capabilities that were not present prior to our having provided the prerequisite instructional experience.
Summary of Identified and Induced Verbal Capabilities
We continue to locate other prerequisites and believe that there are many others that remain to be identified. Examples of rudimentary verbal functions that have been identified in the research include: (a) the emergence of better acquisition rates across all instructional areas as a function of teaching basic listening (Greer et al., 2005a), (b) the induction of parroting (Sundberg et al., 1996) and then echoics that led to independent mand and tact functions (Yoon, 1996), and relevant autoclitics, for children with no speech or other verbal functions (Ross & Greer, 2003; Tsiouri & Greer, 2003), (c) transformation of establishing operations across the mand and tact functions for children for whom a form taught in one function could not be used in an untaught function prior to multiple exemplar instruction (Nuzzolo & Greer, 2004), (d) the identification of interlocking speaker-as-own-listener operants in self-talk with typically developing children (Lodhi & Greer, 1989), (e) the induction of conversational units with children who had no history of peer conversational units (Donley & Greer, 1993), (f) the induction of naming in children who did not have naming prior to multiple exemplar instructional experience (Fiorile, 2005; Fiorile & Greer, 2006; Greer et al., 2005b), (g) the emission of untaught past tenses for regular and irregular verbs as a function of multiple exemplar instruction (Greer & Yuan, 2004), (h) the emission of untaught contractions, morphemes, and suffix endings as a function of multiple exemplar experiences or having children tutor using multiple exemplar experiences (i.e., observational learning through multiple exemplars) (Greer et al., 2004a; Speckman, 2004), (i) faster acquisition rates for textual responses as a function of conditioning books as preferred stimuli for observing (Longano & Greer, 2006; Tsai & Greer, 2003), and (j) the induction or expansion of echoic responding as a function of the acquisition of generalized auditory matching (Chavez-Brown, 2004).
The more advanced writer, writer-as-own-reader, or self-editing milestones are key complex cognitive repertoires. Research in this area includes: (a) teaching more effective writer effects on readers and structural responses of writing as a function of establishing operations for writing (Madho, 1997; Greer & Gifaldi, 2003; Reilly-Lawson & Greer, 2006), (b) the induction of rule-governed or verbally governed responding and its effects on the verbal stimulus control of algorithms (Keohane & Greer, 2005; Marsico, 1998; Nuzzolo-Gomez, 2002), (c) the role of multiple exemplar instruction in the emergence of metaphors (Meincke et al., 2003), (d) transformation of stimulus function across vocal and written responding (Greer et al., 2004c), and (e) the acquisition of joint stimulus control across selection and production topographies (Gautreaux et al., 2003). These more complex repertoires appear to build on the presence of speaker-as-own-listener capabilities.
While we are not ready to declare emphatically that the capabilities we have identified experimentally, or by extrapolation from experiments, are definitively verbal developmental stages, the evidence to date shows that they are useful for instructional functions. Furthermore, they suggest possible natural fractures in the development of verbal function (4). For typically developing children, these fractures may occur as a result of brief experiences with exemplars. For some typically developing 2-year-old children that we have studied, simply having a few experiences with exemplars going from listener to speaker, followed by single exemplars going from speaker to listener, resulted in bidirectional naming for 3-dimensional stimuli that they did not have prior to those separate and juxtaposed experiences (Gilic, 2004). While our children with language delays required the rapid rotation across listener and speaker exemplars to induce naming, typically developing children may need only the incidental rotation of speaker and listener experiences with single stimuli. Now that these generative or productive verbal capabilities have been traced to experiences for the children we have studied, the claim by some (Pinker, 1999) that productive or generative verbal capabilities are not traceable to learning experience is no longer credible.
Some of the research we described is not yet published and our references include papers presented at conferences or unpublished dissertations not yet submitted for publication. Thus, these are early days in our work on some of the stages. But it is important to note also that we have been on a quest for the last 20 years to remediate learning problems based on verbal behavior deficits in children with and without disabilities. The quest has moved forward based on progressively more complex strategic analyses as we stumbled on what we now believe may be developmental milestones in verbal behavior. We have replicated most of the effects we have identified with numerous children in our CABAS schools in the USA, England, and Ireland (Greer & Keohane, 2004; Greer, Keohane, & Healey, 2002). Thus, we believe that the evidence is robust and we hope that it can be useful to behavior analysts, neuroscientists, and linguists interested in a thorough analysis of the evolution of verbal behavior in children's development.
We have also speculated on the cultural evolution of verbal functions for our species relative to our proposed verbal developmental scheme (i.e., the role of cultural selection). Of course, theories on the evolution of language are so extensive that some linguistic societies have banned their proliferation; yet anthropologists and linguists are now suggesting there is new evidence to support the evolution of language (Holden, 2004). Some linguistic anthropologists may find an account of the cultural selection of verbal operants and higher order verbal operants useful. It is even possible that the capacity for higher order operants and relational frames constitutes that which has heretofore been attributed to a universal grammar. Speaker and listener responses could have evolved from basic verbal operants to interlocking speaker and listener responding between individuals and within the skin of individuals (self-talk and naming)--an evolution made possible by our anatomical and physiological capacities to acquire higher order operants combined with cultural selection. Moreover, reading and writing functions probably evolved as extensions of the basic speaker and listener functions; without those functions, reading and writing would not have been possible, at least in the way they have evolved for the species.
"The human species, at its current level of evolution, is basically verbal, but it was not always so. ... A verbal behavior could have arisen from nonverbal sources and its transmission from generation to generation, would have been subject to influences which account for the multiplication of norms and controlling relations and the increasing effectiveness of verbal behavior as a whole." (Skinner, 1992, p.470)
Speaker/writer operants and listener/reader responses constitute an important, if not the most important, aspect of human behavior as adaptation to what is increasingly a verbal environment. Simply speaking, verbal behavior analysis is the most important subject of a science of behavior. We hope it is not too presumptuous of us to suggest that verbal behavior analysis can contribute to a developmental psychology that treats environmental contributions as seriously as it treats the non-environmental contributions. After all, biology has come to do so (Dugatkin, 1996; Robinson, 2004).
While we can simulate human listener and human speaker functions with nonhuman species (Epstein, et al., 1980; Savage-Rumbaugh et al., 1978), the simulation of naming and other speaker-as-own-listener functions with nonhuman species remains to be demonstrated. Premack (2004) argues from the data that nonhumans lack the capacity for recursion. "Recursion makes it possible for words in a sentence to be widely separated yet be dependent on one another." (Premack, 2004, p. 320). We suggest that recursion may have been made possible by the evolution of speaker-as-own-listener capabilities in humans as a function of both neural capabilities and cultural selection. Premack (2004) also presents evidence that teaching is a strictly human endeavor. "Unlike imitation, in which the novice observes the expert, the teacher observes the novice--and not only observes, but judges and modifies." (Premack, 2004, p. 320; D. Premack & A. Premack, 2003). This describes the interaction we have characterized as taking place in a learn unit. The conversational unit differs from the learn unit in that the conversational unit requires reciprocal observation. Observational repertoires like those Premack (2004) described may be fundamental components that underlie and presage the evolution of nonverbal to verbal behavior.
While observation has been studied as a phenomenon, few if any studies have sought the possible environmental source for observational learning. We argue that observational learning differs from other indirect effects on behavior in that it results in the acquisition of new operants. Other types of observational effects on behavior result in the emission of operants that were already in the observer's repertoire. The kind of behavior change identified by Bandura (1986) was most likely of the latter sort, since the presence or absence of the operants was not determined prior to the observational experience. Imitation results from a history that reinforces correspondence between the imitator's and a model's behavior.
Some children do not have observational learning or have only weak observational repertoires. In cases where observational learning has been missing, we have induced it by providing certain experiences. It may also be that children do not acquire observational learning until they have had certain experiences. In one study, we increased observational learning as a function of having individuals serve as tutors using learn units that required them to reinforce or correct the responses of their tutees. It was the application of the learn unit per se, specifically the consequence component, that produced the new observational repertoire (Greer, et al., 2004a). In another case, with children who did not learn by observing peers, we taught them to monitor the learn unit responses of their peers and observational learning emerged (Greer et al., 2004b; Pereira-Delgado, 2005).
This observing phenomenon involves a kind of consequent benefit similar to what the listener gains--specifically the extension of sensory reinforcement. Perhaps the teaching capacity involving reinforcement of the observed behavior of the learner is related to particular listener capabilities, while the recursion phenomenon is related to the interlocking speaker-listener capability. It is the interlocking speaker-listener and speaker-as-own-listener functions that make the more sophisticated milestones of verbal function possible. These functions make thinking, problem solving, and true social discourse possible. They also support the development of repertoires compellingly described in relational frame theory (Hayes, et al., 2000). Speech and, we argue, the compression of information through auditory stimuli in the human species make possible the more advanced speaker-as-own-listener or textual-responder-as-own-listener repertoires and perhaps, by extension, the phenomenon of recursion. Regardless of whether our interpretations of the evidence are compelling, the evidence does reveal that a more complete picture of verbal behavior is evolving and that the role of the listener, and particularly the interrelationship between speaker and listener, is key to further advances in our understanding of verbal functions and their development within the individual.
Verbal Behavior Analysis, Comparative Psychology and the Neuroscience of Language
None of the work that we have described, or related work in verbal behavior, obviates the role of genetically evolved brain functions as the neurology correlated with the presence of our suggested milestones of verbal behavior and the generative aspects of behavior cum language. The research in verbal behavior does not question, or eliminate, the importance or usefulness of neuropsychological research. Conversely, the work in the neuroscience of language does not obviate the environmental verbal functions of language as behavior per se and as higher order operants that are increasingly identified in verbal behavior analysis. They are simply different sciences involved with different aspects of language. On the one hand, work in verbal behavior analysis is beginning to identify key environmental experiences in cultural selection and to suggest how neuropsychology can make the journey from MRI analyses to real verbal function--behaving with language outside of the skin. On the other hand, the work in the neurosciences of language is beginning to identify the behavior beneath the skin. It is compelling to consider the mutual benefit of relating these efforts to obtain a more comprehensive understanding of language. Most importantly, combining the evidence and types of inquiry from both fields can help us teach a few more children to be truly verbal.
Behavior analysts have simulated language functions in non-humans (Epstein, et al., 1980; Savage-Rumbaugh, et al., 1978) and comparative psychologists have identified differences between the verbal behavior of primates and the verbal behavior of humans (Premack, 2004). Non-human species have not demonstrated speaker-as-own-listener status. However, research in verbal behavior analysis has led to the acquisition of listener repertoires, speaker repertoires, speaker-as-own-listener repertoires, and generative verbal behavior in humans who did not have those repertoires prior to special environmental experiences. Perhaps work in verbal behavior analysis with individuals who can acquire verbal repertoires as a result of special interventions provides a bridge. While our particular work is driven by applied concerns, it may have some relevance to the basic science of behavior, comparative psychology, and the neuroscience of language.
Reprinted with permission from the 2005 issue of Behavioral Development, 1, 31-48. The references were updated from the original publication, relevant quotations were added, and minor editorial changes were made.
We would like to dedicate this paper to the memory of B. F. Skinner, who would have been 100 years old at its writing. His mentorship and encouragement to the first author served to motivate our efforts to master his complex book and engage in our experimental inquiries. We are also indebted to others who kept verbal behavior alive in times when the critics were harsh and the audience was narrow. Among these are Jack Michael, Charles Catania, Ernest Vargas, Julie Vargas, Mark Sundberg, U. T. Place, Kurt Salzinger, Joe Spradlin, Joel Greenspoon, and the children we worked with who needed what verbal behavior could offer in order for them to become social and more cognitively capable. While the audience remains narrow, we are confident that the effects of research in verbal behavior will select out a larger audience.
Baer, D. M. (1970). An age-irrelevant concept of development. Merrill-Palmer Quarterly, 16, 238-245.
Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.
Barnes-Holmes, D., Barnes-Holmes, Y., & Cullinan, V. (2001). Relational frame theory and Skinner's Verbal Behavior. The Behavior Analyst, 23, 69-84.
Becker, B.J. (1989). The effect of mands and tacts on conversational units and other verbal operants. (Doctoral dissertation, 1989, Columbia University). Abstract from: UMI Proquest Digital Dissertations [on-line]. Dissertations Abstracts Item: AAT 8913097.
Becker, W. (1992). Direct instruction: A twenty-year review. In R. West & L. Hamerlynck, Design for educational excellence: The legacy of B. F. Skinner (pp.71-112). Longmont CO, Sopris West.
Bijou, S., & Baer, D. M. (1978). A behavior analysis of child development. Englewood Cliffs, NJ: Prentice-Hall.
Carr, E.G. & Durand, V.M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111-126.
Catania, A.C. (1998). Learning. Englewood Cliffs, N.J: Prentice-Hall.
Catania, A. C. (2001). Three types of selection and three centuries. Revista Internacional de Psicologia y Terapia Psicologica, 1(1), 1-10.
Catania, A. C., Mathews, B. A., & Shimoff, E. H., (1990). Properties of rule governed behavior and their implications. In D. E. Blackman and H. Lejeune (Eds.) Behavior analysis in theory and practice, (pp. 215-230). Hillsdale NJ: Erlbaum.
Chavez-Brown, M. (2004). The effect of the acquisition of a generalized auditory word match-to-sample repertoire on the echoic repertoire under mand and tact conditions. Dissertation Abstracts International, 66(01), 138A. (UMI No. 3159725).
Chavez-Brown, M. & Greer, R. D. (2003, July) The effect of auditory matching on echoic responding. Paper Presented at the First European Association for Behavior Analysis in Parma Italy.
Chomsky, N. (1959). A review of B.F. Skinner's Verbal Behavior. Language, 35, 26-58.
Chomsky, N. & Place, U. (2000). The Chomsky-Place Correspondence 1993-1994, Edited with an introduction and suggested readings by Ted Schoneberger. The Analysis of Verbal Behavior, 17, 7-38.
Chu, H.C. (1998). A comparison of verbal behavior and social skills approaches for development of social interaction skills and concurrent reduction of aberrant behaviors of children with developmental disabilities in the context of matching theory. Dissertation Abstracts International, 59(06), 1974A. (UMI No. 9838900).
Culotta, E., & Hanson, B. (2004). First words. Science, 303, 1315
Decasper, A. J., & Spence, M. J. (1987). Prenatal maternal speech influences on newborn's perception of speech sounds. Infant Behavior and Development, 2, 133-150.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the brain. New York: W. W. Norton & Company.
Donley, C. R., & Greer, R. D. (1993). Setting events controlling social verbal exchanges between students with developmental delays. Journal of Behavioral Education, 3(4), 387-401.
Dugatkin, L. A. (1996). The interface between culturally-based preference and genetic preference: Female mate choices in Poecilia reticulata. Proceedings of the National Academy of Sciences, USA, 93, 2770-2773.
Emurian, H. H., Hu, X., Wang, J., & Durham, D. (2000). Learning JAVA: A programmed instruction approach using applets. Computers in Human Behavior, 16, 395-422.
Epstein, R., Lanza, R. P., & Skinner, B. F. (1980). Symbolic communication between two pigeons (Columba livia domestica). Science, 207(4430), 543-545.
Fiorile, C.A. (2004). An experimental analysis of the transformation of stimulus function from speaker to listener to speaker repertoires. Dissertation Abstracts International, 66(01), 139A. (UMI No. 3159736).
Fiorile, C. A. & Greer, R. D. (2006). The induction of naming in children with no echoic-to-tact responses as a function of multiple exemplar instruction. Manuscript submitted.
Gautreaux, G., Keohane, D. D., & Greer, R. D. (2003, July). Transformation of production and selection functions in geometry as a function of multiple exemplar instruction. Paper presented at the First Congress of the European Association for Behavior Analysis, Parma, Italy.
Gewirtz, J. L., Baer, D. M., & Roth, C. L. (1958). A note on the similar effects of low social availability of an adult and brief social deprivation on young children's behavior. Child Development, 29, 149-152.
Gilic, L. (2005). Development of naming in two-year-old children. Unpublished doctoral dissertation, Columbia University.
Greenwood, C. R., Delquadri, J. C., & Hall, R. V. (1984). Opportunity to respond and student achievement. In W. L. Heward, T. E. Heron, J. Trapp-Porter, & Hill, D. S., Focus on behavior analysis in Education (pp. 58-88). Columbus OH: Charles Merrill.
Greer, R. D. (1987). A manual of teaching operations for verbal behavior. Yonkers, NY: CABAS and The Fred S. Keller School.
Greer, R. D. (2002). Designing teaching strategies: An applied behavior analysis systems approach. New York: Academic Press.
Greer, R. D., Chavez-Brown, M. Nirgudkar, A. S., Stolfi, L., & Rivera-Valdes, C. (2005a). Acquisition of fluent listener responses and the educational advancement of young children with autism and severe language delays. European Journal of Behavior Analysis, 6 (2), xxxxx-xxx.
Greer, R. D., Gifaldi, H., & Pereira, J. A. (2003, July). Effects of Writer Immersion on Functional Writing by Middle School Students. Paper presented at the First Congress of the European Association for Behavior Analysis, Parma, Italy.
Greer, R. D., & Keohane, D. (2004). A real science and technology of teaching. In J. Moran & R. Malott, (Eds.), Evidence-Based Educational Methods (pp. 23-46). New York: Elsevier/Academic Press.
Greer, R. D., Keohane, D. D., & Healey, O. (2002). Quality and applied behavior analysis. The Behavior Analyst Today, 3(1). Retrieved December 20, 2002 from http://www.behavior-analyst-online.com
Greer, R. D., Keohane, D., Meincke, K., Gautreaux, G., Pereira, J., Chavez-Brown, M., & Yuan, L. (2004a). Key components of effective tutoring. In D. J. Moran & R. W. Malott, (Eds.), Evidence-Based Educational Methods (pp. 295-334). New York: Elsevier/Academic Press.
Greer, R. D. & McCorkle, N. P. (2003). CABAS[R] Curriculum and Inventory of Repertoires for Children from Pre-School through Kindergarten, 3rd Edition. Yonkers, NY: CABAS[R]/Fred S. Keller School. (Publication for use in CABAS[R] Schools Only)
Greer, R.D., McCorkle. N. P., & Williams, G. (1989). A sustained analysis of the behaviors of schooling. Behavioral Residential Treatment, 4, 113-141.
Greer, R. D., & McDonough, S. (1999). Is the learn unit the fundamental measure of pedagogy? The Behavior Analyst, 20, 5-16.
Greer, R. D., Nirgudkar, A., & Park, H. (2003, May). The effect of multiple exemplar instruction on the transformation of mand and tact functions. Paper Presented at the International Conference of the Association for Behavior Analysis, San Francisco, CA.
Greer, R. D., Pereira, J. & Yuan, L. (2004b, August). The effects of teaching children to monitor learn unit responses on the acquisition of observational learning. Paper presented at the Second International Conference of the Association of Behavior Analysis, Campinas, Brazil.
Greer, R. D., & Ross, D. E. (2004). Research in the induction and expansion of complex verbal behavior. Journal of Early Intensive Behavioral Interventions, 1(2), 141-165. Retrieved May 20, 2005 from http://www.the-behavior-analyst-today.com
Greer, R. D., & Ross, D. E. (in press). Verbal behavior analysis: Developing and expanding complex communication in children with severe language delays. Boston: Allyn and Bacon.
Greer, R. D., Stolfi, L., Chavez-Brown, M., & Rivera-Valdez, C. (2005b). The emergence of the listener to speaker component of naming in children as a function of multiple exemplar instruction. The Analysis of Verbal Behavior, 21, 123-134.
Greer, R. D. & Yuan, L. (2004, August). Kids say the darnedest things. Paper presented at the International Conference of the Association for Behavior Analysis and the Brazil Association for Behavior Medicine and Therapy.
Greer, R. D. Yuan, L. & Gautreaux, G. (2005c). Novel dictation and intraverbal responses as a function of a multiple exemplar history. The Analysis of Verbal Behavior, 21, 99-116.
Hart, B. M., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis, 8, 411-420.
Hart, B. & Risley, T. R. (1996). Meaningful differences in the everyday life of America's children. NY: Paul Brookes.
Hayes, S. C., Barnes-Holmes, D., & Roche, B. (2000). Relational frame theory: A post-Skinnerian account of human language and cognition. New York: Kluwer Academic/Plenum.
Heagle, A. I., & Rehfeldt, R. A. (2006). Teaching perspective-taking skills to typically developing children through derived relational responding. Journal of Early and Intensive Behavior Interventions, 3(1), 1-34. Available online at http://www.the-behavior-analyst-online.com
Holden, C. (2004). The origin of speech. Science, 303, 1316-1319.
Horne, P. J. & Lowe, C. F. (1996). On the origins of naming and other symbolic behavior. Journal of the Experimental Analysis of Behavior, 65, 185-241.
Jadlowski, S.M. (2000). The effects of a teacher editor, peer editing, and serving as a peer editor on elementary students' self-editing behavior. Dissertation Abstracts International, 61(05), 2796B, (UMI No. 9970212).
Karchmer, M.A. & Mitchell, R.E. (2003). Demographic and achievement characteristics of deaf and hard-of-hearing students. In Marschark, M & Spencer, P.E. (ed.) Deaf Studies, Language, and Education. Oxford, England: Oxford University Press.
Karmali, I. Greer, R. D., Nuzzolo-Gomez, R., Ross, D. E., & Rivera-Valdes, C. (2005). Reducing palilalia by presenting tact corrections to young children with autism, The Analysis of Verbal Behavior, 21, 145-154.
Keohane, D., & Greer, R. D. (2005). Teachers' use of a verbally governed algorithm and student learning. Journal of Behavioral and Consultation Therapy, 1(3), 249-259. Retrieved February 22, 2006 from http://www.the-behavior-analyst-today.com
Keohane, D, Greer, R. D. & Ackerman, S. (2005a, November). Conditioned observation for visual stimuli and rate of learning. Paper presented at the third conference of the International Association for Behavior Analysis, Beijing, China.
Keohane, D., Greer, R. D., & Ackerman, S. (2005b, November). Training sameness across senses and accelerated learning. Paper presented at the third conference of the International Association for Behavior Analysis, Beijing, China.
Keohane, D. D., Greer, R. D., Mariano-Lapidus (2004, May). Derived suffixes as a function of a multiple exemplar instruction. Paper presented at the Annual Conference of the Association for Behavior Analysis, Boston, MA.
Lamarre, J. & Holland, J. (1985). The functional independence of mands and tacts. Journal of Experimental Analysis of Behavior, 43, 5-19.
Lodhi, S. & Greer, R.D. (1989). The speaker as listener. Journal of the Experimental Analysis of Behavior, 51, 353-360.
Longano, J. & Greer, R. D. (2006). The effects of a stimulus-stimulus pairing procedure on the acquisition of conditioned reinforcement for observing and manipulating stimuli by young children with autism. Journal of Early and Intensive Behavior Interventions, 3(1), 135-150. Retrieved February 22 from http://www.behavior-analyst-online.com
Lovaas, O.I. (1977). The autistic child: Language development through behavior modification. New York: Irvington Publishers, Inc.
Lowe, C. F., Horne, P. J., Harris, D. S., & Randle, V. R.L. (2002). Naming and categorization in young children: Vocal tact training. Journal of the Experimental Analysis of Behavior, 78, 527-549.
MacCorquodale, K. (1970). On Chomsky's review of Skinner's Verbal Behavior. Journal of the Experimental Analysis of Behavior, 13, 83-99.
Madho, V. (1997). The effects of the responses of a reader on the writing effectiveness of children with developmental disorders. (Doctoral dissertation, Columbia University, 1997). Abstract from: UMI Proquest Digital Dissertations [on-line]. Dissertations Abstracts Item: AAT 9809740.
Marion, C., Vause, T., Harapiak, S., Martin, G. L., Yu, T., Sakko, G., & Walters, K. L. (2003). The hierarchical relationship between several visual and auditory discriminations and three verbal operants among individuals with developmental disabilities. The Analysis of Verbal Behavior, 19, 91-106.
Marsico, M. J. (1998). Textual stimulus control of independent math performance and generalization to reading. Dissertation Abstracts International, 59(01), 133A. (UMI No. 9822227).
McDuff, G. S., Krantz, P. J., McDuff, M. A., & McClannahan, L. E. (1988). Providing incidental teaching for autistic children: A rapid training procedure for therapists. Education and Treatment of Children, 11, 205-217.
Meincke-Mathews, K. (2005). Induction of metaphorical responses in middle school students as a function of multiple exemplar experiences. Dissertation Abstracts International, 66(05), 1716A. (UMI No. 3174851).
Meincke, K., Keohane, D. D., Gifaldi, H., & Greer, R. D. (2003, July). Novel production of metaphors as a function of multiple exemplar instruction. Paper presented at the First European Association for Behavior Analysis Congress, Parma, Italy.
Michael, J. (1993). Establishing operations. The Behavior Analyst, 16, 191-206.
Michael, J. (1982). Skinner's elementary verbal relations: Some new categories. The Analysis of Verbal Behavior, 1,1-3.
Michael, J. (1984). Verbal behavior. Journal of the Experimental Analysis of Behavior, 42, 363-376.
Morris, E. K. (2002). Age irrelevant contributions to developmental science: In remembrance of Donald M. Baer. Behavioral Development Bulletin, 1, 52-54.
Nuzzolo-Gomez, R. (2002). The Effects of direct and observed supervisor learn units on the scientific tacts and instructional strategies of teachers. Dissertation Abstracts International, 63(03), 907A. (UMI No. 3048206).
Nuzzolo-Gomez, R. & Greer, R. D. (2004). Emergence of Untaught Mands or Tacts with Novel Adjective-Object Pairs as a Function of Instructional History. The Analysis of Verbal Behavior, 24, 30-47.
Park, H. L. (2005) Multiple exemplar instruction and transformation of stimulus function from auditory-visual matching to visual-visual matching. Dissertation Abstracts International, 66(05), 1715A. (UMI No. 3174834).
Pereira-Delgado, J. A. (2005). Effects of peer monitoring on the acquisition of observational learning. Unpublished doctoral dissertation, Columbia University.
Pennisi, E. (2004, February 24). The first language? Science, 303 (5662), 1319-1320.
Pinker, S. (1999). Words and rules. New York: Perennial.
Pistoljevic, N. & Greer, R. D. (2006). The effects of daily intensive tact instruction on preschool students' emission of pure tacts and mands in non-instructional settings. Journal of Early and Intensive Behavioral Interventions, 103-120. Available online at http://www.behavior-analyst-online.org
Premack, D. (2004, January 16). Is language key to human intelligence? Science, 303, 318-320.
Premack, D. & Premack, A. (2003). Original intelligence. New York: McGraw-Hill.
Reilly-Lawson, T. & Greer, R. D. (2006). Teaching the function of writing to middle school students with academic delays. Journal of Early and Intensive Behavior Interventions, 3.1,135-150. Available online at http://www.behavior-analyst-online.org
Robinson, G. E. (2004, April). Beyond nature and nurture. Science, 304, 397-399.
Rosales-Ruiz, J. & Baer, D. M. (1996). A behavior analytic view of development (pp. 155-180). In S. M. Bijou (Ed.), New directions in behavior development. Reno, NV: Context Press.
Ross, D. E. & Greer, R. D. (2003). Generalized imitation and the mand: Inducing first instances of speech in young children with autism. Research in Developmental Disabilities, 24, 58-74.
Ross, D. E., Nuzzolo, R., Stolfi, L., & Natarelli, S. (2006). Effects of speaker immersion on the spontaneous speaker behavior of preschool children with communication delays. Journal of Early and Intensive Behavior Interventions, 3(1), 135-150. Available online at http://www.behavior-analyst-online.com
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978). Science, 201, 64-66.
Schauffler, G. & Greer, R. D. (2006). The effects of intensive tact instruction on audience-accurate tacts and conversational units. Journal of Early and Intensive Behavioral Interventions, 120-132. Available online at http://www.behavior-analyst-online.com
Schwartz, B.S. (1994). A comparison of establishing operations for teaching mands. Dissertation Abstracts International, 55(04), 932A. (UMI No. 9424540).
Selinski, J, Greer, R.D., & Lodhi, S. (1991). A functional analysis of the Comprehensive Application of Behavior Analysis to Schooling. Journal of Applied Behavior Analysis, 24, 108-118.
Sidman, M. (1994). Equivalence relations and behavior: A research story. Boston, MA: Authors Cooperative.
Skinner, B. F. (1989). The behavior of the listener. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies and instructional control (pp. 85-96). New York: Plenum.
Skinner, B.F. (1957, 1992). Verbal Behavior. Acton, MA: Copley Publishing Group and the B. F. Skinner Foundation.
Speckman, J. (2005). Multiple exemplar instruction and the emergence of generative production of suffixes as autoclitic frames. Dissertation Abstracts International, 66(01), 83A. (UMI No. 3159757).
Sundberg, M. L., Loeb, M., Hale, L., & Eigenheer, P. (2001/2002). Contriving establishing operations for teaching mands for information. The Analysis of Verbal Behavior, 18, 15-30.
Sundberg, M.L., Michael, J., Partington, J.W., & Sundberg, C.A. (1996). The role of automatic reinforcement in early language acquisition. The Analysis of Verbal Behavior, 13, 21-37.
Sundberg, M.L. & Partington, J.W. (1998). Teaching language to children with autism or other developmental disabilities. Pleasant Hill CA: Behavior Analysts, Inc.
Tsai, H., & Greer, R. D. (2006). Conditioned preference for books and accelerated acquisition of textual responding by preschool children. Journal of Early and Intensive Behavior Interventions, 3(1), 35-61. Available online at http://www.the-behavior-analyst.com
Tsiouri, I., & Greer, R. D. (2003). Inducing vocal verbal behavior through rapid motor imitation training in young children with language delays. Journal of Behavioral Education, 12, 185-206.
Twyman, J. S. (1996a). An analysis of functional independence within and between secondary verbal operants. Dissertation Abstracts International, 57(05), 2022A. (UMI No. 9631793).
Twyman, J. (1996b). The functional independence of impure mands and tacts of abstract stimulus properties. The Analysis of Verbal Behavior, 13, 1-19.
Vargas, E.A. (1982). Intraverbal behavior: The codic, duplic, and sequelic subtypes. The Analysis of Verbal Behavior, 1, 5-7.
Williams, G. & Greer, R. D. (1993). A comparison of verbal-behavior and linguistic-communication curricula for training developmentally delayed adolescents to acquire and maintain vocal speech. Behaviorology, 1, 31-46.
Yoon, S.Y. (1998). Effects of an adult's vocal sound paired with a reinforcing event on the subsequent acquisition of mand functions. Dissertation Abstracts International, 59(07), 2338A. (UMI No. 9839031).
Author Contact Information
R. Douglas Greer, Ph.D., SBA, SRS
Graduate School of Arts & Sciences and Columbia University Teachers College
Box 76 Teachers College Columbia University
525 West 120th
New York NY, 10027
Dolleen-Day Keohane, Ph.D., SBA, Asst.RS
CABAS Schools and Columbia University Teachers College
2728 Henry Hudson Parkway
Riverdale, NY 10463
(1.) For information on and the evidence base for teaching as a science in CABAS schools and the CABAS[R] System see Greer (2002), Greer, Keohane, & Healey (2002), Selinski, Greer, & Lodhi (1991), Greer, McCorkle, & Williams (1989), and http://www.cabas.com. The findings of the research we describe have been replicated extensively with children and adolescents in CABAS[R] Schools in the USA, Ireland, Argentina, and England, and we believe they are robust. A book that describes the verbal behavior research and procedures in detail is in progress for publication in 2006 (Greer & Ross, in progress).
(2.) We chose the term listener emersion because it seemed particularly appropriate. The Oxford English Dictionary 2nd Edition, Volume V describes one usage of the term emersion as follows, "The action of coming out or issuing (from concealment or confinement). Somewhat rare." (OED, p. 177) Thus, once a child has acquired the listener repertoire, the child may be said to have come out of confinement to a pre-listener status. They have acquired an essential component of what is necessary to progress along the verbal behavior continuum--a verbal behavior development cusp.
(3.) It would seem that a certain history must transpire in order for a point-to-point correspondence between a word spoken by a parent and the repetition of the word by a child to qualify as an echoic operant rather than parroting. The child needs to say the word under the relevant deprivation conditions associated with the mand or the tact and then have that echoic evolve into either a mand or a tact. Once at least one of these events transpires, the parroting can move to an echoic. While more sophisticated operants and higher order operants or relational frames are basic to many sophisticated aspects of verbal behavior, the move from parroting is probably just as complex. The acquisition of echoing is the fundamental speech component of verbal functioning. One wonders how long, and under what conditions, it took for the echoic repertoire to evolve in our species. To evoke true echoics in children who have never spoken is probably one of the major accomplishments of the behavioral sciences. Indeed, the procedures we now use in verbal behavior analysis to induce first instances of vocal verbal operants have never been tried with primates, nor has the procedure to induce parroting. However, procedures for inducing parroting and echoics and other first instances of vocal verbal behavior have been successful in developing functional vocal verbal behavior in individuals who probably would have never spoken without these procedures. Amazing! There are even more fundamental components underlying even these response capabilities and aspects of observation show rich potential (Premack, 2004).
(4.) We use the term natural fracture to differentiate numerically scaled hypothetical relations from relations that are absolute natural events, as in the determination of geological time by the identification of strata. To further illustrate our point, "receptive speech" is a hypothetical construct based on an analogy between the computer "receiving inputs" and auditory speech events. It is an analogy, not a behavior or response class. Measures of receptive behavior are scaled measures tied to that analogy, as in test scores on "receptive" speech. However, listener behavior is composed of actual natural fractures (i.e., the child does or does not respond to spoken speech by another). In still another example, operants are natural fractures, whereas an IQ is a scaled measure of a hypothetical construct. Moreover, acquisitions of higher order operants, such as the acquisition of joint stimulus control for spelling, are also natural fractures.
Table 1. Evolution of Verbal Milestones and Independence

1) Pre-Listener Status. Humans without listener repertoires are entirely dependent on others for their lives. Interdependency is not possible. Entrance to the social community is not possible.

2) Listener Status. Humans with basic listener literacy can perform verbally governed behavior (e.g., come here, stop, eat). They can comply with instructions, track tasks (e.g., do this, now do this), and avoid deleterious consequences while gaining habilitative responses. The individual is still dependent, but direct physical or visual contact can be replaced somewhat by indirect verbal governance. Contributions to the well being of society become possible since some interdependency is feasible and the child enters the social community.

3) Speaker Status. Humans who are speakers and who are in the presence of a listener can govern consequences in their environment by using another individual to mediate the contingencies (e.g., eat now, toilet, coat, help). They emit mands and tacts and relevant autoclitics to govern others. This is a significant step towards controlling the contingencies by the speaker. The culture benefits proportionately too, and the capacity to be part of the social community is greatly expanded.

4) Speaker-Listener Exchanges with Others (Sequelics and Conversational Units). a) Sequelics. Humans with this repertoire can respond as a listener-speaker to intraverbals, including impure tacts and impure mands. Individuals can respond to questions for mand or tact functions or to intraverbals that do not have mand or tact functions. The individual can respond as a speaker to verbal antecedents and can answer the queries of others such as, "What hurts?" "What do you want?" "What's that?" "What do you see, hear or feel?" One is reinforced as a listener with the effects of the speaker response. b) Conversational Units. Humans with this repertoire carry on conversational units in which they are reinforced as both speaker and listener. The individual engages in interlocking verbal operants of speaker and listener. The individual is reinforced both as a listener for sensory extensions, and also as a speaker in the effects speaking has on having a listener mediate the environment for the speaker.

5) Speaker-as-Own-Listener Status (Say-Do, Conversational Units, Naming). a) Say and Do. Individuals with this repertoire can function as a listener to their own verbal behavior (e.g., first I do this, then I do that), reconstructing the verbal behavior given by another or eventually constructing verbal speaker-listener behavior. At this stage, the person achieves significant independence. The level of independence is dependent on the level of the person's listener and speaker sophistication. b) Self-Talk. When a human functions as a reinforced listener and speaker within the same skin, they have one of the repertoires of speaker-as-own-listener. The early evidence of this function is self-talk; young children emit such repertoires when playing with toys, for example (Lodhi & Greer, 1989). c) Naming. When an individual hears a speaker's vocal term for a nonverbal stimulus as a listener and can use it both as a speaker and listener without direct instruction, the individual has another repertoire of speaker as own listener. This stage provides the means to expand verbal forms and functions through incidental exposure.
6) Reader Status. Humans who have reading repertoires can supply useful, entertaining, and necessary responses to setting events and environmental contingencies that are obtainable by written text. The reader may use the verbal material without the time constraints controlling the speaker-listener relationship. The advice of the writer is under greater reader control than the advice of a speaker for a listener; that is, one is not limited by time or distance. Advice is accessible as needed, independent of the presence of a speaker.

7) Writer Status. A competent writer may control environmental contingencies through the mediation of a reader across seconds or centuries, whether the reader is in the immediate vicinity or on a remote continent. This stage represents an expansion of the speaker repertoires such that a listener need not be present at the time or at the same location as the writer. The writer affects the behavior of a reader.

8) Writer as Own Reader: The Self-Editing Status. As writers increase their ability to read their own writing from the perspective of the eventual audience, writers grow increasingly independent of frequent reliance on prosthetic audiences (e.g., teachers, supervisors, colleagues). A more finished and more effective behavior-evoking repertoire provides the writer with wide-ranging control over environmental contingencies such that time and distance can be virtually eliminated. Writing can be geared to affect different audiences without immediate responses from the target audience.

9) Verbal Mediation for Solving Problems. A sophisticated self-editor under the verbal expertise associated with formal approaches to problem solving (e.g., methods of science, logic, authority) can solve complex problems in progressively independent fashion under the control of verbal stimuli (spoken or written). The characterization of the problem is done with precise verbal descriptions. The verbal descriptions occasion other verbal behavior that can in turn direct the action of the person to solve the particular problem. A particular verbal community (i.e., a discipline) is based on verbal expertise, and modes of inquiry are made possible.

Table 2. Verbal Milestones and Components (Does the Child Have These Capabilities?)

Pre-Listener:
* Conditioned reinforcement for voices (voices of others control prolonged auditory observation and can set the stage for visual or other sensory discriminations) (Decasper & Spence, 1987)
* Visual tracking (visual stimuli control prolonged observation) (Keohane, Greer, & Ackerman, 2005a)
* Capacity for "sameness" across senses (multiple exemplar experiences with matching across olfactory, auditory, visual, gustatory, and tactile stimuli result in capacity for sameness across senses) (Keohane, Greer, & Ackerman, 2005b)
* Basic compliance based on visual contexts and the teacher or parent as a source of reinforcement (The child need not be under any verbal control.)
Listener:
* Discrimination between words and sounds that are not words (conditioned reinforcement for voices occasions further distinctions for auditory vocal stimuli)
* Auditory matching of certain words (as a selection/listener response) (Chavez-Brown, 2005; Greer & Chavez-Brown, 2003)
* Generalized auditory matching of words (as a selection/listener response) (Chavez-Brown, 2005)
* Basic listener literacy with non-speaker responses (Greer, Chavez-Brown, Nirgudkar, Stolfi, & Rivera-Valdes, 2005)
* Visual discrimination instruction to occasion opportunities for instruction in naming (Greer & Ross, in press)
* Naming (Greer, Stolfi, Chavez-Brown, & Rivera-Valdes, 2005)
* Observational naming and observational learning prerequisites (Greer, Keohane, Meincke, Gautreaux, Pereira, Chavez-Brown, & Yuan, 2004)
* Reinforcement as a listener (A listener is reinforced by the effect the speaker has on extending the listener's sensory experience; the listener avoids deleterious consequences and obtains vicarious sensory reinforcement.) (Donley & Greer, 1993)
* Listening to one's own speaking (the listener is speaker) (Lodhi & Greer, 1989)
* Listening to one's own textual responses in joining print to the naming relation (Park, 2005)
* Listening and changing perspectives: mine, yours, here, there, empathy (extension of listener reinforcement joins speaker) (Heagle & Rehfeldt, 2006)

Speaker:
* Vocalizations
* Parroting (pre-echoic vocalizations with point-to-point correspondence; here-say joins see-do as a higher order operant), auditory matching as a production response (Sundberg, Michael, Partington, & Sundberg, 1996)
* Echoics that occur when see-do (imitation) joins hear-say (echoic) as a higher order duplic operant (Ross & Greer, 2003; Tsiouri & Greer, 2003)
* [Faulty echoics of echolalia and palilalia related to faulty stimulus control or establishing operation control] (Karmali, Greer, Nuzzolo-Gomez, Ross, & Rivera-Valdes, 2005)
* Basic echoic-to-mand function (a consequence is specified in and out of sight; here-say attains function for a few verbalizations, leading to rapid expansion of echoics for functions mediated by a listener) (Ross & Greer, 2003; Yoon, 1998)
* Echoic-to-tact function (generalized reinforcement control; the child must have conditioned reinforcement for social attention) (Tsiouri & Greer, 2003)
* Mands and tacts and related autoclitics are independent (learning a form in one function does not result in use in another without direct instruction) (Twyman, 1996a, 1996b)
* Mands and tacts with basic adjective-object pairs acquire autoclitic functions (a response learned in one function results in usage in another under the control of the relevant establishing operation) (Nuzzolo-Gomez & Greer, 2005). This transformation of establishing operations across mands and tacts was replicated by Greer, Nirgudkar, & Park (2003)
* Impure mands (mands under multiple control--deprivation plus verbal stimuli of others, visual, olfactory, tactile, gustatory stimuli) (Carr & Durand, 1985)
* Impure tacts (tacts under multiple controls--deprivation of generalized reinforcers plus verbal stimuli of others, visual, olfactory, tactile, gustatory stimuli) (Tsiouri & Greer, 2003)
* Tacts and mands emerging from incidental experience (naming and the speaker repertoires) (Fiorile, 2004; Fiorile & Greer, 2006; Greer, et al., 2005b; Gilic, 2005)
* Comparatives: smaller/larger, shorter/longer, taller/shorter, warmer/colder in mand and tact functions as a generative function (Speckman, 2005)
* Generative tense usage (Greer & Yuan, 2004)
* "Wh" questions in mand and tact function (i.e., what, who, why, where, when, which) (Pistoljevic & Greer, 2006)
* Expansion of tact repertoires resulting in greater "spontaneous" speech (Pistoljevic & Greer, 2006; Schauffler & Greer, 2006)

Speaker-Listener Exchanges with Others:
* Sequelics as speaker (Becker, 1989)
* Sequelics as listener-speaker (Becker, 1989; Donley & Greer, 1993)
* Conversational units (reciprocal speaker and listener control) (Donley & Greer, 1993)

Speaker as Own Listener:
* Basic naming from the speaker perspective (learns tact and has listener response) (Fiorile & Greer, 2006; Horne & Lowe, 1996)
* Observational naming from the speaker perspective (hears others learn tact and has tact) (Fiorile & Greer, 2006; Greer, et al., 2004b)
* Verbal governance of own speaker responses (say and do correspondence as extension of listener literacy for correspondence for what others say and nonverbal correspondence that is reinforced) (Rosales-Ruiz & Baer, 1996)
* Conversational units in self-talk (listener and speaker functions within one's own skin in mutually reinforcing exchanges) (Lodhi & Greer, 1989)

Early Reader:
* Conditioned reinforcement for observing books (Tsai & Greer, 2006)
* Textual responses: see word-say word at adequate rate, improved by prior conditioning of print stimuli as conditioned reinforcement for observing (Tsai & Greer, 2006)
* Match printed word, spoken word by others and self and printed word, spoken word and picture/object, printed word and picture/action (Park, 2005)
* Responds as listener to own textual responding (vocal verbalization results in "comprehension" if the verbalizations are in the tact repertoire, e.g., hearing tact occasions match of speech with nonverbal stimuli)

Writer:
* Effortless component motor skills of printing or typing (see-write as extension of see-do)
* Acquisition of joint stimulus control across written and spoken responding (learning one response, either vocal or written, results in the other) (Greer, Yuan, & Gautreaux, 2005)
* Writer affects the behavior of a reader for technical functions (mand, tact, autoclitic functions) (Reilly-Lawson & Greer, 2006)
* Transformation of stimulus function for metaphoric functions (word used metaphorically, such as in "she is sharp as a pin") (Meincke-Mathews, 2005; Meincke, Greer, Keohane & Mariano-Lapidus, 2003)
* Writes to affect the emotions of a reader for aesthetic functions (mand, tact, autoclitic functions as well as simile and metaphor for prose, poetry, and drama, and meter and rhyme scheme for poetry)

Writer as Own Reader:
* Is verbally governed by own writing for revision functions (finds discrepancies between what she reads and what she has written; writer and reader in the same skin) (Madho, 1997; Reilly-Lawson & Greer, 2006)
* Verbally governs a technical audience by reading what is written as would the target audience (editing without assistance from others; acquires listener function of target audience, requiring joint stimulus control between the writer and the listener audience) (Reilly-Lawson & Greer, 2006)
* Verbally governs an aesthetic audience as a function of reading what is written as would the target audience (editing without assistance from others; acquires aesthetic listener function of target audience with tolerance for ambiguity) (Meincke-Mathews, 2005)

Verbal Mediation for Problem Solving:
* Is verbally governed by print to perform simple operations (verbal stimuli control operations) (Marsico, 1998)
* Is verbally governed by print to learn new stimulus control and multiple step operations (the characterization of the problem is done with precise verbal descriptions). The verbal descriptions occasion other verbal behavior that can in turn direct the action of the person to solve the particular problem (Keohane & Greer, 2005). A particular verbal community, or discipline, is based on verbal expertise tied to the environment, and modes of inquiry are made possible.
| http://www.thefreelibrary.com/The+evolution+of+verbal+behavior+in+children.-a0217040852 | 13
22 | On October 3, 1917, six months after the United States declared war on Germany and began its participation in the First World War, the U.S. Congress passes the War Revenue Act, increasing income taxes to unprecedented levels in order to raise more money for the war effort.
The 16th Amendment, which gave Congress the power to levy an income tax, became part of the Constitution in 1913; in October of that year, a new income tax law introduced a graduated tax system, with rates starting at 1 percent and rising to 7 percent for taxpayers with income above $500,000. Though less than 1 percent of the population paid income tax at the time, the amendment marked an important shift, as before most citizens had carried on their economic affairs without government knowledge. In an attempt to assuage fears of excessive government intervention into private financial affairs, Congress added a clause in 1916 requiring that all information from tax returns be kept confidential.
By then, however, preparation for and entry into World War I had greatly increased the government’s need for revenue. Congress responded to this need by passing an initial Revenue Act in 1916, raising the lowest tax rate from 1 percent to 2 percent; those with incomes above $1.5 million were taxed at 15 percent. The act also imposed new taxes on estates and excess business profits.
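To make the arithmetic of a graduated tax concrete, here is a minimal sketch of how a marginal-bracket calculation works. The bracket boundaries are hypothetical placeholders: only the 2 percent floor and the 15 percent rate above $1.5 million come from the article, and the real 1916 and 1917 schedules contained many intermediate brackets and surtaxes that are not reproduced here.

```python
# Minimal sketch of a graduated (marginal) income tax, for illustration only.
# Only the 2% floor and the 15% rate above $1.5 million come from the article;
# the intermediate bracket is a hypothetical placeholder, not the 1916 schedule.
BRACKETS_1916_SKETCH = [
    (0,         0.02),  # lowest rate under the 1916 Revenue Act
    (500_000,   0.07),  # hypothetical intermediate bracket (assumption)
    (1_500_000, 0.15),  # top rate mentioned in the article
]

def graduated_tax(income, brackets):
    """Tax each slice of income at the rate of the bracket it falls in."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
    return tax

if __name__ == "__main__":
    for income in (40_000, 1_500_000, 2_000_000):
        print(f"${income:,} of income -> ${graduated_tax(income, BRACKETS_1916_SKETCH):,.0f} in tax")
```

Whether a given historical provision applied its rate marginally or to total income varied, so the sketch is meant only to show how "graduated" rates make the effective rate climb with income.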
By 1917, largely due to the new income tax rate, the annual federal budget was almost equal to the total budget for all the years between 1791 and 1916. Still more was required, however, and in October 1917 Congress passed the War Revenue Act, lowering the number of exemptions and greatly increasing tax rates. Under the 1917 act, a taxpayer with an income of only $40,000 was subject to a 16 percent tax rate, while one who earned $1.5 million faced a rate of 67 percent. While only five percent of the U.S. population was required to pay taxes, U.S. tax revenue increased from $809 million in 1917 to a whopping $3.6 billion the following year. By the time World War I ended in 1918, income tax revenue had funded a full one-third of the cost of the war effort. | http://www.history.com/this-day-in-history/war-revenue-act-passed-in-us | 13 |
33 | Lesson Plan: How the Economy Works -- Grades 3-5
Overview: Use the Scholastic News Online Special Report on the economy to help students understand both general economic terms and the roots of the current crisis.
Duration: about 50-100 minutes (1-2 class periods)
Students will be able to:
Understand important terminology related to the American economic system, including credit, debt, and bailout bill
Distinguish between a healthy and weak economy
Materials: Computer(s) with Internet access; Economy Ups and Downs PDF (optional)
Set Up and Prepare: Preview the Special Report prior to the lesson. Make copies of the PDF.
1. To activate prior knowledge, ask students what they have heard about the American economy in the news lately. Chances are, students have heard terms bandied about that they do not completely understand. List these words on the board or interactive whiteboard.
2. Divide students into groups of two or three, and assign each team a word from the list. Demonstrate how to consult the Scholastic News Online economic glossary to check the meaning of the word. Then have each team look up its word and report back to the class.
3. Challenge students to complete the word-search game, which reviews 10 main economic words. Students should complete the word search on their own, consulting the online glossary as needed.
4. Now that students have a working vocabulary of economics, invite them to use some of the other features of the Special Report. Some learners might watch the video report on the New York Stock Exchange, while others may read interviews with members of the U.S. House of Representatives about the crisis and the controversial Bailout Bill. If you wish, ask students to summarize one article, video, or interview.
5. Distribute copies of the Economy Ups and Downs PDF, and review the directions with students. Explain that students will use what they have learned about the economy to complete the chart. Allow time for students to cut out and paste the text boxes in the appropriate columns. Write the words unemployment, credit, layoff, stock, and recession on the board. Let students know that if they get stumped, they can look up these words in the online glossary for clues.
6. Review answers to the PDF (see below). Reassure worried students that although the American economy is weak right now, the government and others are taking steps to turn it around. Also, be sure to congratulate students on tackling a tough subject. Let them know that if they have even a partial understanding of the current economic crisis, they are ahead of many grown-ups! And be sure to check back periodically for news about the economic recovery on Scholastic News Online.
Have students choose any publicly traded company. Have them find out the company’s stock symbol, then check Web sites or the financial section of the newspaper for daily stock quotes.
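For teachers comfortable with a little scripting, the quote lookup can also be automated. The sketch below is only one possible approach: it assumes the third-party yfinance package is installed, and the ticker AAPL is used purely as an example; substitute whatever symbol the students chose.

```python
# Sketch: fetch a recent daily closing price for a stock symbol.
# Assumes the third-party "yfinance" package (pip install yfinance);
# "AAPL" is only an example ticker -- substitute any company's symbol.
import yfinance as yf

def latest_close(symbol: str) -> float:
    """Return the most recent daily closing price for the given ticker."""
    history = yf.Ticker(symbol).history(period="5d")  # last few trading days
    if history.empty:
        raise ValueError(f"No price data returned for {symbol!r}")
    return float(history["Close"].iloc[-1])

if __name__ == "__main__":
    symbol = "AAPL"
    print(f"{symbol} most recent close: ${latest_close(symbol):.2f}")
```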
Send home a note letting parents know about the economy-related resources available at Scholastic News Online. The articles, glossary, and budget tips are tools that parents can use to discuss the crisis at home.
Assess Students: Have each student hand in his or her completed PDF as well as a summary of one article or feature that he or she visited in the Special Report.
Answers to the Economy Ups and Downs PDF: In a healthy economy, people and businesses spend more, the unemployment rate is fairly low, stock prices go up, small businesses grow, and it is easy for people to get credit. In a weak economy, home sales are down, stock prices go down, many companies lay off workers, spending drops, and it is harder for people to get credit. | http://www.scholastic.com/browse/article.jsp?id=3750575 | 13 |
18 | Rhetoric and Composition/Teacher's Handbook/Rhetorical Analysis
Designing a Unit of Study for Teaching Rhetorical Analysis
Have you ever planned a trip to a new destination? If you have, you know that it requires having some knowledge of where you are going, what you would like to do when you get there, where you will stay, and how you will get back home. Designing a unit on teaching rhetorical analysis is not so different from planning a trip. The assignment you give your students plots out the destination at which you want them to arrive, and this becomes their initial "map" for the task. Understanding the rhetorical vehicles of logos, ethos, and pathos helps them on their way to analyzing a text. The critical thinking process they go through to analyze such a text results in them being able to focus on specific aspects, such as logical fallacies, to determine which textual "souvenirs" work well to persuade an audience and which don't. Overall, though the student is given the tools to embark on their own analytical journey, this process can be fraught with obstacles and difficulties. Included here are some ideas for using this handbook, as well as ideas that may help you guide your students along their individual paths.
The Planning Stage
Guide Questions to Design the Unit (Samples)
Creating a Unit Timeline:
The key to teaching rhetorical analysis is to start small. Students need to understand the "building blocks" of ethos, pathos, and logos before analyzing text. Some helpful methods to include in a unit timeline are:
- Visual Analysis
Just a few of the mediums to consider using here include magazine advertisements, commercials, films, and news clips.
- Close Reading
To prepare students to analyze a large piece of text, it is helpful to start with small pieces of text, such as poetry or song lyrics.
- Practice, Practice, Practice
Now that your students have a grasp on the concepts of analysis, it is time to practice these skills on large pieces of text, such as newspaper editorials, magazine articles, etc.
Sample Lesson Plans
With the above-mentioned units to cover, there are many different ways to tackle teaching them.
Teaching Ethos, Pathos, and Logos Lesson 1
Knowing the ways in which these rhetorical stepping stones work in texts and visuals is key to being able to analyze any kind of rhetoric. Students should read the definitions of these terms located in an earlier section of this book. In order to make these terms "come to life" for the students, some sample lesson ideas are included here:
Break students into groups of three. Ask one person in the group to take some money out of their pocket (it shouldn't matter what amount). Ask another person in the group to be the persuader, and the third person in the group to be the observer. The object of this lesson is for the persuader to use whatever appeals or arguments he/she can think of in two minutes' time in order to get the money from the person in the group who has it. The observer has to jot down whatever appeals the persuader uses in those two minutes. During or at the end of the two-minute time period, the person with the money has to decide whether or not to "give" their money to the persuader based on the appeals that person has used. When the two minutes are up, take a tally to see how many monied students gave up their cash to the persuader and how many didn't. Then, have the observers tell the class (or write on the board) what appeals they heard the persuader using during the two minutes. Once this has been done, and you have a list of appeals on the board, go through them with the class to see what rhetorical appeal category they fall into. For example, if a persuader said that they needed the money to pay for a parking spot at the hospital to see their dying grandmother, that example would fall under "pathos." If the persuader said that they needed to borrow money now, but would pay the money back with interest, that would be an example of "logos." This activity helps students see the differences between the terms and how effective each one can be in attempts at persuasion. Most often, the student persuaders who "get" the money (yes, they do have to give it back at the end) have used a combination of appeals, so you can discuss how using a mix of appeals often makes for more persuasive arguments.
Visual Analysis Sample 1
One of the first concepts to teach in analysis is the idea of audience. A great way to do this is to bring in a variety of magazines (any type of magazine will work). Put students into groups and have them look through the articles and advertisements.
Some questions to ask:
- Who is the target audience? Young, old, men, women, the list goes on and on.
- How do you know this?
After determining audience for the entire magazine, the next step would be to look at individual advertisements. Questions to consider:
- Who is the target audience of the ad?
- How is the text organized? What significance does this hold?
- How was the creator attempting to influence or persuade the audience?
- How does it appeal to ethos, pathos, or logos?
- What connections or associations is the reader supposed to make?
Basically, these questions can be applied to any visual or verbal text, commercials, films, etc. After students have practiced these concepts, it might be time to have them write their own analysis of a visual.
Visual Analysis Sample 2
To get students to practice their ability to analyze a variety of visual rhetoric, it can be helpful to have students work with cartoons, logos, and artwork. For this lesson, show students a visual such as the Apple logo or a photograph such as "Candy Cigarette." Give them 10-15 minutes to write about these visuals in their journals or notebooks. Some guiding questions that they can use to help them write might be:
- What is it about the visual that grabs your attention first? Why?
- How does this image connect to the rest of the visual?
- What purpose does this image serve? What do you think the person who created it wanted you to know?
- What is this image about? Describe what ideas, emotions, etc. are portrayed in the image.
- What kinds of ideas is this image trying to persuade its audience about?
After students have some ideas written down, conduct a class discussion and ask students to volunteer what they observed about the images. If no one offers to volunteer their answers, it might be a good idea to ask each student to offer an answer to only one of the questions listed above.
After the discussion of the students' observations, ask if they see the rhetorical appeals of ethos, logos, and pathos at work in the visual. Exposing students to ways in which these appeals work in a variety of visuals can be key in helping them to understand the differences between the appeals and how they are used to persuade audiences. Doing these exercises aloud in class can help students see and hear the process of analysis and how it differs from that of simple observation.
Close Reading (Sample Assignments)
- Bring sample readings of poetry to the class. As a class or in groups, have students do a line-by-line analysis of the poem. What does each line mean? How does it contribute to the poem as a whole? Are there any words/phrases you do not understand? Are there any double meanings to any of the phrases or the poem as a whole?
- A similar way to do this would be with song lyrics. Bring examples to class, or have students bring their own examples to do a close reading of. Analyze the piece line by line, and also as a whole.
Practice, Practice, Practice (Sample Assignments)
- Create an ad or logo for a company or product you admire. Keep your audience in mind and include visuals and text that they would find appealing. Attach to this visual a description of how you use each of the rhetorical appeals (ethos, pathos, logos) in this ad to get your ideas across and persuade your audience. The descriptions for each appeal should be at least one paragraph in length.
- Split students into groups and have each group create an ad for the same company or product, but have each group target a different age group, category, etc. For example, have each group create an ad for Nike. How will one group appeal to teens, middle-aged people, men, women, etc.?
- Practicing rhetorical analysis with the class as a whole is important. Using editorials from newspapers and magazines, discuss as a class or in groups the elements of the piece. What is the author's message? What is the author's purpose? What rhetorical devices does the author use? What kind of language? Is the author successful in relaying his or her message?
Sample Rhetorical Analysis Assignments
Rhetorical analysis is a way of understanding and interpreting texts by examining the rhetorical devices used in a piece of writing. You are to find a piece of published work that is persuasive in nature; in other words, it argues a point. Editorials and pieces from opinion/commentary sections of magazines or newspapers will generally work best. You may find these online at sites such as startribune.com or sctimes.com, or in an actual publication. The piece you choose should be at least 350-500 words in length. Choosing an article that is too short may leave you without enough to write about in your paper; choosing something too long may not fit the parameters of this assignment. Write an essay in which you ANALYZE the author's rhetorical effectiveness/ineffectiveness. How does the author appeal to ethos, pathos, and logos? You will need to consider the points we have discussed in class, as well as the strategies discussed in Chapters 10 and 11 in your book.
Primary Audience: Educated readers who have not read the text you are analyzing.
Point of View: Objective
General Purpose: To help your readers understand the connections between purpose, audience, subject matter, and rhetorical techniques.
Things to consider when writing the rhetorical analysis:
- Take the time to find an article with a topic you can relate to. Don’t just choose the first article you find.
- Photocopy the article, because it will need to accompany all drafts.
- This paper is NOT a summary. A summary will be included, but it should be no more than one paragraph in length.
- Your focus is not to agree or disagree with the author’s article, but to analyze how effective or ineffective the author is in presenting the argument.
- Sample Peer Review For Rhetorical Analysis
Economy of England in the Middle Ages
The economy of England in the Middle Ages, from the Norman invasion in 1066, to the death of Henry VII in 1509, was fundamentally agricultural, though even before the invasion the market economy was important to producers. Norman institutions, including serfdom, were superimposed on an existing system of open fields and mature, well-established towns involved in international trade. Over the next five centuries the economy would at first grow and then suffer an acute crisis, resulting in significant political and economic change. Despite economic dislocation in urban and extraction economies, including shifts in the holders of wealth and the location of these economies, the economic output of towns and mines developed and intensified over the period. By the end of the period, England had a weak government, by later standards, overseeing an economy dominated by rented farms controlled by gentry, and a thriving community of indigenous English merchants and corporations.
The 12th and 13th centuries saw a huge development of the English economy. This was partially driven by the growth in the population from around 1.5 million at the time of the creation of the Domesday Book in 1086 to between 4 and 5 million in 1300. England remained a primarily agricultural economy, with the rights of major landowners and the duties of serfs increasingly enshrined in English law. More land, much of it at the expense of the royal forests, was brought into production to feed the growing population or to produce wool for export to Europe. Many hundreds of new towns, some of them planned, sprang up across England, supporting the creation of guilds, charter fairs and other important medieval institutions. The descendants of the Jewish financiers who had first come to England with William the Conqueror played a significant role in the growing economy, along with the new Cistercian and Augustinian religious orders that became major players in the wool trade of the north. Mining increased in England, with the silver boom of the 12th century helping to fuel a fast-expanding currency.
Economic growth began to falter by the end of the 13th century, owing to a combination of over-population, land shortages and depleted soils. The loss of life in the Great Famine of 1315–17 shook the English economy severely and population growth ceased; the first outbreak of the Black Death in 1348 then killed around half the English population, with major implications for the post-plague economy. The agricultural sector shrank, with higher wages, lower prices and shrinking profits leading to the final demise of the old demesne system and the advent of the modern farming system of cash rents for lands. The Peasants' Revolt of 1381 shook the older feudal order and limited the levels of royal taxation considerably for a century to come. The 15th century saw the growth of the English cloth industry and the establishment of a new class of international English merchant, increasingly based in London and the South-West, prospering at the expense of the older, shrinking economy of the eastern towns. These new trading systems brought about the end of many of the international fairs and the rise of the chartered company. Together with improvements in metalworking and shipbuilding, these developments represent the end of the medieval economy and the beginnings of the early modern period in English economics.
Invasion and the early Norman period (1066–1100)
William the Conqueror invaded England in 1066, defeating the Anglo-Saxon King Harold Godwinson at the Battle of Hastings and placing the country under Norman rule. This campaign was followed by fierce military operations known as the Harrying of the North in 1069–70, extending Norman authority across the north of England. William's system of government was broadly feudal in that the right to possess land was linked to service to the king, but in many other ways the invasion did little to alter the nature of the English economy. Most of the damage done in the invasion was in the north and the west of England, some of it still recorded as "wasteland" in 1086. Many of the key features of the English agricultural and financial system remained in place in the decades immediately after the conquest.
Agriculture and mining
Agriculture formed the bulk of the English economy at the time of the Norman invasion. Twenty years after the invasion, 35% of England was covered in arable land, 25% was put to pasture, 15% was covered by woodlands and the remaining 25% was predominantly moorland, fens and heaths. Wheat formed the single most important arable crop, but rye, barley and oats were also cultivated extensively. In the more fertile parts of the country, such as the Thames valley, the Midlands and the east of England, legumes and beans were also cultivated. Sheep, cattle, oxen and pigs were kept on English holdings, although most of these breeds were much smaller than modern equivalents and most would have been slaughtered in winter.
In the century prior to the Norman invasion, England's great estates, owned by the king, bishops, monasteries and thegns, had been slowly broken up as a consequence of inheritance, wills, marriage settlements or church purchases. Most of the smaller landowning nobility lived on their properties and managed their own estates. The pre-Norman landscape had seen a trend away from isolated hamlets and towards larger villages engaged in arable cultivation in a band running north–south across England. These new villages had adopted an open field system in which fields were divided into small strips of land, individually owned, with crops rotated between the fields each year and the local woodlands and other common lands carefully managed. Agricultural land on a manor was divided between some fields that the landowner would manage and cultivate directly, called demesne land, and the majority of the fields that would be cultivated by local peasants, who would pay rent to the landowner either through agricultural labour on the lord's demesne fields or through cash or produce. Around 6,000 watermills of varying power and efficiency had been built in order to grind flour, freeing up peasant labour for other more productive agricultural tasks. The early English economy was not a subsistence economy and many crops were grown by peasant farmers for sale to the early English towns.
The Normans initially did not significantly alter the operation of the manor or the village economy. William reassigned large tracts of land amongst the Norman elite, creating vast estates in some areas, particularly along the Welsh border and in Sussex. The biggest change in the years after the invasion was the rapid reduction in the number of slaves being held in England. In the 10th century slaves had been very numerous, although their number had begun to diminish as a result of economic and religious pressure. Nonetheless, the new Norman aristocracy proved harsh landlords. The wealthier, formerly more independent Anglo-Saxon peasants found themselves rapidly sinking down the economic hierarchy, swelling the numbers of unfree workers, or serfs, forbidden to leave their manor and seek alternative employment. Those Anglo-Saxon nobles who had survived the invasion itself were rapidly assimilated into the Norman elite or economically crushed.
Creation of the forests
The Normans also established the royal forests. In Anglo-Saxon times there had been special woods for hunting called "hays", but the Norman forests were much larger and backed by legal mandate. The new forests were not necessarily heavily wooded but were defined instead by their protection and exploitation by the crown. The Norman forests were subject to special royal jurisdiction; forest law was "harsh and arbitrary, a matter purely for the King's will". Forests were expected to supply the king with hunting grounds, raw materials, goods and money. Revenue from forest rents and fines became extremely significant and forest wood was used for castles and royal ship building. Several forests played a key role in mining, such as the iron mining and working in the Forest of Dean and lead mining in the Forest of High Peak. Several other groups were economically bound up with the forests; many monasteries had special rights in particular forests, for example for hunting or tree felling. The royal forests were accompanied by the rapid creation of locally owned parks and chases.
Trade, manufacturing and the towns
Although primarily rural, England had a number of old, economically important towns in 1066. A large amount of trade came through the Eastern towns, including London, York, Winchester, Lincoln, Norwich, Ipswich and Thetford. Much of this trade was with France, the Low Countries and Germany, but the North-East of England traded with partners as far away as Sweden. Cloth was already being imported to England before the invasion through the mercery trade.
Some towns, such as York, suffered from Norman sacking during William's northern campaigns. Other towns saw the widespread demolition of houses to make room for new motte and bailey fortifications, as was the case in Lincoln. The Norman invasion also brought significant economic changes with the arrival of the first Jews to English cities. William I brought over wealthy Jews from the Rouen community in Normandy to settle in London, apparently to carry out financial services for the crown. In the years immediately after the invasion, considerable wealth was drawn out of England in various ways by the Norman rulers and reinvested in Normandy, making William immensely wealthy as an individual ruler.
The minting of coins was decentralised in the Saxon period; every borough was mandated to have a mint and therefore a centre for trading in bullion. Nonetheless, there was strict royal control over these moneyers, and coin dies could only be made in London. William retained this process and generated a high standard of Norman coins, leading to the use of the term "sterling" as the name for the Norman silver coins.
Governance and taxation
William I inherited the Anglo-Saxon system in which the king drew his revenues from: a mixture of customs; profits from re-minting coinage; fines; profits from his own demesne lands; and the system of English land-based taxation called the geld. William reaffirmed this system, enforcing collection of the geld through his new system of sheriffs and increasing the taxes on trade. William was also famous for commissioning the Domesday Book in 1086, a vast document which attempted to record the economic condition of his new kingdom.
Mid-medieval growth (1100–1290)
The 12th and 13th centuries were a period of huge economic growth in England. The population of England rose from around 1.5 million in 1086 to around 4 or 5 million in 1300, stimulating increased agricultural outputs and the export of raw materials to Europe. In contrast to the previous two centuries, England was relatively secure from invasion. Except for the years of the Anarchy, most military conflicts either had only localised economic impact or proved only temporarily disruptive. English economic thinking remained conservative, seeing the economy as consisting of three groups: the bellatores, those who fought, or the nobility; the laboratores, those who worked, in particular the peasantry; and the oratores, those who prayed, or the clerics. Trade and merchants played little part in this model and were frequently vilified at the start of the period, although they were increasingly tolerated towards the end of the 13th century.
Agriculture, fishing and mining
English agriculture and the landscape
Agriculture remained by far the most important part of the English economy during the 12th and 13th centuries. There remained a wide variety in English agriculture, influenced by local geography; in areas where grain could not be grown, other resources were exploited instead. In the Weald, for example, agriculture centred on grazing animals on the woodland pastures, whilst in the Fens fishing and bird-hunting were supplemented by basket-making and peat-cutting. In some locations, such as Lincolnshire and Droitwich, salt manufacture was important, including production for the export market. Fishing became an important trade along the English coast, especially in Great Yarmouth and Scarborough, and the herring was a particularly popular catch; salted at the coast, it could then be shipped inland or exported to Europe. Piracy between competing English fishing fleets was not unknown during the period. Sheep were the most common farm animal in England during the period, their numbers doubling by the 14th century. Sheep became increasingly widely used for wool, particularly in the Welsh borders, Lincolnshire and the Pennines. Pigs remained popular on holdings because of their ability to scavenge for food. Oxen remained the primary plough animal, with horses used more widely on farms in the south of England towards the end of the 12th century. Rabbits were introduced from France in the 13th century and farmed for their meat in special warrens.
The underlying productivity of English agriculture remained low, despite the increases in food production. Wheat prices fluctuated heavily year to year, depending on local harvests; up to a third of the grain produced in England was potentially for sale, and much of it ended up in the growing towns. Despite their involvement in the market, even the wealthiest peasants prioritised spending on housing and clothing, with little left for other personal consumption. Records of household belongings show most possessing only "old, worn-out and mended utensils" and tools.
The royal forests grew in size for much of the 12th century, before contracting in the late 13th and early 14th centuries. Henry I extended the size and scope of royal forests, especially in Yorkshire; after the Anarchy of 1135–53, Henry II continued to expand the forests until they comprised around 20% of England. In 1217 the Charter of the Forest was enacted, in part to mitigate the worst excesses of royal jurisdiction, and established a more structured range of fines and punishments for peasants who illegally hunted or felled trees in the forests. By the end of the century the king had come under increasing pressure to reduce the size of the royal forests, leading to the "Great Perambulation" around 1300; this significantly reduced the extent of the forests, and by 1334 they were only around two-thirds the size they had been in 1250. Royal revenue streams from the shrinking forests diminished considerably in the early 14th century.
Development of estate management
The Normans retained and reinforced the manorial system with its division between demesne and peasant lands paid for in agricultural labour. Landowners could profit from the sales of goods from their demesne lands and a local lord could also expect to receive income from fines and local customs, whilst more powerful nobles profited from their own regional courts and rights.
During the 12th century major landowners tended to rent out their demesne lands for money, motivated by static prices for produce and the chaos of the Anarchy between 1135 and 1153. This practice began to alter in the 1180s and '90s, spurred by the greater political stability. In the first years of John's reign, agricultural prices almost doubled, at once increasing the potential profits on the demesne estates and also increasing the cost of living for the landowners themselves. Landowners now attempted wherever possible to bring their demesne lands back into direct management, creating a system of administrators and officials to run their new system of estates.
New land was brought into cultivation to meet demand for food, including drained marshes and fens, such as Romney Marsh, the Somerset Levels and the Fens; royal forests from the late 12th century onwards; and poorer lands in the north, south-west and in the Welsh Marches. The first windmills in England began to appear along the south and east coasts in the 12th century, expanding in number in the 13th, adding to the mechanised power available to the manors. By 1300 it has been estimated that there were more than 10,000 watermills in England, used both for grinding corn and for fulling cloth. Fish ponds were created on most estates to provide freshwater fish for the consumption of the nobility and church; these ponds were extremely expensive to create and maintain. Improved ways of running estates began to be circulated and were popularised in Walter de Henley's famous book Le Dite de Hosebondrie, written around 1280. In some regions and under some landowners, investment and innovation increased yields significantly through improved ploughing and fertilisers – particularly in Norfolk, where yields eventually equalled later 18th-century levels.
Role of the Church in agriculture
The Church in England was a major landowner throughout the medieval period and played an important part in the development of agriculture and rural trade in the first two centuries of Norman rule. The Cistercian order first arrived in England in 1128, establishing around 80 new monastic houses over the next few years; the wealthy Augustinians also established themselves and expanded to occupy around 150 houses, all supported by agricultural estates, many of them in the north of England. By the 13th century these and other orders were acquiring new lands and had become major economic players both as landowners and as middlemen in the expanding wool trade. In particular, the Cistercians led the development of the grange system. Granges were separate manors in which the fields were all cultivated by the monastic officials, rather than being divided up between demesne and rented fields, and became known for trialling new agricultural techniques during the period. Elsewhere, many monasteries had significant economic impact on the landscape, such as the monks of Glastonbury, responsible for the draining of the Somerset Levels to create new pasture land.
The military crusading order of the Knights Templar also held extensive property in England, bringing in around £2,200 per annum by the time of their fall. It comprised primarily rural holdings rented out for cash, but also included some urban properties in London. Following the dissolution of the Templar order in France by Philip IV of France, Edward II ordered their properties to be seized and passed to the Hospitaller order in 1313, but in practice many properties were taken by local landowners and the Hospital was still attempting to reclaim them twenty-five years later.
The Church was responsible for the system of tithes, a levy of 10% on "all agrarian produce... other natural products gained via labour... wages received by servants and labourers, and to the profits of rural merchants". Tithes gathered in the form of produce could be either consumed by the recipient, or sold on and bartered for other resources. The tithe was relatively onerous for the typical peasant, although in many instances the actual levy fell below the desired 10%. Many clergy moved to the towns as part of the urban growth of the period, and by 1300 around one in twenty city dwellers was a clergyman. One effect of the tithe was to transfer a considerable amount of agricultural wealth into the cities, where it was then spent by these urban clergy. The need to sell tithe produce that could not be consumed by the local clergy also spurred the growth of trade.
Expansion of mining
Mining did not make up a large part of the English medieval economy, but the 12th and 13th centuries saw an increased demand for metals in the country, thanks to the considerable population growth and building construction, including the great cathedrals and churches. Four metals were mined commercially in England during the period, namely iron, tin, lead and silver; coal was also mined from the 13th century onwards, using a variety of refining techniques.
Iron mining occurred in several locations, including the main English centre in the Forest of Dean, as well as in Durham and the Weald. Some iron to meet English demand was also imported from the continent, especially by the late 13th century. By the end of the 12th century, the older method of acquiring iron ore through strip mining was being supplemented by more advanced techniques, including tunnels, trenches and bell-pits. Iron ore was usually locally processed at a bloomery, and by the 14th century the first water-powered iron forge in England was built at Chingley. As a result of the diminishing woodlands and consequent increases in the cost of both wood and charcoal, demand for coal increased in the 12th century and it began to be commercially produced from bell-pits and strip mining.
A silver boom occurred in England after the discovery of silver near Carlisle in 1133. Huge quantities of silver were produced from a semicircle of mines reaching across Cumberland, Durham and Northumberland – up to three to four tonnes of silver were mined each year, more than ten times the previous annual production across the whole of Europe. The result was a local economic boom and a major uplift to 12th-century royal finances. Tin mining was centred in Cornwall and Devon, exploiting alluvial deposits and governed by the special Stannary Courts and Parliaments. Tin formed a valuable export good, initially to Germany and then later in the 14th century to the Low Countries. Lead was usually mined as a by-product of mining for silver, with mines in Yorkshire, Durham and the north, as well as in Devon. Economically fragile, the lead mines usually survived as a result of being subsidised by silver production.
Trade, manufacturing and the towns
Growth of English towns
After the end of the Anarchy, the number of small towns in England began to increase sharply. By 1297, 120 new towns had been established, and in 1350 – by when the expansion had effectively ceased – there were around 500 towns in England. Many of these new towns were centrally planned: Richard I created Portsmouth, John founded Liverpool, and successive monarchs followed with Harwich, Stony Stratford, Dunstable, Royston, Baldock, Wokingham, Maidenhead and Reigate. The new towns were usually located with access to trade routes in mind, rather than defence, and the streets were laid out to make access to the town's market convenient. A growing percentage of England's population lived in urban areas; estimates suggest that this rose from around 5.5% in 1086 to up to 10% in 1377.
London held a special status within the English economy. The nobility purchased and consumed many luxury goods and services in the capital, and as early as the 1170s the London markets were providing exotic products such as spices, incense, palm oil, gems, silks, furs and foreign weapons. London was also an important hub for industrial activity; it had many blacksmiths making a wide range of goods, including decorative ironwork and early clocks. Pewter-working, using English tin and lead, was also widespread in London during the period. The provincial towns also had a substantial number of trades by the end of the 13th century – a large town like Coventry, for example, contained over three hundred different specialist occupations, and a smaller town such as Durham could support some sixty different professions. The increasing wealth of the nobility and the church was reflected in the widespread building of cathedrals and other prestigious buildings in the larger towns, in turn making use of lead from English mines for roofing.
Land transport remained much more expensive than river or sea transport during the period. Many towns in this period, including York, Exeter and Lincoln, were linked to the oceans by navigable rivers and could act as seaports, with Bristol's port coming to dominate the lucrative trade in wine with Gascony by the 13th century, but shipbuilding generally remained on a modest scale and economically unimportant to England at this time. Transport remained very costly in comparison to the overall price of products. By the 13th century, groups of common carriers ran carting businesses, and carting brokers existed in London to link traders and carters. These used the four major land routes crossing England: Ermine Street, the Fosse Way, Icknield Street and Watling Street. A large number of bridges were built during the 12th century to improve the trade network.
In the 13th century, England was still primarily supplying raw materials for export to Europe, rather than finished or processed goods. There were some exceptions, such as very high-quality cloths from Stamford and Lincoln, including the famous "Lincoln Scarlet" dyed cloth. Despite royal efforts to encourage it, however, barely any English cloth was being exported by 1347.
Expansion of the money supply
There was a gradual reduction in the number of locations allowed to mint coins in England; under Henry II, only 30 boroughs were still able to use their own moneyers, and the tightening of controls continued throughout the 13th century. By the reign of Edward I there were only nine mints outside London and the king created a new official called the Master of the Mint to oversee these and the thirty furnaces operating in London to meet the demand for new coins. The amount of money in circulation hugely increased in this period; before the Norman invasion there had been around £50,000 in circulation as coin, but by 1311 this had risen to more than £1 million. At any particular point in time, though, much of this currency might be stored before being used to support military campaigns or sent overseas to meet payments, resulting in bursts of temporary deflation as coins ceased to circulate within the English economy. One physical consequence of the growth in the coinage was that coins had to be manufactured in large numbers, being moved in barrels and sacks to be stored in local treasuries for royal use as the king travelled.
Rise of the guilds
The first English guilds emerged during the early 12th century. These guilds were fraternities of craftsmen that set out to manage their local affairs including "prices, workmanship, the welfare of its workers, and the suppression of interlopers and sharp practices". Amongst these early guilds were the "guilds merchants", who ran the local markets in towns and represented the merchant community in discussions with the crown. Other early guilds included the "craft guilds", representing specific trades. By 1130 there were major weavers' guilds in six English towns, as well as a fullers' guild in Winchester. Over the following decades more guilds were created, often becoming increasingly involved in both local and national politics, although the guilds merchants were largely replaced by official groups established by new royal charters.
The craft guilds required relatively stable markets and a relative equality of income and opportunity amongst their members to function effectively. By the 14th century these conditions were increasingly uncommon. The first strains were seen in London, where the old guild system began to collapse – more trade was being conducted at a national level, making it hard for craftsmen to both manufacture goods and trade in them, and there were growing disparities in incomes between the richer and poorer craftsmen. As a result, under Edward III many guilds became companies or livery companies, chartered companies focusing on trade and finance, leaving the guild structures to represent the interests of the smaller, poorer manufacturers.
Merchants and the development of the charter fairs
The period also saw the development of charter fairs in England, which reached their heyday in the 13th century. From the 12th century onwards, many English towns acquired a charter from the Crown allowing them to hold an annual fair, usually serving a regional or local customer base and lasting for two or three days. The practice increased in the next century and over 2,200 charters were issued to markets and fairs by English kings between 1200 and 1270. Fairs grew in popularity as the international wool trade increased: the fairs allowed English wool producers and ports on the east coast to engage with visiting foreign merchants, circumventing those English merchants in London keen to make a profit as middlemen. At the same time, wealthy magnate consumers in England began to use the new fairs as a way to buy goods like spices, wax, preserved fish and foreign cloth in bulk from the international merchants at the fairs, again bypassing the usual London merchants.
Some fairs grew into major international events, falling into a set sequence during the economic year, with the Stamford fair in Lent, St Ives' in Easter, Boston's in July, Winchester's in September and Northampton's in November, with the many smaller fairs falling in-between. Although not as large as the famous Champagne fairs in France, these English "great fairs" were still huge events; St Ives' Great Fair, for example, drew merchants from Flanders, Brabant, Norway, Germany and France for a four-week event each year, turning the normally small town into "a major commercial emporium".
The structure of the fairs reflected the importance of foreign merchants in the English economy and by 1273 only one-third of the English wool trade was actually controlled by English merchants. Between 1280 and 1320 the trade was primarily dominated by Italian merchants, but by the early 14th century German merchants had begun to present serious competition to the Italians. The Germans formed a self-governing alliance of merchants in London called the "Hanse of the Steelyard" – the eventual Hanseatic League – and their role was confirmed under the Great Charter of 1303, which exempted them from paying the customary tolls for foreign merchants.[nb 1] One response to this was the creation of the Company of the Staple, a group of merchants established in English-held Calais in 1314 with royal approval, who were granted a monopoly on wool sales to Europe.
Jewish contribution to the English economy
The Jewish community in England continued to provide essential money-lending and banking services that were otherwise banned by the usury laws, and grew during the 12th century as Jewish immigrants arrived fleeing the fighting around Rouen. The Jewish community spread beyond London to eleven major English cities, primarily the major trading hubs in the east of England with functioning mints, all with suitable castles for protection of the often persecuted Jewish minority. By the time of the Anarchy and the reign of Stephen, the communities were flourishing and providing financial loans to the king.
Under Henry II, the Jewish financial community continued to grow richer still. All major towns had Jewish centres, and even smaller towns, such as Windsor, saw visits by travelling Jewish merchants. Henry II used the Jewish community as "instruments for the collection of money for the Crown", and placed them under royal protection. The Jewish community at York lent extensively to fund the Cistercian order's acquisition of land and prospered considerably. Some Jewish merchants grew extremely wealthy; Aaron of Lincoln was so rich that upon his death a special royal department had to be established to unpick his financial holdings and affairs.
By the end of Henry's reign the king ceased to borrow from the Jewish community and instead turned to an aggressive campaign of tallage taxation and fines. Financial and anti-Semitic violence grew under Richard I. After the massacre of the York community, in which numerous financial records were destroyed, seven towns were nominated to separately store Jewish bonds and money records and this arrangement ultimately evolved into the Exchequer of the Jews. After an initially peaceful start to John's reign, the king again began to extort money from the Jewish community, imprisoning the wealthier members, including Isaac of Norwich, until a huge new tallage was paid. During the Barons' War of 1215–17, the Jews were subjected to fresh anti-Semitic attacks. Henry III restored some order and Jewish money-lending became sufficiently successful again to allow fresh taxation. The Jewish community became poorer towards the end of the century and was finally expelled from England in 1290 by Edward I, being largely replaced by foreign merchants.
Governance and taxation
During the 12th century the Norman kings attempted to formalise the feudal governance system initially created after the invasion. After the invasion the king had enjoyed a combination of income from his own demesne lands, the Anglo-Saxon geld tax and fines. Successive kings found that they needed additional revenues, especially in order to pay for mercenary forces. One way of doing this was to exploit the feudal system, and kings adopted the French feudal aid model, a levy of money imposed on feudal subordinates when necessary; another method was to exploit the scutage system, in which feudal military service could be transmuted to a cash payment to the king. Taxation was also an option, although the old geld tax was increasingly ineffective due to a growing number of exemptions. Instead, a succession of kings created alternative land taxes, such as the tallage and carucage taxes. These were increasingly unpopular and, along with the feudal charges, were condemned and constrained in the Magna Carta of 1215. As part of the formalisation of the royal finances, Henry I created the Chancellor of the Exchequer, a post which would lead to the maintenance of the Pipe rolls, a set of royal financial records of lasting significance to historians in tracking both royal finances and medieval prices.
Royal revenue streams still proved insufficient and from the middle of the 13th century there was a shift away from the earlier land-based tax system towards one based on a mixture of indirect and direct taxation. At the same time, Henry III had introduced the practice of consulting with leading nobles on tax issues, leading to the system whereby the English parliament agreed on new taxes when required. In 1275 the "Great and Ancient Custom" began to tax woollen products and hides; the Great Charter of 1303 imposed additional levies on foreign merchants in England, and the poundage tax was introduced in 1347. In 1340, the discredited tallage tax system was finally abolished by Edward III. Assessing the total impact of changes to royal revenues between 1086 and 1290 is difficult. At best, Edward I was struggling in 1300 to match in real terms the revenues that Henry II had enjoyed in 1100, and considering the growth in the size of the English economy, the king's share of the national income had dropped considerably.
In the English towns the burgage tenure for urban properties was established early on in the medieval period, and was based primarily on tenants paying cash rents rather than providing labour services. Further development of a set of taxes that could be raised by the towns included murage for walls, pavage for streets, and pontage, a temporary tax for the repair of bridges. Combined with the lex mercatoria, which was a set of codes and customary practices governing trading, these provided a reasonable basis for the economic governance of the towns.
The 12th century also saw a concerted attempt to curtail the remaining rights of unfree peasant workers and to set out their labour rents more explicitly in the form of the English Common Law. This process resulted in the Magna Carta explicitly authorising feudal landowners to settle law cases concerning feudal labour and fines through their own manorial courts rather than through the royal courts. These class relationships between lords and unfree peasants had complex economic implications. Peasant workers resented being unfree, but having continuing access to agricultural land was also important. In the rare circumstances where peasants were offered a choice between freedom without land and continued servitude, not all chose freedom; a minority chose to remain in servitude on the land. Lords benefited economically from their control of the manorial courts, and their domination of the courts made it easier to manipulate land ownership and rights in their own favour when land came into particularly short supply at the end of this period. Many of the labour duties lords could compel from the local peasant communities became less useful over the period. Duties were fixed by custom, inflexible and understandably resented by the workers involved. As a result, by the end of the 13th century the productivity of such forced labour was significantly lower than that of free labour employed to do the same task. A number of lords responded by seeking to commute the duties of unfree peasants to cash alternatives, with the aim of hiring labour instead.
Mid-medieval economic crisis – the Great Famine and the Black Death (1290–1350)
The Great Famine of 1315 began a period of acute crisis in the English agrarian economy. The famine centred on a sequence of harvest failures in 1315, 1316 and 1321, combined with an outbreak of the murrain sickness amongst sheep and oxen in 1319–21 and the fatal ergot fungus amongst the remaining stocks of wheat. Many people died in the ensuing famine, and the peasantry were said to have been forced to eat horses, dogs and cats, and even, reportedly, to have resorted to cannibalism of children, although these last reports are usually considered to be exaggerations. Poaching and encroachment on the royal forests surged, sometimes on a mass scale. Sheep and cattle numbers fell by up to a half, significantly reducing the availability of wool and meat, and food prices almost doubled, with grain prices particularly inflated. Food prices remained at similar levels for the next decade. Salt prices also increased sharply due to the wet weather.
Various factors exacerbated the crisis. Economic growth had already begun to slow significantly in the years prior to the crisis and the English rural population was increasingly under economic stress, with around half the peasantry estimated to possess insufficient land to provide them with a secure livelihood. Where additional land was being brought into cultivation, or existing land cultivated more intensively, the soil may have become exhausted and useless. Bad weather also played an important part in the disaster; 1315–16 and 1318 saw torrential rains and an exceptionally cold winter, which in combination badly affected harvests and stored supplies. The rains of these years were followed by drought in the 1320s and another fierce winter in 1321, complicating recovery. Disease, independent of the famine, was also high during the period, striking at the wealthier as well as the poorer classes. The commencement of war with France in 1337 only added to the economic difficulties. The Great Famine firmly reversed the population growth of the 12th and 13th centuries and left a domestic economy that was "profoundly shaken, but not destroyed".
The Black Death epidemic first arrived in England in 1348, re-occurring in waves during 1360–62, 1368–69, 1375 and more sporadically thereafter. The most immediate economic impact of this disaster was the widespread loss of life, with mortality ranging from around 27% amongst the upper classes to 40–70% amongst the peasantry.[nb 2] Despite the very high loss of life, few settlements were abandoned during the epidemic itself, but many were badly affected or nearly eliminated altogether. The medieval authorities did their best to respond in an organised fashion, but the economic disruption was immense. Building work ceased and many mining operations paused. In the short term, efforts were taken by the authorities to control wages and enforce pre-epidemic working conditions. Coming on top of the previous years of famine, however, the longer-term economic implications were profound. In contrast to the previous centuries of rapid growth, the English population would not begin to recover for over a century, despite the many positive reasons for a resurgence. The crisis would dramatically affect English agriculture, wages and prices for the remainder of the medieval period.
Late medieval economic recovery (1350–1509)
The events of the crisis between 1290 and 1348 and the subsequent epidemics produced many challenges for the English economy. In the decades after the disaster, the economic and social issues arising from the Black Death combined with the costs of the Hundred Years War to produce the Peasants' Revolt of 1381. Although the revolt was suppressed, it undermined many of the vestiges of the feudal economic order, and the countryside became dominated by estates organised as farms, frequently owned or rented by the new economic class of the gentry. The English agricultural economy remained depressed throughout the 15th century; growth at this time came from the greatly increased English cloth trade and manufacturing. The economic consequences of this varied considerably from region to region, but generally London, the South and the West prospered at the expense of the East and the older cities. The role of merchants and trade became increasingly seen as important to the country, and usury gradually became more widely accepted, with English economic thinking increasingly influenced by Renaissance humanist theories.
Governance and taxation
Even before the end of the first outbreak of the Black Death, there were efforts by the authorities to stem the upward pressure on wages and prices, with parliament passing the emergency Ordinance of Labourers in 1349 and the Statute of Labourers in 1351. The efforts to regulate the economy continued as wages and prices rose, putting pressure on the landed classes, and in 1363 parliament attempted unsuccessfully to centrally regulate craft production, trading and retailing. A rising share of the royal courts' time was taken up with enforcing the failing labour legislation – as much as 70% by the 1370s. Many landowners attempted to vigorously enforce rents payable through agricultural service rather than money through their local manor courts, leading to attempts by many village communities to legally challenge local feudal practices using the Domesday Book as a legal basis for their claims. With the wages of the lower classes still rising, the government also attempted to regulate demand and consumption by reinstating the sumptuary laws in 1363. These laws banned the lower classes from consuming certain products or wearing high-status clothes, and reflected the significance of the consumption of high-quality breads, ales and fabrics as a way of signifying social class in the late medieval period.
The 1370s also saw the government facing difficulties in funding the war with France. The impact of the Hundred Years War on the English economy as a whole remains uncertain; one suggestion is that the high taxation required to pay for the conflict "shrunk and depleted" the English economy, whilst others have argued for a more modest or even neutral economic impact for the war. The English government clearly found it difficult to pay for its army and from 1377 turned to a new system of poll taxes, aiming to spread the costs of taxation across the entirety of English society.
Peasants' Revolt of 1381
One result of the economic and political tensions was the Peasants' Revolt of 1381, in which widespread rural discontent was followed by an invasion of London involving thousands of rebels. The rebels had many demands, including the effective end of the feudal institution of serfdom and a cap on the levels of rural rents. The ensuing violence took the political classes by surprise and the revolt was not fully put down until the autumn; up to 7,000 rebels were executed in the aftermath. As a result of the revolt, parliament retreated from the poll tax and instead focused on a system of indirect taxes centring on foreign trade, drawing 80% of tax revenues from the exports of wool. Parliament continued to collect direct tax levies at historically high levels up until 1422, although they reduced them in later years. As a result, successive monarchs found that their tax revenues were uncertain, and Henry VI enjoyed less than half the annual tax revenue of the late 14th century. England's monarchs became increasingly dependent on borrowing and forced loans to meet the gap between taxes and expenditure and even then faced later rebellions over levels of taxation, including the Yorkshire rebellion of 1489 and the Cornish rebellion of 1497 during the reign of Henry VII.
Agriculture, fishing and mining
Collapse of the demesne and the creation of the farming system
The agricultural sector of the English economy, still by far the largest, was transformed by the Black Death. With the shortage of manpower after the Black Death, wages for agricultural labourers rapidly increased and then continued to grow steadily throughout the 15th century. As their incomes increased, labourers' living conditions and diet improved steadily. A trend for labourers to eat less barley and more wheat and rye, and to replace bread in their diet with more meat, had been apparent since before the Black Death, but intensified during this later period. Nonetheless, England's much smaller population needed less food and the demand for agricultural products fell. The position of the larger landowners became increasingly difficult. Revenues from demesne lands were diminishing as demand remained low and wage costs increased; nobles were also finding it more difficult to raise revenue from their local courts, fines and privileges in the years after the Peasants' Revolt of 1381. Despite attempts to increase money rents, by the end of the 14th century the rents paid from peasant lands were also declining, with revenues falling as much as 55% between the 1380s and 1420s.
Noble and church landowners responded in various ways. They began to invest significantly less in agriculture and land was increasingly taken out of production altogether. In some cases entire settlements were abandoned, and nearly 1,500 villages were lost during this period. Landowners also abandoned the system of direct management of their demesne lands, which had begun back in the 1180s, and turned instead to "farming" out large blocks of land for fixed money rents. Initially, livestock and land were rented out together under "stock and lease" contracts, but this was found to be increasingly impractical and contracts for farms became centred purely on land. Many of the rights to church parish tithes were also "farmed" out in exchange for fixed rents. This process was encouraged by the trend for tithe revenues to be increasingly "appropriated" by central church authorities, rather than being used to support local clergy: around 39% of parish tithes had been centralised in this way by 1535. As the major estates transformed, a new economic grouping, the gentry, became evident, many of them benefiting from the opportunities of the farming system. Land distribution remained heavily unequal; estimates suggest that the English nobility owned 20% of English lands, the Church and Crown 33%, the gentry 25%, and the remainder was owned by peasant farmers. Agriculture itself continued to innovate, and the loss of many English oxen to the murrain sickness in the crisis increased the number of horses used to plough fields in the 14th century, a significant improvement on older methods.
Forests, fishing and mining
The royal forests continued to diminish in size and decline in economic importance in the years after the Black Death. Royal enforcement of forest rights and laws became harder after 1348 and certainly after 1381, and by the 15th century the royal forests were a "shadow of their former selves" in size and economic significance. In contrast, the English fishing industry continued to grow, and by the 15th century domestic merchants and financiers owned fleets of up to a hundred fishing vessels operating from key ports. Herring remained a key fishing catch, although as demand for herring declined with rising prosperity, the fleets began to focus instead on cod and other deep-sea fish from the Icelandic waters. Despite being critical to the fishing industry, salt production in England diminished in the 15th century due to competition from French producers. The use of expensive freshwater fish ponds on estates began to decline during this period, as more of the gentry and nobility opted to purchase freshwater fish from commercial river fisheries.
Mining generally performed well at the end of the medieval period, helped by buoyant demand for manufactured and luxury goods. Cornish tin production plunged during the Black Death itself, leading to a doubling of prices. Tin exports also collapsed catastrophically, but picked up again over the next few years. By the turn of the 16th century, the available alluvial tin deposits in Cornwall and Devon had begun to decline, leading to the commencement of bell and surface mining to support the tin boom that had occurred in the late 15th century. Lead mining increased, and output almost doubled between 1300 and 1500. Wood and charcoal became cheaper once again after the Black Death, and coal production declined as a result, remaining depressed for the rest of the period – nonetheless, some coal production was occurring in all the major English coalfields by the 16th century. Iron production continued to increase; the Weald in the South-East began to make increased use of water-power, and overtook the Forest of Dean in the 15th century as England's main iron-producing region. The first blast furnace in England, a major technical step forward in metal smelting, was created in 1496 in Newbridge in the Weald.
Trade, manufacturing and the towns
The percentage of England's population living in towns continued to grow but in absolute terms English towns shrank significantly as a consequence of the Black Death, especially in the formerly prosperous east. The importance of England's Eastern ports declined over the period, as trade from London and the South-West increased in relative significance. Increasingly elaborate road networks were built across England, some involving the construction of up to thirty bridges to cross rivers and other obstacles. Nonetheless, it remained cheaper to move goods by water, and consequently timber was brought to London from as far away as the Baltic, and stone from Caen brought over the Channel to the South of England. Shipbuilding, particularly in the South-West, became a major industry for the first time and investment in trading ships such as cogs was probably the single biggest form of late medieval investment in England.
Rise of the cloth trade
Cloth manufactured in England increasingly dominated European markets during the 15th and early 16th centuries. England exported almost no cloth at all in 1347, but by 1400 around 40,000 cloths[nb 3] a year were being exported – the trade reached its first peak in 1447 when exports reached 60,000. Trade fell slightly during the serious depression of the mid-15th century, but picked up again and reached 130,000 cloths a year by the 1540s. The centres of weaving in England shifted westwards towards the Stour Valley, the West Riding, the Cotswolds and Exeter, away from the former weaving centres in York, Coventry and Norwich.
The wool and cloth trade was primarily now being run by English merchants themselves rather than by foreigners. Increasingly, the trade was also passing through London and the ports of the South-West. By the 1360s, 66–75% of the export trade was in English hands and by the 15th century this had risen to 80%; London managed around 50% of these exports in 1400, and as much as 83% of wool and cloth exports by 1540. The growth in the numbers of chartered trading companies in London, such as the Worshipful Company of Drapers or the Company of Merchant Adventurers of London, continued, and English producers began to provide credit to European buyers, rather than the other way around. Usury grew during the period, and few cases were prosecuted by the authorities.
There were some reversals. The attempts of English merchants to break through the Hanseatic League directly into the Baltic markets foundered amid the domestic political chaos of the Wars of the Roses in the 1460s and 1470s. The wine trade with Gascony fell by half during the war with France, and the eventual loss of the province brought an end to the English domination of the business and a temporary disruption to Bristol's prosperity until wines began to be imported through the city a few years later. Indeed, the disruption to both the Baltic and the Gascon trade contributed to a sharp reduction in the consumption of furs and wine by the English gentry and nobility during the 15th century.
There were advances in manufacturing, especially in the South and West. Despite some French attacks, the war with France created much coastal prosperity thanks to the huge expenditure on shipbuilding it required, and the South-West also became a centre for English piracy against foreign vessels. Metalworking continued to grow, in particular pewter working, which generated exports second only to cloth. By the 15th century pewter working in London was a large industry, with a hundred pewter workers recorded in London alone, and pewter working had also spread from the capital to eleven major cities across England. London goldsmithing remained significant but saw relatively little growth, with around 150 goldsmiths working in London during the period. Iron-working continued to expand, and in 1509 the first cast-iron cannon was made in England. This was reflected in the rapid growth in the number of iron-working guilds, from three in 1300 to fourteen by 1422.
The result was a substantial influx of money that in turn encouraged the import of manufactured luxury goods; by 1391 shipments from abroad routinely included "ivory, mirrors, paxes, armour, paper..., painted clothes, spectacles, tin images, razors, calamine, treacle, sugar-candy, marking irons, patens..., ox-horns and quantities of wainscot". Imported spices now formed a part of almost all noble and gentry diets, with the quantities being consumed varying according to the wealth of the household. The English government was also importing large quantities of raw materials, including copper, for manufacturing weapons. Many major landowners tended to focus their efforts on maintaining a single major castle or house rather than the dozens of a century before, but these were usually decorated much more luxuriously than previously. Major merchants' dwellings, too, were more lavish than in previous years.
Decline of the fair system
Towards the end of the 14th century, the position of fairs began to decline. The larger merchants, particularly in London, began to establish direct links with the larger landowners such as the nobility and the church; rather than the landowner buying from a chartered fair, they would buy directly from the merchant. Meanwhile, the growth of the indigenous English merchant class in the major cities, especially London, gradually crowded out the foreign merchants upon whom the great chartered fairs had largely depended. The crown's control over trade in the towns, especially the newer towns emerging towards the end of the 15th century that lacked central civic government, grew increasingly weak, making chartered status less relevant as more trade occurred from private properties and took place all year round. Nonetheless, the great fairs remained of importance well into the 15th century, as illustrated by their role in exchanging money, regional commerce and in providing choice for individual consumers.
The first studies into the medieval economy of England began in the 1880s, principally around the work of English jurist and historian Frederic Maitland. This scholarship, drawing extensively on documents such as the Domesday Book and the Magna Carta, became known as the "Whiggish" view of economic history, focusing on law and government. Late Victorian writers argued that change in the English medieval economy stemmed primarily from the towns and cities, leading to a progressive and universalist interpretation of development over the period, focusing on trade and commerce. Influenced by the evolution of Norman laws, Maitland argued that there was a clear discontinuity between the Anglo-Saxon and Norman economic systems.
In the 1930s the Whiggish view of the English economy was challenged by a group of scholars at the University of Cambridge, led by Eileen Power. Power and her colleagues widened the focus of study from legal and government documents to include "agrarian, archaeological, demographic, settlement, landscape and urban" evidence. This was combined with a neo-positivist and econometric leaning that was at odds with the older Victorian tradition in the subject. Power died in 1940, and her student and later husband, Michael Postan, took forward their work, coming to dominate the post-war field. Postan argued that demography was the principal driving force in the medieval English economy. In a distinctly Malthusian fashion, Postan proposed that the English agrarian economy saw little technical development during the period and by the early 14th century was unable to support the growing population, leading to inevitable famines and economic depression as the population came back into balance with land resources. Postan began the trend towards stressing continuities between the pre- and post-invasion economies, aided by fresh evidence emerging from the use of archaeological techniques to understand the medieval economy from the 1950s onwards.
A Marxist critique of Postan emerged from the 1950s onwards, captured in the academic journal Past & Present. This school of thought agreed that the agrarian economy was central to medieval England, but argued that agrarian issues had less to do with demography than with the mode of production and feudal class relations. In this model the English economy entered the crisis of the early 14th century because of the struggles between landlords and peasants for resources and the excessive extraction of rents by the nobility. Similar issues underpinned the Peasants' Revolt of 1381 and later tax rebellions. Historians such as Frank Stenton developed the "honour" as a unit of economic analysis and a focus for understanding feudal relations in peasant communities; Rodney Hilton developed the idea of the rise of the gentry as a key feature for understanding the late medieval period.
Fresh work in the 1970s and 1980s challenged both Postan's and the Marxist approaches to the medieval economy. Local studies of medieval economies, often in considerable detail and fusing new archaeological techniques and rescue archaeology with historical sources, frequently ran counter to their broader interpretations of change and development. The degree to which feudalism really existed and operated in England after the initial years of the invasion was thrown into considerable doubt, with historians such as David Crouch arguing that it existed primarily as a legal and fiscal model, rather than an actual economic system. Sociological and anthropological studies of contemporary economies, including the work of Ester Boserup, showed many flaws with Postan's key assumptions about demography and land use. The current academic preference is to see the English medieval economy as an "overlapping network of diverse communities", in which active local choices and decisions are the result of independent agency, rather than historical determinism.
- History of the English penny (c. 600 – 1066)
- History of the English penny (1154–1485)
- John and William Merfold
- Hanse is the old English word for "group".
- The precise mortality figures for the Black Death have been debated at length for many years.
- A "cloth" in medieval times was a single piece of woven fabric from a loom of a fixed size; an English broadcloth, for example, was 24 yards long and 1.75 yards wide (22 m by 1.6 m).
- Dyer 2009, p. 14.
- Bartlett, p. 313; Dyer 2009, p. 14.
- Homer, p. 58; Hatcher 1996, p. 40; Bailey, p. 55.
- Hodgett, p. 148; Ramsay, p. xxxi; Kowaleski, p. 248.
- Cantor 1982a, p. 18.
- Bailey, p. 41; Bartlett, p. 321; Cantor 1982a, p. 19.
- Hodgett, p. 57; Bailey, p. 47; Pounds, p. 15.
- Hillaby, p. 16; Dyer 2009, p. 115.
- Blanchard, p. 29.
- Jordan, p. 12; Bailey, p. 46; Aberth, pp. 26–7; Cantor 1982a, p. 18; Jordan, p. 12.
- Hodgett, p. 206; Bailey, p. 46.
- Jones, p. 201.
- Myers, pp. 161–4; Raban, p. 50; Barron, p. 78.
- Geddes, p. 181.
- Dyer 2009, p. 8.
- Bailey, p. 41.
- Cantor 1982a, pp. 17–8.
- Bailey, p. 44.
- Dyer 2009, p. 25.
- Dyer 2009, pp. 27, 29.
- Dyer 2009, pp. 19, 22.
- Dyer 2009, pp. 19–21.
- Bartlett, p. 313.
- Dyer 2009, p. 26.
- Douglas, p. 310.
- Bartlett, p. 319; Douglas, p. 311.
- Dyer 2009, pp. 36–8.
- Douglas, p. 312.
- Dyer 2009, pp. 81–2.
- Dyer 2009, p. 18.
- Huscroft, p. 97.
- Cantor 1982b, p. 63.
- Cantor 1982b, p. 59.
- Cantor 1982a, p. 18; Cantor 1982b, p. 81.
- Stenton, pp. 162, 166.
- Douglas, p. 303.
- Sutton, p. 2.
- Douglas, p. 313.
- Douglas, p. 314.
- Hillaby, pp. 16–7.
- Douglas, pp. 303–4.
- Stenton, p. 162.
- Douglas, p. 299.
- Douglas, pp. 299, 302.
- Cantor 1982a, p. 18, suggests an English population of 4 million; Jordan, p. 12, suggests 5 million.
- Burton, p. 8.
- Wood, p. 15.
- Myers, p. 55.
- Bailey, p. 51.
- Bailey, p. 53.
- Bailey, p. 53; Keen, p. 134.
- Bartlett, p. 368; Bailey, p. 44.
- Cantor 1982b, p. 83.
- Bailey, pp. 44, 48.
- Dyer 2002, p. 164; Dyer 2009, p. 174.
- Dyer 2009, p. 174.
- Cantor 1982b, p. 61.
- Huscroft, p. 173; Birrell, p. 149.
- Cantor 1982b, p. 66.
- Cantor 1982b, p. 68.
- Bartlett, p. 315.
- Postan 1972, p. 107.
- Postan 1972, p. 111.
- Danziger and Gillingham, p. 44.
- Danziger and Gillingham, p. 45.
- Cantor 1982a, p. 19.
- Danziger and Gillingham, p. 47.
- Dyer 2009, p. 131.
- Dyer 2000, p. 102.
- Bailey, p. 44; Dyer 2009, p. 128.
- Burton, pp. 55, 69; Dyer 2009, p. 114.
- Dyer 2009, p. 115.
- Dyer 2009, p. 156.
- Dyer 2009, pp. 156–7.
- Danziger and Gillingham, p. 38.
- Forey, pp. 111, 230; Postan 1972, p. 102.
- Forey, p. 230.
- Swanson, p. 89.
- Swanson, p. 90.
- Swanson, p. 89; Dyer 2009, p. 35.
- Dyer 2009, p. 195.
- Swanson, p. 101.
- Hodgett, p. 158; Barnes, p. 245.
- Homer, p. 57; Bayley pp. 131–2.
- Geddes, p. 169; Bailey, p. 54.
- Geddes, p. 169.
- Geddes, pp. 169, 172.
- Blanchard, p. 33.
- Homer, p. 57, pp. 61–2; Bailey, p. 55.
- Homer, pp. 57, 62.
- Homer, p. 62.
- Astill, p. 46.
- Hodgett, p. 57.
- Astill, pp. 48–9.
- Pounds, p. 80.
- Nightingale, p. 92; Danziger and Gillingham, p. 58.
- Geddes, pp. 174–5, 181.
- Homer, pp. 57–8.
- Bailey, p. 46; Homer, p. 64.
- Bartlett, p. 361.
- Bartlett, p. 361; Bailey, p. 52; Pilkinton p. xvi.
- Hodgett, p. 109.
- Bartlett, p. 363; Hodgett p. 109.
- Bartlett, p. 364.
- Hodgett, p. 147.
- Ramsay, p. xxxi.
- Stenton, p. 169.
- Stenton, pp. 169–70.
- Bailey, p. 49.
- Bolton pp. 32–3.
- Stenton, p. 163.
- Ramsay, p. xx.
- Myers, p. 68.
- Hodgett, p. 147; Ramsay, p. xx.
- Myers, p. 69; Ramsay, p. xx.
- Myers, p. 69.
- Myers, p. 69; Ramsay, p. xxiii.
- Dyer 2009, p. 209.
- Danziger and Gillingham, p. 65; Reyerson, p. 67.
- Danziger and Gillingham, p. 65.
- Dyer 2009, p. 192; Harding, p. 109.
- Dyer 2009, p. 209; Ramsay, p. xxiv; Danziger and Gillingham, p. 65.
- Hodgett, p. 148.
- Hodgett, p. 85.
- Postan 1972, pp. 245–7.
- Hillaby, p. 16.
- Hillaby, pp. 21–2.
- Hillaby, p. 22; Stenton, pp. 193–4.
- Stenton, pp. 193–4.
- Stenton, p. 194.
- Stenton, p. 197.
- Hillaby, p. 28.
- Stenton, p. 200.
- Hillaby, p. 29; Stenton, p. 200.
- Stenton, p. 199.
- Hillaby, p. 35.
- Stacey, p. 44.
- Lawler and Lawler, p. 6.
- Bartlett, p. 159; Postan 1972, p. 261.
- Hodgett, p. 203.
- Brown, Alfred 1989, p. 76.
- Carpenter, p. 51.
- Tait, pp. 102–3.
- Cooper, p. 127.
- Swedberg, p. 77.
- Bartlett, p. 321.
- Danziger and Gillingham, pp. 41–2.
- Bartlett, p. 316.
- Postan 1972, p. 169.
- Dyer 2009, p. 134.
- Cantor 1982a, p. 20; Aberth, p. 14.
- Aberth, pp. 13–4.
- Richardson, p. 32.
- Jordan, pp. 38, 54; Aberth, p. 20.
- Jordan, p. 54.
- Postan 1972, pp. 26–7; Aberth, p. 26; Cantor 1982a, p. 18; Jordan, p. 12.
- Aberth, p. 34; Jordan, pp. 17, 19.
- Jordan, p. 17.
- Fryde and Fryde, p. 754.
- Jordan, p. 78; Hodgett, p. 201.
- Dyer 2009, pp. 271, 274; Hatcher 1996, p. 37.
- Dyer 2009, p. 272; Hatcher 1996, p. 25.
- Dyer 2009, p. 274.
- Dyer 2009, pp. 272–3.
- Dyer 2009, p. 273.
- Fryde and Fryde, p. 753.
- Hatcher 1996, p. 61.
- Dyer 2009, p. 278.
- Kowaleski, p. 233.
- Hatcher 1996, p. 36; Lee, p. 127.
- Dyer 2009, pp. 300–1.
- Wood, pp. 120, 173.
- Fryde and Fryde, p. 753; Bailey, p. 47.
- Ramsay, p. xxii; Jones, p. 14.
- Jones, p. 15.
- Jones, p. 17.
- Jones, p. 16.
- Jones, p. 16; Woolgar, p. 20.
- Postan 1942, p. 10; McFarlane, p. 139.
- Jones, p. 21.
- Jones, p. 2.
- Jones, pp. 114–5.
- Jones, p. 207; McFarlane, p. 143.
- McFarlane, p. 143.
- McFarlane, p. 143; Hodgett, p. 204.
- McFarlane, p. 143; Hodgett, p. 204; Fletcher and MacCulloch, pp. 20–2.
- Fryde and Fryde, p. 753; Bailey, pp. 46–7.
- Bailey, p. 47.
- Dyer 2000, p. 91.
- Hodgett, p. 205.
- Hodgett, p. 206.
- Swanson, p. 94.
- Swanson, pp. 94, 106.
- Aberth, pp. 27–8.
- Cantor 1982b, p. 69.
- Dyer 2000, p. 107.
- Homer, p. 58.
- Hatcher 1996, p. 40.
- Bailey, p. 55.
- Bailey, p. 54.
- Geddes, p. 174.
- Bailey, p. 48.
- Hodgett, p. 110.
- Kowaleski, p. 235.
- Hodgett, p. 142.
- Lee, p. 127.
- Wood, p. 173.
- Postan 1972, p. 219.
- Kowaleski, p. 238; Postan 1972, p. 219; Pilkinton, p. xvi.
- Hatcher 2002, p. 266.
- Kowaleski, pp. 235, 252.
- Homer, p. 73.
- Homer, pp. 68, 70.
- Homer, p. 70.
- Geddes, p. 184.
- Ramsay, pp. xxxi–xxxii.
- Woolgar, p. 30.
- Ramsay, p. xxxii.
- Kermode, pp. 19–21.
- Dyer 2009, pp. 319–20.
- Ramsay, p. xxiv.
- Dyer 2009, p. 4.
- Dyer 2009, p. 4; Coss, p. 81.
- Rahman, pp. 177–8.
- Gerrard, p. 86.
- Crouch, pp. 178–9.
- Langdon, Astill and Myrdal, pp. 1–2.
- Dyer 2009, p. 5.
- Gerrard, pp. 98, 103.
- Coss, p. 86.
- Dyer 2009, p. 5; Langdon, Astill and Myrdal, p. 1.
- Crouch, p. 181; Coss, p. 81.
- Hinton, pp. vii–viii.
- Crouch, p. 271; Coss, p. 81.
- Dyer 2009, p. 5; Langdon, Astill and Myrdal, p. 2.
- Crouch, p. 186.
- Dyer 2009, pp. 7–8; Langdon, Astill and Myrdal, p. 3.
- Aberth, John. (2001) From the Brink of the Apocalypse: Confronting Famine, War, Plague and Death in the Later Middle Ages. London: Routledge. ISBN 0-415-92715-3.
- Abulafia, David. (ed) (1999) The New Cambridge Medieval History: c. 1198-c. 1300. Cambridge: Cambridge University Press. ISBN 978-0-521-36289-4.
- Anderson, Michael. (ed) (1996) British Population History: From the Black Death to the Present Day. Cambridge: Cambridge University Press. ISBN 978-0-521-57884-4.
- Archer, Rowena E. and Simon Walker. (eds) (1995) Rulers and Ruled in Late Medieval England. London: Hambledon Press. ISBN 978-1-85285-133-0.
- Armstrong, Lawrin, Ivana Elbl and Martin M. Elbl. (eds) (2007) Money, Markets and Trade in Late Medieval Europe: Essays in Honour of John H. A. Munro. Leiden: BRILL. ISBN 978-90-04-15633-3.
- Astill, Grenville. (2000) "General Survey 600–1300," in Palliser (ed) 2000.
- Astill, Grenville and John Langdon (eds) (2007) Medieval Farming and Technology: the Impact of Agricultural Change in Northwest Europe. Leiden: BRILL. ISBN 978-90-04-10582-9.
- Bailey, Mark. (1996) "Population and Economic Resources," in Given-Wilson (ed) 1996.
- Barron, Caroline. (2005) London in the Later Middle Ages: Government and People 1200–1500. Oxford: Oxford University Press. ISBN 978-0-19-928441-2.
- Barnes, Carl F. (2005) "A Note on Villard de Honnecourt and Metal," in Bork (ed) 2005.
- Bartlett, Robert. (2000) England under the Norman and Angevin Kings, 1075–1225. Oxford: Oxford University Press. ISBN 978-0-19-925101-8.
- Bayley, J. (2009) "Medieval Precious Metal Refining: Archaeology and Contemporary Texts Compared," in Martinon-Torres and Rehren (eds) 2009.
- Birrell, Jean. (1988) "Forest Law and the Peasantry in the Later Thirteenth Century," in Coss and Lloyd (eds) 1988.
- Blair, John and Nigel Ramsay. (eds) (2001) English Medieval Industries: Craftsmen, Techniques, Products. London: Hambledon Press. ISBN 978-1-85285-326-6.
- Blanchard, Ian. (2002) "Lothian and Beyond: the Economy of the "English Empire" of David I," in Britnell and Hatcher (eds) 2002.
- Bolton, J. K. (2007) "English Economy in the Early Thirteenth Century," in Church (ed) 2007.
- Bork, Robert Odell. (ed) (2005) De Re Metallica: The Uses of Metal in the Middle Ages. Aldershot, UK: Ashgate. ISBN 978-0-7546-5048-5.
- Britnell, Richard and John Hatcher (eds). (2002) Progress and Problems in Medieval England: Essays in Honour of Edward Miller. Cambridge: Cambridge University Press. ISBN 978-0-521-52273-1.
- Britnell, Richard and Ben Dodds (eds) (2008) Agriculture and Rural Society after the Black Death: common themes and regional variations. Hatfield, UK: University of Hertfordshire Press. ISBN 978-1-902806-79-2.
- Brown, R. Allen. (ed) (1989) Anglo-Norman Studies XI: Proceedings of the Battle Conference 1988. Woodbridge, UK: Boydell. ISBN 978-0-85115-526-5.
- Brown, Alfred L. (1989) The Governance of Late Medieval England, 1272–1461. Stanford: Stanford University Press. ISBN 978-0-8047-1730-4.
- Burton, Janet E. (1994) Monastic and Religious Orders in Britain, 1000–1300. Cambridge: Cambridge University Press. ISBN 978-0-521-37797-3.
- Cantor, Leonard (ed). (1982) The English Medieval Landscape. London: Croom Helm. ISBN 978-0-7099-0707-7.
- Cantor, Leonard. (1982a) "Introduction: the English Medieval Landscape," in Cantor (ed) 1982.
- Cantor, Leonard. (1982b) "Forests, Chases, Parks and Warrens," in Cantor (ed) 1982.
- Carpenter, David. (2004) The Struggle for Mastery: The Penguin History of Britain 1066–1284. London: Penguin Books. ISBN 978-0-14-014824-4.
- Church, S. D. (ed) (2007) King John: New Interpretations. Woodbridge, UK: Boydell Press. ISBN 978-0-85115-947-8.
- Cooper, Alan. (2006) Bridges, Law and Power in Medieval England, 700–1400. Woodbridge, UK: Boydell Press. ISBN 978-1-84383-275-1.
- Coss, Peter. (2002) "From Feudalism to Bastard Feudalism," in Fryde, Monnet and Oexle (eds) (2002)
- Coss, Peter and S.D. Lloyd (eds). (1988) Thirteenth Century England II: Proceedings of the Newcastle upon Tyne Conference 1987. Woodbridge, UK: Boydell Press. ISBN 978-0-85115-513-5.
- Crouch, David. (2005) The Birth of Nobility: Constructing Aristocracy in England and France : 900–1300. Harlow, UK: Pearson. ISBN 978-0-582-36981-8.
- Danziger, Danny and John Gillingham. (2003) 1215: The Year of the Magna Carta. London: Coronet Books. ISBN 978-0-7432-5778-7.
- Dobbin, Frank. (ed) (2004) The Sociology of the Economy. New York: Russell Sage Foundation. ISBN 978-0-87154-284-7.
- Douglas, David Charles. (1962) William the Conqueror: the Norman Impact upon England. Berkeley: University of California Press.
- Dyer, Christopher. (2000) Everyday life in medieval England. London: Hambledon. ISBN 978-1-85285-201-6.
- Dyer, Christopher. (2009) Making a Living in the Middle Ages: The People of Britain, 850 – 1520. London: Yale University Press. ISBN 978-0-300-10191-1.
- Fletcher, Anthony and Diarmaid MacCulloch. (2008) Tudor Rebellions. Harlow, UK: Pearson Education. ISBN 978-1-4058-7432-8.
- Forey, Alan. (1992) The Military Orders from the Twelfth to the Early Fourteenth Centuries. London: Macmillan. ISBN 0-333-46235-1.
- Fryde, E. B. and Natalie Fryde. (1991) "Peasant Rebellion and Peasant Discontents," in Miller (ed) 1991.
- Fryde, Natalie, Pierre Monnet and Oto Oexle. (eds) (2002) Die Gegenwart des Feudalismus. Göttingen, Germany: Vandenhoeck and Ruprecht. ISBN 978-3-525-35391-2.
- Geddes, Jane. (2001) "Iron," in Blair and Ramsay (eds) 2001.
- Gerrard, Christopher. (2003) Medieval Archaeology: Understanding Traditions and Contemporary Approaches. Abingdon, UK: Routledge. ISBN 978-0-415-23463-4.
- Given-Wilson, Chris (ed). (1996) An Illustrated History of Late Medieval England. Manchester: Manchester University Press. ISBN 978-0-7190-4152-5.
- Hamilton, J. S. (ed) (2006) Fourteenth Century England, Volume 4. Woodbridge, UK: Boydell Press. ISBN 978-1-84383-220-1.
- Harriss, G. L. (1975) King, Parliament and Public Finance in Medieval England to 1369. Oxford: Clarendon Press. ISBN 0-19-822435-4
- Hatcher, John. (1996) "Plague, Population and the English Economy," in Anderson (ed) 1996.
- Hatcher, John. (2002) "The great slump of the mid-fifteenth century," in Britnell and Hatcher (eds) 2002.
- Harding, Alan. (1997) England in the Thirteenth Century. Cambridge: Cambridge University Press. ISBN 978-0-521-31612-5.
- Hicks, Michael (eds). (2001) The Fifteenth Century 2: Revolution and Consumption in Late Medieval England. Woodbridge, UK: Boydell. ISBN 978-0-85115-832-7.
- Hillaby, Joe. (2003) "Jewish Colonisation in the Twelfth Century," in Skinner (ed) 2003.
- Hinton, David. (2002) Archaeology, Economy and Society: England from the Fifth to the Fifteenth Century. Abingdon, UK: Routledge. ISBN 978-0-203-03984-7.
- Hodgett, Gerald. (2006) A Social and Economic History of Medieval Europe. Abingdon, UK: Routledge. ISBN 978-0-415-37707-2.
- Homer, Ronald F. (2010) "Tin, Lead and Pewter," in Blair and Ramsay (eds) 2001.
- Huscroft, Richard. (2005) Ruling England, 1042–1217. Harlow, UK: Pearson. ISBN 978-0-582-84882-5.
- Jones, Dan. (2010) Summer of Blood: The Peasants' Revolt of 1381. London: Harper. ISBN 978-0-00-721393-1.
- Jordan, William Chester. (1997) The Great Famine: Northern Europe in the Early Fourteenth Century. Princeton: Princeton University Press. ISBN 978-0-691-05891-7.
- Keen, Laurence. (1989) "Coastal Salt Production in Norman England," in Brown R. (ed) 1989.
- Kermode, Jenny. (1998) Medieval Merchants: York, Beverley and Hull in the Later Middle Ages. Cambridge: Cambridge University Press. ISBN 978-0-521-52274-8.
- Kowaleski, Maryanne. (2007) "Warfare, Shipping, and Crown Patronage: The Economic Impact of the Hundred Years War on the English Port Towns," in Armstrong, Elbl and Elbl (eds) 2007.
- Langdon, John, Grenville Astill and Janken Myrdal. (1997) "Introduction," in Astill and Langdon (eds) 1997.
- Lawler, John and Gail Gates Lawler. (2000) A Short Historical Introduction to the Law of Real Property. Washington DC: Beard Books. ISBN 978-1-58798-032-9.
- Lee, John. (2001) "The Trade of Fifteenth Century Cambridge and its Region," in Hicks (ed) 2001.
- Martinon-Torres, Marcos and Thilo Rehren (eds). (2009) Archaeology, History and Science: Integrating Approaches to Ancient Materials. Walnut Creek, California: Left Coast Press. ISBN 978-1-59874-350-0.
- McFarlane, Kenneth Bruce. (1981) England in the Fifteenth Century: Collected Essays. London: Hambledon Press. ISBN 978-0-907628-01-9.
- Miller, Edward. (ed) (1991) The Agrarian History of England and Wales, Volume III: 1348–1500. Cambridge: Cambridge University Press. ISBN 978-0-521-20074-5.
- Myers, A. R. (1971) England in the Late Middle Ages. Harmondsworth, UK: Penguin. ISBN 0-14-020234-X.
- Nightingale, Pamela. (2002) "The growth of London in the medieval English economy," in Britnell and Hatcher (eds) 2002.
- Palliser, D. M. (ed) (2000) The Cambridge Urban History of Britain: 600 – 1540, Volume 1. Cambridge: Cambridge University Press. ISBN 978-0-521-44461-3.
- Pilkinton, Mark Cartwright. (1997) Bristol. Toronto: University of Toronto Press. ISBN 978-0-8020-4221-7.
- Postan, M. M. (1942) "Some Social Consequences of the Hundred Years War," in Economic History Review, XII (1942).
- Postan, M. M. (1972) The Medieval Economy and Society. Harmondsworth, UK: Penguin. ISBN 0-14-020896-8.
- Pounds, Norman John Greville. (2005) The Medieval City. Westport, CT: Greenwood Press. ISBN 978-0-313-32498-7.
- Raban, Sandra. (2000) England Under Edward I and Edward II, 1259–1327. Oxford: Blackwell. ISBN 978-0-631-22320-7.
- Rahman, M. M. (2005) Encyclopaedia of Historiography. New Delhi: Anmol. ISBN 978-81-261-2305-6.
- Ramsay, Nigel. (2001) "Introduction," in Blair and Ramsay (eds) 2001.
- Reyerson, Kathryn L. (1999) "Commerce and communications," in Abulafia (ed) 1999.
- Richardson, Amanda. (2006) "Royal Landscapes," in Hamilton (ed) 2006.
- Skinner, Patricia (ed). (2003) The Jews in Medieval Britain: Historical, Literary, and Archaeological Perspectives. Woodbridge, UK: Boydell. ISBN 978-0-85115-931-7.
- Stacey, Robert C. (2003) "The English Jews under Henry III," in Skinner (ed) 2003.
- Stenton, Doris Mary. (1976) English Society in the Early Middle Ages (1066–1307). Harmondsworth, UK: Penguin. ISBN 0-14-020252-8.
- Sutton, Anne. F. (2005) The Mercery of London: Trade, Goods and People, 1130–1578. Aldershot, UK: Ashgate. ISBN 978-0-7546-5331-8.
- Swanson, Robert N. (2008) "A universal levy: tithes and economic agency," in Britnell and Dodds (eds) 2008.
- Swedberg, Richard. (2004) "On Legal Institutions and Their Role in the Economy," in Dobbin (ed) 2004.
- Tait, James. (1999) The Medieval English Borough: Studies on its Origins and Constitutional History. Manchester: Manchester University Press. ISBN 978-0-7190-0339-4.
- Wood, Diana. (2002) Medieval Economic Thought. Cambridge: Cambridge University Press. ISBN 978-0-521-45893-1.
- Woolgar, Christopher. (1995) "Diet and Consumption in Gentry and Noble Households: A Case Study from around the Wash," in Archer and Walker (eds) 1995.
The German reunification (German: Deutsche Wiedervereinigung) was the process in 1990 by which the German Democratic Republic (GDR/East Germany) joined the Federal Republic of Germany (FRG/West Germany), and by which Berlin reunited into a single city, as provided by the then Article 23 of the FRG's constitution, the Grundgesetz. The end of the unification process is officially referred to as German unity (German: Deutsche Einheit), celebrated on 3 October (German Unity Day).
The East German regime started to falter in May 1989, when the removal of Hungary's border fence opened a hole in the Iron Curtain. It caused an exodus of thousands of East Germans fleeing to West Germany via Hungary and Austria. The Peaceful Revolution, a series of protests by East Germans, led to the GDR's first free elections on 18 March 1990 and to negotiations between the GDR and FRG that culminated in a Unification Treaty, while negotiations between the GDR, the FRG and the four occupying powers produced the so-called "Two Plus Four Treaty" (Treaty on the Final Settlement with Respect to Germany), granting full sovereignty to a unified German state, whose two parts had previously still been bound by a number of limitations stemming from their post-World War II status as occupied regions. The united Germany remained a member of the European Community (later the European Union) and of NATO.
There is debate as to whether the events of 1990 should properly be referred to as a "reunification" or a "unification". Proponents of the former use the term in contrast with the initial unification of Germany in 1871. When the Saarland joined the Federal Republic of Germany on 1 January 1957, the event was likewise termed the "Small Reunification". Popular parlance, which uses "reunification", is deeply affected by the 1989 opening of the Berlin Wall (and the rest of the inner German border) and the physical reunification of the city of Berlin (itself divided only since 1945). Others, however, argue that 1990 represented a "unification" of two German states into a larger entity which, in its resulting form, had never before existed (see History of Germany).
For political and diplomatic reasons, West German politicians carefully avoided the term "reunification" during the run-up to what Germans frequently refer to as die Wende. The official and most common term in German is "Deutsche Einheit" (in English "German unity"). German unity is the term that Hans-Dietrich Genscher used in front of international journalists to correct them when they asked him about "reunification" in 1990.
After 1990, the term "die Wende" became more common. The term generally refers to the events (mostly in Eastern Europe) that led up to the actual reunification; in its usual context, this term loosely translates to "the turning point", without any further meaning. When referring to the events surrounding unification, however, it carries the cultural connotation of the time and the events in the GDR that brought about this "turnaround" in German history. However, civil rights activists from Eastern Germany rejected the term Wende as it was introduced by SED's Secretary General Egon Krenz.
Precursors to reunification
In 1945, the Third Reich ended in defeat and Germany was divided into two separate areas, the east controlled as part of the communist Soviet bloc and the west aligned with capitalist Western Europe (which later formed the European Community). The division extended to military alliances, with the two sides eventually joining the Warsaw Pact and NATO respectively. The capital city of Berlin was divided into four occupied sectors of control, under the Soviet Union, the United States, the United Kingdom and France. Germans lived under such imposed divisions throughout the ensuing Cold War.
Into the 1980s, the Soviet Union experienced a period of economic and political stagnation, and it correspondingly decreased its intervention in Eastern Bloc politics. In 1987, US President Ronald Reagan gave a speech at the Brandenburg Gate challenging Soviet leader Mikhail Gorbachev to "tear down this wall" that had separated Berlin. The wall had stood as an icon for the political and economic division between East and West, a division that Churchill had referred to as the "Iron Curtain". In early 1989, under Gorbachev's new policies of glasnost (openness) and perestroika (economic restructuring), the Solidarity movement took hold in Poland. Further inspired by other images of brave defiance, a wave of revolutions swept throughout the Eastern Bloc that year. In May 1989, Hungary removed its border fence and thousands of East Germans escaped to the West. The turning point in Germany, called "Die Wende", was marked by the "Peaceful Revolution" leading to the removal of the Berlin Wall, with East and West Germany subsequently entering into negotiations toward eliminating the division that had been imposed upon Germans more than four decades earlier.
Process of reunification
On 28 November 1989—two weeks after the fall of the Berlin Wall—West German Chancellor Helmut Kohl announced a 10-point program calling for the two Germanies to expand their cooperation with a view toward eventual reunification.
Initially, no timetable was proposed. However, events rapidly came to a head in early 1990. First, in March, the Party of Democratic Socialism—the former Socialist Unity Party of Germany—was heavily defeated in East Germany's first free elections. A grand coalition was formed under Lothar de Maizière, leader of the East German wing of Kohl's Christian Democratic Union, on a platform of speedy reunification. Second, East Germany's economy and infrastructure underwent a swift and near-total collapse. While East Germany had long been reckoned as having the most robust economy in the Soviet bloc, the removal of Communist discipline revealed the ramshackle foundations of that system. The East German mark had been practically worthless outside of East Germany for some time before the events of 1989–90, further magnifying the problem.
Economic merger
Discussions immediately began for an emergency merger of the Germanies' economies. On 18 May 1990, the two German states signed a treaty agreeing on monetary, economic and social union. This treaty is called "Vertrag über die Schaffung einer Währungs-, Wirtschafts- und Sozialunion zwischen der Deutschen Demokratischen Republik und der Bundesrepublik Deutschland" in German and came into force on 1 July 1990, with the Deutsche Mark replacing the East German mark as the official currency of East Germany. The Deutsche Mark had a very high reputation among the East Germans and was considered stable. While the GDR transferred its financial policy sovereignty to West Germany, the West started granting subsidies for the GDR budget and social security system. At the same time many West German laws came into force in the GDR. This created a suitable framework for a political union by diminishing the huge gap between the two existing political, social, and economic systems.
German Reunification Treaty
"German reunification treaty", called "Einigungsvertrag" or "Wiedervereinigungsvertrag" in German, had been negotiated between the two Germanies since 2 July 1990, signed on 31 August of that year and finally approved by large majorities in the legislative chambers of both countries on 20 September 1990. The amendments to the Federal Republic's Basic Law that were foreseen in the Unification Treaty or necessary for its implementation were adopted by the Federal Statute of 23 September 1990. With that last step, Germany was officially reunited at 00:00 CET on 3 October 1990. East Germany joined the Federal Republic as the five Länder (states) of Brandenburg, Mecklenburg-Vorpommern, Saxony, Saxony-Anhalt and Thuringia. These states had been the five original states of East Germany, but had been abolished in 1952 in favour of a centralised system. As part of the 18 May treaty, the five East German states had been reconstituted on 23 August. At the same time, East and West Berlin reunited into one city, which became a city-state along the lines of the existing city-states of Bremen and Hamburg.
Constitutional merger
The process chosen was one of two options implemented in the West German constitution (Basic Law) of 1949. Via that document's (then-existing) Article 23, any new prospective Länder could adhere to the Basic Law by a simple majority vote. By this process, the five reconstituted states of East Germany voted to join West Germany. The initial eleven joining states of 1949 constituted the Trizone. West Berlin had been proposed as the 12th state, but was legally inhibited by Allied objections since Berlin as a whole was legally a quadripartite occupied area. In 1957 the Saar Protectorate joined West Germany under the Article 23 procedure as Saarland. As the five refounded eastern German states formally joined the Federal Republic using the Article 23 procedure, the area in which the Basic Law was in force simply extended to include them. The alternative would have been for East Germany to join as a whole along the lines of a formal union between two German states that then would have had to, amongst other things, create a new constitution for the newly established country.
Under the model that was chosen, however, the territory of the former German Democratic Republic was simply incorporated into the Federal Republic of Germany, and accordingly the Federal Republic of Germany, now enlarged to include the Eastern States, continued legally to exist under the same legal personality that was founded in May 1949.
Thus, the reunification was not a merger that created a third state out of the two, but an incorporation, by which West Germany absorbed East Germany. Accordingly, on Unification Day, 3 October 1990, the German Democratic Republic ceased to exist, giving way to five new Federal States, and East and West Berlin were also unified as a single city-state, forming a sixth new Federal State. The new Federal States immediately became parts of the Federal Republic of Germany, so that it was enlarged to include the whole territory of the former East Germany and Berlin.
International effects
The practical result of that model is that the now expanded Federal Republic of Germany continued to be a party to all the treaties it had signed prior to the moment of reunification, and thus inherited the old West Germany's seats at the UN, NATO, the European Communities, etc.; also, the same Basic Law and the same laws that were in force in the Federal Republic continued automatically in force, but now applied to the expanded territory.
To facilitate this process and to reassure other countries, some changes were made to the "Basic Law" (constitution). Article 146 was amended so that Article 23 of the current constitution could be used for reunification. After the five "New Länder" of East Germany had joined, the constitution was amended again to indicate that all parts of Germany are now unified. Article 23 was rewritten and it can still be understood as an invitation to others (e.g. Austria) to join, although the main idea of the change was to calm fears in (for example) Poland, that Germany would later try to rejoin with former eastern territories of Germany that were now Polish or parts of other countries in the East. The changes effectively formalised the Oder-Neisse line as Germany's permanent eastern border. These amendments to the Basic Law were mandated by Article I, section 4 of the Two Plus Four Treaty.
Day of German Unity
While the Basic Law was modified rather than replaced by a constitution as such, it still permits the adoption of a formal constitution by the German people at some time in the future.
To commemorate the day that marks the official unification of the former East and West Germany in 1990, 3 October has since then been the official German national holiday, the Day of German Unity (Tag der deutschen Einheit). It replaced the previous national holiday held in West Germany on 17 June commemorating the Uprising of 1953 in East Germany and the national holiday on 7 October in the GDR.
Foreign opposition
For decades, Germany's allies had stated their support for reunification; Israeli Prime Minister Yitzhak Shamir, who speculated that a country that "decided to kill millions of Jewish people" in the Holocaust "will try to do it again", was one of the few world leaders to publicly oppose it. As reunification became a realistic possibility, however, significant NATO and European opposition emerged in private.
A poll of four countries in January 1990 found that a majority of surveyed Americans and French supported reunification, while British and Poles were more divided. 69% of Poles and 50% of French and British stated that they worried about a reunified Germany becoming "the dominant power in Europe". Those surveyed stated several concerns, including Germany again attempting to expand its territory, a revival of Nazism, and the German economy becoming too powerful. While Britons, French and Americans favored Germany remaining a member of NATO, a majority of Poles supported neutrality for the reunified nation.
Britain and France
Before the fall of the Berlin Wall, British Prime Minister Margaret Thatcher told Soviet President Mikhail Gorbachev that neither the United Kingdom nor Western Europe wanted the reunification of Germany. Thatcher also clarified that she wanted the Soviet leader to do what he could to stop it, telling Gorbachev "We do not want a united Germany". Although she welcomed East German democracy, Thatcher worried that a rapid reunification might weaken Gorbachev, and favoured Soviet troops staying in East Germany as long as possible to act as a counterweight to a united Germany.
Thatcher, who carried in her handbag a map of Germany's 1937 borders to show others the "German problem", feared that its "national character", size and central location in Europe would cause the nation to be a "destabilizing rather than a stabilizing force in Europe". In December 1989, she warned fellow European Community leaders at a Strasbourg summit that Kohl attended, "We defeated the Germans twice! And now they're back!" Although Thatcher had stated her support for German self-determination in 1985, she now argued that Germany's allies had only supported reunification because they had not believed it would ever happen. Thatcher favoured a transition period of five years for reunification, during which the two Germanies would remain separate states. Although she gradually softened her opposition, as late as March 1990 Thatcher summoned historians and diplomats to a seminar at Chequers to ask "How dangerous are the Germans?" and the French ambassador in London reported that Thatcher had told him, "France and Great Britain should pull together today in the face of the German threat."
Similarly, a representative of French President François Mitterrand reportedly told an aide to Gorbachev, "France by no means wants German reunification, although it realises that in the end it is inevitable." At the Strasbourg summit, Mitterrand and Thatcher discussed the fluidity of Germany's historical borders. On 20 January 1990, Mitterrand told Thatcher that a unified Germany could "make more ground than even Hitler had". He predicted that "bad" Germans would reemerge, who might seek to regain former German territory lost after World War II and would likely dominate Hungary, Poland, and Czechoslovakia, leaving "only Romania and Bulgaria for the rest of us". The two leaders saw no way to prevent reunification, however, as "None of us was going to declare war on Germany". Mitterrand recognized before Thatcher that reunification was inevitable and adjusted his views accordingly; unlike her, he was hopeful that participation in a single currency and other European institutions could control a united Germany. Mitterrand still wanted Thatcher to publicly oppose unification, however, to obtain more concessions from Germany.
The rest of Europe
Other European leaders' opinion of reunification was "icy". Italy's Giulio Andreotti warned against a revival of "pan-Germanism", and the Netherlands' Ruud Lubbers questioned the German right to self-determination. They shared Britain and France's concerns over a return to German militarism and the economic power of a reunified nation. The consensus opinion was that reunification, if it must occur, should not occur until at least 1995 and preferably much later.
The four powers
"The United States – and President George H. W. Bush – recognized that Germany had gone through a long democratic transition. It had been a good friend, it was a member of NATO. Any issues that had existed in 1945, it seemed perfectly reasonable to lay them to rest. For us, the question wasn't should Germany unify? It was how and under what circumstances? We had no concern about a resurgent Germany..."
The victors of World War II—the United States, the Soviet Union, the United Kingdom, and France, comprising the Four-Power Authorities—retained authority over Berlin, such as control over air travel and its political status. The Soviet Union sought early to use reunification as a way to push Germany out of NATO into neutrality, removing nuclear weapons from its territory. West Germany interpreted a 21 November 1989 diplomatic message on the topic, however, as meaning that only two weeks after the Wall's collapse the Soviet leadership already anticipated reunification. This belief, and the worry that his rival Genscher might act first, encouraged Kohl on 28 November to announce a detailed "Ten Point Program for Overcoming the Division of Germany and Europe". While the speech was very popular within West Germany, it caused concern among other European governments, with whom Kohl had not discussed the plan.
The Americans did not share the Europeans' and Russians' historical fears over German expansionism, but wished to ensure that Germany would stay within NATO. In December 1989, the administration of President George H. W. Bush made a united Germany's continued NATO membership a requirement for supporting reunification. Kohl agreed, although less than 20% of West Germans supported remaining within NATO; he also wished to avoid a neutral Germany, as he believed that would destroy NATO, cause the United States and Canada to leave Europe, and lead Britain and France to form their own alliance. The United States increased its support of Kohl's policies, as it feared that otherwise Oskar Lafontaine, a critic of NATO, might become Chancellor.
Horst Teltschik, Kohl's foreign policy advisor, later recalled that Germany would have paid "100 billion deutschmarks" had the Soviets demanded it. The USSR did not make such great demands, however, with Gorbachev stating in February 1990 that "The Germans must decide for themselves what path they choose to follow". In May 1990 he repeated his remark in the context of NATO membership while meeting Bush, amazing both the Americans and Germans.
During a NATO-Warsaw Pact conference in Ottawa, Canada, Genscher persuaded the four powers to treat the two Germanies as equals instead of defeated junior partners, and for the six nations to negotiate alone. Although the Dutch, Italians, Spanish, and other NATO powers opposed such a structure, which meant that the alliance's boundaries would change without their participation, the six nations began negotiations in March 1990. After Gorbachev's May agreement on German NATO membership, the Soviets further agreed that Germany would be treated as an ordinary NATO country, with the exception that former East German territory would not have foreign NATO troops or nuclear weapons. In exchange, Kohl agreed to reduce the sizes of both Germanies' militaries, renounce weapons of mass destruction, and accept the postwar Oder-Neisse line as Germany's eastern border. In addition, Germany agreed to pay about 55 billion deutschmarks to the Soviet Union in gifts and loans, the equivalent of eight days of West German GDP.
Mitterrand agreed to reunification in exchange for a commitment from Kohl to the European Economic and Monetary Union. The British insisted to the end, against Soviet opposition, that NATO be allowed to hold maneuvers in the former East Germany. After the Americans intervened, both the UK and France ratified the Treaty on the Final Settlement with Respect to Germany in September 1990, thus finalizing the reunification for purposes of international law. Thatcher later wrote that her opposition to reunification had been an "unambiguous failure".
On 14 November 1990, the united Germany and Poland signed the German–Polish Border Treaty, finalizing Germany's boundaries as permanent along the Oder-Neisse line, and thus, renouncing any claims to Silesia, East Brandenburg, Farther Pomerania, and the southern area of the former province of East Prussia. The treaty also granted certain rights for political minorities on either side of the border. The following month, the first all-German free elections since 1932 were held, resulting in an increased majority for the coalition government of Chancellor Helmut Kohl.
On 15 March 1991, the Treaty on the Final Settlement with Respect to Germany entered into force, putting an end to the remaining limitations on German sovereignty that resulted from the post World War II arrangements.
Inner reunification
Vast differences between the former East Germany and West Germany (for example, in lifestyle, wealth, political beliefs and other matters) remain, and it is therefore still common to speak of eastern and western Germany distinctly. The eastern German economy has struggled since unification, and large subsidies are still transferred from west to east. The area of the former East Germany has often been compared to underdeveloped southern Italy and to the Southern United States during Reconstruction after the American Civil War. While the East German economy has recovered recently, the differences between East and West remain present.
Politicians and scholars have frequently called for a process of "inner reunification" of the two countries and asked whether there is "inner unification or continued separation". "The process of German unity has not ended yet", proclaimed Chancellor Angela Merkel, who grew up in East Germany, in 2009. Nevertheless, the question of this "inner reunification" has been widely discussed in the German public, politically, economically, culturally, and also constitutionally since 1989.
Politically, since the fall of the Wall, the successor party of the former East German socialist state party has become a major force in German politics. It was renamed PDS, and, later, merged with the Western leftist party WASG to form the party Die Linke (The Left).
Constitutionally, the Basic Law (Grundgesetz), the West German constitution, provided two pathways for a unification. The first was the implementation of a new all-German constitution, safeguarded by a popular referendum. Indeed, this was the original idea of the "Grundgesetz" in 1949: it was named a "basic law" instead of a "constitution" because it was considered provisional. The second way was more technical: the implementation of the constitution in the East, using a paragraph originally designed for the West German states (Bundesländer) in case of internal re-organization, such as the merger of two states. While this latter option was chosen as the most feasible one, the first option was partly regarded as a means to foster the "inner reunification".
A public manifestation of coming to terms with the past (Vergangenheitsbewältigung) is the existence of the so-called Birthler-Behörde, the Federal Commissioner for the Stasi Archives, which collects and maintains the files of the East German security apparatus.
The economic reconstruction of the former East Germany following the reunification required large amounts of public funding, which turned some areas into boom regions, although overall unemployment remains higher than in the former West. Unemployment was part of a process of deindustrialization that started rapidly after 1990. The causes of this process are disputed in political conflicts up to the present day. Most often the bureaucracy and lack of efficiency of the East German economy are highlighted, and the de-industrialization is seen as an inevitable outcome of the "Wende". But many East German critics point out that it was the shock-therapy style of privatization which did not leave room for East German enterprises to adapt, and that alternatives such as a slower transition had been possible. Reunification did, however, lead to a large rise in the average standard of living in the former East Germany and a stagnation in the West, as some $2 trillion in public spending was transferred East. Between 1990 and 1995, gross wages in the east rose from 35% to 74% of western levels, while pensions rose from 40% to 79%. Unemployment reached double the western level as well.
In terms of media usage and reception, the country remains partially divided, especially among the older generations. Mentality gaps between East and West persist, but so does sympathy. Additionally, the daily-life exchange between Easterners and Westerners has not been as extensive as expected. Young people's knowledge of the former East Germany is very low. Some people in eastern Germany engage in "Ostalgie", a certain nostalgia for the time before the Wall came down.
Reunified Berlin from an Urban Planning Perspective
While the fall of the Berlin Wall had broad economic, political and social impacts globally, it also had significant consequences for the local urban environment. In fact, the events of 9 November 1989 saw East Berlin and West Berlin, two halves of a single city that had ignored one another for the better part of 40 years, finally "in confrontation with one another". As expressed by Grésillon, "the fall of the Berlin Wall [marked] the end of 40 years of divided political, economic and cultural histories" and was "accompanied by a strong belief that [the city] was now back on its 'natural' way to become again a major metropolis". In the context of urban planning, in addition to a wealth of new opportunity and the symbolism of two formerly independent nations being re-joined, the reunification of Berlin presented numerous challenges. The city underwent massive redevelopment, involving the political, economic and cultural environment of both East and West Berlin. However, the "scar" left by the Wall, which ran directly through the very heart of the city, had consequences for the urban environment that planning still needs to address. Despite planning efforts, significant disparities between East and West remain.
Planning issues facing reunification
The reunification of Berlin presented legal, political and technical challenges for the urban environment. The political division and physical separation of the city for more than 30 years saw the East and the West develop their own distinct urban forms, with many of these differences still visible to this day. East and West Berlin were directed by two separate political and urban agendas. East Berlin developed a mono-centric structure with lower level density and a functional mix in the city's core, while West Berlin was poly-centric in nature, with a high-density, multi functional city centre. The two political systems allocated funds differently to post-war reconstruction, based on political priorities, and this had consequences for the reunification of the city. West Berlin had received considerably more financial assistance for reconstruction and refurbishment. There was considerable disparity in the general condition of many of the buildings; at the time of reunification, East Berlin still contained many leveled areas, which were previous sites of destroyed buildings from World War II, as well as damaged buildings that had not been repaired. An immediate challenge facing the reunified city was the need for physical connectivity between the East and the West, specifically the organisation of infrastructure. In the period following World War II, approximately half of the railway lines were removed in East Berlin.
Policy for Reunification
As urban planning in Germany is the responsibility of city government, the integration of East and West Berlin was in part complicated by the fact that the existing planning frameworks became obsolete with the fall of the Wall. Prior to the reunification of the city, the Land Use Plan of 1988 and the General Development Plan of 1980 defined the spatial planning criteria for West and East Berlin, respectively. Although these existing frameworks were in place before reunification, after the fall of the Wall there was a need to develop new instruments in order to facilitate the spatial and economic development of the reunified city. The first Land Use Plan following reunification was ultimately enacted in 1994. The urban development policy of reunified Berlin, termed "Critical Reconstruction", aimed to facilitate urban diversity by supporting a mixture of land functions. This policy directed the urban planning strategy for the reunified city. One agenda of "Critical Reconstruction" was to redefine Berlin's urban identity through its pre-war and pre-Nazi legacy. Elements of "Critical Reconstruction" were also reflected in the overall strategic planning document for downtown Berlin, entitled "Inner City Planning Framework". Reunification policy emphasised restoration of the traditional Berlin landscape. To this effect, the policy excluded the "legacy of socialist East Berlin and notably of divided Berlin from the new urban identity". Ultimately, following the collapse of the German Democratic Republic on 3 October 1990, all planning projects under the socialist regime were abandoned. Vacant lots, open areas and empty fields in East Berlin were subject to redevelopment, in addition to space previously occupied by the Wall and the associated buffering zone. Many of these sites were positioned in central, strategic locations of the reunified city. The removal of the Wall saw the city's spatial structure reoriented.
After the Fall of the Wall
Berlin's urban organisation experienced significant upheaval following the physical and metaphorical collapse of the Wall, as the city sought to "re-invent itself as a 'Western' metropolis".
Redevelopment of vacant lots, open areas and empty fields as well as space previously occupied by the Wall and associated buffering zone were based on land use priorities as reflected in "Critical Reconstruction" policies. Green space and recreational areas were allocated 38% of freed land; 6% of freed land was dedicated to mass-transit systems to address transport inadequacies.
Reunification initiatives also included construction of major office and commercial projects, as well as the renovation of housing estates in East Berlin.
Another key priority was reestablishing Berlin as the capital of Germany, and this required buildings to serve government needs, including the "redevelopment of sites for scores of foreign embassies".
With respect to redefining the city's identity, emphasis was placed on restoring Berlin's traditional landscape. "Critical Reconstruction" policies sought to disassociate the city's identity from its Nazi and socialist legacy, though some remnants were preserved, with walkways and bicycle paths established along the border strip to preserve the memory of the Wall. In the centre of East Berlin much of the modernist heritage of the East German state was gradually removed. Reunification saw the removal of politically motivated street names and monuments in the East (Large 2001) in an attempt to erase the socialist legacy from the face of East Berlin.
Immediately following the fall of the Wall, Berlin experienced a boom in the construction industry. Redevelopment initiatives saw Berlin turn into one of the largest construction sites in the world through the 1990s and early 2000s.
The fall of the Berlin Wall also had economic consequences. Two German systems covering distinctly divergent degrees of economic opportunity suddenly came into intimate contact. Despite the development of sites for commercial purposes, Berlin struggled to compete in economic terms with key West German centres such as Stuttgart and Düsseldorf. The intensive building activity directed by planning policy resulted in an over-expansion of office space, "with a high level of vacancies in spite of the move of most administrations and government agencies from Bonn".
Berlin was marred by disjointed economic restructuring, associated with massive deindustrialisation. Hartwich asserts that while the East undoubtedly improved economically, it was "at a much slower pace than [then Chancellor Helmut] Kohl had predicted".
Facilitation of economic development through planning measures failed to close the disparity between East and West, not only in terms of economic opportunity but also in housing conditions and transport options. Tölle states that "the initial euphoria about having become one unified people again was increasingly replaced by a growing sense of difference between Easterners ("Ossis") and Westerners ("Wessis")". The fall of the Wall also instigated immediate cultural change. The first consequence was the closure in East Berlin of politically oriented cultural institutions.
The fall of the Berlin Wall and the factors described above led to mass migration from East Berlin and East Germany, producing a large labour supply shock in the West. Emigration from the East, totalling 870,000 people between 1989 and 1992 alone, led to worse employment outcomes for the least-educated workers, for blue-collar workers, for men and for foreign nationals.
At the close of the century, it became evident that despite significant investment and planning, Berlin had yet to retake "its seat between the European Global Cities of London and Paris." Ultimately, however, the disparity between the East and West portions of Berlin led the city toward a new urban identity.
A number of locales in East Berlin, characterised by the informal "in-between" use of abandoned buildings and spaces for little or no rent, have become the focal point and foundation of Berlin's burgeoning creative activities. According to Berlin Mayor Klaus Wowereit, this is "the best that Berlin has to offer, its unique creativity. Creativity is Berlin's future." Overall, the Berlin government's engagement with creativity is strongly centered on marketing and promotional initiatives rather than creative production.
Creativity has been the catalyst for the city's "thriving music scene, active nightlife, and bustling street scene", all of which have become important attractions for the German capital. The creative industry is a key component of the city's economic make-up, with more than 10% of all Berlin residents employed in cultural sectors.
See also
- History of Germany since 1945
- Stalin Note – 1952 German reunification proposal
- Good Bye, Lenin!
- Chinese reunification
- Cypriot reunification
- Irish reunification
- Korean reunification
- Yemeni unification
- Vertrag zwischen der Bundesrepublik Deutschland und der Deutschen Demokratischen Republik über die Herstellung der Einheit Deutschlands (Einigungsvertrag) Unification Treaty signed by the Federal Republic of Germany and the German Democratic Republic in Berlin on 31 August 1990 (official text, in German).
- Krenz speech of 18 October 1989 in the German radio archive
- Helmut Kohl's 10-Point Plan for German Unity
- "Vertrag über die Schaffung einer Währungs-, Wirtschafts- und Sozialunion zwischen der Deutschen Demokratischen Republik und der Bundesrepublik Deutschland". Die Verfassungen in Deutschland. Retrieved 2013-03-22.
- German Unification Monetary union. Cepr.org (1 July 1990). Retrieved on 19 October 2010.
- Embassy of the Federal Republic of Germany London – A short history of German reunification. London.diplo.de. Retrieved on 19 October 2010.
- "United States and Soviet Union Sign German Reunification Treaty". NBC Learn. Retrieved 2013-03-22.
- "Merkel to mark 20th anniversary of German reunification treaty". Deutschland.de. Retrieved 2013-03-22.
- "Soviet Legislature Ratifies German Reunification Treaty". AP News Archive. Retrieved 2013-03-22.
- Opening of the Berlin Wall and Unification: German History. Germanculture.com.ua. Retrieved on 19 October 2010.
- However, in many fields it functioned de facto as if it were a component State of West Germany.
- Germany Today – The German Unification Treaty – travel and tourist information, flight reservations, travel bargains, hotels, resorts, car hire. Europe-today.com. Retrieved on 19 October 2010.
- Wiegrefe, Klaus (29 September 2010). "An Inside Look at the Reunification Negotiations". Der Spiegel. Retrieved 4 October 2010.
- Skelton, George (26 January 1990). "THE TIMES POLL : One Germany: U.S. Unfazed, Europeans Fret". Los Angeles Times. Retrieved 16 June 2012.
- "Thatcher told Gorbachev Britain did not want German reunification". The Times (London). 11 September 2009. Retrieved 8 November 2009.
- Kundnani, Hans (28 October 2009). "Margaret Thatcher's German war". The Times. Retrieved 5 October 2010.
- Volkery, Carsten (9 November 2009). "The Iron Lady's Views on German Reunification/'The Germans Are Back!'". Der Spiegel. Retrieved 5 October 2010.
- Anne-Laure Mondesert (AFP) (31 October 2009). "London and Paris were shocked by German reunification". Calgary Herald. Retrieved 9 November 2009.
- Peter Allen (2 November 2009). "Margaret Thatcher was 'horrified' by the prospect of a reunited Germany". Telegraph (London). Retrieved 9 November 2009.
- "'I Preferred To See It as an Acquisition'". Der Spiegel. September 29, 2010. Retrieved 7 October 2010.
- Kohl, Helmut; Riemer, Jeremiah (translator). "Helmut Kohl's Ten-Point Plan for German Unity (November 28, 1989)". German History in Documents and Images. Retrieved 16 June 2012.
- Knight, Ben (8 November 2009). "Germany's neighbors try to redeem their 1989 negativity". Deutsche Welle. Retrieved 9 November 2009.
- The territory of the League of Nations mandate of the Free City of Danzig, annexed by Poland in 1945 and comprising the city of Gdańsk (Danzig) and a number of neighbouring cities and municipalities, had never been claimed by any official side, because West Germany followed the legal position of Germany in its borders of 1937, thus before any Nazi annexations.
- Poland Germany border – Oder Neisse. Polandpoland.com (14 November 1990). Retrieved on 19 October 2010.
- Underestimating East Germany – Magazine. The Atlantic (15 October 2010). Retrieved on 19 October 2010.
- After the fall: 20 years ago this week the crumbling of the Berlin Wall began an empire's end. Anniston Star. Retrieved on 19 October 2010.
- National identity in eastern Germany ... .Google Books (30 October 1990). Retrieved on 19 October 2010.
- (German) Umfrage: Ost- und Westdeutsche entfernen sich voneinander – Nachrichten Politik – WELT ONLINE. Welt.de. Retrieved on 19 October 2010.
- (German) Partei des Demokratischen Sozialismus – Wikipedia. De.wikipedia.org. Retrieved on 19 October 2010.
- In fact, a new constitution was drafted by a "round table" of dissidents and delegates from East German civil society only to be discarded later, a fact that upset many East German intellectuals. See Volkmar Schöneburg: Vom Ludergeruch der Basisdemokratie. Geschichte und Schicksal des Verfassungsentwurfes des Runden Tisches der DDR, in: Jahrbuch für Forschungen zur Geschichte der Arbeiterbewegung, No. II/2010.
- Gastbeitrag: Nicht für die Ewigkeit – Staat und Recht – Politik. Faz.Net. Retrieved on 19 October 2010.
- Aus Politik und Zeitgeschichte, Nr. 18 2009, 27 April 2009 – Das Grundgesetz – eine Verfassung auf Abruf?. Das-parlament.de. Retrieved on 19 October 2010.
- DDR-Geschichte: Merkel will Birthler-Behörde noch lange erhalten. Spiegel.de (15 January 2009). Retrieved 19 October 2010.
- Facts about Germany: Society. Tatsachen-ueber-deutschland.de. Retrieved on 19 October 2010.
- For example the economist Jörg Roesler - see: Jörg Roesler: Ein Anderes Deutschland war möglich. Alternative Programme für das wirtschaftliche Zusammengehen beider deutscher Staaten, in: Jahrbuch für Forschungen zur Geschichte der Arbeiterbewegung, No. II/2010, pp. 34-46. The historian Ulrich Busch pointed out that the currency union as such had come too early; see Ulrich Busch: Die Währungsunion am 1. Juli 1990: Wirtschaftspolitische Fehlleistung mit Folgen, in: Jahrbuch für Forschungen zur Geschichte der Arbeiterbewegung, No. II/2010, pp. 5-24.
- Sauga, Michael (September 6, 2011). "Designing a Transfer Union to Save the Euro". Der Spiegel.
- Parkes, K. Stuart (1997). Understanding contemporary Germany. Taylor & Francis. ISBN 0-415-14124-9.
- (German) Ostdeutschland: Das verschmähte Paradies | Campus | ZEIT ONLINE. Zeit.de (29 September 2008). Retrieved on 19 October 2010.
- (German) Partnerschaft: Der Mythos von den Ost-West-Ehepaaren – Nachrichten Panorama – WELT ONLINE. Welt.de. Retrieved on 19 October 2010.
- Politics and History – German-German History – Goethe-Institut. Goethe.de. Retrieved on 19 October 2010.
- Zeitchik, Steven (7 October 2003). "German Ostalgie: Fondly recalling the bad old days". The New York Times. Retrieved 3 May 2010.
- Grésillon, B (April 1999). "Berlin, cultural metropolis: Changes in the cultural geography of Berlin since reunification". ECUMENE 6: 284–294.
- Grésillon, B (April 1999). "Berlin, cultural metropolis: Changes in the cultural geography of Berlin since reunification". ECUMENE: 284.
- Tölle, A (2010). "Urban identity policies in Berlin: From critical reconstruction to reconstructing the Wall.". Institute of Socio-Economic Geography and Spatial Management 27: 348–357.
- Hartwich, O. M. (2010). After the Wall: 20 years on. Policy 25 (4). pp. 8–11.
- Organisation for Economic Co-operation and Development (2003). Urban renaissance [electronic resource]: Berlin: Towards an integrated strategy for social cohesion and economic development / organisation for economic co-operation and development. Paris: OECD Publishing.
- Schwedler, Hanns-Uve. "Divided cities - planning for unification.". European Academy of the Urban Environment. Retrieved 14 May 2012.
- Loeb, Carolyn (January 2006). "Planning reunification: the planning history of the fall of the Berlin Wall". Planning Perspectives 21: 67–87. Retrieved 14 May 2012.
- MacDonogh, G (1997). Berlin. Great Britain: Sinclair-Stevenson. pp. 456–457.
- Schwedler, Hanns-Uve (2001). Urban Planning and Cultural Inclusion Lessons from Belfast and Berlin. Palgrave Macmillan. ISBN 978-0-333-79368-8.
- Urban, F (2007). "Designing the past in East Berlin before and after the German Reunification.". Progress in Planning 68: 1–55.
- Frank, D. H. (13). "The Effect of Migration on Natives' Employment Outcomes: Evidence from the Fall of the Berlin Wall.". INSEAD Working Papers Selection.
- Krätke, S (2004). "City of talents? Berlin's regional economy, socio-spatial fabric and "worst practice" urban governance". International Journal of Urban and Regional Research 28 (3): 511–529.
- Häußermann, H; Kapphan (2005). "Berlin: from divided to fragmented city. Transformation of Cities in Central and Eastern Europe. Towards Globalization". United Nations University Press: 189–222.
- Organisation for Economic Co-operation and Development. (2003). Urban renaissance [electronic resource]: Berlin: Towards an integrated strategy for social cohesion and economic development / organisation for economic co-operation and development. Paris: OECD Publishing.
- Organisation for Economic Co-operation and Development. (2003). Urban renaissance [electronic resource]: Berlin: Towards an integrated strategy for social cohesion and economic development / organisation for economic co-operation and development. Paris: OECD Publishing. p. 20.
- Hartwich, O. M. (2010). After the Wall: 20 years on. Policy 25 (4). p. 8.
- Tölle, A (2010). "Urban identity policies in Berlin: From critical reconstruction to reconstructing the Wall.". Institute of Socio-Economic Geography and Spatial Management 27: 352.
- Smith, E. O. (1994). The German Economy. London: Routledge. p. 266.
- Jakob, D. (December 2010). "Constructing the creative neighbourhood: hopes and limitations of creative city policies in Berlin.". City, Culture and Society 1 (4): 193–198.
- Presse- und Informationsamt des Landes Berlin Berlin: Pressemitteilung, Presse- und Informationsamt des Landes. (2007). Wowereit präsentierte den "Berlin Day" in New York.
- Florida, R. L. (2005). Cities and the creative class. New York: Routledge. p. 99.
- Senatsverwaltung für Wirtschaft Arbeit und Frauen in Berlin (2009). Kulturwirtschaft in Berlin. Entwicklungen und Potenziale. Berlin: Senatsverwaltung für Wirtschaft Arbeit und Frauen in Berlin.
Further reading
- Maier, Charles S., Dissolution: The Crisis of Communism and the End of East Germany (Princeton University Press, 1997).
- Zelikow, Philip and Condoleezza Rice, Germany Unified and Europe Transformed: A Study in Statecraft (Harvard University Press, 1995 & 1997).
Primary sources
- Jarausch, Konrad H., and Volker Gransow, eds. Uniting Germany: Documents and Debates, 1944–1993 (1994), primary sources in English translation
- The Unification Treaty (Berlin, 31 August 1990) website of CVCE (Centre of European Studies)
- Hessler, Uwe, "The End of East Germany", dw-world.de, August 23, 2005.
- Berg, Stefan, Steffen Winter and Andreas Wassermann, "Germany's Eastern Burden: The Price of a Failed Reunification", Der Spiegel, September 5, 2005.
- Wiegrefe, Klaus, "An Inside Look at the Reunification Negotiations"Der Spiegel, September 29, 2010.
- German Embassy Publication, Infocus: German Unity Day
- Problems with Reunification from the Dean Peter Krogh Foreign Affairs Digital Archives
Activity 1. First-person Point of View: Compare and Contrast
Ask students to compare and contrast the use of first person point of view in the Benjy, Quentin, and Jason chapters. Teachers might consider assigning a one-page reader response paper due the first day of this lesson, so that students are prepared to discuss preliminary ideas in class. In class, students should begin by discussing their general response to this chapter. They most likely will begin to feel more grounded and comfortable discussing the novel's plot. Teachers can use the beginning of class discussion to ensure that students are on the same page in terms of the novel's basic plot. Drawing from students' initial reactions to the chapter and its narrative point of view, teachers might consider using some or all of the following questions to guide discussion:
- All three chapters so far use the first-person point of view. How is the use of the first person different in each chapter?
- Does Benjy's first-person narration reveal more about his own character or more about the Compson family?
- Does Quentin's first-person narrative reveal more about his own character or more about the Compson family (or both)?
- What about Jason's first-person narration?
As students begin discussing how each narrator differs, as well as the similarities they share, ask students to help complete the chart (available here as a PDF) on a black/whiteboard (save the completed chart for use in Lesson Four, where students will complete the chart by filling in the "Dilsey" section). Note: this chart can serve as an effective substitute for the reaction essay suggested above as an at-home exercise.
Activity 2. Narrative Structure and Characterization
Students will review several key passages from this chapter that help to describe Jason as a character in relation to the Compson family and its "new system." The focus of this activity is the relationship between narrative structure (form) and characterization (content).
Key passages include the following:
- I saw red. When I recognised that red tie, after all I had told her, I forgot about everything. I never thought about my head even until I came to the first forks and had to stop. Yet we spend money and spend money on roads and dam if it isn't like trying to drive over a sheet of corrugated iron roofing. I'd like to know how a man could be expected to keep up with even a wheelbarrow. I think too much of my car; I'm not going to hammer it to pieces like it was a ford. Chances were they had stolen it, anyway, so why should they give a dam. Like I say blood always tells. If you've got blood like that in you, you'll do anything. I says whatever claim you believe she has on you has already been discharged; I says from now on you have only yourself to blame because you know what any sensible person would do. I says if I've got to spend half my time being a dam detective, at least I'll go where I can get paid for it.
Note to Teacher: This passage illustrates how Faulkner's noticeably shorter, more direct sentences quicken the pace of the novel, reflecting Jason's own hot-headedness and fast-paced actions (symbolized by his car, his obsession with it, and his seeing "red"). First ask students to think about the pacing and tone of the sentences: how do they shape Jason's character in their minds? Then ask students to consider the symbolism of the color red itself (Jason's literal and figurative "I saw red," as well as the red family blood, etc.).
- I'll be damned if they dont dress like they were trying to make every man they passed on the street want to reach out and clap his hand on it. And so I was thinking what kind of a dam man would wear a red tie when all of a sudden I knew he was one of those show folks well as if she'd told me. Well, I can stand a lot; if I couldn't dam if I wouldn't be in a hell of a fix, so when they turned the corner I jumped down and followed. Me, without any hat, in the middle of the afternoon, having to chase up and down back alleys because of my mother's good name. Like I say you cant do anything with a woman like that, if she's got it in her. If it's in her blood, you cant do anything with her. The only thing you can do is to get rid of her, let her go on and live with her own sort.
I went on to the street, but they were out of sight. And there I was, without any hat, looking like I was crazy too. Like a man would naturally think, one of them is crazy and another one drowned himself and the other one was turned out into the street by her husband, what's the reason the rest of them are not crazy too. All the time I could see them watching me like a hawk, waiting for a chance to say Well I'm not surprised I expected it all the time the whole family's crazy. Selling land to send him to Harvard and paying taxes to support a state University all the time that I never saw except twice at a baseball game and not letting her daughter's name be spoken on the place until after a while Father wouldn't even come down town anymore but just sat there all day with the decanter I could see the bottom of his nightshirt and his bare legs and hear the decanter clinking until finally T.P. had to pour it for him and she says You have no respect for your Father's memory and I says I dont know why not it sure is preserved well enough to last only if I'm crazy too
Note to Teacher: This passage reveals Jason's temper. His concern about not having on a hat is tied to his attempt at upholding the family's image and once good name. His reference to Quentin's (Caddy's daughter's) blood indicates his belief that Quentin inherited her mother's disrespect for the Compson name and social standing (albeit a declining standing). Jason's violence and meanness toward Quentin throughout this chapter suggests that he symbolically regards her as the embodiment of the decaying Compson family. Not only does she represent Caddy; she represents Quentin as well. The second paragraph suggests that Jason believes that he is the only hope for the family. This paragraph provides a clear summary of the novel's course of events, which most likely have been unclear until the Jason chapter, especially until this passage.
As you review these sample passages (and other key passages of your own selection), ask students the following questions about each passage:
- What 2-3 adjectives best describe Jason in this passage?
- What effect does Jason have at this point in the novel on the unfolding plot?
- What effect does Jason have at this point in the novel on the other characters?
Then lead a general discussion about this chapter, using the following guiding questions:
- What kind of new system does Jason envision for the Compson family?
- How does this system differ from the Compson family as presented by Benjy and Quentin?
- Does Jason succeed in creating this new Compson family system? Why or why not?
Activity 3. The Changing South
Not only does the Jason chapter reveal the final stages of the Compson family's decline, but it also portrays the changing South: economically (stock market vs. aristocratic wealth via land and slave holdings); technologically (fast cars vs. slow horses and buggies); and socially (new roles for women and African-Americans in the South). Divide students into small groups to explore these topics (assigning two groups to each topic if necessary). Each group will conduct online research related to one of the following topics. Mention to students that each group will present to the class at large a 5-to-10-minute overview of the topic and its relevance to the Jason chapter and the novel in general. Students can access the novel and some of the activity materials via the EDSITEment LaunchPad.
- Blood/The Old South: The significance of flesh and blood and the importance of Southern family heritage. First ask students to review the Compson family tree from the EDSITEment-reviewed University of Mississippi "William Faulkner on the Web." Then ask this group to read pages 20 through 34 of "The Old South" from the EDSITEment-reviewed University of North Carolina's "Documenting the American South." After reviewing this editorial, students should answer the following questions.
This Old South aristocracy was of threefold structure - it was an aristocracy of wealth, of blood, and of honor. It was not the wealth of the shoddy aristocracy that here and there, even in the New South, has forced itself into notice and vulgarly flaunts its acquisitions. It came by inheritance of generations chiefly, as with the nobility of England and France. Only in the aristocracy of the Old World could there be found a counterpart to the luxury, the ease and grace of inherited wealth, which characterized the ruling class of the Old South. There were no gigantic fortunes as now, and wealth was not increased or diminished by our latter-day methods of speculation or prodigal and nauseating display. The ownership of a broad plantation, stately country and city homes, of hundreds of slaves, of accumulations of money and bonds, passed from father to children for successive generations. (20-21)
His aristocracy of wealth was as nothing compared to his aristocracy of blood. An old family name that had held its place in the social and political annals of his State for generations was a heritage vastly dearer to him than wealth. Back to the gentle-blooded Cavaliers who came to found this Western world, he delighted to trace his ancestry. There could be no higher honor to him than to link his name with the men who had planted the tree of liberty and made possible a great republic. (24)
- What is the significance of "blood" in this editorial?
- What five adjectives describe aristocratic individuals of the Old South?
- How did many southern aristocrats accumulate wealth? How does this differ from the wealth of the New South?
- What issues were important to them?
- How does this writer describe the African-American servants for families of the Old South?
- Can you compare and contrast this editorial's description of African-American servants to Dilsey and the other members of the Gibson family?
- How did many southern aristocrats accumulate wealth?
- How is the Compson family's sale of Benjy's pasture symbolic?
- What effect(s) did this sale have on the Compson family and Jason's place within that family?
Read from page 30, beginning:
"SIDE by side with the aristocrat, waiting deferentially to do his bidding, with a grace and courtliness hardly surpassed by his master, I place the negro servant of the Old South"
to this concluding paragraph on page 33:
"Whenever you find a negro whose education comes not from books and college only, but from the example and home teaching and training of his white master and mistress, you will generally find one who speaks the truth, is honest, self-respecting and self-restraining, docile and reverent, and always the friend of the Southern white gentleman and lady. Here and there in the homes of the New South these graduates from the school of slavery are to be found in the service of old families and their descendants, and the relationship is one of peculiar confidence and affection …
- How does this writer describe the African-American servants for families of the Old South?
- What is your impression of this writer’s view of African-Americans and what he thinks of their role in society?
- Can you compare and contrast this editorial’s description of African-American servants to Dilsey and the other members of the Gibson family?
- Jason's Greed: Not only was Jason stealing the money Caddy sent for Quentin by cashing the checks for himself; he was "speculating" (i.e., gambling) at the Western Union on the cotton stock market. Ask students to review the following passage:
"Keep still," I says. "I'll get it." I went up stairs and got the bank book out of her desk and went back to town. I went to the bank and deposited the check and the money order and the other ten, and stopped at the telegraph office. It was one point above the opening. I had already lost thirteen points, all because she had to come helling in there at twelve, worrying me about that letter.
"What time did that report come in?" I says. "About an hour ago," he says.
"An hour ago?" I says. "What are we paying you for?" I says. "Weekly reports? How do you expect a man to do anything? The whole dam top could blow off and we'd not know it."
"I dont expect you to do anything," he says. "They changed that law making folks play the cotton market."
"They have?" I says. "I hadn't heard. They must have sent the news out over the Western Union."
I went back to the store. Thirteen points. Dam if I believe anybody knows anything about the dam thing except the ones that sit back in those New York offices and watch the country suckers come up and beg them to take their money. Well, a man that just calls shows he has no faith in himself, and like I say if you aren't going to take the advice, what's the use in paying money for it. Besides, these people are right up there on the ground; they know everything that's going on. I could feel the telegram in my pocket. I'd just have to prove that they were using the telegraph company to defraud. That would constitute a bucket shop. And I wouldn't hesitate that long, either. Only be damned if it doesn't look like a company as big and rich as the Western Union could get a market report out on time. Half as quick as they'll get a wire to you saying Your account closed out. But what the hell do they care about the people. They're hand in glove with that New York crowd. Anybody could see that
Next ask students to explore the following links related to the New York Stock Exchange, from the EDSITEment-reviewed America's Library:
Students should consider the following questions:
- How does an investor make money in the stock market? How was Jason trying to make money?
- What risks are involved in investing in the stock market?
- What is the difference between wealth gained via the stock market and wealth via land ownership?
- What is Jason's job?
- Jason's Symbolic Car: Turn to the EDSITEment-reviewed Smithsonian Institution's National Museum of American History's virtual exhibition "America on the Move." Explore in particular the following chapters of the exhibition:
- "Americans Adopt the Automobile"
- The "Road Improvements" paragraph
- Images of 1920s automobiles
Students should consider the following questions:
- How would you describe Jason's relationship with his car?
- What is the relationship between the car and Jason's sense of his own manliness?
- What is significant about the fact that Jason's car (the gasoline) gives him headaches (consider that Jason is the new "head" of the family)? What about the fact that he cannot drive his own car back after chasing Quentin and the circus performer?
- How would you compare Jason's car to the car driven by Gerald's mother in the Quentin chapter (1910)? Describe Gerald's background.
- African-Americans in the Old and New South: The Sound and the Fury is set during the time of Jim Crow laws, which legally maintained segregation and generated racism and Southern white hatred toward African Americans. This group will explore the following resources as they consider the transition from the Old South to the New South.
Have each student group present each topic to the full class. Wrap up this activity with a general class discussion, guided by the following questions:
- How would you compare Jason's obsession with money and his car to Southern aristocratic wealth?
- How would you describe the pace of the 1910 Quentin chapter in contrast to the pace of the 1928 Jason chapter? What is significant about these differences in relation to the changing South at large?
- How does the setting differ between the present 1928 Jason chapter and all recollections (in the Benjy and Quentin chapters) of the Compson children's childhoods? | http://edsitement.neh.gov/lesson-plan/faulkners-sound-and-fury-narration-voice-and-compson-familys-new-system%20 | 13 |
“Wake Nicodemus:” African American Settlement on the Plains of Kansas
The townspeople of Nicodemus (LOC photo)
Fleeing from new forms of oppression that were emerging in the post-Reconstruction Era South, a group of African American settlers established the community of Nicodemus on the windswept plains of Kansas in 1877. Here they began turning the dense sod, building homes and businesses, and forging new lives for themselves. Established as an all Black community, the founders of Nicodemus envisioned a town built on the ideals of independence and self-determination. The community experienced rapid social and economic growth in the early years and many speculated that Nicodemus would become a major stop for the railroad. It became clear by 1888, however, that the railroad and the predicted economic boom would not come. This did not mark the end of the great experiment that was Nicodemus. Although the population of the town itself dwindled to only a few dozen souls, many African American families stayed in the area, settling on farms in the surrounding township. From this time onward Nicodemus became a community of primarily Black farm families.1 This living community is the only remaining all Black town west of the Mississippi River that was settled in the 1800s on the western plains by former slaves.
The U.S. Congress, recognizing the importance of Nicodemus' contribution to America’s history, enacted legislation establishing Nicodemus NHS as a unit of the NPS in November 1996. The legislation directs the NPS to cooperate with the people of Nicodemus to preserve its five remaining historic structures—First Baptist Church, African Methodist Episcopal Church, St. Francis Hotel, the First District School, and Nicodemus Township Hall, and keep alive the memory of the many roles African-Americans played throughout the American West.
Nicodemus NHS preserves, protects, and interprets the only remaining western town established by African Americans during the Reconstruction Era following the Civil War. The town of Nicodemus is symbolic of the pioneer spirit of African Americans who dared to leave the only region they had been familiar with to seek personal freedom and the opportunity to develop their talents and capabilities. The site was named for a legendary African-American slave who purchased his freedom.
In May and June 2006, students, under the guidance of Dr. Margaret Wood, Washburn University, conducted archeological testing on the Thomas Johnson/Henry Williams farm site (14GH102), located approximately four km north of Nicodemus, Kansas. The objective of this research was to identify and explore archeological sites related to the settlement period and early occupation of Nicodemus.
Thomas Johnson, one of the earliest settlers to Nicodemus, homesteaded a piece of land just outside of the town of Nicodemus in 1878. He and his extended family farmed the land and adjacent properties for over a decade. Johnson’s grandson, Henry Williams continued to farm Johnson’s original claim until the middle of twentieth century and the property is still in the hands of a close family member. This farm became the focus of archeological investigations during the 2006 field season.
A Brief History of Nicodemus
In the summer of 1877, the Reverend M. M. Bell and his congregation watched cautiously as a white man, W. R. Hill, stepped to the podium of a small African American Baptist church near Lexington, Kentucky. Hill began to speak. The “Great Solomon Valley of Kansas,” he said, “offered the opportunity of a lifetime if they were brave enough to seize it.” The area, he extolled, was blessed with rich soil, plentiful water, stone for building, timber for fuel, a mild climate, and a herd of wild horses waiting to be tamed to the plow. Government land was available for homesteading and Kansas was a state known for its liberal leanings, especially in regard to the rights of blacks.2
As the early settlers to Nicodemus were later to discover, Hill’s claims were wildly exaggerated. In truth, Graham County was relatively unsettled in 1877 precisely because it was not as conducive to agriculture as Hill had claimed. Many considered the area, with an annual rainfall of less than thirty inches, too dry for farming. No timber was available for building or fuel except for a few stands of softwood along the rivers or streams. Transportation was a problem as well. No railroads or stage lines served the area in 1877 and the nearest towns were more than a day’s ride away.3
Nonetheless, 350 people from the Lexington area paid a five-dollar fee and signed up for the September migration to Kansas. Winter was only a few months away and few had sufficient capital to see them through the cold season. These concerns, however, were overshadowed by the desire to build a future for themselves and their children in a land where a person was judged by the quality of their character and not by the color of their skin. Traveling on trains, in wagons and, sometimes, on foot, they arrived in Nicodemus on September 17, 1877. The barren landscape surrounding Nicodemus was a stark contrast to the wooded hill country of Kentucky. Fifty of the most disillusioned returned to eastern Kansas the following day, while those that remained set about making preparations for the coming winter. Between the spring of 1878 and 1879 over 225 additional migrants from Kentucky and Mississippi arrived in the colony. Before long the settlers had established themselves on 70 farms scattered across the landscape.4
Nicodemus homesteaders pose in front of farmhouse (LOC photo)
The settlers’ most immediate need was housing. Faced with a lack of timber, transportation and money, the dugout was the only realistic option for shelter in the area. As soon as basic shelters were constructed, colonists began to break ground for gardens and crops. Whole families–men, women and children–participated in the backbreaking task of breaking the prairie. In spite of all of this labor, the first crops failed. Determined to make their efforts pay off, farm families continued to turn and cultivate the land. In the spring of 1879 the average farm had 7 acres or more in cultivation.5
Nicodemus experienced steady growth and prosperity throughout the first half of the 1880s. The expected arrival of the railroad and the financial success of farmers in the area brought an influx of capital and businessmen into the community. Unfortunately, Nicodemus’s golden age was short lived. In 1887, the railroad failed to fulfill its promises and bypassed Nicodemus in favor of a small town a few miles to the southwest. The impact was devastating for the burgeoning community. The failure of commercial development in the town was coupled by a series of droughts lasting from the late 1880s until the mid 1890s that besieged the farm families living in the surrounding township. Given these challenging conditions, many farmers lost or sold their property in this period. Between 1885 and 1900 land ownership rates in Nicodemus Township plummeted from 96 percent to 54 percent. In this same time period the number of farms in the Township fell from 70 to 48. After 1900, however, farmers began to recover from these catastrophes and both land ownership and farming conditions improved.
Thomas Johnson/Henry Williams Family
Thomas Johnson and his family emigrated to Nicodemus in 1878. He filed his claim in 1885 and received the patent for his property in April of 1889. Johnson’s properties formed an L-shaped area of land in Section 23 of Nicodemus Township. His property was bounded by the properties of his sons, daughters, and son-in-law.6
Thomas Johnson brought his extended family with him including his wife Zerina; his son Henry, daughter-in-law Mary, and granddaughter Ella; his daughter Ella (who also may have been known as Ellen); his daughter Emma Williams, son-in-law Charles Williams, and an adopted grandchild Mack Switzer; his widowed sister-in-law Mary Johnson, nephew Joseph, and great nieces Liz and Clarina Johnson.7 In all, 13 people made up the Johnson/Williams families in 1880. While the extended family probably migrated as a group, they are listed as five distinct households in the census records. Maps of the township, however, show only a single dugout on the properties owned by the Johnson and Williams families. Whether they lived together or in distinct households is unclear.
In 1885, Thomas, Henry, and Joseph Johnson and Charles Williams each worked 160 acres, valued at $2700,8 according to Kansas State census records. Collectively, the family owned $85 worth of machinery, 4 milk cows, 9 head of cattle, 2 pigs and 1 horse. Like other farmers in the area, the Johnson and Williams families grew mostly corn and wheat, although some sorghum and hay for feed was under cultivation as well. Each farm also had a half an acre planted in potatoes and, collectively, the women of the family produced six hundred pounds of butter.
In 1886, A.G. Tallman of the Western Cyclone newspaper visited the Thomas Johnson farm. In his description, Tallman suggests that the extended family operated as a common unit, sharing equipment, pooling labor, and diversifying crops among the various farms.
A visit to the farm of Thos. Johnson is enough to encourage the homesick farmer of any state. He and his children, all with families, own a tract of land of a thousand acres about one fourth of which is under cultivation. He has about 60 fine hogs and the whole family owns a large herd of cattle and several fine horses. They have recently bought a twine binder and are in the midst of harvest. When you make a visit to Mr. J’s farm do not forget to look at Mrs. Johnson’s chickens and ducks.9
The collective nature of this farming venture is further suggested by information from the agricultural census. Henry Johnson planted over 37 acres in corn and wheat, while Joseph Johnson owned a barn and Henry Williams had invested in expensive farm machinery. Each family seems to have specialized in some sort of production with little duplication of effort across the family groups. Oral history and agricultural census data suggests that Thomas Johnson lived on his property in a dugout near Spring Creek. The flowing stream provided much-needed fresh water for animals and could be used for various household needs, food, and religious rituals. Thomas Johnson was not only a farmer and a Civil War veteran; he was also a deacon in the Baptist Church. When he moved to Nicodemus he organized a congregation and held revival services in his dugout. Spring Creek became an ideal location for baptism ceremonies.
During the late 1880s the entire region experienced a series of droughts that devastated family farms. The difficult environmental and economic times weighed heavily on the Johnson and Williams families which were compounded by additional setbacks when Joseph Johnson lost his barn, feed and many of his animals to a grass fire in 1887.10 The combined effects of the drought and fire led Thomas Johnson and most of the extended family to sell their claims in 1889. The only family from the original Johnson/Williams group to remain at Nicodemus was the family of Charles Williams. By 1900, Emma Williams was a widow raising four children. Her sons Henry and Neil worked on the land.
Although the property of Thomas Johnson passed through several hands it ultimately ended up in the ownership of the Midway Land Company of Wyandot County, Kansas. The land was corporately owned until 1906 when Henry Williams, son of Charles and Emma Williams and grandson of Thomas Johnson, purchased the property. Henry Williams and his wife Cora raised six children on this property. They worked and lived on this land until the middle of the twentieth century.
Archeology at the Thomas Johnson/Henry Williams Farm
Constructed approximately 130 years ago, dugout and sod structures associated with the settlement of Nicodemus have been obscured by the passage of time and the powers of nature. Dugouts were constructed with hand tools in the slope of a hill or a stream bank. Excavations were carved into the slope and then covered with a roof of poles, brush, and enough dirt to keep out the rain. A fireplace built into the earthen walls served for heat and cooking. The front wall and entrance were constructed with scrap lumber, stone, or sod. If glass could be obtained, a window that provided light and ventilation was included.11 The floor and walls of the dugout were made of packed earth that was sometimes plastered with a clay-like mixture of water and a yellow mineral native to the area.12 Dugouts were seen as a temporary solution, however, and over time these modest homes were replaced by sod and frame houses.
Several depressions were noted during a 2006 survey of the Henry Williams farm property. These were possible features associated with Thomas Johnson's early occupation of the property. Field school students excavated 11 shovel test pits and 19 one-meter square test units, and were successful in identifying 2 structures and 17 associated features. The construction materials, size, and artifacts present suggest that one of the structures was a hybrid semi-subterranean dugout/sod-up house. The other major structural feature was a root cellar.
Located close to the creek and on a gently sloping area of land, the domestic structure is apparent as a series of indistinct mounds and a slight depression. This structure also has a C-shaped footprint and is oriented southwest to northeast. A 2 m wide doorway faces southeast. The center area of the C-shape is only slightly lower than the surrounding mounds. At its greatest extent the house measures 12 m in length (southwest to northeast) and 10 m wide.
Wood starts the careful excavation of a cut limestone wall
Excavation revealed a wall of regularly shaped soft limestone that extends 1 m into the ground. Limestone is available from nearby quarry sites. Only the north wall of the area tested, however, appeared to be composed of limestone. The west (rear) wall of the structure was covered with a white plaster.
Only 151 nails were recovered from the excavations. This suggests that the walls of the house above ground surface were constructed from sod blocks rather than wood. The nails were probably used to secure window frames (window glass n = 76) and tar paper. The floor of the house was finished with a hard-packed white magnesia made of ground limestone to keep down dirt and insects.
Immediately above the floor of the house is a deposit of fine ashy soil associated with the last use and abandonment of the house. This layer contained a high frequency of material culture; over 495 artifacts representing 44 percent of the assemblage were recovered from the interior of the domestic structure. The domestic use of this building is clearly indicated by the types of artifacts recovered. Architectural artifacts include 98 wire nails and four cut nails, 16 pieces of window glass, a knob and a rivet. Personal and clothing items included 2 beads, 5 safety pins, a collar stud, 23 buttons, 2 snaps and a fragment of a shell hair comb. A variety of food related artifacts including three canning jar fragments, a spoon, mixing bowl sherds, tea ware, portions of a platter, tea pot, tea cups, and a saucer were present. All of the ceramics were a general whiteware/ironstone or porcelain type. A marble, doll parts, two white clay tobacco pipes and one whiskey bottle fragment also came from this layer. There were several diagnostic items in this layer including four coins (three pennies, one dime) dating to 1890, 1900 and 1904. We also recovered two pieces of Black Bakelite or plastic comb fragments. Bakelite was produced beginning in 1907. Finer forms of plastic that were commonly used in personal grooming products like combs and toothbrushes were not produced until 1915. It is not clear whether the comb fragments are Bakelite or plastic.
The diagnostic materials suggest that this semi-subterranean house was probably occupied as late as 1915. The abandonment of the structure probably occurred when the nearby frame farmhouse was built around 1919. The last occupants of this home were probably Henry Williams (grandson of Thomas Johnson), his wife, and six children. Excavations did not clearly reveal when the house was constructed. The presence of red transfer print whiteware (circa 1829-1859) in the lower levels of test units placed outside the north limestone wall of the house hint at either an early occupation date (Thomas Johnson) or the use of old ceramics by the inhabitants of the home.
Historical sources, however, strongly point to the likelihood that this house was built by Thomas Johnson around 1877. Agricultural census data indicate that Thomas Johnson had a spring on his property that he used for both domestic and agricultural purposes. Only 50 m from the ruins of the home are the remains of a pitcher pump marking the location of a natural spring. While natural springs are not unheard of in this area, they are relatively rare.
The root cellar is the most obvious feature on the site, consisting of a C-shaped depression cut into the sloping landscape. At its greatest extent the root cellar measures 7.7 m in length (SW to NE) and 6.1 m in width (SE to NW). These measurements include the soil that was probably mounded on the exterior of the structure to provide insulation and support. A doorway measuring approximately 2 meters wide is located along the south side of the structure. Mounds on either side flank the doorway. These mounds define the outline of the C-shaped building. In the center of the C-shape is a significant depression that is approximately 1 m deeper than the surrounding landscape. The packed dirt floor of the structure was situated approximately 1.4 m below the surface.
The walls of this structure were plain, untreated soil and it appears that a simple wood frame (possibly covered by sod) made up the roof. Partial excavation of a test trench revealed a stain that defined the north and east walls of the feature.
Artifacts from the bottom-most levels suggest the function of the structure. Seventy eight percent of all excavated food related material was identified as fragments of glass home canning jars. Several clusters of peach pits were also scattered across the floor. Rough, simple iron door hinges, a few bottle fragments, and faunal remains from small mammals, including mice and weasels, were recovered from the floor.
Several stains extended horizontally into the wall. These stains occurred in groupings and were irregular in size and shape. The stains that make up each grouping, however, are level relative to one another and probably represent shelves and bins that were built into the wall of the root cellar. The irregularity of shape and size of the stains suggests that the people who constructed the root cellar used some of the scarce, but locally available, wood from trees along the banks of the nearby Spring Creek. Although good diagnostic material was not present in the root cellar, it was probably used in association with the nearby house, and abandoned around the same time.
Use of Local Resources
The Johnson and Williams families used locally available materials very intensively. Limestone blocks from a nearby quarry and sod from nearby fields were the primary components of the house. Shelves, bins, floor supports, and roofs were made from branches of nearby trees. Even the magnesia that was ground into the floor was primarily made of local soft limestone. A closer look at the faunal and floral assemblage reinforces the general impression of self-reliance. Both the root cellar and the domestic structure yielded a fair amount of faunal material, but each tells a slightly different story about the subsistence strategies of those who lived on the site.
Surprisingly, the assemblage contains a large number of wild mammal bones and a relative scarcity of domestic mammal bones. We had expected to recover a greater quantity of domestic mammal bone, given the fact that we were working on a farm site located in a region of the country known for its livestock production. This, however, was not the case. While wild mammals made up 37 percent (n=148) of the faunal assemblage, domestic mammals only constituted four percent (n=19) of the assemblage. Bird remains contributed 35 percent (n=141) of the faunal material, although this number probably over-represents the contribution of fowl to the diet. Eighty eight percent (n=124) of the bird related material was eggshell. Also a surprise, frog bones made up 13 percent (n=53) of the faunal assemblage.
Thirteen bones (9 percent) of large domestic mammals were identified. Seven of these were pig bones and two were cattle bones. The remaining four bones were only identifiable as large mammals. Three of the large mammal artifacts were dental molars and portions of jaws. All of these were recovered from the root cellar. Analysis of tooth eruption and tooth ware patterns indicates that all of the identifiable pig bones in the assemblage came from young individuals.
While the pig remains suggest that juvenile individuals were being butchered, cattle remains suggest that older individuals were being selected for slaughter. The single cow tooth recovered was heavily worn.13 The family who lived on this farm may have only slaughtered older cows who had outlived their use as dairy cattle. The fact that most of the large mammal assemblage was made up of jaw and skull fragments suggests that the residents were consuming portions of the animal with less meat and were selling the rest.
Thirty seven percent (n=148) of the faunal assemblage was comprised of small mammals; however, the majority of the specimens (n=123) came from the root cellar. Identified species include weasel, marten, possum, skunk, rabbit, gopher, squirrel and rat. Many of these small mammals may have been attracted to the food, or hunted other mammals in the root cellar. Some of these animals, however, may have been hunted or trapped by the family that lived on the site.
Spring Creek flows through the Thomas Johnson/Henry Williams Farm site and many visitors to the site indicated that they remembered fishing near the farm. Undoubtedly the creek would have been a good source of supplementary food. Yet, we found no fish bones or scales. This may be due to our small sample size or our recovery methods. We did however find a large number of frog (amphibian) bones and a single crayfish element. A total of 54 frog bones were recovered from the site. These constituted 13 percent of the faunal assemblage. The floor and occupation levels of the root cellar (Feature 1) yielded the most amphibian bones (n=52, 96% of amphibian assemblage). The frog bones represent only four lower skeletal elements including: the ilium, femur and tibiofibula, and the urostyle. The overrepresentation of these elements suggests that the residents of the farm were preparing frog legs. Indeed, the frog skeletal elements show signs of butchering and femurs are consistently cut across the acetabulum. The abundance of frog legs in the root cellar feature suggests that these delicacies may have been canned.
Residents of this farm were utilizing a large portion of their natural environment. While they were farmers growing crops and raising livestock for sale, there is only sparse indication that they consumed many of these resources themselves. Research on other Kansas homestead sites is revealing a similar pattern on farms located in risky marginal environments like those around Nicodemus. Residents appear to rely heavily on local resources while selling their produce and livestock outside the community.
It would be simple to chalk these patterns up to rural poverty. While the farm families in and around Nicodemus were not wealthy by any stretch of the imagination, the rest of the assemblage indicates that they decorated their homes with a sense of style. The presence of tea wares, decorative vases and candy dishes suggests that these were not necessarily people living on the edge. A variety of decorative buttons suggests fashionable clothing, and toys for children also indicate some level of disposable income.
Family and Landscape
The results of both archeological and historical research on the Thomas Johnson/Henry Williams farm site (14GH102) reveals a story of ingenuity, pride and the struggle to survive in a harsh and punishing environment. The material remains of this site give us glimpses into the web of kinship and community that link not only people and places but also the present and the past at Nicodemus. For the generations of people who lived in and around Nicodemus, the central ingredient of collective independence and autonomy was, and continues to be, kin based interdependence. It is this interdependence that has allowed this community to survive both socially and economically.
Henry Williams returned to his grandfather's farm and built his home (LOC photo)
Like Thomas Johnson, the first group of immigrants to Nicodemus was made up of a handful of large extended families. Most of these families had been slaves on a single plantation in Georgetown and, thus, had close familial and personal ties. These family names persist in the area today. As extended families established themselves on the land, they sought to file multiple claims, each in the name of a different head of household, to maximize continuous acreage owned and controlled by the family. The Johnson and Williams families clearly utilized this strategy by filing claims to an entire section and portions of two adjacent sections. By dividing up the tasks, equipment, and resources necessary for farming they may have hoped to minimize the hazards associated with this risky endeavor. This web of family support, however, was not enough to overcome the fickle nature of the environment of western Kansas. A series of droughts and other catastrophes drove most of the Johnson family off the land. Only Emma (Johnson) Williams and her family remain on their original claim in Nicodemus after 1889.
Speaking of the environmental and economic tragedies of the late 1880s and early 1890s, Lula Craig indicated “it took many years for families to get back the land they lost.” Clearly, the Williams family considered this land their own and sought over time to reclaim it. Henry Williams settled on his grandfather’s original claim with his own wife and family, working the land immediately adjacent to his mother’s land.
The connections between family and land are strong in Nicodemus. On the Thomas Johnson/Charles Williams farm we see the sacrifices that African American families made, and the ingenuity they showed, in order to achieve independence. We also see the links of family bonds and filial responsibility that cross generations. Not only was this property claimed by multiple generations of the same family, it also placed family members on the landscape in such a way that they could be near each other and support one another. Thomas Johnson and his extended family settled adjacent properties in order to combine their efforts and struggles in this new and challenging endeavor.
While many of the family members sold their lands, Charles Williams, Emma Williams, and their children eventually reclaimed some of the property. Henry Williams may have grown up on this property, living with his parents and grandparents. As an adult, he established his own home, but in close proximity to his widowed mother and younger siblings. On this one relatively small piece of land we see the material remnants of kinship and family that linked people through time and space.
The farmers of Nicodemus achieved economic independence and autonomy through ingenuity and the skillful use of locally available raw materials. At the same time, the interdependence of kin relations is powerfully embodied in the use of the landscape and the transfer of land over generations. Ultimately, as the people of Nicodemus knew, it was the land and their relationships with others that afforded them their autonomy and freedom.
- Kenneth Hamilton, “The Settlement of Nicodemus: Its Origins and Early Promotion,” in Promised Land on the Solomon: Black Settlement in Nicodemus, Kansas (Washington, DC: National Park Service, Government Printing Office, 1986), 2-34; Kenneth Hamilton, Black Towns and Profit.
- James N. Leiker, “Race Relations in the Sunflower State” Kansas History (2002): 214.
- Daniel Hickman, Notes Concerning The Nicodemus Colony, transcribed by G. A Root, (Graham County History File, KSHS Archives, June 1913).
- Glen Schwendemann, “Nicodemus: Negro Haven on the Solomon,” The Kansas Historical Quarterly 34 (Spring 1968), 14.
- O. C. Gibbs, “The Negro Exodus: A Visit to the Nicodemus Colony” (Chicago Tribune, 25 April 1879).
- Orval L. McDaniel, A History of Nicodemus, Graham County, Kansas (Masters Thesis, Fort Hays State College, July 1950): 126-128.
- Census of the United States, 1880.
- Census of the State of Kansas, 1885.
- A.G. Tallman, “Visit to the Thomas Johnson Farm.” (Western Cyclone, 1 July 1886).
- Lula Craig Manuscript, Folder 5, RHMSD250.5 (University of Kansas: Spencer Library).
- O. C. Gibbs, “The Negro Exodus: A Visit to the Nicodemus Colony” (Chicago Tribune, 25 April 1879).
- Bertha Carter, “Oral Interview,” unpublished field notes, Nicodemus, (May 2006).
- Analysis of select faunal material was completed by Dr. Tanner, Kansas (September, 2006).
Dr. Margaret Wood, Washburn University | http://www.nps.gov/archeology/SITES/npSites/nicodemus.htm | 13 |
22 | A poll tax is a tax that must be paid as a condition of exercising the right to vote. The 24th Amendment to the US Constitution, ratified in 1964, outlawed the use of this kind of tax as a condition of voting in federal elections. Poll taxes were levied in several states after the Civil War for the purpose of preventing poor people from voting. In the southern states, this effectively suppressed African American and Native American voters.
The poll tax, also known as the community charge, was a flat-rate replacement for domestic rates under which everyone paid the same amount of local taxation within a council area. It was introduced by Margaret Thatcher's government in Scotland in 1988 and in England and Wales in 1990. The thinking behind it was that, instead of the tax being based on the value of the house, every resident would pay the same amount. While popular with those who had previously been paying high levels of taxation, it proved unpopular with the poor, many of whom were left paying more than they could afford. Disquiet with the poll tax, combined with her growing personal unpopularity and a sense within her party that she was an electoral liability, forced Thatcher from office in November 1990. She was replaced by John Major.
The Poll Tax in the UK was replaced in 1993 by the Council Tax. The rate of Council Tax is set by the local government, which then receives 25% of the revenue to fund local services. The tax is split into several bands, with the band you pay determined by the value of your main house. There are discounts for single occupiers of a residence. Council Tax is often seen as unpopular and regressive, and some political parties wish to replace the flat bands based on house value with a local income tax based upon yearly income.
There were also poll taxes levied in the late fourteenth century. While there was no income tax as we know it today, in medieval Britain there was a range of taxes, tithes, rents, duties, and other means whereby the king and his nobles (and the Church) could raise income. The old Anglo-Saxon geld, or land tax, was still in use. Tallage, the right of the king to arbitrarily levy tax on his subjects, was replaced by the need for Parliamentary consent in 1340; however, the amount of revenue required still varied according to circumstances, especially the state of play in the war with France (the Hundred Years War). At times of success, revenue from ransoms and from captured foreign estates flowed into the country, but during periods of French ascendancy the need for taxation rose, and a poll tax was instituted – first in 1377, set at four pence per person, then in 1379 (“the evil subsidy”) at a shilling a head, a large amount for a peasant farmer but nothing to the Church or land-owning barons. There was widespread discontent. The announcement of plans for a third poll tax was the trigger for the Peasants' Revolt of 1381.
Occasional and short-lived poll taxes were levied from the 15th to the 17th centuries.
- ↑ http://www.usconstitution.net/constamnotes.html#Am24
- ↑ http://news.bbc.co.uk/onthisday/hi/dates/stories/march/31/newsid_2530000/2530763.stm | http://conservapedia.com/Poll_tax | 13 |
102 | A Brief History of North Carolina Money
Keeper, North Carolina Collection Gallery
In his 1977 book The Age of Uncertainty, noted economist John Kenneth Galbraith observes, “Money . . . ranks with love as man’s greatest joy. And it ranks with death as his greatest source of anxiety.” Money has certainly been the cause of widespread anxiety and social instability in North Carolina’s past, as it has been in other states. Scarcities of coins, overproduction of paper money, short-sighted fiscal policies, inadequate or nonexistent banking regulations, and counterfeiting plagued and complicated the daily lives of our Tar Heel ancestors, often making it difficult for them to buy or sell basic goods and services. It was not until after the Civil War that the United States’ monetary system began to centralize and stabilize under federal authority. Until that development, North Carolina and other states had to depend largely on the uncertain paper moneys issued by their own governmental officials and by private banks and other businesses.
Due to monetary and commercial restrictions imposed on the colonies by England (Great Britain after 1707), coinage was often in short supply in many locations in the Carolinas and elsewhere in North America during the 1600s and 1700s. English pence and shillings did circulate in these regions, and there were early efforts in Massachusetts, Connecticut, Maryland, and in a few other colonies to strike or import supplies of coinage to ease shortages. Those efforts, however, proved to be isolated and very limited in scope, so England’s American subjects routinely employed barter in their transactions, directly exchanging food, clothing, tools, livestock, and other items with one another. They also relied increasingly on foreign coins that filtered into their local economies through unauthorized trade and incidental dealings with visitors and new settlers in their communities. Dutch “lions,” French deniers and écus, Portuguese “joes,” and Spanish cobs and milled dollars were among the copper, silver, and gold pieces found inside the pockets, purses, and lock boxes of American colonists.
Insufficient supplies of coinage and shortfalls in revenues finally prompted some of England’s “New World” colonies to produce money in another form: paper. Massachusetts was the first colony to issue paper money in 1690 (four years before the Bank of England did so), followed by South Carolina in 1703, and New York in 1709. In 1712 and 1713, North Carolina’s colonial assembly approved the issue of 12,000 pounds in bills of credit to pay for military equipment and supplies during North Carolina’s war against the Tuscarora Indians. Since no printer resided in North Carolina at the time, all of that currency had to be handwritten. North Carolina’s subsequent authorizations of 1715, 1722, and 1729—which consisted of over 47,000 more bills with a combined face value of 76,000 pounds—were also penned entirely by hand.
At first it appeared that paper money might be a convenient, quick solution to the colonies' financial problems. Unfortunately, the over-production of these currencies and counterfeiting of bills began to erode public confidence in domestic issues. By 1720, bogus money had become a very serious problem in North Carolina. That year Royal Governor Charles Eden complained in a speech to the assembly that the “quantity of counterfeit currency among us” was harming the economy and bringing ruin to “many honest homes and families.” Fourteen years later, Governor Gabriel Johnston echoed his predecessor's concerns about “the great Multiplicity” of counterfeit bills being passed by “Vagabond and Idle people.” By 1734, North Carolina's government had discontinued its production of handwritten money. North Carolina still had no printer living within its borders, so officials had to contract a craftsman in another colony to produce North Carolina's authorizations of 1734 and 1735. The use of printed currencies did not stop counterfeiting. Even the adoption of far harsher penalties for the crime did not markedly curb the practice. Legislation passed by North Carolina's assembly in 1745 required that for a first offense, anyone convicted of forging, altering, or knowingly passing counterfeit bills would publicly “stand in the Pillory for the space of two hours, have his ears nailed to the same and cut off.” After a second conviction, the offender would be summarily executed “without benefit of Clergy.”
North Carolina at last gained the services of a printer in 1749, when James Davis of Virginia moved to New Bern to establish his shop. As soon as Davis had his press set up there, North Carolina’s government assigned him the task of printing the money it had authorized in 1748 to build coastal defenses against possible Spanish attacks. Colonial records document that officials paid Davis half his annual salary on October 17, 1749, for completing the “stamping and emitting the sum of Twenty one Thousand Three Hundred and Fifty pounds [in] public Bills of Credit.” Later, in 1754, the prospects of armed confrontations with French forces and their Native American allies prodded North Carolina once again to issue bills to underwrite the construction of additional fortifications and to equip troops. Problems with these currencies and later bills released by North Carolina and other colonies finally moved the British government to take restrictive actions. Parliament’s Currency Act of 1764 and further legislation in 1773 further tightened London’s control over North America’s monetary affairs and forbade nearly all forms of colonial currency. Prior to the implementation of those controls, in the period between 1712 and its issue of 1771, North Carolina alone had authorized eighteen emissions of paper money that in face value exceeded 343,000 pounds.
During the Revolution, deficiencies in coinage and the accelerating instability of American paper moneys proved to be greater threats to the goal of independence than were British muskets. In North Carolina the provisional government that replaced royal authority had an empty treasury and qualified as a poor risk for any potential loans. Therefore state officials, like their colonial predecessors, had to resort once again to printing currency to supply North Carolina's troops and to finance other governmental needs. In Philadelphia, the Continental Congress also authorized the printing of currency to cover its administrative and war-related expenses. Over 241 million dollars in “Continental currency” flooded North Carolina and the other states during the Revolution. Unsupported by any reserves of silver or gold, this congressional currency rapidly depreciated, so much so that Americans long after the war used the phrase “Not worth a Continental” to describe anything of little or no value.
With further regard to North Carolina’s currencies in this period, the bills and notes it issued on the eve of the Revolution and during the fight for independence were products of several craftsmen. Silversmith William Tisdale of New Bern is credited with printing the authorization of 1775; Gabriel Lewyn of Baltimore, Maryland, produced the state’s 1776 issue; James Davis, its 1778 issue and two authorizations of 1780; and Hugh Walker of Wilmington, the intervening issue of 1779. Walker, it should be added, produced North Carolina’s 1779 currency, because Davis was unable to print it in New Bern due to a smallpox epidemic in the town and fears that the disease would spread across the region if bills contaminated with smallpox were issued from there.
Overall, North Carolina's elected officials authorized the printing of more than eight million dollars in currency between August 1775 and the state's last wartime issue of May 10, 1780. That dollar amount is misleading, for it is only the total face value of those currencies, not a true measure of their buying power. As in the case of the paper moneys issued by the Continental Congress and other new states, the real value of North Carolina's currency fell dramatically during the Revolution. By December 1780, North Carolina's bills were being generally accepted or exchanged at a rate of 725:1, meaning that 725 dollars in North Carolina currency was equal to only one dollar in silver or gold.
After the Revolution, all of the Continental dollars and other paper currencies issued during the war had been thoroughly discredited. Nevertheless, North Carolina's government still found it necessary to print its own currency in 1783 and again in 1785 to meet its obligations. Public mistrust of paper money, especially of any bills with face values in dollars, prompted North Carolina and several other states to revert to using colonial-era denominations, utilizing the British duodecimal system of pence, shillings, and pounds.
Later, when North Carolina formally ratified the United States Constitution and joined the Union in November 1789, the state agreed to abide by the founding document's provisions, including its restrictions regarding the minting and printing of money. Article I, Section 10, of the Constitution forbids any state from minting coins, emitting “Bills of Credit” (paper money), or making “any Thing but gold and silver Coin a Tender in Payment of Debts.” Although these restrictions sought to unify the nation's monetary supply by not allowing states to produce their own money, the provisions did not specifically prohibit private individuals or businesses from doing so. As a result, the vast majority of currency that circulated in the economy between the American Revolution and the Civil War was paper money issued by private banks. The Bank of Cape Fear in Wilmington and the Bank of New Bern, both chartered in 1804, were North Carolina's first banks and were the first to issue bank notes in the state. Citizens here and elsewhere relied heavily on such bank notes, as well as on other forms of money, such as scrip or “due bills” issued by merchants, private academies, and later by insurance companies and textile mills.
During the early 1800s, North Carolina's state government was one that appeared to defy openly the Constitution's restrictions on the production and issue of state-sanctioned currencies. On three occasions—in 1815, 1817, and 1824—North Carolina's legislature authorized the printing of state treasury notes. The chief purpose of that money was to make it possible for North Carolina's cash-strapped government to buy bank stocks. Produced in denominations of less than a dollar (known as fractionals), those notes did not remain just within the realm of the state's banking system; they found their way into the economy, remaining in circulation for many years and being used by citizens to transact business and often to pay their taxes.
A relatively poor state with few factories, North Carolina had a small-farm economy that grew slowly during the first half of the nineteenth century. The state, along with the rest of the South, therefore did not require the same levels of capital and expansion in its banking system as those demanded in the industrial North. Despite this, the number of banks in North Carolina did rise steadily before the Civil War. By 1860, there were thirty-six private banks with branches operating in communities across the state. Nearly all of them issued currencies that were printed under contract by engraving firms in New York and Philadelphia. With each passing decade, the selection of bank notes produced by those companies became more elaborate and beautiful, employing more details in their design and more complex uses of color to thwart the ever-improving skills of counterfeiters.
While the first half of the nineteenth century was, numismatically speaking, an era dominated by paper money, that period also witnessed a bona fide “golden age” of coinage in North Carolina. In 1799, the United States’ first documented discovery of gold occurred in the state’s western piedmont, in Cabarrus County. There a small boy named Conrad Reed found on his father’s farm a seventeen-pound rock heavily laced with the precious metal. By the 1820s, large-scale mining operations were in place in the region, with millions of dollars in gold being shipped to Philadelphia for coining at the United States Mint. In fact, prior to 1829, North Carolina furnished all the native gold coined by the national mint.
The inherent dangers of transporting valuable supplies of raw gold to Pennsylvania made it obvious to many North Carolina prospectors and businessmen that they needed the services of local craftsmen who could accurately assay their gold dust, ore, and nuggets and transform them into reliable and convenient gold pieces. In response to those needs, both private and public minting operations were established in North Carolina, as well as in northern Georgia, during the 1830s. German-born metalsmith Christopher Bechtler, Sr., set up a shop in Rutherford County, N.C., in 1830. Initially, Bechtler concentrated on crafting jewelry and watches; but soon he, his son, and a nephew began to use a hand press and dies made in the Bechtler shop to strike coins with the gold that people brought to them. The Bechtlers produced coins in three denominations: a dollar, a “quarter-eagle” ($2.50), and a “half-eagle” ($5).
The success of the Bechtler family’s private coining business and that of private minter Templeton Reid in Georgia were among factors that convinced the federal government in 1835 to found a branch of the United States Mint in Dahlonega, Georgia; one in Charlotte; and another branch farther south, in New Orleans. In North Carolina the Charlotte Mint and the Bechtlers together would coin over eight million dollars in gold before the Civil War.
With the outbreak of war in 1861, North Carolina and the other states that left the Union were no longer bound by the United States Constitution’s monetary restrictions. Yet, the Confederate Constitution, like its federal counterpart, stipulated that “No State shall . . . coin money; [and] make anything but gold and silver coin a tender in payment of debts . . . . ” Such provisions, especially in time of war, quickly proved unrealistic and entirely unworkable. The Confederate government lacked the metal reserves to strike coinage, so from the outset of the conflict, the Confederacy and its constituent states were forced to rely on a mind-boggling array of paper currencies to run the South’s economy and to finance its battle against the North. The Confederate treasury alone issued at least seventy different types of notes that in face value amounted to nearly two billion dollars. This massive volume of paper money was supplemented by a wide range of official state currencies and by hundreds of notes of different sizes and designs distributed by private banks and other businesses throughout the South. This confusing mix of money steadily increased the stresses on the Confederacy’s economy, broke public confidence in the South’s fiscal policies, and hampered the collection of taxes at all levels of government.
North Carolina's state convention and legislature authorized the printing of $16,420,000 in treasury notes during the war. That dollar amount seems puny when compared to the monstrous sums issued by the Confederate government, but for North Carolina it proved too much. As elsewhere in the South, the strains of war made it impossible for the state's currency to hold its value. More and more money was needed to buy ever-dwindling supplies of food and other necessities. Between 1862 and 1865, for example, the price of wheat rose more than 1,600 per cent; bacon soared 2,300 per cent; and flour almost 2,800 per cent. By early 1865, a North Carolinian needed as much as $600 to buy a pair of basic shoes and $1,500 to purchase a simple overcoat. Such exorbitant increases in the cost of living and accusations of profiteering often caused civil unrest on the home front. Earlier, in 1863, a crowd composed largely of soldiers' wives confronted a store owner in Salisbury about his high price for flour. When the owner dismissed their complaints and brusquely closed the front door of his storehouse, some of the women reopened the door with hatchets. The owner then quickly agreed to provide the ladies with ten barrels of flour at reduced prices.
Growth of a National Banking System
The confusion and financial problems associated with the multitude of currencies used by Americans both before and during the Civil War convinced the federal government to alter and refine the nation's monetary structure, adopting changes that led ultimately to the creation of common standards for notes used in the United States. During the war, the Union government had also relied on various forms of paper money. National bank notes were one type of currency created by federal officials in 1863. Printed under the authority and supervision of the United States government, those notes were issued to private banks, which, in turn, introduced them into the economy. This new system extended into the South after the Confederacy's destruction in 1865, with the National Bank of Charlotte becoming the first such institution in North Carolina to receive a national banking charter from the United States government. Other national banks opened in the state, as did thousands more throughout the reunified, growing nation. According to Arthur and Ira Friedberg's 2001 catalog Paper Money of the United States, a total of 14,348 national banks were established in the country between 1863 and 1935. Only 146 of this large number—barely one percent—consisted of North Carolina national banks. Because so few national banks operated in North Carolina, the North Carolina notes that survive from the era usually command very high prices in numismatic markets today.
Money collectors broadly categorize national bank notes into two types: a larger type and a smaller, less ornate type issued after 1928. Regardless of where they circulated, though, all national bank notes of the same type and series look identical in their basic design. Their only minor differences or variations pertain to overprints of the issuing bank's name, location, and individual charter number. For instance, the Citizens National Bank of Raleigh was the 1,766th national bank chartered, so all of its notes (one of which is illustrated on this page) carry that bank's title and the bold charter number 1766.
During the decades before the Civil War, another private supplier of money began to issue much-needed coinage from Rutherford County, N.C. German-born jeweler Christopher Bechtler, Sr., his son Augustus, and a nephew coined millions of dollars in gold excavated in the region. Years before the more famous strike in California, North Carolina was home to the United States’ first gold rush. Prospectors swarmed over the state’s western piedmont, where their panning, digging, and blasting uncovered rich surface deposits and winding subterranean veins of the precious metal. In fact, prior to 1829, North Carolina’s mining industry furnished all the native gold coined by the United States Mint in Philadelphia. By 1837, the level of gold production in the state and the Bechtlers’ continuing success prompted the federal government to establish a branch of the national mint in Charlotte. Together, that branch mint and the Bechtlers would produce almost nine million dollars in gold coinage during the antebellum period.
Following the formation of the national banking system, the United States government continued to assume greater and greater control over the printing and distribution of the nation’s currency. By 1887, the Bureau of Engraving and Printing in Washington, D.C., became responsible for producing all the paper money that circulated throughout the country, including national bank notes. The establishment of the Federal Reserve System in 1913 consolidated and intensified federal controls over the nation’s money supply. That system, which remains in force today, consists of twelve Federal Reserve districts that disburse United States currency to thousands of commercial banks. Those banks essentially buy money from the Federal Reserve by paying a percentage of interest known as the “discount rate.” The banks then loan and invest that money in various enterprises. They also help to monitor the condition of the nation’s currency by replacing notes and coins in circulation that are overly worn or damaged. North Carolina’s banks are located within the Fifth Federal Reserve District, which is headquartered in Richmond, Virginia.
After the establishment of the Federal Reserve System, public and private involvement in making and issuing currencies effectively ended on state and local levels. There have been, however, exceptions and occasions after 1913 when money shortages and other circumstances compelled local authorities to issue their own forms of money. During the Great Depression in the 1930s, Cumberland County and the City of Gastonia were two of many examples in North Carolina where cash-poor local governments issued scrip to help fund the operation of schools, the construction and repair of county and municipal roads, and the administration of other vital community services disrupted by the United States' dire economic troubles. A piece of scrip, unlike a Federal Reserve note, is not legal tender, meaning it is not recognized by law as an acceptable payment for all public and private debts anywhere in the nation. Scrip is instead a form of paper money that is used within a geographically defined area, usually for a specific purpose, and for a limited or fixed period of time.
Another, more recent example of a North Carolina scrip is the “Plenty” in Orange County. First issued in 2002 by an incorporated, non-profit organization in Carrboro, the Plenty's purpose is to support local commerce and safeguard area jobs through the use of a community-based currency. The Plenty's face values of one, one-half, and one-quarter represent units based on a wage of ten dollars per hour. Hence, residents in and around the town of Carrboro accept this scrip, under varying conditions, on par with United States currency of $10, $5, and $2.50. Plenty notes, which are printed with soy-based inks on a watermarked paper composed of recycled bamboo and hemp, feature very colorful decorative elements and the motto “In Each Other We Trust.” As far as imagery is concerned, all three denominations of the Plenty carry the same large oak tree and landscape on their faces. Their backs are distinguished by insets of local scenery and images of trout lilies, the eastern box turtle, and the great blue heron.
On the national level, the United States Mint in 1999 launched a commemorative-coin program that has reconnected all the states, at least symbolically, to America’s money supply and to the nation’s numismatic history. The Mint over a ten-year period is striking five commemorative state quarters each year. Each of those twenty-five-cent pieces has the same portrait of President George Washington on its obverse but a unique design on its reverse. Design themes in the series include popular tourist sites, historic events, and other symbols associated with each state. The order of the coins’ release is chronological, in sequence with the dates when the states ratified the Constitution and joined the Union. When North Carolina entered the Union in November 1789, it was the twelfth state to do so. The United States quarter showcasing North Carolina is therefore the twelfth coin in the series. Issued in 2001, North Carolina’s “First Flight” quarter depicts the Wright Brothers’ first successful powered flight along the dunes at Kitty Hawk in Dare County on December 17, 1903.
Today, it is estimated that more than 675 billion dollars in United States currency are being used in daily transactions or held in vaults throughout the world. Over time this vast supply of dollars, at least in terms of actual coinage and Federal Reserve notes, will shrink as money increasingly assumes an electronic form. In North Carolina and elsewhere, most citizens now receive their salaries and conduct much of their business through computer networks. Money is no longer just coins or pieces of paper that people physically exchange; rather, it is more often simply groups of numbers in a data base that are subtracted or added to accounts through on-line banking services and the scanning of plastic debit and credit cards. Such electronic transactions will continue to increase and expand globally, although cash in its traditional forms will also continue to be used in the United States and in foreign economies for many years to come. | http://www.lib.unc.edu/dc/money/ncmoney.html | 13 |
18 | Saylor.org's Ancient Civilizations of the World/Wars and Expansion
5th century BC
From the perspective of Athenian culture in Classical Greece, the period generally referred to as the 5th century BC encroaches slightly on the 4th century BC. This century is essentially studied from the Athenian outlook because Athens has left us more narratives, plays, and other written works than the other ancient Greek states. In this context, one might consider that the first significant event of this century occurs in 510 BC, with the fall of the Athenian tyrant and Cleisthenes' reforms. However, a broader view of the whole Greek world might place its beginning at the Ionian revolt of 500 BC, the event that provoked the Persian invasion of 492 BC. The Persians (called "Medes") were finally defeated in 490 BC. A second Persian attempt failed in 480-479 BC. The Delian League then formed, under Athenian hegemony and as Athens' instrument. Athens' excesses caused several revolts among the allied cities, all of which were put down by force, but Athenian dynamism finally awoke Sparta and brought about the Peloponnesian War in 431 BC. After both forces were spent, a brief peace came about; then the war resumed to Sparta's advantage. Athens was definitively defeated in 404 BC, and internal Athenian agitations mark the end of the 5th century BC in Greece.
Since the beginning, Sparta had been ruled by a "diarchy." This meant that Sparta had two kings serving concurrently throughout its entire history. The two kingships were both hereditary and were either from the Agiad dynasty or the Eurypontid dynasty. Allegedly, the hereditary lines of these two dynasties spring, respectively, from Eurysthenes and Procles, twin descendants of Hercules. Eurysthenes and Procles were said to have conquered Sparta two generations after the Trojan War.
In 510 BC, Spartan troops helped the Athenians overthrow the tyrant Hippias. Cleomenes I, king of Sparta, put in place a pro-Spartan oligarchy headed by Isagoras. But his rival Cleisthenes, with the support of the middle class and aided by democrats, managed to take over. Cleomenes intervened in 508 and 506 BC, but could not stop Cleisthenes, now supported by the Athenians. Through his reforms, the people endowed their city with isonomic institutions (that is, institutions under which all citizens have the same rights) and established ostracism.
The isonomic and isegoric democracy was first organized into about 130 demes, which became the foundational civic element. The 10,000 citizens exercised their power via the assembly (the ekklesia, in Greek) of which they all were part, headed by a council of 500 citizens chosen at random.
The city's administrative geography was reworked, the goal being to have mixed political groups — not federated by local interests linked to the sea, to the city, or to farming — whose decisions (declaration of war, etc.) would depend on their geographical situation. Also, the territory of the city was divided into thirty trittyes as follows:
- ten trittyes in the coastal "Paralie"
- ten trittyes in "Asty", the urban center
- ten trittyes in rural "Mesogia".
A tribe consisted of three trittyes, taken at random, one from each of the three groups. Each tribe therefore always acted in the interest of all three sectors.
It is this corpus of reforms that would in the end allow the emergence of a wider democracy in the 460s and 450s BC.
The Persian Wars
In Ionia (the modern Aegean coast of Turkey), the Greek cities, which included great centers such as Miletus and Halicarnassus, were unable to maintain their independence and came under the rule of the Persian Empire in the mid 6th century BC. In 499 BC, these poleis rose against Persian rule in what has come to be known as the Ionian Revolt (499 BC-493 BC), and Athens and some other Greek cities sent aid, but were quickly forced to back down after defeat in 494 BC at the battle of Lade. Asia Minor returned to Persian control.
In 492 BC, the Persian general Mardonius led a campaign through Thrace and Macedonia and, while victorious, was wounded and forced to retreat back into Asia Minor. In addition, the naval fleet of around 1,200 ships that accompanied Mardonius on the expedition was wrecked by a storm off the coast of Mount Athos. Later, the generals Artaphernes and Datis subdued the Aegean islands in a naval expedition.
In 490 BC, Darius the Great of Persia, having suppressed the Ionian cities, sent a fleet to punish the Greeks. A Persian force (historians are uncertain about its size; estimates range from 18,000 to 100,000 men) landed in Attica intending to take Athens, but was defeated at the Battle of Marathon by a Greek army of 9,000 Athenian hoplites and 1,000 Plataeans led by the Athenian general Miltiades. The Persian fleet continued to Athens but, seeing it garrisoned, decided not to attempt an assault.
Ten years later, in 480 BC, Darius' successor Xerxes I sent a much more powerful force of 300,000 by land, with 1,207 ships in support, across a double pontoon bridge over the Hellespont. This army took Thrace, before descending on Thessaly and Boeotia, whilst the Persian navy skirted the coast and resupplied the ground troops. The Greek fleet, meanwhile, dashed to block Cape Artemision. After being delayed by Leonidas I, the Spartan king of the Agiad Dynasty, at the Battle of Thermopylae (a battle made famous by the 300 Spartans who faced the entire Persian Army), Xerxes advanced into Attica, where he captured and burned Athens. But the Athenians had evacuated the city by sea, and under the command of Themistocles defeated the Persian fleet at the Battle of Salamis.
In 483 BC, during the time of peace between the two Persian invasions, a vein of silver ore had been discovered in the Laurion (a small mountain range near Athens), and the hundreds of talents mined there had paid for the construction of 200 warships to combat Aeginetan piracy. A year after Salamis, in 479 BC, the Greeks, under the Spartan Pausanias, defeated the Persian army at Plataea. Following the Battle of Plataea, the Persians began withdrawing from Greece and never attempted an invasion again.
The Athenian fleet then turned to chasing the Persians from the Aegean Sea, defeating their fleet decisively in the Battle of Mycale; then in 478 BC the fleet captured Byzantium. In the course of doing so Athens enrolled all the island states and some mainland ones into an alliance called the Delian League, so named because its treasury was kept on the sacred island of Delos. The Spartans, although they had taken part in the war, withdrew into isolation afterwards, allowing Athens to establish unchallenged naval and commercial power.
The Peloponnesian War
In 431 BC war broke out between Athens and Sparta and its allies. The war was not so much a struggle between two city-states as a struggle between two coalitions, or leagues, of city-states. These two leagues were the Delian League, in which Athens was the leading member, and the Peloponnesian League, which was led by Sparta.
The Delian League grew out of the necessity of presenting a unified front of all Greek city-states against Persian aggression. In 481 BC, Greek city-states, including Sparta, met in the first of a series of "congresses" that strove to unify all the Greek city-states against the danger of another Persian invasion. This coalition of city-states, formed in 481 BC, became known as the "Hellenic League" and included Sparta. As noted above, the expected Persian invasion of Greece under King Xerxes came in 480 BC. The Persian land forces were delayed at the Battle of Thermopylae by a much smaller force of 300 Spartans, 400 Thebans and 700 men from Boeotian Thespiae, and in September of that year the Athenian navy defeated the Persian navy at Salamis. The Persians finally left Greece in 479 BC following their defeat at Plataea.
The Battle of Plataea in 479 BC was the final battle of Xerxes' invasion of Greece. After the Battle of Plataea, the Persians never again tried to invade Greece. With the disappearance of this external threat, cracks appeared in the united front of the Hellenic League. In 477 BC, Athens became the recognised leader of a coalition of city-states that did not include Sparta. This coalition met and formalized their relationship at the holy city of Delos. Thus, the League took the name "Delian League." The official purpose of this new League was to liberate Greek cities still under Persian control. However, it became increasingly apparent that the Delian League was really a front for Athenian imperialism throughout the Aegean.
A competing coalition of Greek city-states centered around Sparta arose and became more important as the external Persian threat subsided. This coalition became known as the Peloponnesian League. However, unlike the Hellenic League and the Delian League, the Spartan League was not a response to any external threat — Persian or otherwise. The Spartan League was unabashedly an instrument of Spartan policy aimed at the security of Lacedaemon (the prefecture on the Peloponnese Peninsula in which Sparta was located) and Spartan dominance over the Peloponnese Peninsula. Sometimes the Spartan League is called the "Peloponnesian League." This term is ambiguous on two scores. The "Peloponnesian League" was not really a "league" at all. Nor was it really "Peloponnesian." There was no equality at all between the members as might be implied by the term "league." Furthermore, most of its members were not from the Peloponnese, but rather were located outside the Peloponnese Peninsula. Indeed, the terms "Spartan League" or "Peloponnesian League" are actually modern terms. Contemporaries actually used the term the "Lacedaemonians and their Allies" to describe the so-called league.
The Spartan League had its origins in Sparta's conflict with another city on the Peloponnese Peninsula--Argos. In the 7th century BC, Argos dominated the Peloponnese Peninsula. Even in the period after 600 BC, the Argives attempted to control the northeastern part of the Peloponnese Peninsula. The rise of Sparta in the 6th century naturally brought Sparta into conflict with Argos. However, with the conquest of the Peloponnesian city-state of Tegea in 550 BC and the defeat of the Argives in 546 BC, Sparta's control began to reach well beyond the borders of Lacedaemon.
As these two coalitions grew, their separate interests kept coming into conflict. Under the influence of King Archidamus II (who ruled Sparta from 476 BC through 427 BC), Sparta, in the late summer or early autumn of 446 BC, concluded the Thirty Years Peace with Athens. This treaty took effect the next winter, in 445 BC. Under the terms of this treaty, Greece was formally divided into two large power zones. Sparta and Athens agreed to stay within their own power zone and not to interfere in the other's power zone. Despite the Thirty Years Peace, it was clear that eventual war was inevitable. As noted above, at all times during its history down to 221 BC, Sparta was a "diarchy" with two kings ruling the city-state concurrently. One line of hereditary kings was from the Eurypontid Dynasty while the other king was from the Agiad Dynasty. With the conclusion of the Thirty Years Peace treaty, Archidamus II, the Eurypontid king at the time, felt he had successfully prevented Sparta from entering into a war with its neighbors. However, the strong war party in Sparta soon won out, and in 431 BC Archidamus was forced into going to war with the Delian League. In 427 BC, Archidamus II died and his son, Agis II, succeeded to the Eurypontid throne of Sparta.
The immediate causes of the Peloponnesian War vary from account to account. However, three causes are fairly consistent among the ancient historians, namely Thucydides and Plutarch. Prior to the war, Corinth and one of its colonies, Corcyra (modern-day Corfu), got into a dispute, in 435 BC, over the new Corcyran colony of Epidamnus. War broke out between Corinth and Corcyra. Sparta refused to become involved in the conflict and urged an arbitrated settlement of the struggle. In 433 BC, Corcyra sought the assistance of Athens in its war with Corinth. Corinth was known to be a traditional enemy of Athens. However, to further encourage Athens to enter the conflict, Corcyra pointed out to Athens how useful a friendly relationship with Corcyra would be, given the strategic locations of Corcyra itself and the colony of Epidamnus on the east shore of the Adriatic Sea. Furthermore, Corcyra promised that Athens would have the use of the Corcyran navy, which was the third largest navy in Greece. This was too good an offer for Athens to refuse. Accordingly, Athens signed a defensive alliance with Corcyra.
The next year, in 432 BC, Corinth and Athens argued over control of Potidaea (near modern-day Nea Potidaia), eventually leading to an Athenian siege of Potidaea. In 434-433 BC Athens had issued the "Megarian Decrees", a series of decrees that placed economic sanctions on the Megarian people. Athens was accused by the Peloponnesian allies of violating the Thirty Years Peace through all of the aforementioned actions, and, accordingly, Sparta formally declared war on Athens.
Many historians consider these to be merely the immediate causes of the war. They would argue that the underlying cause was the growing resentment on the part of Sparta and its allies at the dominance of Athens over Greek affairs. The war lasted 27 years, partly because Athens (a naval power) and Sparta (a land-based military power) found it difficult to come to grips with each other.
Sparta's initial strategy was to invade Attica, but the Athenians were able to retreat behind their walls. An outbreak of plague in the city during the siege caused heavy losses, including that of Pericles. At the same time the Athenian fleet landed troops in the Peloponnese, winning battles at Naupactus (429 BC) and Pylos (425 BC). But these tactics could bring neither side a decisive victory. After several years of inconclusive campaigning, the moderate Athenian leader Nicias concluded the Peace of Nicias (421 BC).
In 418 BC, however, hostility between Sparta and the Athenian ally Argos led to a resumption of hostilities. Alcibiades was one of the most influential voices in persuading the Athenians to ally with Argos against the Spartans. At the Battle of Mantinea, Sparta defeated the combined armies of Athens and her allies. Accordingly, Argos and the rest of the Peloponnesus were brought back under the control of Sparta. The return of peace allowed Athens to be diverted from meddling in the affairs of the Peloponnesus and to concentrate on building up the empire and putting its finances in order. Soon trade recovered and tribute once again began rolling into Athens. A strong "peace party" arose, which promoted avoidance of war and continued concentration on the economic growth of the Athenian Empire. Concentration on the Athenian Empire, however, brought Athens into conflict with another Greek state.
Ever since the formation of the Delian League in 477 BC, the island of Melos had refused to join. By remaining outside the League, however, Melos reaped the benefits of the League without bearing any of its burdens. In 425 BC, an Athenian army under Cleon attacked Melos to force the island to join the Delian League. However, Melos fought off the attack and was able to maintain its neutrality. Further conflict was inevitable, and in the spring of 416 BC the mood of the people in Athens was inclined toward military adventure. The island of Melos provided an outlet for this energy, and for the frustration of the military party. Furthermore, there appeared to be no real opposition to this military expedition from the peace party. Enforcement of the economic obligations of the Delian League upon rebellious city-states and islands was a means by which the continuing trade and prosperity of Athens could be assured. Melos alone among the Cycladic Islands of the southwest Aegean Sea had resisted joining the Delian League. This continued resistance set a bad example for the rest of the members of the Delian League.
The debate between Athens and Melos over the issue of joining the Delian League is presented by Thucydides in his Melian Dialogue. The debate did not, in the end, resolve any of the differences between Melos and Athens; Melos was invaded in 416 BC and soon occupied by Athens. This success whetted the appetite of the people of Athens for further expansion of the Athenian Empire. Accordingly, the people of Athens were ready for military action and tended to support the military party, led by Alcibiades.
Thus, in 415 BC, Alcibiades found support within the Athenian Assembly for his position when he urged that Athens launch a major expedition against Syracuse, a Peloponnesian ally in Sicily. Segesta, a town in Sicily, had requested Athenian assistance in its war with another Sicilian town, Selinus. Although Nicias was skeptical about the Sicilian Expedition, he was appointed along with Alcibiades to lead the expedition.
However, unlike the expedition against Melos, the citizens of Athens were deeply divided over Alcibiades' proposal for an expedition to far-off Sicily. The peace party was desperate to foil Alcibiades. Thus, in June 415 BC, on the very eve of the departure of the Athenian fleet for Sicily, a band of vandals in Athens defaced the many statues of the god Hermes that were scattered throughout the city. This action was blamed on Alcibiades and was seen as a bad omen for the coming campaign. In all likelihood, the coordinated action against the statues of Hermes was the work of the peace party. Having lost the debate on the issue, the peace party was desperate to weaken Alcibiades' hold on the people of Athens. Successfully blaming Alcibiades for the action of the vandals would have weakened Alcibiades and the war party in Athens. Furthermore, it is unlikely that Alcibiades would have deliberately defaced the statues of Hermes on the very eve of his departure with the fleet. Such defacement could only have been interpreted as a bad omen for the expedition that he had long advocated.
Even before the fleet reached Sicily, word arrived that Alcibiades was to be arrested and charged with sacrilege of the statues of Hermes. Due to these accusations against him, Alcibiades fled to Sparta before the expedition actually landed in Sicily. When the fleet landed in Sicily and battle was joined, the expedition was a complete disaster. The entire expeditionary force was lost, and Nicias was captured and executed. This was one of the most crushing defeats in the history of Athens.
Meanwhile, Alcibiades betrayed Athens and became a chief advisor to the Spartans and began to counsel them on the best way to defeat his native land. Alcibiades persuaded the Spartans to begin building a real navy for the first time — large enough to challenge the Athenian superiority at sea. Additionally, Alcibiades persuaded the Spartans to ally themselves with their traditional foes — the Persians. As noted below, Alcibiades soon found himself in controversy in Sparta when he was accused of having seduced Timaea, the wife of Agis II, the Eurypontid king of Sparta. Accordingly, Alcibiades was required to flee from Sparta and seek the protection of the Persian Court.
Sparta had now built a fleet (with the financial help of the Persians) to challenge Athenian naval supremacy, and had found a new military leader in Lysander, who attacked Abydos and seized the strategic initiative by occupying the Hellespont, the source of Athens' grain imports. Threatened with starvation, Athens sent its last remaining fleet to confront Lysander, who decisively defeated them at Aegospotami (405 BC). The loss of her fleet threatened Athens with bankruptcy. In 404 BC Athens sued for peace, and Sparta dictated a predictably stern settlement: Athens lost her city walls, her fleet, and all of her overseas possessions. Lysander abolished the democracy and appointed in its place an oligarchy called the "Thirty Tyrants" to govern Athens.
Meanwhile, in Sparta, Timaea gave birth to a child. The child was given the name Leotychidas, son of Agis II, after the great-grandfather of Agis II, King Leotychidas of Sparta. However, because of her alleged dalliance with Alcibiades, it was widely rumoured that the young Leotychidas was actually fathered by Alcibiades. Indeed, Agis II himself refused to acknowledge Leotychidas as his son until, on his deathbed in 400 BC, he relented in front of witnesses.
Upon the death of Agis II, Leotychidas attempted to claim the Eurypontid throne for himself. However, there was an outcry against this attempted succession. The outcry was led by the victorious navarch (admiral) Lysander, who was at the height of his influence in Sparta. Lysander argued that Leotychidas was a bastard and could not inherit the Eurypontid throne. Accordingly, Lysander backed the hereditary claim of Agesilaus, Agis' half-brother (a son of Archidamus II by another wife). Based on the support of Lysander, Agesilaus became the Eurypontid king as Agesilaus II, expelled Leotychidas from the country, and took over all of Agis' estates and property.
4th century BC
The end of the Peloponnesian War left Sparta the master of Greece, but the narrow outlook of the Spartan warrior elite did not suit them to this role. Within a few years the democratic party regained power in Athens and in other cities. In 395 BC, the Spartan rulers removed Lysander from office, and Sparta lost her naval supremacy. Athens, Argos, Thebes, and Corinth, the latter two former Spartan allies, challenged Sparta's dominance in the Corinthian War, which ended inconclusively in 387 BC. That same year Sparta shocked the Greeks by concluding the Treaty of Antalcidas with Persia. The agreement turned the Greek cities of Ionia and Cyprus over to Persia, reversing a hundred years of Greek victories against Persia. Sparta then tried to further weaken the power of Thebes, which led to a war in which Thebes allied with its old enemy Athens.
Then the Theban generals Epaminondas and Pelopidas won a decisive victory at Leuctra (371 BC). The result of this battle was the end of Spartan supremacy and the establishment of Theban dominance, but Athens herself recovered much of her former power because the supremacy of Thebes was short-lived. With the death of Epaminondas at Mantinea (362 BC) the city lost its greatest leader and his successors blundered into an ineffectual ten-year war with Phocis. In 346 BC, the Thebans appealed to Philip II of Macedon to help them against the Phocians, thus drawing Macedon into Greek affairs for the first time.
The Peloponnesian War was a radical turning point for the Greek world. Before 403 BC, the situation was more defined, with Athens and its allies (a zone of domination and stability, with a number of island cities benefiting from Athens' maritime protection), and other states outside this Athenian Empire. The sources denounce this Athenian supremacy (or hegemony) as smothering and disadvantageous.
After 403 BC, things became more complicated, with a number of cities trying to create similar empires over others, all of which proved short-lived. The first of these turnarounds was managed by Athens as early as 390 BC, allowing it to re-establish itself as a major power without regaining its former glory.
The Fall of Sparta
This empire was powerful but short-lived. In 405 BC, the Spartans were masters of all - of Athens' allies and of Athens itself - and their power was undivided. By the end of the century, they could not even defend their own city. As noted above, in 400 BC, Agesilaus became king of Sparta.
Foundation of a Spartan empire
The subject of how to reorganize the Athenian Empire as part of the Spartan Empire provoked much heated debate among Sparta's full citizens. The admiral Lysander felt that the Spartans should rebuild the Athenian empire in such a way that Sparta profited from it, and he tended to be too proud to take advice from others. Prior to this, Spartan law had forbidden the use of all precious metals by private citizens, with transactions being carried out with cumbersome iron ingots (which generally discouraged their accumulation) and all precious metals obtained by the city becoming state property. Without official Spartan support, Lysander's innovations nevertheless came into effect and brought him a great deal of profit - on Samos, for example, festivals known as Lysandreia were organized in his honor. He was recalled to Sparta, and once there did not attend to any important matters.
Sparta refused to see Lysander or his successors dominate. Not wanting to establish a hegemony, they decided after 403 BC not to support the directives that he had made.
Agesilaus came to power by accident at the start of the 4th century BC. This accidental accession meant that, unlike the other Spartan kings, he had the advantage of a Spartan education. The Spartans at this date discovered a conspiracy against the laws of the city conducted by Cinadon and as a result concluded there were too many dangerous worldly elements at work in the Spartan state.
At the Persian Court, Alcibiades now betrayed both sides. Having earlier helped Sparta build a navy commensurate with the Athenian navy, he advised the Persians that a victory of Sparta over Athens was not in the best interest of the Persian Empire. Rather, long and continuous warfare between Sparta and Athens would weaken both city-states and allow the Persians to easily dominate the Greek peninsula.
Among the war party in Athens, a belief arose that the catastrophic defeat of the military expedition to Sicily in 415 BC through 413 BC could have been avoided if Alcibiades had been allowed to lead the expedition. Thus, despite his treacherous flight to Sparta and collaboration with Sparta and, later, with the Persian Court, there arose a demand among the war party that Alcibiades be allowed to return to Athens without being arrested. Alcibiades negotiated with his supporters on the Athenian-controlled island of Samos. Alcibiades felt that "radical democracy" was his worst enemy. Accordingly, he asked his supporters to initiate a coup to establish an oligarchy in Athens. If the coup were successful, Alcibiades promised to return to Athens. In 411 BC, a successful oligarchic coup was mounted in Athens, which became known as "the 400." However, a parallel attempt by the 400 to overthrow democracy in Samos failed. Alcibiades was immediately made an admiral (navarch) in the Athenian navy. Later, due to democratic pressures, the 400 was replaced by a broader oligarchy called "the 5000." Alcibiades did not immediately return to Athens. In early 410 BC, Alcibiades led an Athenian fleet of eighteen triremes (ships) against the Persian-financed Spartan fleet at Abydos near the Hellespont. The Battle of Abydos had actually begun before the arrival of Alcibiades and had been inclining slightly toward the Athenians. However, with the arrival of Alcibiades, the Athenian victory over the Spartans became a rout. Only the approach of nightfall and the movement of Persian troops to the coast where the Spartans had beached their ships saved the Spartan navy from total destruction.
Following the advice that Alcibiades had given the Persian court, the Persian Empire had been playing Sparta and Athens off against each other. However, weak as the Spartan navy was after the Battle of Abydos, the Persian fleet now sought to provide direct assistance to the Spartans. Alcibiades therefore pursued and met the combined Spartan and Persian fleets at the Battle of Cyzicus in the spring of 410 BC, where he and the Athenian navy won a significant victory over the combined navies.
Agesilaus, the Eurypontid King of Sparta, appealed to pan-Hellenic sentiment and launched a successful campaign against the Persian Empire. Once again, however, the Persian Empire played both sides against each other: Persian gold helped Sparta rebuild its navy, while Persian subsidies enabled the Athenians to rebuild their long walls (destroyed in 404 BC), reconstruct their fleet, and win a number of victories.
For most of the first years of his reign, Agesilaus had been engaged in a war against Persia in the Aegean Sea and in Asia Minor. In 394 BC, the Spartan authorities decided to force Agesilaus to return to mainland Greece. While Agesilaus and a large part of the Spartan army were in Asia Minor, the forces protecting the Spartan homeland had been attacked by a coalition of Thebes, Corinth, Athens and Argos. At the Battle of Haliartus the Spartans were defeated by the Theban forces; worse yet, Lysander, Sparta's chief military leader, was killed there. This was the start of what became known as the Corinthian War (395-387 BC). Upon hearing of the loss at Haliartus and of the death of Lysander, Agesilaus marched out of Asia Minor, back across the Hellespont and through Thrace toward Greece. At the Battle of Coronea, he and his Spartan army defeated a Theban force. For six more years, Sparta fought the allied city-states of Thebes, Corinth, Athens and Argos, with Corinth drawing support from this coalition of traditional Spartan enemies. As the war descended into guerrilla tactics, Sparta decided that it could not fight on two fronts and chose to ally with Persia. The long Corinthian War finally ended with the Peace of Antalcidas, or the King's Peace, in which the "Great King" of Persia, Artaxerxes II, pronounced a "treaty" of peace between the various city-states of Greece which broke up all leagues of city-states on the Greek mainland and in the islands of the Aegean Sea. Although this was presented as "independence" for some city-states, the effect of the unilateral "treaty" was highly favorable to the interests of the Persian Empire.
The Corinthian War revealed a significant dynamic that was occurring in Greece. While Athens and Sparta fought each other to exhaustion, Thebes was rising to a position of dominance among the various Greek city-states.
The peace of Antalcidas
In 387 BC, an edict was promulgated by the Persian king: the Greek cities of Asia Minor and Cyprus were reserved to Persia, while the independence of the other Greek Aegean cities was guaranteed, except for Lemnos, Imbros and Skyros, which were given over to Athens. It dissolved existing alliances and federations and forbade the formation of new ones. This was an ultimatum that benefited Athens only to the extent that it retained those three islands. While the "Great King," Artaxerxes, was the guarantor of the peace, Sparta was to act as Persia's agent in enforcing it. To the Persians the document is known as the "King's Peace"; to the Greeks, it is the Peace of Antalcidas, after the Spartan diplomat Antalcidas, who was sent to Persia to negotiate a treaty for Sparta. Sparta had been worried about the developing closer ties between Athens and Persia, and so Antalcidas was sent to get whatever agreement he could from the "Great King." The Peace of Antalcidas was therefore not a negotiated peace at all; rather, it was a surrender to the interests of Persia, drafted entirely to serve them.
On the other hand, this peace had unexpected consequences. In accordance with it, the Boeotian League, or Boeotian confederacy, was dissolved in 386 BC. This confederacy had been dominated by Thebes, a city hostile to Spartan hegemony. Sparta then carried out large-scale operations and peripheral interventions in Epirus and in the north of Greece, culminating, after an expedition into the Chalcidice and the capture of Olynthos, in the seizure of the Theban fortress of the Cadmea. It was a Theban politician who suggested to the Spartan general Phoibidas that Sparta should seize Thebes itself. The act was sharply condemned, yet Sparta eagerly ratified this unilateral move by Phoibidas. The attack was successful and Thebes was placed under Spartan control.
Clash with Thebes
In 378 BC, Spartan control over Thebes was broken by a popular uprising within the city. Elsewhere in Greece, the reaction against Spartan hegemony gathered force when Sphodrias, another Spartan general, tried to carry out a surprise attack on the Piraeus. Although the gates of the Piraeus were no longer fortified, Sphodrias was driven off before reaching it. Back in Sparta, he was put on trial for the failed attack but was acquitted by the Spartan court. Nonetheless, the attempted attack triggered an alliance between Athens and Thebes, and Sparta would now have to fight them both together. Athens was still recovering from its defeat in the Peloponnesian War at the hands of Sparta's navarch (admiral) Lysander in the disaster of 404 BC. The rising spirit of rebellion against Sparta also fueled Thebes' attempt to restore the former Boeotian confederacy. In Boeotia, the Theban leaders Pelopidas and Epaminondas reorganized the Theban army, freed the towns of Boeotia from their Spartan garrisons one by one, and incorporated these towns into the revived Boeotian League. Pelopidas won a great victory for Thebes over a much larger Spartan force at the Battle of Tegyra in 375 BC.
Theban authority grew so spectacularly in such a short time that Athens came to mistrust the growing Theban power and began to consolidate its own position through the formation of a second Athenian League. Attention was drawn to the growing power of Thebes when it began interfering in the political affairs of its neighbor Phocis and, particularly, after Thebes razed the city of Plataea, a long-standing ally of Athens, in 375 BC. The destruction of Plataea caused Athens to negotiate an alliance with Sparta against Thebes in that same year. In 371 BC, the Theban army led by Epaminondas inflicted a bloody defeat on the Spartans at the Battle of Leuctra. Sparta lost a large part of its army and 400 of its 2,000 citizen-troops. The Battle of Leuctra was a watershed in Greek history: Epaminondas' victory ended a long history of Spartan military prestige and dominance over Greece, and the period of Spartan hegemony was over. However, Spartan hegemony was replaced not by Theban but by Athenian hegemony.
The rise of Athens
Return to the 5th century BC
The Athenians forbade themselves any return to the situation of the 5th century. In the decree of Aristoteles, the league's founding charter, Athens proclaimed that its goal was to prevent Spartan hegemony, with the Spartans clearly denounced as "warmongers." Athens' hegemony was no longer a centralized system but an alliance in which the allies had a voice. The Athenians did not sit on the council of the allies, nor was this council headed by an Athenian; it met regularly and served as a political and military counterweight to Athens. This new league was a much more moderate and looser organization.
Financing the league
It was important to erase the bad memories of the former league. Its financial system was not adopted, with no tribute being paid. Instead, syntaxeis were used, irregular contributions as and when Athens and its allies needed troops, collected for a precise reason and spent as quickly as possible. These contributions were not taken to Athens — unlike the 5th century BC system, there was no central exchequer for the league — but to the Athenian generals themselves.
The Athenians had to make their own contribution to the alliance, the eisphora. They reformed how this tax was paid by creating a system of advance payment, the proeisphora, in which the richest individuals had to pay the whole sum of the tax and were then reimbursed by the other contributors. This system was quickly assimilated to a liturgy.
Athenian hegemony halted
This league responded to a real and present need. On the ground, however, the situation within the league proved to have changed little from that of the 5th century BC, with Athenian generals doing what they wanted and able to extort funds from the league. Alliance with Athens again looked unattractive and the allies complained.
The main reasons for the eventual failure were structural. The alliance was valued only out of fear of Sparta, a fear which evaporated after Sparta's fall in 371 BC, depriving the league of its sole raison d'être. The Athenians no longer had the means to fulfil their ambitions and found it difficult merely to finance their own navy, let alone that of an entire alliance, and so could not properly defend their allies. Thus the tyrant of Pherae was able to destroy a number of cities with impunity. From 360 BC, Athens lost its reputation for invincibility, and a number of allies (such as Byzantium and Naxos in 364 BC) decided to secede.
In 357 BC, the revolt against the league spread, and between 357 and 355 Athens had to fight a war against its own allies - a war whose outcome was decided by a decisive intervention of the king of Persia, in the form of an ultimatum demanding that Athens recognise its allies' independence under penalty of his sending 200 triremes against it. Athens had to renounce the war and leave the confederacy to weaken more and more. The Athenians had failed in all their plans and were unable to propose a durable alliance.
Theban hegemony - tentative and with no future
5th century BC Boeotian confederacy (447–386 BC)
This was not Thebes' first attempt at hegemony. It had been the most important city of Boeotia and the center of the previous Boeotian confederacy of 447 BC, dissolved in 386 BC and later resurrected.
That confederacy is well known to us from a papyrus found at Oxyrhynchus and known as "The Anonyme of Thebes". Thebes headed it and set up a system under which obligations were divided among the different cities of the confederacy. Citizenship was defined according to wealth, and Thebes counted 11,000 active citizens.
It was divided up into 11 districts, each providing a federal magistrate called a Boeotarch, a certain number of council members, 1,000 hoplites and 100 horsemen. From the 5th century BC the alliance could field an infantry force of 11,000 men, in addition to an elite corps and a light infantry numbering 10,000; but its real power derived from its cavalry force of 1,100, commanded by a federal magistrate independent of local commanders. It also had a small fleet that played a part in the Peloponnesian War by providing 25 triremes for the Spartans. At the end of the conflict, the fleet consisted of 50 triremes and was commanded by a navarch.
All this constituted a significant enough force that the Spartans were happy to see the Boeotian confederacy dissolved by the King's Peace. The dissolution did not last, however, and in the 370s there was nothing to stop the Thebans (who had recovered the Cadmea, seized by Sparta in 382 BC) from reforming this confederacy.
Pelopidas and Epaminondas endowed Thebes with democratic institutions similar to those of Athens, and the Thebans revived the title of "Boeotarch", abolished under the Persian King's Peace. With the victory at Leuctra and the destruction of Spartan power, the pair achieved their stated objective of renewing the confederacy. Epaminondas rid the Peloponnesus of pro-Spartan oligarchies, replacing them with pro-Theban democracies, constructed cities, and rebuilt a number of those destroyed by Sparta. He also supported the reconstruction of the city of Messene, thanks to an invasion of Laconia that allowed him to liberate the helots and give them Messene as a capital.
In the end he decided to constitute small confederacies all round the Peloponnesus, forming an Arcadian confederacy. (The King's Peace had destroyed a previous Arcadian confederacy and put Messene under Spartan control.)
Confrontation between Athens and Thebes
The strength of the Boeotian League explains Athens' problems with her allies in the second Athenian League. Epaminondas succeeded in convincing his countrymen to build a fleet of 100 triremes to pressure cities into leaving the Athenian league and joining a Boeotian maritime league. Epaminondas and Pelopidas also reformed the army of Thebes, introducing new and more effective means of fighting. The Theban army was thus able to carry the day against the coalition of other Greek states at the Battle of Leuctra in 371 BC and the Battle of Mantinea in 362 BC.
Sparta, too, remained an important power in the face of Theban strength, though some of the cities allied with Sparta turned against her because of Thebes. In 367 BC, both Sparta and Athens sent delegates to Artaxerxes II, the Great King of Persia. These delegates sought to have Artaxerxes once again declare Greek independence and a unilateral common peace, just as he had done twenty years earlier in 387 BC. That earlier unilateral peace treaty, commonly called the "King's Peace" or the "Peace of Antalcidas", had broken all bonds between the various city-states of Greece and, as noted above, had meant the destruction of the Boeotian League. Sparta and Athens now hoped the same thing would happen with a new declaration of a similar "King's Peace" by the Great King of the Persian Empire. Thebes sent Pelopidas to argue against this attempt at a new unilateral "peace treaty" guaranteed by the Persian Empire. This time, however, the Great King was convinced by Pelopidas and the Theban diplomats that Thebes and the Boeotian League would be the best agents of Persian interests in Greece, and accordingly he did not issue a new "King's Peace." To deal with Thebes, Athens and Sparta were thus thrown back on their own resources. Thebes, meanwhile, expanded its influence beyond the bounds of Boeotia: in 364 BC, a Theban army led by Pelopidas defeated the army of Alexander of Pherae at the Battle of Cynoscephalae in southeastern Thessaly, in northern Greece, although Pelopidas himself was killed in the battle.
The confederal framework of Sparta's relationship with her allies was really an artificial one, since it attempted to bring together cities that had never been able to agree on much at all in the past. Such was the case with the cities of Tegea and Mantinea, which re-allied in the Arcadian confederacy: the Mantineans received the support of the Athenians and the Tegeans that of the Thebans. In 362 BC the Theban general Epaminondas led a Theban army against a coalition of Athenian, Spartan, Elean, Mantinean and Achaean forces. Battle was joined at Mantinea; the Thebans prevailed, but the triumph was short-lived, for Epaminondas died in the battle, declaring, "I bequeath to Thebes two daughters, the victory of Leuctra and the victory at Mantinea."
Despite the victory at Mantinea, in the end, the Thebans abandoned their policy of intervention in the Peloponnesus. This event is looked upon as a watershed in Greek history. Thus, Xenophon concludes his history of the Greek world at this point, in 362 BC. The end of this period was even more confused than its beginning. Greece had failed and, according to Xenophon, the history of the Greek world was no longer intelligible.
The idea of hegemony disappeared. From 362 BC onward, there was no longer a single city that could exert hegemonic power in Greece. The Spartans were greatly weakened; the Athenians were in no condition to operate their navy, and after 365 no longer had any allies; Thebes could only exert an ephemeral dominance, and had the means to defeat Sparta and Athens but not to be a major power in Asia Minor.
Other forces also intervened, such as the Persian king, who appointed himself arbitrator between the Greek cities, with the tacit agreement of the cities themselves. This situation reinforced the conflicts, and there was a proliferation of civil wars, with the confederal framework a repeated trigger for war. One war led to another, each longer and bloodier, and the cycle could not be broken. Hostilities even took place during winter for the first time, with the invasion of Laconia in 370 BC.
Rise of Macedon
Thebes sought to maintain its position until it was finally eclipsed by the rising power of Macedon in 346 BC. Energetic leadership in Macedon had begun in 359 BC, when Philip of Macedon was made regent for his nephew Amyntas. Within a short time, Philip was acclaimed king in his own right, as Philip II of Macedon, with succession to the throne established for his own heirs - ultimately Alexander the Great.
"Classical Greece" (Wikipedia) http://en.wikipedia.org/wiki/Classical_Greece | http://en.m.wikibooks.org/wiki/Saylor.org's_Ancient_Civilizations_of_the_World/Wars_and_Expansion | 13 |
Religion, caste, and language are major determinants of social and political organization in India today. The government has recognized 18 languages as official; Hindi is the most widely spoken.
The caste system reflects Indian occupational and socially defined hierarchies. Sanskrit sources refer to four social categories: priests (Brahmin), warriors (Kshatriya), traders (Vaishya) and farmers (Shudra). Although these categories are understood throughout India, they describe reality only in the most general terms. They omit, for example, the tribes and the low castes once known as "untouchables." In reality, society in India is divided into thousands of jatis - local, endogamous groups organized hierarchically according to complex ideas of purity and pollution. Despite economic modernization and laws countering discrimination against those at the lower end of the caste hierarchy, the caste system remains an important source of social identification for most Hindus and a potent factor in the political life of the country.
During the second millennium B.C., pastoral, Aryan-speaking tribes migrated from the northwest into the subcontinent. As they settled in the middle Ganges River valley, they adapted to antecedent cultures.
The political map of ancient and medieval India was made up of myriad kingdoms with fluctuating boundaries. In the 4th and 5th centuries A.D., northern India was unified under the Gupta Dynasty. During this period, known as India's Golden Age, Hindu culture and political administration reached new heights.
Islam spread across the subcontinent over a period of 500 years. In the 10th and 11th centuries, Turks and Afghans invaded India and established sultanates in Delhi. In the early 16th century, Babur, a Chaghtai Turkish adventurer and distant relative of Timur (Tamerlane), established the Mughal Dynasty, which lasted for 200 years. South India followed an independent path, but by the 17th century it too came under the direct rule or influence of the expanding Mughal Empire. While most of Indian society in its thousands of villages remained untouched by the political struggles going on around them, Indian courtly culture evolved into a unique blend of Hindu and Muslim traditions.
The first British outpost in South Asia was established by the English East India Company in 1619 at Surat on the northwestern coast. Later in the century, the Company opened permanent trading stations at Madras, Bombay, and Calcutta, each under the protection of native rulers.
The British expanded their influence from these footholds until, by the 1850s, they controlled most of present-day India, Pakistan, and Bangladesh. In 1857, a rebellion in north India led by mutinous Indian soldiers caused the British Parliament to transfer all political power from the East India Company to the Crown. Great Britain began administering most of India directly while controlling the rest through treaties with local rulers.
In the late 1800s, the first steps were taken toward self-government in British India with the appointment of Indian councilors to advise the British viceroy and the establishment of provincial councils with Indian members; the British subsequently widened participation in legislative councils. Beginning in 1920, Indian leader Mohandas K. Gandhi transformed the Indian National Congress political party into a mass movement to campaign against British colonial rule. The party used both parliamentary and nonviolent resistance and noncooperation to achieve independence.
On August 15, 1947, India became a dominion within the Commonwealth, with Jawaharlal Nehru as Prime Minister. Enmity between Hindus and Muslims led the British to partition British India, creating East and West Pakistan, where there were Muslim majorities. India became a republic within the Commonwealth after promulgating its Constitution on January 26, 1950.
After independence, the Congress Party, the party of Mahatma Gandhi and Jawaharlal Nehru, ruled India under the influence first of Nehru and then his daughter and grandson, with the exception of two brief periods in the 1970s and 1980s.
Prime Minister Nehru governed the nation until his death in 1964. He was succeeded by Lal Bahadur Shastri, who also died in office. In 1966, power passed to Nehru's daughter, Indira Gandhi, Prime Minister from 1966 to 1977. In 1975, beset with deepening political and economic problems, Mrs. Gandhi declared a state of emergency and suspended many civil liberties. Seeking a mandate at the polls for her policies, she called for elections in 1977, only to be defeated by Morarji Desai, who headed the Janata Party, an amalgam of five opposition parties.
In 1979, Desai's Government crumbled. Charan Singh formed an interim government, which was followed by Mrs. Gandhi's return to power in January 1980. On October 31, 1984, Mrs. Gandhi was assassinated, and her son, Rajiv, was chosen by the Congress (I)--for "Indira"--Party to take her place. His Congress government was plagued with allegations of corruption resulting in an early call for national elections in 1989.
In the 1989 elections Rajiv Gandhi and Congress won more seats than any other single party, but he was unable to form a government with a clear majority. The Janata Dal, a union of opposition parties, then joined with the Hindu-nationalist Bharatiya Janata Party (BJP) on the right and the communists on the left to form the government. This loose coalition collapsed in November 1990, and Janata Dal, supported by the Congress (I), came to power for a short period, with Chandra Shekhar as Prime Minister. That alliance also collapsed, resulting in national elections in June 1991.
On May 27, 1991, while campaigning in Tamil Nadu on behalf of Congress (I), Rajiv Gandhi was assassinated, apparently by Tamil extremists from Sri Lanka. In the elections, Congress (I) won 213 parliamentary seats and returned to power at the head of a coalition, under the leadership of P.V. Narasimha Rao. This Congress-led government, which served a full 5-year term, initiated a gradual process of economic liberalization and reform, which opened the Indian economy to global trade and investment. India's domestic politics also took new shape, as the nationalist appeal of the Congress Party gave way to traditional alignments by caste, creed, and ethnicity leading to the founding of a plethora of small, regionally based political parties.
The final months of the Rao-led government in the spring of 1996 were marred by several major political corruption scandals, which contributed to the worst electoral performance by the Congress Party in its history. The Hindu-nationalist Bharatiya Janata Party (BJP) emerged from the May 1996 national elections as the single-largest party in the Lok Sabha but without a parliamentary majority. Under Prime Minister Atal Bihari Vajpayee, the subsequent BJP coalition lasted only 13 days. With all political parties wishing to avoid another round of elections, a 14-party coalition led by the Janata Dal formed a government known as the United Front, under the former Chief Minister of Karnataka, H.D. Deve Gowda. His government collapsed after less than a year, when the Congress Party withdrew its support in March 1997. Inder Kumar Gujral then replaced Deve Gowda as the consensus choice for Prime Minister at the head of a 16-party United Front coalition.
In November 1997, the Congress Party again withdrew support from the United Front. In new elections in February 1998, the BJP won the largest number of seats in Parliament (182) but fell far short of a majority. On March 20, 1998, the President inaugurated a BJP-led coalition government with Vajpayee again serving as Prime Minister. On May 11 and 13, 1998, this government conducted a series of underground nuclear tests, prompting U.S. President Clinton to impose economic sanctions on India pursuant to the 1994 Nuclear Proliferation Prevention Act.
In April 1999, the BJP-led coalition government fell apart, leading to fresh elections in September. The National Democratic Alliance--a new coalition led by the BJP--gained a majority to form the government with Vajpayee as Prime Minister in October 1999.
The Kargil conflict in 1999 and an attack on the Indian Parliament in December 2001 led to increased tensions with Pakistan. Hindu nationalists have long agitated to build a temple on a disputed site in Ayodhya. In February 2002, a mob of Muslims attacked a train carrying Hindu volunteers returning from Ayodhya to the state of Gujarat, and 57 were burnt alive. Over 900 people were killed and 100,000 left homeless in the resulting anti-Muslim riots throughout the state. This led to accusations that the state government had not done enough to contain the riots, or arrest and prosecute the rioters.
The ruling BJP-led coalition was defeated in a five-stage election held in April and May of 2004, and a Congress-led coalition took power on May 22 with Manmohan Singh as Prime Minister.
The government exercises its broad administrative powers in the name of the president, whose duties are largely ceremonial. A special electoral college elects the president and vice president indirectly for 5-year terms. Their terms are staggered, and the vice president does not automatically become president following the death or removal from office of the president.
Real national executive power is centered in the Council of Ministers (cabinet), led by the prime minister. The president appoints the prime minister, who is designated by legislators of the political party or coalition commanding a parliamentary majority in the Lok Sabha. The president then appoints subordinate ministers on the advice of the prime minister.
India's bicameral parliament consists of the Rajya Sabha (Council of States) and the Lok Sabha (House of the People). The Council of Ministers is responsible to the Lok Sabha.
The legislatures of the states and union territories elect 233 members to the Rajya Sabha, and the president appoints another 12. The members of the Rajya Sabha serve 6-year terms, with one-third up for election every 2 years. The Lok Sabha consists of 545 members, who serve 5-year terms; 543 are directly elected, and two are appointed.
India's independent judicial system began under the British, and its concepts and procedures resemble those of Anglo-Saxon countries. The Supreme Court consists of a chief justice and 25 other justices, all appointed by the president on the advice of the prime minister.
India has 28 states and 7 union territories. At the state level, some of the legislatures are bicameral, patterned after the two houses of the national parliament. The states' chief ministers are responsible to the legislatures in the same way the prime minister is responsible to parliament.
Each state also has a presidentially appointed governor, who may assume certain broad powers when directed by the central government. The central government exerts greater control over the union territories than over the states, although some territories have gained more power to administer their own affairs. Local governments in India have less autonomy than their counterparts in the United States. Some states are trying to revitalize the traditional village councils, or panchayats, to promote popular democratic participation at the village level, where much of the population still lives.
Principal Government Officials
India maintains an embassy in the United States at 2107 Massachusetts Avenue NW, Washington, DC 20008 (tel. 202-939-7000, fax 202-265-4351, email firstname.lastname@example.org) and consulates general in New York, Chicago, Houston, and San Francisco. The embassy’s web site is http://www.indianembassy.org/.
Emerging as the nation's single largest party in the April/May 2004 Lok Sabha (Lower House of Parliament) election, Congress currently leads a coalition government under Prime Minister Manmohan Singh. Party President Sonia Gandhi was re-elected by the party's National Executive in May 2004; she is also a Member of Parliament and leader of the Congress delegation in the Lok Sabha. Congress prides itself on being a secular, left-of-center party and has historically been the dominant political party in India. Although its performance in national elections had steadily declined over the previous 12 years, its surprise victory in 2004 was the result of recruiting strong allies into the UPA, the anti-incumbency factor among voters, and winning the votes of many poor, rural and Muslim voters. The political fortunes of the Congress had suffered badly in the 1990s as major groups in its traditional vote bank were lost to emerging regional and caste-based parties, such as the Bahujan Samaj Party and the Samajwadi Party. In November 2003, elections in five states reduced the number of Congress-ruled states from 14.5 to 11.5 (the Congress shares power with the People's Democratic Party in the state of Jammu and Kashmir) and convinced the BJP to move the Lok Sabha elections up from October to May.
The Bharatiya Janata Party (BJP), led by Venkaiah Naidu, holds the second-largest number of seats in the Lok Sabha. Former Prime Minister Atal Bihari Vajpayee serves as Chairman of the BJP Parliamentary Party, and former Deputy Prime Minister L.K. Advani is Leader of the Opposition. The Hindu-nationalist BJP draws its political strength mainly from the "Hindi Belt" in the northern and western regions of India.
The party holds power in the states of Gujarat, Jharkhand, Goa, Arunachal Pradesh, Madhya Pradesh, Rajasthan, Chhattisgarh, and Orissa (in coalition with the Biju Janata Dal), and in Haryana (in coalition with the Indian National Lok Dal). Popularly viewed as the party of the northern upper castes and trading communities, the BJP has made strong inroads into the lower-caste vote bank in recent national and state assembly elections. The party must balance the competing interests of Hindu nationalists, who advocate construction of a temple on a disputed site in Ayodhya, and center-right modernizers who see the BJP as a party of economic and political reform.
Four Communist and Marxist parties are united in a bloc called the "Left Front," which controls 59 parliamentary seats. The Left Front rules the state of West Bengal and participates in a governing coalition in Kerala. Although it has not joined the government, Left Front support provides the crucial seats necessary for the UPA to retain power in New Delhi; without its support, the UPA government would fall. It advocates a secular and Communist ideology and opposes many aspects of economic liberalization and globalization.
The next general election is scheduled for 2009.
India is continuing to move forward with market-oriented economic reforms that began in 1991. Recent reforms include liberalized foreign investment and exchange regimes, industrial decontrol, significant reductions in tariffs and other trade barriers, reform and modernization of the financial sector, significant adjustments in government monetary and fiscal policies and safeguarding intellectual property rights.
Real GDP growth for the fiscal year ending March 31, 2004 was 8.17%, up from the drought-depressed 4.0% growth in the previous year. Growth for the year ending March 31, 2005 is expected to be between 6.5% and 7.0%. Foreign portfolio and direct investment in-flows have risen significantly in recent years. They have contributed to the $120 billion in foreign exchange reserves at the end of June 2004. Government receipts from privatization were about $3 billion in fiscal year 2003-04.
However, economic growth is constrained by inadequate infrastructure, a cumbersome bureaucracy, corruption, labor market rigidities, regulatory and foreign investment controls, the "reservation" of key products for small-scale industries and high fiscal deficits. The outlook for further trade liberalization is mixed. India eliminated quotas on 1,420 consumer imports in 2002 and has announced its intention to continue to lower customs duties. However, the tax structure is complex with compounding effects of various taxes.
The United States is India's largest trading partner. Bilateral trade in 2003 was $18.1 billion and is expected to reach $20 billion in 2004. Principal U.S. exports are diagnostic or lab reagents, aircraft and parts, advanced machinery, cotton, fertilizers, ferrous waste/scrap metal and computer hardware. Major U.S. imports from India include textiles and ready-made garments, internet-enabled services, agricultural and related products, gems and jewelry, leather products and chemicals.
The rapidly growing software sector is boosting service exports and modernizing India's economy. Revenues from IT industry are expected to cross $20 billion in 2004-05. Software exports were $12.5 billion in 2003-04. PC penetration is 8 per 1,000 persons, but is expected to grow to 10 per 1,000 by 2005. The cellular mobile market is expected to surge to over 50 million subscribers by 2005 from the present 36 million users. The country has 52 million cable TV customers.
The United States is India's largest investment partner, with total inflow of U.S. direct investment estimated at $3.7 billion in 2003. Proposals for direct foreign investment are considered by the Foreign Investment Promotion Board and generally receive government approval. Automatic approvals are available for investments involving up to 100% foreign equity, depending on the kind of industry. Foreign investment is particularly sought after in power generation, telecommunications, ports, roads, petroleum exploration/processing and mining.
India's external debt was $112 billion in 2003, up from $105 billion in 2002. Bilateral assistance was approximately $2.62 billion in 2002-03, with the United States providing about $130.2 million in development assistance in 2003. The World Bank plans to double aid to India to almost $3 billion over the next four years, beginning in July 2004.
The Defence Committee of the Cabinet takes decisions on all matters of policy concerning defense. That committee consists of the Prime Minister, the Defence Minister, the Home Minister, the Finance Minister, and the Transport & Communications Minister.
Jointness is coming to the Indian armed forces. A post, the Chief of Integrated Service Command, looks after the integration of the defense services under the proposed Chief of Defence Staff plan. This organization is supported by a Joint Integrated Defence Staff, with elements drawn from the three services and various departments in the Ministry of Defence and the Ministry of External Affairs.
The Indian Army numbers over 1.1 million strong and fields 34 divisions. Its primary task is to safeguard the territorial integrity of the country against external threats. The Army has been heavily committed in the recent past to counterterrorism operations in Jammu and Kashmir, as well as in the Northeast. Its current modernization program focuses on obtaining equipment to be used in combating terror. The Army also often finds itself providing aid to civil authorities and assisting the government in organizing relief operations.
The Indian Navy is by far the most capable navy in the region. It currently operates one aircraft carrier (with two on order), 14 submarines, and 15 major surface combatants. The navy is capable of projecting power within the Indian Ocean basin and occasionally operates in the South China Sea, the Mediterranean Sea and the Arabian Gulf. Fleet introduction of the Brahmos cruise missile (expected in 2005) and the possible lease of nuclear submarines from Russia will add significantly to the Indian Navy's flexibility and striking power. The Navy's primary missions are the defense of India and of India's vital sea lines of communication. India relies on the sea for 90% of its oil and natural gas and over 90% of its foreign trade.
Although small, the Indian Coast Guard has been expanding rapidly in recent years. Indian Navy officers typically fill top Coast Guard positions to ensure coordination between the two services. India’s Coast Guard is responsible for control of India’s huge exclusive economic zone.
The Indian Air Force is in the process of becoming a viable 21st-century, Western-style force through modernization and new tactics. Force modernization is key to this transformation, with the new SU-30MKI becoming the backbone of a power-projection capability. Other significant modernization efforts include the induction of a new advanced jet trainer (the BAE Hawk) and the indigenously produced advanced light helicopter (Dhruv).
Media literacy is a repertoire of competences that enable people to analyze, evaluate, and create messages in a wide variety of media modes, genres, and forms.
Media education is the process of teaching and learning about media. It is about developing young people's critical and creative abilities when it comes to the media, and it should not be confused with educational technology or with educational media. Surveys repeatedly show that, in most industrialized countries, children now spend more time watching television than they do in school, or on any other activity apart from sleeping. Media education has no fixed location, no clear ideology and no definitive recipients; it is subject to the whims of a financial market bigger than itself. Being able to understand the media enables people to analyze, evaluate, and create messages in a wide variety of media, genres, and forms; a person who is media literate is informed. There are many reasons why media studies are absent from primary and secondary school curricula, including cuts in budgets and social services as well as over-packed schedules and expectations.
Education for media literacy often uses an inquiry-based pedagogic model that encourages people to ask questions about what they watch, hear, and read. Media literacy education provides tools to help people critically analyze messages, offers opportunities for learners to broaden their experience of media, and helps them develop creative skills in making their own media messages. Critical analysis can include identifying author, purpose and point of view, examining construction techniques and genres, examining patterns of media representation, and detecting propaganda, censorship, and bias in news and public affairs programming (and the reasons for these). Media literacy education may explore how structural features - such as media ownership or its funding model - affect the information presented.
Media literate people should be able to skillfully create and produce media messages, both to show understanding of the specific qualities of each medium, as well as to create independent media and participate as active citizens. Media literacy can be seen as contributing to an expanded conceptualization of literacy, treating mass media, popular culture and digital media as new types of 'texts' that require analysis and evaluation. By transforming the process of media consumption into an active and critical process, people gain greater awareness of the potential for misrepresentation and manipulation (especially through commercials and public relations techniques), and understand the role of mass media and participatory media in constructing views of reality.
Media literacy education is sometimes conceptualized as a way to address the negative dimensions of mass media, popular culture and digital media, including media violence, gender and racial stereotypes, the sexualization of children, and concerns about loss of privacy, cyberbullying and Internet predators. By building knowledge and competencies in using media and technology, media literacy education may provide a type of protection to children and young people by helping them make good choices in their media consumption habits, and patterns of usage.
Concepts of media education
Media education can be approached in many ways. In general, it has come to be defined in terms of conceptual understandings of the media, usually expressed as key concepts or key aspects. This approach does not specify particular objects of study, which enables media education to remain responsive to students' interests and enthusiasms. David Buckingham has proposed four key concepts that "provide a theoretical framework which can be applied to the whole range of contemporary media and to 'older' media as well: Production, Language, Representation, and Audience." These concepts are defined by Buckingham as follows:
Production involves the recognition that media texts are consciously made. Some media texts are made by individuals working alone, just for themselves or their family and friends, but most are produced and distributed by groups of people often for commercial profit. This means recognizing the economic interests that are at stake in media production, and the ways in which profits are generated. More confident students in media education should be able to debate the implications of these developments in terms of national and cultural identities, and in terms of the range of social groups that are able to gain access to media.
Studying media production means looking at:
- Technologies: what technologies are used to produce and distribute media texts?
- Professional practices: Who makes media texts?
- The industry: Who owns the companies that buy and sell media and how do they make a profit?
- Connections between media: How do companies sell the same products across different media?
- Regulation: Who controls the production and distribution of media, and are there laws about this?
- Circulation and distribution: How do texts reach their audiences?
- Access and participation: Whose voices are heard in the media and whose are excluded?
Every medium has its own combination of languages that it uses to communicate meaning. For example, television uses verbal and written language as well as the languages of moving images and sound. Particular kinds of music or camera angles may be used to encourage certain emotions. When it comes to verbal language, making meaningful statements in media languages involves "paradigmatic choices" and "syntagmatic combinations". By analyzing these languages, one can come to a better understanding of how meanings are created.
Studying media languages means looking at:
- Meanings: How does media use different forms of language to convey ideas or meanings?
- Conventions: How do these uses of languages become familiar and generally accepted?
- Codes: How are the grammatical 'rules' of media established and what happens when they are broken?
- Genres: How do these conventions and codes operate in different types of media contexts?
- Choices: What are the effects of choosing certain forms of language, such as a certain type of camera shot?
- Combinations: How is meaning conveyed through the combination or sequencing of images, sounds, or words?
- Technologies: How do technologies affect the meanings that can be created?
The notion of 'representation' is one of the first established principles of media education. The media offer viewers a mediated view of the world; they re-present, rather than simply reflect, reality. Media production involves selecting and combining incidents, making events into stories, and creating characters. Media representations allow viewers to see the world in some particular ways and not others. Audiences also compare media with their own experiences and make judgements about how realistic they are. Media representations can be seen as real in some ways but not in others: viewers may understand that what they are seeing is only imaginary and yet still believe that it tells them something about reality.
Studying media representations means looking at:
- Realism: Is this text intended to be realistic? Why do some texts seem more realistic than others?
- Telling the truth: How do media claim to tell the truth about the world?
- Presence and absence: What is included and excluded from the media world?
- Bias and objectivity: Do media texts support particular views about the world? Do they use moral or political values?
- Stereotyping: How do media represent particular social groups? Are those representations accurate?
- Interpretations: Why do audiences accept some media representations as true, or reject others as false?
- Influences: Do media representations affect our views of particular social groups or issues?
Studying audiences means looking at how audiences are targeted, measured, and reached through the circulation and distribution of media, and at the different ways in which individuals use, interpret, and respond to media. The media increasingly have to compete for people's attention and interest, and research has shown that audiences are far more sophisticated and diverse than was suggested in the past. Debating views about audiences, and attempting to understand and reflect on our own and others' use of media, is therefore a crucial element of media education.
Studying media audiences means looking at:
- Targeting: How are media aimed at particular audiences?
- Address: How do the media speak to audiences?
- Circulation: How do media reach audiences?
- Uses: How do audiences use media in their daily lives? What are their habits and patterns of use?
- Making sense: How do audiences interpret media? What meanings do they make?
- Pleasures: What pleasures do audiences gain from media?
- Social differences: What is the role of gender, social class, age, and ethnic background in audience behavior?
UNESCO and media education
UNESCO has long-standing experience with media literacy and education. The organization has supported a number of initiatives to introduce media and information literacy as an important part of lifelong learning. Most recently, the UNESCO Action for Media Education and Literacy brought together experts from numerous regions of the world to "catalyze processes to introduce media and information literacy components into teacher training curricula worldwide."
UNESCO questionnaire
In 2001, UNESCO sent out a media education survey in order to better understand which countries were incorporating media studies into their school curricula and to help develop new initiatives in the field of media education. A questionnaire was sent to a total of 72 experts on media education in 52 countries around the world. The recipients included academics (such as teachers), policy makers, and educational advisers. The questionnaire addressed three key areas:
1) “Media education in schools: the extent, aims, and conceptual basis of current provision; the nature of assessment; and the role of production by students.”
2) "Partnerships: the involvement of media industries and media regulators in media education; the role of informal youth groups; the provision of teacher education.”
3) “The development of media education: research and evaluation of media education provision; the main needs of educators; obstacles to future development; and the potential contribution of UNESCO.”
The survey results were mixed. Media education had been making very uneven progress: while one country might show an abundance of work on media education, another might hardly have heard of the concept. One of the main reasons media education has not taken hold in some countries is the lack of policy makers addressing the issue. In some developing countries, educators reported that media education was only just beginning to register as a concern because they were still working to establish basic print literacy.
In the countries where media education existed at all, it was offered as an elective class or an optional area of the school system rather than as a subject in its own right. Many countries argued that media education should not be a separate part of the curriculum but should instead be incorporated into subjects already established. The countries that treated media education as part of the curriculum included the United States, Canada, Mexico, New Zealand, and Australia. Many countries, including Russia and Sweden, lacked even basic research on media education as a topic, and some respondents said that popular culture was not worthy of study. But all of the respondents recognized the importance of media education, as well as the importance of formal recognition from their governments and policy makers that media education should be taught in schools.
Media literacy education is actively focused on the instructional methods and pedagogy of media literacy, integrating theoretical and critical frameworks arising from constructivist learning theory, media studies and cultural studies scholarship. This work has arisen from a legacy of media and technology use in education throughout the 20th century and the emergence of cross-disciplinary work at the intersections of scholarly work in media studies and education. Voices of Media Literacy, a project of the Center for Media Literacy representing first-person interviews with media literacy pioneers active prior to 1990 in English-speaking countries, provides historical context for the rise of the media literacy field and is available at http://www.medialit.org/voices-media-literacy-international-pioneers-speak. Media education is developing in Great Britain, Australia, South Africa, Canada and the United States, with a growing interest in the Netherlands, Italy, Greece, Austria, Switzerland, India, Russia and many other nations. UNESCO has played an important role in supporting media and information literacy by encouraging the development of national information and media literacy policies, including in education. UNESCO has also developed training resources to help teachers integrate information and media literacy into their teaching and provide them with appropriate pedagogical methods and curricula.
United Kingdom
Education for what is now termed media literacy has been developing in the UK since at least the 1930s. In the 1960s, there was a paradigm shift toward working within popular culture rather than trying to convince people that popular culture was primarily destructive; this became known as the popular arts paradigm. In the 1970s came a recognition that the ideological power of the media was tied to the naturalization of the image: constructed messages were being passed off as natural ones. The focus of media literacy also shifted to the consumption of images and representations, also known as the representational paradigm. Development has gathered pace since the 1970s, when the first formal courses in Film Studies and, later, Media Studies were established as options for young people in the 14-19 age range; over 100,000 students (about 5% of this age range) now take these courses annually. Scotland has always had a separate education system from the rest of the UK and began to develop policies for media education in the 1980s. In England, the creation of the National Curriculum in 1990 included some limited requirements for teaching about the media as part of English. The UK is widely regarded as a leader in the development of education for media literacy. Key agencies involved in this development include the British Film Institute, the English and Media Centre, Film Education, and the Centre for the Study of Children, Youth and Media at the Institute of Education, London.
In Australia, media education was influenced by developments in Britain related to the inoculation, popular arts, and demystification approaches. Key theorists who influenced Australian media education were Graeme Turner and John Hartley, who helped develop Australian media and cultural studies. During the 1980s and 1990s, the Western Australians Robyn Quin and Barrie MacMahon wrote seminal textbooks such as Real Images, translating many complex media theories into classroom-appropriate learning frameworks. In most Australian states, media is one of five strands of the Arts Key Learning Area and includes "essential learnings" or "outcomes" listed for various stages of development. At the senior level (years 11 and 12), several states offer Media Studies as an elective; for example, many Queensland schools offer Film, Television and New Media, while Victorian schools offer VCE Media. Media education is supported by the teacher professional association Australian Teachers of Media, which publishes a range of resources as well as the journal Screen Education.
In South Africa, the increasing demand for Media Education has evolved from the dismantling of apartheid and the 1994 democratic elections. The first national Media Education conference in South Africa was actually held in 1990 and the new national curriculum has been in the writing stages since 1997. Since this curriculum strives to reflect the values and principles of a democratic society there seems to be an opportunity for critical literacy and Media Education in Languages and Culture courses.
In Europe, media education has taken many different forms. Media education was introduced into the Finnish elementary curriculum in 1970 and into high schools in 1977, but the media education we know today did not evolve in Finland until the 1990s. Media education has been compulsory in Sweden since 1980 and in Denmark since 1970. In both of these countries, media education evolved in the 1980s and 1990s as it gradually moved away from moralizing attitudes toward a more inquiring, pupil-centered approach. In 1994, the Danish education bill gave recognition to media education, but it is still not an integrated part of the school; the focus in Denmark appears to be on information technology.
France has taught film since the inception of the medium, but it is only recently that conferences and media courses for teachers have been organized, with the inclusion of media production. Germany saw theoretical publications on media literacy in the 1970s and 1980s, with growing interest in media education inside and outside the educational system in the 1980s and 1990s. In the Netherlands, media literacy was placed on the agenda by the Dutch government in 2006 as an important subject for Dutch society. In April 2008, an official center was created by the Dutch government (mediawijsheid expertisecentrum, the media literacy expertise center). The center is more of a network organization, consisting of different partners, each with their own expertise on the subject of media education. The idea is that media education will become part of the official curriculum.
The history of media education in Russia goes back to the 1920s. The first attempts at media education (based on press and film materials, with a vigorous emphasis on communist ideology) appeared in the 1920s but were stopped by Joseph Stalin's repressions. The end of the 1950s and the beginning of the 1960s saw the revival of media education in secondary schools, universities and after-school children's centers (in Moscow, Saint Petersburg, Voronezh, Samara, Kurgan, Tver, Rostov-on-Don, Taganrog, Novosibirsk, Ekaterinburg, etc.), and the revival of media education seminars and conferences for teachers. While intensive rethinking of media education approaches was under way in the Western hemisphere, in the Russia of the 1970s-1980s media education was still developing within the aesthetic concept. Among the important achievements of the 1970s-1990s were the first official programs of film and media education published by the Ministry of Education, increasing interest in media education among doctoral researchers, and experimental theoretical and practical work on media education by O. Baranov (Tver), S. Penzin (Voronezh), G. Polichko, U. Rabinovich (Kurgan), Y. Usov (Moscow), Aleksandr Fyodorov (Taganrog), A. Sharikov (Moscow) and others. Important recent events in the development of media education in Russia are the registration in 2002 of a new specialization for the pedagogical universities, "Media Education" (№ 03.13.30), and the launch in January 2005 of a new academic journal, "Media Education", partly sponsored by the ICOS UNESCO "Information for All". In addition, Internet sites of the Russian Association for Film and Media Education (in English and Russian versions) were created. Given that UNESCO defines media education as a priority field of cultural and educational development in the 21st century, media literacy has good prospects in Russia.
In North America, the beginnings of a formalized approach to media literacy as a topic of education are often attributed to the 1978 formation of the Ontario-based Association for Media Literacy (AML). Before that time, instruction in media education was usually the purview of individual teachers and practitioners. Canada was the first country in North America to require media literacy in the school curriculum, and every province has mandated media education in its curriculum. For example, the new Quebec curriculum mandates media literacy from Grade 1 until the final year of secondary school (Secondary V). Media education took hold in Canada for two reasons: concern about the pervasiveness of American popular culture, and the education system's need for contexts for new educational paradigms. Canadian communication scholar Marshall McLuhan ignited the North American educational movement for media literacy in the 1950s and 1960s. Two of Canada's leaders in media literacy and media education are Barry Duncan and John Pungente. Duncan, who remained active in media education even after retiring from classroom teaching, died on June 6, 2012. Pungente is a Jesuit priest who has promoted media literacy since the early 1960s.
Media Awareness Network (MNet), a Canadian non-profit media education organization, hosts a Web site containing hundreds of free lesson plans to help teachers integrate media into the classroom. MNet has also created award-winning educational games on media education topics, several of which are available free from the site, and has conducted original research on media issues, most notably the study Young Canadians in a Wired World. MNet also hosts the Talk Media Blog, a regular column on media education issues.
The United States
Media literacy education has been an interest in the United States since the early 20th century, when high school English teachers first started using film to develop students' critical thinking and communication skills. However, media literacy education is distinct from simply using media and technology in the classroom, a distinction that is exemplified by the difference between "teaching with media" and "teaching about media." In the 1950s and 1960s, the 'film grammar' approach to media literacy education developed in the United States, as educators began to show commercial films to children and have them learn a new terminology consisting of words such as fade, dissolve, truck, pan, zoom, and cut. Films were connected to literature and history. To understand the constructed nature of film, students explored plot development, character, mood, and tone. Then, during the 1970s and 1980s, attitudes about mass media and mass culture began to shift. Around the English-speaking world, educators began to realize the need to "guard against our prejudice of thinking of print as the only real medium that the English teacher has a stake in." A whole generation of educators began not only to acknowledge film and television as new, legitimate forms of expression and communication, but also to explore practical ways to promote serious inquiry and analysis in higher education, in the family, in schools, and in society. Typically, U.S. media literacy education includes a focus on news, advertising, issues of representation, and media ownership. Media literacy competencies can also be cultivated in the home, through activities including co-viewing and discussion.
Media literacy education began to appear in state English education curriculum frameworks by the early 1990s as a result of increased awareness of the central role of visual, electronic, and digital media in contemporary culture. Nearly all 50 states have language that supports media literacy in state curriculum frameworks. In 2004, Montana developed educational standards around media literacy in which students are required to be competent by grades 4, 8, and 12. Additionally, an increasing number of school districts have begun to develop school-wide programs, elective courses, and other after-school opportunities for media analysis and production.
Media literacy education is now gaining momentum in the United States because of the increased emphasis on 21st century literacy, which incorporates media and information literacy, collaboration and problem-solving skills, and an emphasis on the social responsibilities of communication. More than 600 educators are members of the National Association for Media Literacy Education (NAMLE), a national membership group that hosts a bi-annual conference. In 2009, this group developed an influential policy document, the Core Principles of Media Literacy Education in the United States. It states, "The purpose of media literacy education is to help individuals of all ages develop the habits of inquiry and skills of expression that they need to be critical thinkers, effective communicators and active citizens in today's world." Its principles include: (1) Media Literacy Education requires active inquiry and critical thinking about the messages we receive and create; (2) Media Literacy Education expands the concept of literacy (i.e., reading and writing) to include all forms of media; (3) Media Literacy Education builds and reinforces skills for learners of all ages, and, like print literacy, those skills necessitate integrated, interactive, and repeated practice; (4) Media Literacy Education develops informed, reflective and engaged participants essential for a democratic society; (5) Media Literacy Education recognizes that media are a part of culture and function as agents of socialization; and (6) Media Literacy Education affirms that people use their individual skills, beliefs and experiences to construct their own meanings from media messages.
In the United States, various stakeholders struggle over nuances of meaning associated with the conceptualization of the practice of media literacy education. Educational scholars may use the term critical media literacy to emphasize the exploration of power and ideology in media analysis. Other scholars may use terms like new media literacy to emphasize the application of media literacy to user-generated content, or 21st century literacy to emphasize the use of technology tools. As far back as 2001, the Action Coalition for Media Education (ACME) split from the main media literacy organization as the result of debate about whether or not the media industry should support the growth of media literacy education in the United States. Renee Hobbs of Temple University in Philadelphia wrote about this general question as one of the "Seven Great Debates" in media literacy education in an influential 1998 Journal of Communication article.
The media industry has supported media literacy education in the United States. Make Media Matter is one of the many blogs (an “interactive forum”) the Independent Film Channel features as a way for individuals to assess the role media plays in society and the world. The television program, The Media Project, offers a critical look at the state of news media in contemporary society. During the 1990s, the Discovery Channel supported the implementation of Assignment: Media Literacy, a statewide educational initiative for K-12 students developed in collaboration with the Maryland State Board of Education.
Because of the decentralized nature of the education system in a country with 70 million children now in public or private schools, media literacy education develops as the result of groups of advocates in school districts, states or regions who lobby for its inclusion in the curriculum. There is no central authority making nationwide curriculum recommendations, and each of the fifty states has numerous school districts, each of which operates with a great degree of independence from one another. However, most U.S. states include media literacy in health education, with an emphasis on understanding environmental influences on health decision-making. Tobacco and alcohol advertising are frequently targeted as objects for "deconstruction," which is one of the instructional methods of media literacy education. This resulted from an emphasis on media literacy generated by the Clinton White House. The Office of National Drug Control Policy (ONDCP) held a series of conferences in 1996 and 1997 which brought greater awareness of media literacy education as a promising practice in health and substance abuse prevention education. The medical and public health community now recognizes the media as a cultural environmental influence on health and sees media literacy education as a strategy to support the development of healthy behavior.
Interdisciplinary scholarship in media literacy education is emerging. In 2009, a scholarly journal, the Journal of Media Literacy Education, was launched to support the work of scholars and practitioners in the field. Universities such as Appalachian State University, Columbia University, Ithaca College, New York University, the University of Texas-Austin, Temple University, and the University of Maryland offer courses and summer institutes in media literacy for pre-service teachers and graduate students. Brigham Young University offers a graduate program in media education specifically for in-service teachers. The Salzburg Academy for Media and Global Change is another institution that educates students and professionals from around the world about the importance of media literacy.
See also
- Information and media literacy
- Information literacy
- Postliterate society
- Visual literacy
- Buckingham, David (2007). Media education: literacy, learning and contemporary culture (Reprinted ed.). Cambridge [u.a.]: Polity Press. ISBN 0745628303.
- Rideout, Livingstone, Bovill. "Children's Usage of Media Technologies". SAGE Publications. Retrieved 2012-04-25.
- The European Charter for Media Literacy. Euromedialiteracy.eu. Retrieved on 2011-12-21.
- See Corporate media and Public service broadcasting
- e.g., Media Literacy Resource Guide.
- Frau-Meigs, D. 2008. "Media education: Crossing a mental Rubicon." In Empowerment through media education: An intercultural dialogue, ed. Ulla Carlsson, Samy Tayie, Genevieve Jacquinot-Delaunay and Jose Manuel Perez Tornero (pp. 169–180). Goteborg University, Sweden: The International Clearinghouse on Children, Youth and Media, Nordicom in cooperation with UNESCO, Dar Graphit and Mentor Association.
- "UNESCO Media Literacy". Retrieved 2012-04-25.
- Domaille, Kate; Buckingham, David. "Where Are We Going and How Can We Get There?". Retrieved 2012-04-25.
- Media and Information Literacy. Portal.unesco.org. Retrieved on 2011-12-21.
- Buckingham, David
- Education. BFI (2010-11-03). Retrieved on 2011-12-21.
- English and Media Centre | Home. Englishandmedia.co.uk. Retrieved on 2011-12-21.
- Home. Film Education. Retrieved on 2011-12-21.
- at Zerolab.info. Cscym.zerolab.info. Retrieved on 2011-12-21.
- Culver, S., Hobbs, R. & Jensen, A. (2010). Media Literacy in the United States. International Media Literacy Research Forum.
- Hazard, P. and M. Hazard. 1961. The public arts: Multi-media literacy. English Journal 50 (2): 132-133, p. 133.
- Hobbs, R. & Jensen, A. (2009). The past, present and future of media literacy education. Journal of Media Literacy Education 1(1), 1 -11.
- What's Really Best for Learning?
- Hobbs, R. (2005). Media literacy and the K-12 content areas. In G. Schwarz and P. Brown (Eds.) Media literacy: Transforming curriculum and teaching. National Society for the Study of Education, Yearbook 104. Malden, MA: Blackwell (pp. 74 – 99).
- Core Principles of MLE : National Association for Media Literacy Education. Namle.net. Retrieved on 2011-12-21.
- Hobbs, R. (2006) Multiple visions of multimedia literacy: Emerging areas of synthesis. In Handbook of literacy and technology, Volume II. International Reading Association. Michael McKenna, Linda Labbo, Ron Kieffer and David Reinking, Editors. Mahwah: Lawrence Erlbaum Associates (pp. 15 -28).
- Hobbs, R. (1998). The seven great debates in the media literacy movement. Journal of Communication, 48 (2), 9-29.
- Journal of Media Literacy Education
- Fedorov, Alexander. Media Education and Media Literacy. LAP Lambert Academic Publishing, 364 p.
- Study on Assessment Criteria for Media Literacy Levels PDF
- Study on the Current Trends and Approaches to Media Literacy in Europe PDF
- A Journey to Media Literacy Community - A space for collaboration to promote media literacy concepts as well as a learning tool to become media-wise.
- Audiovisual and Media Policies - Media Literacy at the European Commission
- Center for Media Literacy - providing the CML MediaLit Kit with Five Core Concepts and Five Key Questions of media literacy
- EAVI - European Association for Viewers' Interests - Not for profit international organisation working in the field of media literacy
- Information Literacy and Media Education
- National Association for Media Literacy Education
- Project Look Sharp - an initiative of Ithaca College to provide materials, training and support for the effective integration of media literacy with critical thinking into classroom curricula at all education levels.
- Media Education Lab at the University of Rhode Island - Improves the practices of digital and media literacy education through scholarship and community service.
- MED - Associazione italiana per l'educazione ai media e alla comunicazione - the Italian Association for Media Literacy Education.
Human rights in the United States
Human rights in the United States are legally protected by the Constitution of the United States (including its amendments) and by state constitutions, conferred by treaty, and enacted legislatively through Congress, state legislatures, and state referenda and citizens' initiatives. Federal courts in the United States have jurisdiction over international human rights laws as a federal question, arising under international law, which is part of the law of the United States.
The first human rights organization in the Thirteen Colonies of British America, dedicated to the abolition of slavery, was formed by Anthony Benezet in 1775. A year later, the Declaration of Independence asserted, against the monarch of England (who claimed sovereignty through a divine right of kings), civil liberties based on the self-evident truth "that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness." This view of human liberties postulates that fundamental rights are not granted by a divine or supernatural being to monarchs who then grant them to subjects, but are granted by a divine or supernatural being to each man (but not woman) and are inalienable and inherent.
After the Revolutionary War, the former thirteen colonies, free of the English monarch's claim of sovereignty, went through a pre-government phase of more than a decade, with much debate about the form of government they would have. The United States Constitution, drafted at a national convention in 1787 and ratified by conventions in the states, created a republic that guaranteed several rights and civil liberties. The Constitution significantly referred to "Persons," not "Men" as was used in the Declaration of Independence, omitted any reference to the supernatural imagination (such as a "Creator" or "God") and any authority derived or divined therefrom, and allowed "affirmation" in lieu of an "oath" if preferred. The Constitution thus eliminated any requirement of supernatural grant of human rights and provided that they belonged to all Persons (presumably meaning men and women, and perhaps children, although the developmental distinction between children and adults poses issues and has been the subject of subsequent amendments, as discussed below). Some of this conceptualization may have arisen from the significant Quaker segment of the population in the colonies, especially in the Delaware Valley, and their religious views that all human beings, regardless of sex, age, or race or other characteristics, had the same Inner Light. Quaker and Quaker-derived views would have informed the drafting and ratification of the Constitution, including through the direct influence of some of the Framers of the Constitution, such as John Dickinson and Thomas Mifflin, who were either Quakers themselves or came from regions founded by or heavily populated with Quakers.
Dickinson, Mifflin and other Framers who objected to slavery were outvoted on that question, however, and the original Constitution sanctioned slavery (although not based on race or other characteristic of the slave) and, through the Three-Fifths Compromise, counted slaves (who were not defined by race) as three-fifths of a Person for purposes of distribution of taxes and representation in the House of Representatives (although the slaves themselves were discriminated against in voting for such representatives). See Three-Fifths Compromise.
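As a rough illustration of the arithmetic involved (the figures below are hypothetical, not census data, and the clause's further exclusions such as "Indians not taxed" are set aside), the three-fifths rule meant that a state's population basis for apportioning House seats and direct taxes was computed as

\[ P_{\text{apportionment}} = F + \tfrac{3}{5}\,S \]

where \(F\) is the number of free persons and \(S\) the number of enslaved persons. A hypothetical state with 400,000 free persons and 100,000 enslaved persons would therefore be credited with \(400{,}000 + \tfrac{3}{5}\times 100{,}000 = 460{,}000\) persons for apportionment purposes.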
As the new Constitution took effect in practice, concern over individual liberties and the concentration of power at the federal level gave rise to the amendment of the Constitution through adoption of the Bill of Rights (the first ten amendments of the Constitution).
Courts and legislatures also began to vary in the interpretation of "Person," with some jurisdictions narrowing the meaning of "Person" to cover only people with property, only men, or only white men. For example, although women had been voting in some states, such as New Jersey, since the founding of the United States, and prior to that in the colonial era, other states denied them the vote. In 1756, Lydia Chapin Taft cast a vote in a local town hall meeting in place of her deceased husband. In 1777 women lost the right to exercise their vote in New York, in 1780 women lost it in Massachusetts, and in 1784 women lost it in New Hampshire. From 1776 until 1807, the state constitution in New Jersey permitted all persons worth over fifty pounds (about $7,800 adjusted for inflation, with the election laws referring to the voters as "he or she") to vote; provided they had this property, free black men and single women regardless of race therefore had the vote until 1807, but not married women, who could have no independent claim to ownership of fifty pounds (anything they owned or earned belonged to their husbands under the common law of coverture). In 1790, the law was revised to specifically include women, but in 1807 the law was again revised to exclude them, an unconstitutional act since the state constitution specifically made any such change dependent on the general suffrage. See Women's suffrage in the United States. Through the doctrine of coverture, many states also denied married women the right to own property in their own name, although most allowed single women (widowed, divorced or never married) the "Person" status of men, sometimes pursuant to the common law concept of a femme sole. Over the years, a variety of claimants sought to assert that discrimination against women in voting, in property ownership, in occupational licensing, and other matters was unconstitutional given the Constitution's use of the term "Person," but the all-male courts did not give these claims a fair hearing. See, e.g., Bradwell v. Illinois.
In the 1860s, after decades of conflict over the southern states' continued practice of slavery and the northern states' outlawing of it, the Civil War was fought, and in its aftermath the Constitution was amended to prohibit slavery and to prohibit states' denying rights granted in the Constitution. Among these amendments was the Fourteenth Amendment, which included an Equal Protection Clause that seemed to clarify that courts and states were prohibited from narrowing the meaning of "Persons." After the Fourteenth Amendment to the United States Constitution was adopted, Susan B. Anthony, buttressed by the equal protection language, voted. She was prosecuted for this, however, and an all-male court ruled that women were not "Persons"; the court levied a fine, but it was never collected.
Fifty years later, in 1920, the Constitution was amended again, with the Nineteenth Amendment to definitively prohibit discrimination against women's suffrage.
In the 1970s, the Burger Court made a series of rulings clarifying that discrimination against women in the status of being Persons violated the Constitution, and acknowledged that previous court rulings to the contrary had been sui generis and an abuse of power. The most often cited of these is Reed v. Reed, which held that arbitrary discrimination against either sex in the rights associated with Person status violates the Equal Protection Clause.
The 1970s also saw the adoption of the Twenty-sixth Amendment, which prohibited discrimination on the basis of age, for Persons 18 years old and over, in voting. Other attempts to address the developmental distinction between children and adults in Person status and rights have been addressed mostly by the Supreme Court, with the Court recognizing in 2012, in Miller v. Alabama, the principle that children are constitutionally different from adults.
In the 20th century, the United States took a leading role in the creation of the United Nations and in the drafting of the Universal Declaration of Human Rights, much of which was modeled in part on the U.S. Bill of Rights. Even so, the United States is in violation of the Declaration's provision that "everyone has the right to leave any country," inasmuch as the government may prevent the entry and exit of anyone from the United States for foreign policy, national security, or child support arrearage reasons by revoking their passport. The United States is also in violation of the United Nations' human rights Convention on the Rights of the Child, which requires both parents to have a relationship with the child. Conflict between the human rights of the child and those of a mother or father who wishes to leave the country without paying child support or doing the personal work of child care for the child can be considered a question of negative and positive rights.
Domestic legal protection structure
According to Human Rights: The Essential Reference, "the American Declaration of Independence was the first civic document that met a modern definition of human rights." The Constitution recognizes a number of inalienable human rights, including freedom of speech, freedom of assembly, freedom of religion, the right to keep and bear arms, freedom from cruel and unusual punishment, and the right to a fair trial by jury.
Constitutional amendments have been enacted as the needs of the society evolved. The Ninth Amendment and Fourteenth Amendment recognize that not all human rights have yet been enumerated. The Civil Rights Act and the Americans with Disabilities Act are examples of human rights that were enumerated by Congress well after the Constitution's writing. The scope of the legal protections of human rights afforded by the US government is defined by case law, particularly by the precedent of the Supreme Court of the United States.
Within the federal government, the debate about what may or may not be an emerging human right is held in two forums: the United States Congress, which may enumerate these; and the Supreme Court, which may articulate rights that the law does not spell out. Additionally, individual states, through court action or legislation, have often protected human rights not recognized at the federal level. For example, Massachusetts was the first of several states to recognize same-sex marriage.
Effect of international treaties
In the context of human rights and treaties that recognize or create individual rights, there are self-executing and non-self-executing treaties. Non-self-executing treaties, which ascribe rights that under the Constitution may be assigned by law, require legislative action to execute the treaty before it can become domestic law. There are also cases that explicitly require legislative approval according to the Constitution, such as those that could commit the U.S. to declare war or appropriate funds.
Treaties regarding human rights, which create a duty to refrain from acting in a particular manner or confer specific rights, are generally held to be self-executing, requiring no further legislative action. In cases where legislative bodies refuse to recognize otherwise self-executing treaties by declaring them to be non-self-executing in an act of legislative non-recognition, constitutional scholars argue that such acts violate the separation of powers—in cases of controversy, the judiciary, not Congress, has the authority under Article III to apply treaty law to cases before the court. This is a key provision in cases where Congress declares a human rights treaty to be non-self-executing, for example, by contending it does not add anything to human rights under U.S. domestic law. The International Covenant on Civil and Political Rights is one such case; although it was ratified after more than two decades of inaction, it was ratified with reservations, understandings, and declarations.
The Equal Protection Clause of the Fourteenth Amendment to the United States Constitution guarantees that no state shall "deny to any person within its jurisdiction the equal protection of the laws." In addition, the Fifteenth Amendment to the United States Constitution prohibits the denial of a citizen's right to vote based on that citizen's "race, color, or previous condition of servitude."
The United States was the first major industrialized country to enact comprehensive legislation prohibiting discrimination on the basis of race and national origin in the workplace in the Civil Rights Act of 1964 (CRA), while most of the world contains no such recourse for job discrimination. The CRA is perhaps the most prominent civil rights legislation enacted in modern times, has served as a model for subsequent anti-discrimination laws and has greatly expanded civil rights protections in a wide variety of settings. The United States' 1991 provision of recourse for victims of such discrimination for punitive damages and full back pay has virtually no parallel in the legal systems of any other nation.
In addition to individual civil recourse, the United States possesses anti-discrimination government enforcement bodies, such as the Equal Employment Opportunity Commission, while only the United Kingdom and Ireland possess faintly analogous bureaucracies. Beginning in 1965, the United States also began a program of affirmative action that not only obliges employers not to discriminate, but requires them to provide preferences for groups protected under the Civil Rights Act to increase their numbers where they are judged to be underrepresented.
Such affirmative action programs are also applied in college admissions. The United States also prohibits the imposition of any "...voting qualification or prerequisite to voting, or standard, practice, or procedure ... to deny or abridge the right of any citizen of the United States to vote on account of race or color," which prevents the use of grandfather clauses, literacy tests, poll taxes and white primaries.
Until the passage of the Thirteenth Amendment to the United States Constitution in 1865, slavery was legal in some states of the United States. Influenced by the principles of the Religious Society of Friends, Anthony Benezet formed the Pennsylvania Abolition Society in 1775, believing that all ethnic groups were equal and that human slavery was incompatible with Christian beliefs. Benezet extended the recognition of human rights to Native Americans and argued for a peaceful solution to the violence between Native and European Americans. Benjamin Franklin became the president of Benezet's abolition society in the late 18th century. In addition, the Fourteenth Amendment was interpreted to permit what was termed "separate but equal" treatment of minorities until the United States Supreme Court overturned this interpretation in 1954, which consequently overturned Jim Crow laws. Native Americans did not have citizenship rights until the Dawes Act of 1887 and the Indian Citizenship Act of 1924.
Following the 2008 presidential election, Barack Obama was sworn in as the first African-American president of the United States on January 20, 2009. In his Inaugural Address, President Obama stated "A man whose father less than 60 years ago might not have been served at a local restaurant can now stand before you to take a most sacred oath....So let us mark this day with remembrance, of who we are and how far we have traveled".
The Nineteenth Amendment to the United States Constitution prohibits the states and the federal government from denying any citizen the right to vote because of that citizen's sex. While this does not necessarily guarantee all women the right to vote, as suffrage qualifications are determined by individual states, it does mean that states' suffrage qualifications may not prevent women from voting due to their gender.
The United States was the first major industrialized country to enact comprehensive CRA legislation prohibiting discrimination on the basis of gender in the workplace while most of the world contains no such recourse for job discrimination. The United States' 1991 provision of recourse for discrimination victims for punitive damages and full back pay has virtually no parallel in the legal systems of any other nation. In addition to individual civil recourse, the United States possesses anti-discrimination government enforcement bodies, such as the Equal Employment Opportunity Commission, while only the United Kingdom and Ireland possess faintly analogous bureaucracies. Beginning in 1965, the United States also began a program of affirmative action that not only obliges employers not to discriminate, but also requires them to provide preferences for groups protected under the CRA to increase their numbers where they are judged to be underrepresented. Such affirmative action programs are also applied in college admissions.
The United States was also the first country to legally define sexual harassment in the workplace. Because sexual harassment is therefore a civil rights violation, the individual legal rights of those harassed in the workplace are comparatively stronger in the United States than in most European countries. The Selective Service System does not require women to register for a possible military draft, and the United States military does not permit women to serve in some front-line combat units.
The United States was the first country in the world to adopt sweeping antidiscrimination legislation for people with disabilities, the Americans with Disabilities Act of 1990 (ADA). The ADA reflected a dramatic shift toward the employment of persons with disabilities to enhance the labor force participation of qualified persons with disabilities and to reduce their dependence on government entitlement programs. The ADA amends the CRA and permits plaintiffs to recover punitive damages. The ADA has been instrumental in the evolution of disability discrimination law in many countries, and has had such an enormous impact on foreign law development that its international impact may be even larger than its domestic impact. Although the Supreme Court has limited the application of ADA Title I against state employers, it has extended the act's protection to people with acquired immune deficiency syndrome (AIDS).
Federal benefits such as Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) are often administratively treated in the United States as being primarily or near-exclusively the entitlement of impoverished U.S. people with disabilities, and not applicable to those with disabilities who earn significantly above-poverty-level income. In practice, a disabled person on SSI without significant employment income who takes a job paying at or above the living-wage threshold often discovers that government benefits they were previously entitled to have ceased, because the new job is deemed to "invalidate" the need for this assistance. The U.S. is the only industrialized country in the world to have this particular approach to physical disability assistance programming.
The Constitution of the United States explicitly recognizes certain individual rights, but it does not explicitly state any sexual orientation rights. The 14th Amendment has several times been interpreted using the living constitution doctrine; for example, civil rights for people of color, disability rights, and women's rights were long unrecognized. There may exist additional gender-related civil rights that are presently not recognized by US law. Some states have recognized sexual orientation rights, which are discussed below.
The United States Federal Government does not have any substantial body of law relating to marriage; these laws have developed separately within each state. The Full Faith and Credit Clause of the US Constitution ordinarily guarantees the recognition of a marriage performed in one state by another. However, Congress passed the Defense of Marriage Act of 1996 (DOMA), which affirmed that no state (or other political subdivision within the United States) need recognize a marriage between persons of the same sex, even if the marriage was concluded or recognized in another state, and that the Federal Government may not recognize same-sex marriages for any purpose, even if concluded or recognized by one of the states. The US Constitution does not grant the federal government any authority to limit state recognition of sexual orientation rights or protections. DOMA only limits the interstate recognition of individual state laws and does not limit state law in any way. DOMA has been ruled unconstitutional by various US federal courts for violating the 14th Amendment to the United States Constitution (specifically its due process and equal protection clauses) and will potentially be reviewed by the Supreme Court of the United States.
Wisconsin was the first state to pass a law explicitly prohibiting discrimination on the basis of sexual orientation. In 1996, a Hawaii court ruled that denying marriage licenses to same-sex couples violated the Hawaii constitution. Massachusetts, Connecticut, Iowa, Vermont, New Hampshire, New York, Washington D.C., Washington State, and Illinois are the only jurisdictions that allow same-sex marriage. Same-sex marriage rights were established by the California Supreme Court in 2008, and over 18,000 same-sex couples were married. In November 2008 voters passed Proposition 8, amending the state constitution to deny same-sex couples marriage rights, which was upheld in a May 2009 decision that also allowed existing same-sex marriages to stand.
Privacy is not explicitly stated in the United States Constitution. In Griswold v. Connecticut, the Supreme Court ruled that it is implied in the Constitution. In Roe v. Wade, the Supreme Court used privacy rights to overturn most laws against abortion in the United States. In Cruzan v. Director, Missouri Department of Health, the Supreme Court held that the patient had a right of privacy to terminate medical treatment. In Gonzales v. Oregon, the Supreme Court held that the Federal Controlled Substances Act cannot prohibit physician-assisted suicide allowed by the Oregon Death with Dignity Act. The Supreme Court upheld the constitutionality of criminalizing oral and anal sex in Bowers v. Hardwick, 478 U.S. 186 (1986); however, it overturned that decision in Lawrence v. Texas, 539 U.S. 558 (2003), and established a protection of sexual privacy.
The United States maintains a presumption of innocence in legal procedures. The Fourth, Fifth, Sixth, and Eighth Amendments to the United States Constitution deal with the rights of criminal suspects; the protection was later extended to civil cases as well. In Gideon v. Wainwright, the Supreme Court required that indigent criminal defendants who are unable to afford their own attorney be provided counsel at trial. Since Miranda v. Arizona, the United States requires police departments to inform arrested persons of their rights, in what is now called the Miranda warning, which typically begins with "You have the right to remain silent."
Freedom of religion
The establishment clause of the First Amendment prohibits the establishment of a national religion by Congress or the preference of one religion over another. The clause has been used to limit school prayer, beginning with Engel v. Vitale, which ruled government-led prayer unconstitutional. Wallace v. Jaffree banned moments of silence allocated for praying. The Supreme Court also ruled clergy-led prayer at public high school graduations unconstitutional in Lee v. Weisman.
The free exercise clause guarantees the free exercise of religion. The Supreme Court's Lemon v. Kurtzman decision established the "Lemon test," which details the requirements for legislation concerning religion. In the Employment Division v. Smith decision, the Supreme Court maintained that a "neutral law of general applicability" can be used to limit religious exercise. In the City of Boerne v. Flores decision, the Religious Freedom Restoration Act was struck down as exceeding congressional power; however, the decision's effect is limited by the Gonzales v. O Centro Espirita Beneficente Uniao do Vegetal decision, which requires the government to show a compelling interest before prohibiting illegal drug use in religious practices.
Freedom of expression
The United States, like other liberal democracies, is supposed to be a constitutional republic based on founding documents that restrict the power of government to preserve the liberty of the people. The freedom of expression (including speech, media, and public assembly) is an important right and is given special protection, as declared by the First Amendment of the constitution. According to Supreme Court precedent, the federal and lower governments may not apply prior restraint to expression, with certain exceptions, such as national security and obscenity. There is no law punishing insults against the government, ethnic groups, or religious groups. Symbols of the government or its officials may be destroyed in protest, including the American flag. Legal limits on expression include:
- Solicitation, fraud, specific threats of violence, or disclosure of classified information
- Advocating the overthrow of the U.S. government through speech or publication, or organizing political parties that advocate the overthrow of the U.S. government (the Smith Act)
- Civil offenses involving defamation, fraud, or workplace harassment
- Copyright violations
- Federal Communications Commission rules governing the use of broadcast media
- Crimes involving sexual obscenity in pornography and text-only erotic stories.
- Ordinances requiring organizers of mass demonstrations on public property to register in advance.
- The use of free speech zones and protest-free zones.
- Military censorship of blogs written by military personnel, on the grounds that some include sensitive information ineligible for release. Some critics view military officials as trying to suppress dissent from troops in the field. The US Constitution specifically limits the human rights of active duty members, and this constitutional authority is used to limit speech rights of members in this and in other ways.
In two high-profile cases, grand juries decided that Time magazine reporter Matthew Cooper and New York Times reporter Judith Miller must reveal their sources in cases involving CIA leaks. Time magazine exhausted its legal appeals, and Mr. Cooper eventually agreed to testify. Miller was jailed for 85 days before cooperating. U.S. District Chief Judge Thomas F. Hogan ruled that the First Amendment does not insulate Time magazine reporters from a requirement to testify before a criminal grand jury that is conducting an investigation into the possible illegal disclosure of classified information.
Approximately 30,000 government employees and contractors are currently employed to monitor telephone calls and other communications.
Right to peaceably assemble
Although Americans are supposed to enjoy the freedom to peacefully protest, protesters are sometimes mistreated, beaten, arrested, jailed or fired upon.
On February 19, 2011, Ray McGovern was dragged out of a speech by Hillary Clinton on Internet freedom, in which she said that people should be free to protest without fear of violence. McGovern, who was wearing a Veterans for Peace t-shirt, stood up during the speech and silently turned his back on Clinton. He was then assaulted by undercover and uniformed police, roughed up, handcuffed and jailed. He suffered bruises and lacerations in the attack and required medical treatment.
On May 4, 1970, Ohio National Guardsmen opened fire on protesting students at Kent State University, killing four students. Investigators determined that 28 Guardsmen fired 61 to 67 shots. The Justice Department concluded that the Guardsmen were not in danger and that their claim that they fired in self-defense was untrue. The nearest student was almost 100 yards away at the time of the shooting.
On March 7, 1965, approximately 600 civil rights marchers were violently dispersed by state and local police near the Edmund Pettus Bridge outside of Selma, Alabama.
In June 2009, the ACLU asked the Department of Defense to stop categorizing political protests as "low-level terrorism" in their training courses.
During the fall of 2011, large numbers of protesters taking part in the "Occupy movement" in cities around the country were arrested on various charges during protests for economic and political reforms.
Freedom of movement
As per § 707(b) of the Foreign Relations Authorization Act, Fiscal Year 1979, United States passports are required to enter and exit the country, and as per the Passport Act of 1926 and Haig v. Agee, the Presidential administration may deny or revoke passports for foreign policy or national security reasons at any time. Perhaps the most notable example of the enforcement of this authority was the 1948 denial of a passport to U.S. Representative Leo Isacson, who sought to go to Paris to attend a conference as an observer for the American Council for a Democratic Greece, a Communist front organization, because of the group's role in opposing the Greek government in the Greek Civil War.
The United States prevents U.S. citizens from traveling to Cuba, citing national security reasons, as part of an embargo against Cuba that has been condemned as an illegal act by the United Nations General Assembly. The current exception to the ban on travel to the island, permitted since April 2009, has been an easing of travel restrictions for Cuban-Americans visiting their relatives. Restrictions remain in place for the rest of the American populace.
On June 30, 2010, the American Civil Liberties Union filed a lawsuit on behalf of ten people who are either U.S. citizens or legal residents of the U.S., challenging the constitutionality of the government's "no-fly" list. The plaintiffs have not been told why they are on the list. Five of the plaintiffs have been stranded abroad. It is estimated that the "no-fly" list contained about 8,000 names at the time of the lawsuit.
The Secretary of State can deny a passport to anyone imprisoned, on parole, or on supervised release for a conviction for international drug trafficking or sex tourism, or to anyone who is behind on their child support payments.
The following case precedents are typically cited in defense of unencumbered travel within the United States:
"The use of the highway for the purpose of travel and transportation is not a mere privilege, but a common fundamental right of which the public and individuals cannot rightfully be deprived." Chicago Motor Coach v. Chicago, 337 Ill. 200; 169 N.E. 22 (1929).
"The right of the citizen to travel upon the public highways and to transport his property thereon, either by carriage or by automobile, is not a mere privilege which a city may prohibit or permit at will, but a common law right which he has under the right to life, liberty, and the pursuit of happiness." Thompson v. Smith, Supreme Court of Virginia, 155 Va. 367; 154 S.E. 579; (1930).
"Undoubtedly the right of locomotion, the right to move from one place to another according to inclination, is an attribute of personal liberty, and the right, ordinarily, of free transit from or through the territory of any State is a right secured by the 14th amendment and by other provisions of the Constitution." Schactman v. Dulles, 225 F.2d 938; 96 U.S. App. D.C. 287 (1955).
"The right to travel is a well-established common right that does not owe its existence to the federal government. It is recognized by the courts as a natural right." Schactman v. Dulles 225 F.2d 938; 96 U.S. App. D.C. 287 (1955) at 941.
"The right to travel is a part of the liberty of which the citizen cannot be deprived without due process of law under the Fifth Amendment." Kent v. Dulles, 357 US 116, 125 (1958).
Freedom of association
Freedom of association is the right of individuals to come together in groups for political action or to pursue common interests.
In 2008, the Maryland State Police admitted that they had added the names of Iraq War protesters and death penalty opponents to a terrorist database. They also admitted that other "protest groups" were added to the terrorist database, but did not specify which groups. It was also discovered that undercover troopers used aliases to infiltrate organizational meetings, rallies and group e-mail lists. Police admitted there was "no evidence whatsoever of any involvement in violent crime" by those classified as terrorists.
National security exceptions
The United States government has declared martial law and suspended (or claimed exceptions to) some rights on national security grounds, typically in wartime and conflicts such as the United States Civil War, Cold War, or the War against Terror. 70,000 Americans of Japanese ancestry were legally interned during World War II under Executive Order 9066. In some instances the federal courts have allowed these exceptions, while in others the courts have decided that the national security interest was insufficient. Presidents Lincoln, Wilson, and F.D. Roosevelt ignored such judicial decisions.
Sedition laws have sometimes placed restrictions on freedom of expression. The Alien and Sedition Acts, passed by President John Adams during an undeclared naval conflict with France, allowed the government to punish "false" statements about the government and to deport "dangerous" immigrants. The Federalist Party used these acts to harass supporters of the Democratic-Republican Party. While Woodrow Wilson was president, another broad sedition law, the Sedition Act of 1918, was passed during World War I. It led to the arrest and ten-year sentencing of Socialist Party of America presidential candidate Eugene V. Debs for speaking out against the atrocities of World War I, although he was later released early by President Warren G. Harding. Countless others, labeled as "subversives" (especially the Wobblies), were investigated by the Woodrow Wilson Administration.
Presidents have claimed the power to imprison summarily, under military jurisdiction, those suspected of being combatants for states or groups at war against the United States. Abraham Lincoln invoked this power in the American Civil War to imprison Maryland secessionists. In that case, the Supreme Court concluded that only Congress could suspend the writ of habeas corpus, and the government released the detainees. During World War II, the United States interned thousands of Japanese-Americans on alleged fears that Japan might use them as saboteurs.
The Fourth Amendment of the United States Constitution forbids unreasonable search and seizure without a warrant, but some administrations have claimed exceptions to this rule to investigate alleged conspiracies against the government. During the Cold War, the Federal Bureau of Investigation established COINTELPRO to infiltrate and disrupt left-wing organizations, including those that supported the rights of black Americans.
National security, as well as other concerns like unemployment, has sometimes led the United States to toughen its generally liberal immigration policy. The Chinese Exclusion Act of 1882 all but banned Chinese immigrants, who were accused of crowding out American workers.
Nationwide Suspicious Activity Reporting Initiative
The federal government has set up a data collection and storage network that keeps a wide variety of data on tens of thousands of Americans who have not been accused of committing a crime. Operated primarily under the direction of the Federal Bureau of Investigation, the program is known as the Nationwide Suspicious Activity Reporting Initiative, or SAR. Reports of suspicious behavior noticed by local law enforcement or by private citizens are forwarded to the program, and profiles are constructed of the persons under suspicion. See also Fusion center.
Labor rights in the United States have been linked to basic constitutional rights. Consistent with the goal of creating an economy based on highly skilled, high-wage labor in a capital-intensive, dynamic growth economy, the United States enacted laws mandating the right to a safe workplace, workers' compensation, unemployment insurance, fair labor standards, collective bargaining rights, and Social Security, along with laws prohibiting child labor and guaranteeing a minimum wage. While U.S. workers tend to work longer hours than workers in other industrialized nations, lower taxes and more benefits give them a larger disposable income than those of most industrialized nations, though the advantage of lower taxes has been challenged. See: Disposable and discretionary income. U.S. workers are among the most productive in the world. During the 19th and 20th centuries, safer conditions and workers' rights were gradually mandated by law.
In 1935, the National Labor Relations Act recognized and protected "the rights of most workers in the private sector to organize labor unions, to engage in collective bargaining, and to take part in strikes and other forms of concerted activity in support of their demands." However, many states hold to the principle of at-will employment, which says an employee can be fired for any or no reason, without warning and without recourse, unless violation of State or Federal civil rights laws can be proven. In 2011, 11.8% of U.S. workers were members of labor unions with 37% of public sector (government) workers in unions while only 6.9% of private sector workers were union members.
The Universal Declaration of Human Rights, adopted by the United Nations in 1948, states that "everyone has the right to a standard of living adequate for the health and well-being of oneself and one's family, including food, clothing, housing, and medical care." In addition, the Principles of Medical Ethics of the American Medical Association require medical doctors to respect the human rights of the patient, including the provision of medical treatment when it is needed. Americans' rights in health care are regulated by the US Patients' Bill of Rights.
Unlike most other industrialized nations, the United States does not offer most of its citizens subsidized health care. The United States Medicaid program provides subsidized coverage to some categories of individuals and families with low incomes and resources, including children, pregnant women, and very low-income people with disabilities (higher-earning people with disabilities do not qualify for Medicaid, although they do qualify for Medicare). However, according to Medicaid's own documents, "the Medicaid program does not provide health care services, even for very poor persons, unless they are in one of the designated eligibility groups."
Nonetheless, some states offer subsidized health insurance to broader populations. Coverage is subsidized for persons age 65 and over, or who meet other special criteria through Medicare. Every person with a permanent disability, both young and old, is inherently entitled to Medicare health benefits — a fact not all disabled US citizens are aware of. However, just like every other Medicare recipient, a disabled person finds that his or her Medicare benefits only cover up to 80% of what the insurer considers reasonable charges in the U.S. medical system, and that the other 20% plus the difference in the reasonable amount and the actual charge must be paid by other means (typically supplemental, privately held insurance plans, or cash out of the person's own pocket). Therefore, even the Medicare program is not truly national health insurance or universal health care the way most of the rest of the industrialized world understands it.
The Emergency Medical Treatment and Active Labor Act of 1986 mandates that no person may ever be denied emergency services regardless of ability to pay, citizenship, or immigration status. The act, an unfunded mandate, has been criticized as such by the American College of Emergency Physicians.
46.6 million residents, or 15.9 percent, were without health insurance coverage in 2005. This number includes about ten million non-citizens, millions more who are eligible for Medicaid but never applied, and 18 million with annual household incomes above $50,000. According to a study led by the Johns Hopkins Children's Center, uninsured children who are hospitalized are 60% more likely to die than children who are covered by health insurance.
The Fourth, Fifth, Sixth and Eighth Amendments of the Bill of Rights, along with the Fourteenth Amendment, ensure that criminal defendants have significant procedural rights that are unsurpassed by any other justice system. The Fourteenth Amendment's incorporation of due process rights extends these constitutional protections to the state and local levels of law enforcement. Similarly, the United States possesses a system of judicial review over government action more powerful than any other in the world.
The USA was the only country in the G8 to have carried out executions in 2011. Three countries in the G20 carried out executions in 2011: China, Saudi Arabia and the USA. The USA and Belarus were the only two of the 56 Member States of the Organization for Security and Cooperation in Europe to have carried out executions in 2011.
Capital punishment is controversial. Death penalty opponents regard the death penalty as inhumane, criticize it for its irreversibility, and assert that it lacks a deterrent effect; studies of deterrence have been contested, with some claiming to show a deterrent effect and others debunking those claims. According to Amnesty International, "the death penalty is the ultimate, irreversible denial of human rights."
The 1972 US Supreme Court case Furman v. Georgia, 408 U.S. 238 (1972), held that arbitrary imposition of the death penalty at the states' discretion constituted cruel and unusual punishment in violation of the Eighth Amendment to the United States Constitution. In California v. Anderson, 64 Cal.2d 633, 414 P.2d 366 (Cal. 1972), the Supreme Court of California classified capital punishment as cruel and unusual and outlawed its use in California, until it was reinstated in 1976 after the U.S. Supreme Court rulings Gregg v. Georgia, 428 U.S. 153 (1976), Jurek v. Texas, 428 U.S. 262 (1976), and Proffitt v. Florida, 428 U.S. 242 (1976). As of January 25, 2008, the death penalty has been abolished in the District of Columbia and fourteen states, mainly in the Northeast and Midwest.
The UN special rapporteur recommended to a committee of the UN General Assembly in 1998 that the United States be found to be in violation of Article 6 of the International Covenant on Civil and Political Rights with regard to the death penalty, and called for an immediate capital punishment moratorium. The recommendation of the special rapporteur is not legally binding under international law, and in this case the UN did not act upon it.
Since the reinstatement of the death penalty in 1976, there have been 1,077 executions in the United States (as of May 23, 2007). There were 53 executions in 2006. Texas leads the United States in executions by a wide margin, with 379 executions from 1976 to 2006; the second-ranking state is Virginia, with 98 executions.
A March 1, 2005, ruling by the Supreme Court in Roper v. Simmons prohibits the execution of people who committed their crimes when they were under the age of 18. Between 1990 and 2005, Amnesty International recorded 19 executions in the United States for crimes committed by juveniles.
It is the official policy of the European Union and a number of non-EU nations to achieve global abolition of the death penalty. For this reason the EU is vocal in its criticism of the death penalty in the US and has submitted amicus curiae briefs in a number of important US court cases related to capital punishment. The American Bar Association also sponsors a project aimed at abolishing the death penalty in the United States, stating as among the reasons for their opposition that the US continues to execute minors and the mentally retarded, and fails to protect adequately the rights of the innocent.
Some opponents cite the over-representation of blacks on death row as evidence of unequal racial application of the death penalty. This over-representation is not limited to capital offenses: in 1992, although blacks accounted for 12% of the US population, about 34 percent of prison inmates were black. In McCleskey v. Kemp, it was alleged that the capital sentencing process was administered in a racially discriminatory manner in violation of the Equal Protection Clause of the Fourteenth Amendment.
In 2003, Amnesty International reported that those who kill whites are more likely to be executed than those who kill blacks, citing that of the 845 people executed since 1977, 80 percent were put to death for killing whites and 13 percent were executed for killing blacks, even though blacks and whites are murdered in almost equal numbers.
The United States is seen by social critics, including international and domestic human rights groups and civil rights organizations, as a state that violates fundamental human rights because of its reliance, disproportionately heavy in comparison with other countries, on crime control, control of individual behavior at the expense of civil liberties, and societal control of disadvantaged groups through a harsh police and criminal justice system. The U.S. penal system is implemented at the federal, and in particular at the state and local, levels. This social policy has resulted in a high rate of incarceration, which affects Americans from the lowest socioeconomic backgrounds and racial minorities the hardest.
Some have criticized the United States for having an extremely large prison population, within which there have been reported abuses. As of 2004 the United States had the highest percentage of people in prison of any nation: more than 2.2 million people were in prisons or jails, or 737 per 100,000 population, roughly 1 out of every 136 Americans. According to The National Council on Crime and Delinquency, since 1990 the incarceration of youth in adult jails has increased 208%. In some states, the definition of a juvenile extends to youths as young as 13 years old. The researchers for this report found that juveniles often were incarcerated while awaiting trial for up to two years and subjected to the same treatment as mainstream inmates. An incarcerated adolescent is often subjected to a highly traumatic environment during this developmental stage, and the long-term effects are often irreversible and detrimental. "Human Rights Watch believes the extraordinary rate of incarceration in the United States wreaks havoc on individuals, families and communities, and saps the strength of the nation as a whole."
Examples of claimed mistreatment include prisoners left naked and exposed in harsh weather or cold air; "routine" use of rubber bullets and pepper spray; solitary confinement of violent prisoners in soundproofed cells for 23 or 24 hours a day; and injuries ranging from serious wounds to fatal gunshots, with force at one California prison "often vastly disproportionate to the actual need or risk that prison staff faced." Such behaviors are illegal, and "Professional standards clearly limit staff use of force to that which is necessary to control prisoner disorder."
Human Rights Watch has raised concerns about prisoner rape and medical care for inmates. In a survey of 1,788 male inmates in Midwestern prisons by Prison Journal, about 21% claimed they had been coerced or pressured into sexual activity during their incarceration and 7% claimed that they had been raped in their current facility. Tolerance of serious sexual abuse and rape in United States prisons is consistently reported as widespread, and has been fought by organizations such as Stop Prisoner Rape.
The United States has also been criticized for incarcerating a large number of non-violent and victimless offenders: half of all persons incarcerated under state jurisdiction are held for non-violent offenses, and 20 percent are incarcerated for drug offenses, mostly for possession of cannabis.
The United States is the only country in the world that allows the sentencing of young adolescents to life imprisonment without the possibility of parole. There are currently 73 Americans serving such sentences for crimes they committed at the age of 13 or 14. In December 2006 the United Nations took up a resolution calling for the abolition of this kind of punishment for children and young teenagers; 185 countries voted for the resolution and only the United States voted against it.
In a 1999 report, Amnesty International said it had "documented patterns of ill-treatment across the U.S., including police beatings, unjustified shootings and the use of dangerous restraint techniques." According to a 1998 Human Rights Watch report, incidents of police use of excessive force had occurred in cities throughout the U.S., and this behavior goes largely unchecked. An article in USA Today reports that in 2006, 96% of cases referred to the U.S. Justice Department for prosecution by investigative agencies were declined. In 2005, 98% were declined. In 2001, the New York Times reported that the U.S. government is unable or unwilling to collect statistics showing the precise number of people killed by the police or the prevalence of the use of excessive force. Since 1999, at least 148 people have died in the United States and Canada after being shocked with Tasers by police officers, according to a 2005 ACLU report. In one case, a handcuffed suspect was tasered nine times by a police officer before dying, and six of those taserings occurred within less than three minutes. The officer was fired and faced the possibility of criminal charges.
War on Terrorism
Inhumane treatment and torture of captured non-citizens
International and U.S. law prohibit torture and other ill-treatment of any person in custody in all circumstances. However, the United States Government has categorized a large number of people as unlawful combatants, a United States classification that denies them the privileges of prisoner of war (POW) status under the Geneva Conventions and that critics charge is used mainly as a way to bypass international law.
Certain practices of the United States military and Central Intelligence Agency have been condemned by some sources domestically and internationally as torture. A fierce debate regarding non-standard interrogation techniques exists within the US civilian and military intelligence community, with no general consensus as to what practices under what conditions are acceptable.
Abuse of prisoners is considered a crime under the United States Uniform Code of Military Justice. According to a January 2006 Human Rights First report, there were 45 suspected or confirmed homicides of detainees in US custody in Iraq and Afghanistan; "Certainly 8, as many as 12, people were tortured to death."
Abu Ghraib prison abuse
In 2004, photos showing humiliation and abuse of prisoners were leaked from Abu Ghraib prison, causing a political and media scandal in the US. Forced humiliation of the detainees included, but was not limited to, nudity, rape, human piling of nude detainees, masturbation, eating food out of toilets, crawling on hands and knees while American soldiers sat on their backs (sometimes requiring them to bark like dogs), and the attachment of electrical wires to fingers, toes, and penises. Bertrand Ramcharan, acting UN High Commissioner for Human Rights, stated that while the removal of Saddam Hussein represented "a major contribution to human rights in Iraq" and the United States had condemned the conduct at Abu Ghraib and pledged to bring violators to justice, "willful killing, torture and inhuman treatment" represented a grave breach of international law and "might be designated as war crimes by a competent tribunal."
In addition to the acts of humiliation, there were more violent claims, such as American soldiers sodomizing detainees (including an event involving an underage boy), an incident where a phosphoric light was broken and the chemicals poured on a detainee, repeated beatings, and threats of death. Six military personnel were charged with prisoner abuse in the Abu Ghraib torture and prisoner abuse scandal. The harshest sentence was handed out to Charles Graner, who received a 10-year sentence to be served in a military prison and a demotion to private; the other offenders received lesser sentences.
In its report The Road to Abu Ghraib, Human Rights Watch describes how:
The severest abuses at Abu Ghraib occurred in the immediate aftermath of a decision by Secretary Rumsfeld to step up the hunt for "actionable intelligence" among Iraqi prisoners. The officer who oversaw intelligence gathering at Guantanamo was brought in to overhaul interrogation practices in Iraq, and teams of interrogators from Guantanamo were sent to Abu Ghraib. The commanding general in Iraq issued orders to "manipulate an internee's emotions and weaknesses." Military police were ordered by military intelligence to "set physical and mental conditions for favorable interrogation of witnesses." The captain who oversaw interrogations at the Afghan detention center where two prisoners died in detention posted "Interrogation Rules of Engagement" at Abu Ghraib, authorizing coercive methods (with prior written approval of the military commander) such as the use of military guard dogs to instill fear that violate the Geneva Conventions and the Convention against Torture and Other Cruel, Inhuman Degrading Treatment or Punishment.
Enhanced interrogation and waterboarding
On February 6, 2008, the CIA director General Michael Hayden stated that the CIA had used waterboarding on three prisoners during 2002 and 2003, namely Khalid Shaikh Mohammed, Abu Zubayda and Abd al-Rahim al-Nashiri.
The June 21, 2004, issue of Newsweek stated that the Bybee memo, a 2002 legal memorandum drafted by former OLC lawyer John Yoo that described what sort of interrogation tactics against suspected terrorists or terrorist affiliates the Bush administration would consider legal, was "...prompted by CIA questions about what to do with a top Qaeda captive, Abu Zubaydah, who had turned uncooperative ... and was drafted after White House meetings convened by George W. Bush's chief counsel, Alberto Gonzales, along with Defense Department general counsel William Haynes and David Addington, Vice President Dick Cheney's counsel, who discussed specific interrogation techniques," citing "a source familiar with the discussions." Amongst the methods they found acceptable was waterboarding.
In November 2005, ABC News reported that former CIA agents claimed that the CIA engaged in a modern form of waterboarding, along with five other "enhanced interrogation techniques", against suspected members of al Qaeda.
UN High Commissioner for Human Rights Louise Arbour stated on the subject of waterboarding: "I would have no problems with describing this practice as falling under the prohibition of torture," and said that violators of the UN Convention Against Torture should be prosecuted under the principle of universal jurisdiction.
Bent Sørensen, Senior Medical Consultant to the International Rehabilitation Council for Torture Victims and former member of the United Nations Committee Against Torture has said:
It’s a clear-cut case: Waterboarding can without any reservation be labeled as torture. It fulfils all of the four central criteria that according to the United Nations Convention Against Torture (UNCAT) defines an act of torture. First, when water is forced into your lungs in this fashion, in addition to the pain you are likely to experience an immediate and extreme fear of death. You may even suffer a heart attack from the stress or damage to the lungs and brain from inhalation of water and oxygen deprivation. In other words there is no doubt that waterboarding causes severe physical and/or mental suffering – one central element in the UNCAT’s definition of torture. In addition the CIA’s waterboarding clearly fulfills the three additional definition criteria stated in the Convention for a deed to be labeled torture, since it is 1) done intentionally, 2) for a specific purpose and 3) by a representative of a state – in this case the US.
Lt. Gen. Michael D. Maples, the director of the Defense Intelligence Agency, concurred by stating, in a hearing before the Senate Armed Services Committee, that he believes waterboarding violates Common Article 3 of the Geneva Conventions.
The CIA director testified that waterboarding has not been used since 2003.
In April 2009, the Obama administration released four memos in which government lawyers from the Bush administration approved tough interrogation methods used against 28 terror suspects. The rough tactics range from waterboarding (simulated drowning) to keeping suspects naked and denying them solid food.
These memos were accompanied by the Justice Department's release of four Bush-era legal opinions covering, in graphic and extensive detail, the interrogation of 14 high-value terror detainees using harsh techniques beyond waterboarding. These additional techniques included keeping detainees in a painful standing position for long periods (used often, once for 180 hours), using a plastic neck collar to slam detainees into walls, keeping a detainee's cell cold for long periods, beating and kicking the detainee, placing insects in a confinement box (the suspect had a fear of insects), sleep deprivation, prolonged shackling, and threats to a detainee's family. One of the memos also authorized a method for combining multiple techniques.
Details from the memos also included the number of times that techniques such as waterboarding were used. A footnote said that one detainee was waterboarded 83 times in one month, while another was waterboarded 183 times in a month. This may have gone beyond even what was allowed by the CIA's own directives, which limit waterboarding to 12 times a day. The Fox News website carried reports from an unnamed US official who claimed that these were the number of pourings, not the number of sessions.
Physicians for Human Rights has accused the Bush administration of conducting illegal human experiments and unethical medical research during interrogations of suspected terrorists. The group has suggested this activity was a violation of the standards set by the Nuremberg Trials.
The United States maintains a detention center at its military base at Guantánamo Bay, Cuba, where numerous enemy combatants of the war on terror are held. The detention center has been the source of various controversies regarding its legality and the treatment of detainees. Amnesty International has called the situation "a human rights scandal" in a series of reports. 775 detainees have been brought to Guantánamo; many have been released without charge, and as of March 2013, 166 detainees remained. The United States assumed territorial control over Guantánamo Bay under the 1903 Cuban-American Treaty, which granted the United States a perpetual lease of the area. The United States, by virtue of its complete jurisdiction and control, maintains de facto sovereignty over this territory, while Cuba retains ultimate sovereignty. The current government of Cuba regards the U.S. presence in Guantánamo as illegal and insists the Cuban-American Treaty was obtained by threat of force in violation of international law.
A delegation of UN Special Rapporteurs to Guantanamo Bay claimed that interrogation techniques used in the detention center amount to degrading treatment in violation of the ICCPR and the Convention Against Torture.
In 2005 Amnesty International expressed alarm at the erosion in civil liberties since the 9/11 attacks. According to Amnesty International:
- The Guantánamo Bay detention camp has become a symbol of the United States administration’s refusal to put human rights and the rule of law at the heart of its response to the atrocities of 11 September 2001. It has become synonymous with the United States executive’s pursuit of unfettered power, and has become firmly associated with the systematic denial of human dignity and resort to cruel, inhuman or degrading treatment that has marked the USA’s detentions and interrogations in the "war on terror".
Amnesty International also condemned the Guantánamo facility as "...the gulag of our times," a characterization that sparked heated debate in the United States. The purported legal status of "unlawful combatants" in those nations currently holding detainees under that name has been the subject of criticism by other nations and international human rights institutions including Human Rights Watch and the International Committee of the Red Cross. The ICRC, in response to the US-led military campaign in Afghanistan, published a paper on the subject. HRW cites two sergeants and a captain accusing U.S. troops of torturing prisoners in Iraq and Afghanistan.
The US government argues that even if detainees were entitled to POW status, they would not have the right to lawyers, access to the courts to challenge their detention, or the opportunity to be released prior to the end of hostilities—and that nothing in the Third Geneva Convention provides POWs such rights, and POWs in past wars have generally not been given these rights. The U.S. Supreme Court ruled in Hamdan v. Rumsfeld on June 29, 2006, that they were entitled to the minimal protections listed under Common Article 3 of the Geneva Conventions. Following this, on July 7, 2006, the Department of Defense issued an internal memo stating that prisoners would in the future be entitled to protection under Common Article 3.
United States citizens and foreign nationals are occasionally abducted outside of the United States and transferred to secret US-administered detention facilities, sometimes being held incommunicado for months or years, a process known as extraordinary rendition.
According to The New Yorker, "The most common destinations for rendered suspects are Egypt, Morocco, Syria, and Jordan, all of which have been cited for human-rights violations by the State Department, and are known to torture suspects."
In November 2001, Yaser Esam Hamdi, a U.S. citizen, was captured by Afghan Northern Alliance forces in Konduz, Afghanistan, among hundreds of surrendering Taliban fighters, and was transferred into U.S. custody. The U.S. government alleged that Hamdi was fighting for the Taliban, while Hamdi, through his father, claimed that he was merely there as a relief worker and was mistakenly captured. Hamdi was taken into CIA custody and transferred to the Guantanamo Bay Naval Base, but when it was discovered that he was a U.S. citizen, he was moved to a naval brig in Norfolk, Virginia, and then to a brig in Charleston, South Carolina. The Bush Administration identified him as an unlawful combatant and denied him access to an attorney or the court system, despite his Fifth Amendment right to due process. In 2002 Hamdi's father filed a habeas corpus petition; the judge ruled in Hamdi's favor and required that he be allowed a public defender, but the decision was reversed on appeal. In 2004, in Hamdi v. Rumsfeld, the U.S. Supreme Court reversed the dismissal of a habeas corpus petition and ruled that detainees who are U.S. citizens must have the ability to challenge their detention before an impartial judge.
In December 2003, Khalid El-Masri, a German citizen, was apprehended by Macedonian authorities while traveling to Skopje because his name was similar to that of Khalid al-Masri, an alleged mentor to the al-Qaeda Hamburg cell. After being held in a motel in Macedonia for over three weeks, he was handed over to the CIA and flown to Afghanistan. While held in Afghanistan, El-Masri claims he was sodomized, beaten, and repeatedly interrogated about alleged terrorist ties. After he had been in custody for five months, Condoleezza Rice learned of his detention and ordered his release. El-Masri was released at night on a desolate road in Albania, without apology or funds to return home. He was intercepted by Albanian guards, who believed he was a terrorist because of his haggard and unkempt appearance. He was subsequently reunited with his wife, who had returned to her family in Lebanon with their children because she thought her husband had abandoned them. Using isotope analysis, scientists at the Bavarian archive for geology in Munich analyzed his hair and verified that he was malnourished during his disappearance.
According to a September 2012 Human Rights Watch report, during the Bush administration the United States government tortured opponents of Muammar Gaddafi during interrogations, including by waterboarding, and then transferred them to mistreatment in Libya. President Barack Obama has denied the use of water torture.
Unethical human experimentation in the United States
Well-known cases include:
- Albert Kligman's dermatology experiments
- Greenberg v. Miami Children's Hospital Research Institute
- Henrietta Lacks
- Human radiation experiments
- Jesse Gelsinger
- Monster Study
- Moore v. Regents of the University of California
- Medical Experimentation on Black Americans
- Milgram experiment
- Radioactive iodine experiments
- Plutonium injections
- Stanford prison experiment
- Surgical removal of body parts to try to improve mental health
- Tuskegee syphilis experiment
- Willowbrook State School
According to Canadian historian Michael Ignatieff, during and after the Cold War, the United States placed greater emphasis than other nations on human rights as part of its foreign policy, awarded foreign aid to facilitate human rights progress, and annually assessed the human rights records of other national governments.
The U.S. Department of State publishes a yearly report, "Supporting Human Rights and Democracy: The U.S. Record", in compliance with a 2002 law that requires the Department to report on actions taken by the U.S. Government to encourage respect for human rights. It also publishes the yearly "Country Reports on Human Rights Practices." In 2006 the United States created a "Human Rights Defenders Fund" and "Freedom Awards." The "Ambassadorial Roundtable Series", created in 2006, is a series of informal discussions between newly confirmed U.S. Ambassadors and human rights and democracy non-governmental organizations. The United States also supports democracy and human rights through several other tools.
The "Human Rights and Democracy Achievement Award" recognizes the exceptional achievement of officers of foreign affairs agencies posted abroad.
- In 2006 the award went to Joshua Morris of the embassy in Mauritania, who identified needed democracy and human rights improvements in Mauritania and made democracy promotion one of his primary responsibilities. He persuaded the Government of Mauritania to re-open voter registration lists to an additional 85,000 citizens, including a significant number of Afro-Mauritanian minority individuals. He also organized and managed the largest youth-focused democracy project in Mauritania in five years.
- Nathaniel Jensen of the embassy in Vietnam was the runner-up. He successfully advanced the human rights agenda on several fronts, including organizing the resumption of a bilateral Human Rights Dialogue, pushing for the release of Vietnam's prisoners of concern, and dedicating himself to improving religious freedom in northern Vietnam.
Under legislation passed by Congress, the United States declared that countries utilizing child soldiers may no longer be eligible for US military assistance, in an attempt to end this practice.
The U.S. has signed and ratified the following human rights treaties:
- International Covenant on Civil and Political Rights (ICCPR) (ratified with 5 reservations, 5 understandings, and 4 declarations.)
- Optional protocol on the involvement of children in armed conflict
- International Convention on the Elimination of All Forms of Racial Discrimination
- Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment
- Protocol relating to the Status of Refugees
- Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography
Non-binding documents voted for:
International Bill of Rights
The International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) are the legal treaties that enshrine the rights outlined in the Universal Declaration of Human Rights. Together with the first and second optional protocols of the ICCPR, they constitute the International Bill of Rights. The US has not ratified the ICESCR or either of the optional protocols of the ICCPR.
The US ratified the ICCPR with five reservations (or limits) on the treaty, five understandings, and four declarations. Among these is the rejection of sections of the treaty that prohibit capital punishment. Included in the Senate's ratification was the declaration that "the provisions of Article 1 through 27 of the Covenant are not self-executing", and a Senate Executive Report stated that this declaration was meant to "clarify that the Covenant will not create a private cause of action in U.S. Courts." This manner of ratifying the treaty was criticized as incompatible with the Supremacy Clause by Louis Henkin.
Because a reservation that is "incompatible with the object and purpose" of a treaty is void as a matter of international law (Vienna Convention on the Law of Treaties, art. 19, 1155 U.N.T.S. 331, entered into force Jan. 27, 1980, specifying conditions under which signatory States can offer "reservations"), there is some question as to whether the non-self-execution declaration is even legal under domestic law. On this view, the United States is a party to the Covenant in name only.
International Criminal Court
The U.S. has not ratified the Rome Statute of the International Criminal Court (ICC), which was drafted to allow the prosecution of individuals beyond the authority of national courts in cases of accusations of genocide, crimes against humanity, war crimes, and the crime of aggression. Nations that have accepted the Rome Statute can defer to the jurisdiction of the ICC or must surrender their jurisdiction when ordered.
The US rejected the Rome Statute after its attempts to include the nation of origin as a party in international proceedings failed, and after certain requests were not met, including recognition of gender issues, "rigorous" qualifications for judges, viable definitions of crimes, protection of national security information that might be sought by the court, and jurisdiction of the UN Security Council to halt court proceedings in special cases. Since the passage of the statute, the US has actively encouraged nations around the world to sign "bilateral immunity agreements" prohibiting the surrender of US personnel to the ICC and has actively attempted to undermine the Rome Statute. The US Congress also passed a law, the American Service-Members' Protection Act (ASPA), authorizing the use of military force to free any US personnel brought before the court rather than before its own court system. Human Rights Watch criticized the United States for removing itself from the Statute.
Judge Richard Goldstone, the first chief prosecutor at The Hague war crimes tribunal on the former Yugoslavia, echoed these sentiments saying:
I think it is a very backwards step. It is unprecedented which I think to an extent smacks of pettiness in the sense that it is not going to affect in any way the establishment of the international criminal court...The US have really isolated themselves and are putting themselves into bed with the likes of China, the Yemen and other undemocratic countries.
While the US has maintained that it will "bring to justice those who commit genocide, crimes against humanity and war crimes," its primary objections to the Rome Statute have revolved around the issues of jurisdiction and process. A US ambassador for War Crimes Issues told the US Senate Foreign Relations Committee that because the Rome Statute requires only one nation to submit to the ICC, and that nation can be the country in which an alleged crime was committed rather than the defendant's country of origin, US military personnel and US peace workers serving in more than 100 countries could be tried in the international court without the consent of the US. The ambassador stated that because "most atrocities are committed internally and most internal conflicts are between warring parties of the same nationality, the worst offenders of international humanitarian law can choose never to join the treaty and be fully insulated from its reach absent a Security Council referral. Yet multinational peacekeeping forces operating in a country that has joined the treaty can be exposed to the court's jurisdiction even if the country of the individual peacekeeper has not joined the treaty."
Other treaties not signed or signed but not ratified
Where the signature is subject to ratification, acceptance or approval, the signature does not establish the consent to be bound. However, it is a means of authentication and expresses the willingness of the signatory state to continue the treaty-making process. The signature qualifies the signatory state to proceed to ratification, acceptance or approval. It also creates an obligation to refrain, in good faith, from acts that would defeat the object and the purpose of the treaty.
The U.S. has not ratified the following international human rights treaties:
- First Optional Protocol to the International Covenant on Civil and Political Rights (ICCPR)
- Second Optional Protocol to the International Covenant on Civil and Political Rights, aiming at the abolition of the death penalty
- Optional Protocol to CEDAW
- Optional Protocol to the Convention against Torture
- Convention relating to the Status of Refugees (1951)
- Convention Relating to the Status of Stateless Persons (1954)
- Convention on the Reduction of Statelessness (1961)
- International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families
The US has signed but not ratified the following treaties:
- Convention on the Elimination of All Forms of Discrimination against Women (CEDAW)
- Convention on the Rights of the Child (CRC)
- International Covenant on Economic, Social and Cultural Rights
Non-binding documents voted against:
Inter-American human rights system
The US is a signatory to the 1948 American Declaration of the Rights and Duties of Man and has signed but not ratified the 1969 American Convention on Human Rights. It is a party to the Inter-American Convention on the Granting of Political Rights to Women (1948). It does not accept the adjudicatory jurisdiction of the Costa Rica-based Inter-American Court of Human Rights, and it has not ratified the following inter-American instruments:
- Protocol to the American Convention on Human Rights to Abolish the Death Penalty (1990)
- Additional Protocol to the American Convention on Human Rights in the Area of Economic, Social and Cultural Rights
- Inter-American Convention to Prevent and Punish Torture (1985)
- Inter-American Convention on the Prevention, Punishment and Eradication of Violence Against Women (1994)
- Inter-American Convention on Forced Disappearance of Persons (1994)
- Inter-American Convention on the Elimination of All Forms of Discrimination against Persons with Disabilities
Coverage of violations in the media
Studies have found that New York Times coverage of worldwide human rights violations is seriously biased, predominantly focusing on human rights violations in nations where there is clear U.S. involvement while giving relatively little coverage to human rights violations in other nations. Amnesty International's Secretary General Irene Khan explains, "If we focus on the U.S. it's because we believe that the U.S. is a country whose enormous influence and power has to be used constructively ... When countries like the U.S. are seen to undermine or ignore human rights, it sends a very powerful message to others."
According to Freedom in the World, an annual report by the US-based think tank Freedom House that rates political rights and civil liberties, the United States was ranked "Free" (the highest possible rating) in 2007, together with 92 other countries.
According to the annual Worldwide Press Freedom Index published by Reporters Without Borders, due to wartime restrictions the United States was ranked 53rd from the top in 2006 (out of 168), 44th in 2005, 22nd in 2004, 31st in 2003, and 17th in 2002.
According to the annual Corruption Perceptions Index published by Transparency International, the United States was ranked the 20th least corrupt country in 2006 (out of 163), 17th in 2005, 18th in 2003, and 16th in 2002.
According to the Gallup International Millennium Survey, the United States ranked 23rd in citizens' perception of human rights observance when its citizens were asked, "In general, do you think that human rights are being fully respected, partially respected or are they not being respected at all in your country?"
In the aftermath of the devastation caused by Hurricane Katrina, some groups commenting on human rights issues criticized the handling of recovery and reconstruction. The American Civil Liberties Union and the National Prison Project documented mistreatment of the prison population during the flooding, while United Nations Special Rapporteur Doudou Diène delivered a 2008 report on such issues. The United States was elected in 2009 to sit on the United Nations Human Rights Council (UNHRC), a body the U.S. State Department had previously asserted had lost its credibility because of its prior stances and its lack of safeguards against severe human rights violators taking a seat. In 2006 and 2007, the UNHRC and Martin Scheinin criticized the United States for permitting executions by lethal injection, housing children in adult jails, subjecting prisoners to prolonged isolation in supermax prisons, using enhanced interrogation techniques, and for domestic poverty gaps.
Criticism of the US human rights record
US human rights abuses
Organizations involved in US human rights
People involved in US human rights
Notable comments on human rights
- Ellis, Joseph J. (1998) . American Sphinx: The Character of Thomas Jefferson. Vintage Books. p. 63. ISBN 0-679-76441-0.
- Lauren, Paul Gordon (2007). "A Human Rights Lens on U.S. History: Human Rights at Home and Human Rights Abroad". In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: Portraits of the Movement III. Praeger Publishers. p. 4. ISBN 0-275-98824-4.
- Brennan, William, J., ed. Schwartz, Bernard, The Burger Court: counter-revolution or confirmation?, Oxford University Press US, 1998,ISBN 0-19-512259-3, page 10
- Schneebaum, Steven M. (Summer, 1998). "Human rights in the United States courts: The role of lawyers". Washington & Lee Law Review. Retrieved 2009-06-10.
- Declaration of Independence
- Henkin, Louis; Rosenthal, Albert J. (1990). Constitutionalism and rights: the influence of the United States constitution abroad. Columbia University Press. pp. 2–3. ISBN 0-231-06570-1.
- Morgan, Edmund S. (1989). Inventing the People: The Rise of Popular Sovereignty in England and America. W. W. Norton & Company. ISBN 0393306232.
- See, e.g., "Article VI". U.S. Constitution. 1787. "The Senators and Representative before mentioned, and the Members of the several State Legislatures, and all executive and judicial Officers, both of the United Sates and of the several States, shall be bound by Oath or Affirmation, to support this Constitution; but no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States."
- See, e.g., Fischer, David Hackett. Albion's Seed: Four British Folkways in America. Oxford University Press. pp. 490–498. ISBN 978-0-19-506905-1. "On the subject of gender, the Quakers had a saying: 'In souls, there is no sex.'"
- "Blackstone Valley". Blackstonedaily.com. Retrieved June 29, 2011.
- "Blackstone Valley". Blackstonedaily.com. Retrieved June 29, 2011.
- "Women in Politics: A Timeline". International Women's Democracy Center. Retrieved January 3, 2012.
- "The Liz Library Presents: The Woman Suffrage Timeline". Thelizlibrary.org. August 26, 1920. Retrieved June 29, 2011.
- Ignatieff, Michael (2005). "Introduction: American Exceptionalism and Human Rights". American Exceptionalism and Human Rights. Princeton University Press. ISBN 0-691-11648-2.
- National Coordinating Committee for UDHR (1998). "Drafting and Adoption: The Universal Declaration of Human Rights". Franklin and Eleanor Roosevelt Institute. Retrieved 02-07-2008.
- See Haig v. Agee, Passport Act of 1926, 18 U.S.C. § 1185(b) and the Personal Responsibility and Work Opportunity Act of 1996
- Devine, Carol; Carol Rae Hansen, Ralph Wilde, Hilary Poole (1999). Human rights: The essential reference. Greenwood Publishing Group. pp. 26–29. ISBN 1-57356-205-X. Retrieved 6/11/2009.
- Leebrick, Kristal (2002). The United States Constitution. Capstone Press. pp. 26–39. ISBN 0-7368-1094-3.
- Burge, Kathleen (2003-11-18). "SJC: Gay marriage legal in Mass.". Boston Globe.
- Foster v. Neilson, 27 U.S. 253, 314-15 (1829) U.S. Supreme Court, Chief Justice Marshall writing: “Our constitution declares a treaty to be the law of the land. It is, consequently, to be regarded in courts of justice as equivalent to an act of the legislature, whenever it operates of itself without the aid of any legislative provision. But when the terms of the stipulation import a contract, when either of the parties engages to perform a particular act, the treaty addresses itself to the political, not the judicial department, and the legislature must execute the contract before it can become a rule for the Court.” at 314, cited in Martin International Human Rights and Humanitarian Law et al.
- Martin, F.International Human Rights and Humanitarian Law. Cambridge University Press. 2006. p. 221 and following. ISBN 0-521-85886-0, ISBN 978-0-521-85886-1
- McWhirter, Darien A., Equal Protection, Oryx Press, 1995, page 1
- Gould, William B., Agenda for Reform: The Future of Employment Relationships and the Law, MIT Press, 1996, ISBN 0-262-57114-5, page 27
- Nivola, Pietro S. (2002). Tense commandments: federal prescriptions and city problems,. Brookings Institution Press. pp. 127–8. ISBN 0-8157-6094-9.
- Capozzi, Irene Y., The Civil Rights Act: background, statutes and primer, Nova Publishers, 2006, ISBN 1-60021-131-3, page 6
- James W. Russell, Double standard: social policy in Europe and the United States, Rowman & Littlefield, 2006, ISBN 0-7425-4693-4, pages 147-150
- Lauren, Paul Gordon (2003). "My Brother's and Sister's Keeper: Visions and the Birth of Human Rights". The Evolution of International Human Rights: Visions Seen (Second ed.). University of Pennsylvania Press. p. 33. ISBN 0-8122-1854-X.
- Benezet also stated that "Liberty is the right of every human creature, as soon as he breathes the vital air. And no human law can deprive him of the right which he derives from the law of nature." Grimm, Robert T., Anthony Benezet (1716–1784), Notable American Philanthropists: Biographies of Giving and Volunteering, Greenwood Publishing Group, 2002, ISBN 1-57356-340-4, pages 26-28
- Vorenberg, Michael, Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment, Cambridge University Press, 2001, page 1
- Martin, Waldo E. (1998). Brown v. Board of Education: a brief history with documents. Palgrave Macmillan. pp. 3&231. ISBN 0-312-12811-8.
- Brown v. Board of Education, 98 F. Supp. 797 (August 3, 1951).
- "Barack Obama Becomes 44th President of the United States". America.gov. Retrieved 2009-05-23.
- Nineteenth Amendment, CRS/LII Annotated Constitution
- Zippel, Kathrina, Sexual Harassment and Transnational Relations: Why Those Concerned with German-American Relations Should Care, American Institute for Contemporary German Studies, The Johns Hopkins University, 2002, page 6
- Employment Discrimination Law Under Title VII, Oxford University Press US, 2008, ISBN 0-19-533898-7
- Rostker v. Goldberg, 453 U.S. 57 (1981)
- Midgley, James, and Michelle Livermore, The Handbook of Social Policy, SAGE, 2008, ISBN 1-4129-5076-7, page 448
- Blanck, Peter David and David L. Braddock, The Americans with Disabilities Act and the emerging workforce: employment of people with mental retardation, AAMR, 1998, ISBN 0-940898-52-7, page 3
- Capozzi, Irene Y., The Civil Rights Act: background, statutes and primer, Nova Publishers, 2006, ISBN 1-60021-131-3, page 60-61
- Lawson, Anna, Caroline Gooding, Disability rights in Europe: from theory to practice, Hart Publishing, 2005, ISBN 1-84113-486-4, page 89
- Jones, Nancy Lee, The Americans with Disabilities Act (ADA): overview, regulations and interpretations, Nova Publishers, 2003, ISBN 1-59033-663-1, pages 7-13
- "Flashpoints USA: God and Country". PBS. 2007-01-27. Retrieved 2007-06-03.
- HRC: Wisconsin Non-Discrimination Law Wisconsin law explicitly prohibits discrimination based on sexual orientation in employment, housing, public education, credit and public accommodations. Citations: WIS. STAT. §36.12, § 106.50, § 106.52§ 111.31, § 230.18, § 224.77.
- Egelko, Bob (2009-02-04). "State high court to hear Prop. 8 case March 5". San Francisco Chronicle. Retrieved 2009-02-12.
- "California Supreme Court filings pertaining to Proposition 8". Retrieved 2009-01-14.
- McCarthy v. Arndstein, 266 U.S. 34 (1924)
- "100 Documents That Shaped America:President Franklin Roosevelt's Annual Message (Four Freedoms) to Congress (1941)". U.S. News & World Report. U.S. News & World Report, L.P. Retrieved 2008-04-11.
- Near v. Minnesota
- "Dennis vs. United States". Audio Case Files. Retrieved 2008-09-06.
- "U.S. Army clamping down on soldiers' blogs". Reuters (CNN). 2007-05-02. Archived from the original on 2007-05-22. Retrieved 2007-05-27.
- "Soldiers' Iraq Blogs Face Military Scrutiny". NPR. 2004-08-24. Retrieved 2007-06-14.
- Borland, John (2001-02-26). "Battle lines harden over Net copyright". CNET. Retrieved 2007-05-28.
- "Fatal Flaws in the Bipartisan Campaign Reform Act of 2002" (PDF). Brookings Institution. Archived from the original on 2007-06-16. Retrieved 2007-05-27.
- Fareed Zakaria (September 4, 2010). "What America Has Lost". Newsweek.
- Rachel Quigley (February 19, 2011). "War veteran, 71, dragged out for staging silent protest during Hillary Clinton address... on freedom of speech". Daily Mail.
- "Anti-Bush protesters sue over arrests". Herald Tribune. August 7, 2003.
- Jarrett Murphy (September 3, 2004). "A Raw Deal For RNC Protesters?". CBS News.
- Rick Hampson (May 4, 2010). "1970 Kent State shootings are an enduring history lesson". USA Today.
- "Selma-to-Montgomery March". National Park Service. Retrieved February 20, 2011.
- "Pentagon Exam Calls Protests 'Low-Level Terrorism,' Angering Activists". Fox News Channel. June 17, 2009.
- "Dozens of Occupy protesters arrested in Texas, Oregon". CNN News. October 31, 2011.
- Pub.L. 95–426, 92 Stat. 993, enacted October 7, 1978, 18 U.S.C. § 1185(b)
- Haig v. Agee, 453 U.S. 280 (1981), at 302
- "FOREIGN RELATIONS: Bad Ammunition". Time. 12 April 1948.
- "UN condemns US embargo on Cuba". BBC News. 12 Nov. 2007. Retrieved 14 Apr. 2009. http://news.bbc.co.uk/2/hi/americas/2455923.stm
- Padgett, Tim. "Will Obama Open Up All U.S. Travel to Cuba?" Time Magazine. 14 Apr. 2009. Retrieved 14 Apr. 2009.
- Scott Shane (July 1, 2010). "A.C.L.U. Sues Over No-Fly List". The New York Times.
- Abbie Boudreau and Scott Zamost (July 14, 2010). "Thousands of sex offenders receive U.S. passports". CNN.
- "COINTELPRO". PBS. Retrieved June 25, 2010.
- Lisa Rein (October 8, 2008). "Md. Police Put Activists' Names On Terror Lists". The Washington Post.
- Constitutional Dictatorship: Crisis Government in the Modern Democracies. Clinton Rossiter. 2002. Page X. ISBN 0-7658-0975-3
- Dana Priest and William M. Arkin (December 20, 2010). "Monitoring America: How the U.S. Sees You". CBS News.
- NWI Right to Organize
- Azari-Rad, Hamid; Philips, Peter; Prus, Mark J. (2005). The economics of prevailing wage laws. Ashgate Publishing, Ltd. p. 3. ISBN 0-7546-3255-5.
- United States of America Working conditions, Information about Working conditions in United States of America
- A Curriculum of United States Labor History for Teachers
- "Union Members Summary". U.S. Dept. of Labor, Bureau of Labor Statistics. January 27, 2012.
- Steven Greenhouse (January 27, 2012). "Union Membership Rate Fell Again in 2011". The New York Times.
- National Health Care for the Homeless Council. "Human Rights, Homelessness and Health Care".
- American Medical Association. "Principles of medical ethics".
- Overview - What is Not Covered, U.S. Department of Health & Human Services
- Centers for Medicare & Medicaid Services: Emergency Medical Treatment & Labor Act
- American College of Emergency Physicians Fact Sheet: EMTALA. Retrieved 2007-11-01.
- Rowes, Jeffrey (2000). "EMTALA: OIG/HCFA Special Advisory Bulletin Clarifies EMTALA, American College of Emergency Physicians Criticizes It". Journal of Law, Medicine & Ethics 28 (1): 90–92. Archived from the original on 2008-01-29. Retrieved 2008-01-02.
- "The number of uninsured Americans is at an all-time high". CBPP. 2006-08-29. Retrieved 2007-05-28.
- N. Gregory Mankiw (November 4, 2007). "Beyond Those Health Care Numbers". The New York Times.
- "Lack of Insurance May Have Figured In Nearly 17,000 Childhood Deaths, Study Shows". John Hopkins Children's Center. October 29. 2009.
- Lieberman, Jethro Koller (1999). A practical companion to the Constitution: how the Supreme Court has ruled on issues from abortion to zoning. University of California Press. p. 382. ISBN 0-520-21280-0.
- Lieberman, Jethro Koller, A practical companion to the Constitution: how the Supreme Court has ruled on issues from abortion to zoning, University of California Press, 1999, ISBN 0-520-21280-0, page 6
- Death sentences and executions in 2011 Amnesty International March 2012
- Dan Malone (Fall 2005). Cruel and Unusual: Executing the mentally ill. Amnesty International Magazine.
- "Abolish the death penalty". Amnesty International. Retrieved 2008-01-25.
- "The Death Penalty and Deterrence". Amnestyusa.org. 2008-02-22. Retrieved 2009-05-23.
- Sheila Berry (2000-09-22). "Death Penalty No Deterrent". Truthinjustice.org. Retrieved 2009-05-23.
- "John W. Lamperti | Capital Punishment". Math.dartmouth.edu. 1973-03-10. Retrieved 2009-05-23.[dead link]
- "Discussion of Recent Deterrence Studies | Death Penalty Information Center". Deathpenaltyinfo.org. Retrieved 2009-05-23.
- Melissa S. Green (May 2005). "History of the Death Penalty & Recent Developments". Justice Center, University of Alaska Anchorage. Retrieved 2008-01-25.
- "Death Penalty Policy By State". Death Penalty Information Center. Retrieved 2008-01-25.
- Rights Watch (1998). Death Penalty Issue Addressed by Special Rapporteur XXXV (2). UN Chronicle.
- Death Penalty Info
- Death Penalty Info: Executions by Year
- List of individuals executed in Texas
- List of individuals executed in Virginia
- "S court bans juvenile executions". BBC News. 2005-03-01. Retrieved 2007-06-03.
- "Executions of child offenders since 1990". Amnesty International. Retrieved 2007-06-03.
- "Abolition of the Death Penalty". The EU's Human rights & Democratisation Policy. Retrieved 2007-06-02.
- "Death Penalty Moratorium Implementation Project". The American Bar Association. Retrieved 2008-01-25.
- "Why a moratorium?". American Bar Association (Death Penalty Moratorium Implementation Project). Retrieved 2008-01-25.
- Free, Marvin D. Jr. (November 1997). "The Impact of Federal Sentencing Reforms on African Americans". Journal of Black Studies 28 (2): 268–286. ISSN 0021-9347. JSTOR 2784855.
- "Death Penalty Discrimination: Those Who Murder Whites Are More Likely To Be Executed". Associated Press (CBS News). 2003-04-24. Retrieved 2007-06-03.
- Amnesty International, Human Rights in United States of America, Amnesty International
- "One in 100: Behind Bars in America 2008", Pew Research Center
- "One in 31: The Long Reach of American Corrections", Pew Research Center, released March 2, 2009
- Tuhus-Dubrow, Rebecca (2003-12-19). "Prison Reform Talking Points". The Nation. Retrieved 2007-05-27.
- The Consequences Aren't Minor: The Impact of Trying Youth as Adults and Strategies for Reform. A Campaign for Youth Justice Report, March 2007, p. 7.
- "Facts about Prisons and Prisoners" (PDF). The Sentencing Project. December 2006. Retrieved 2007-05-27.
- Fellner, Jamie. "US Addiction to Incarceration Puts 2.3 Million in Prison". Human Rights Watch. Retrieved 2007-06-02.
- Speech by Bonnie Kerness, January 14, 2006, before the United Nations Committee on the Elimination of Discrimination Against Women
- Journal of Law & Policy Vol 22:145 - http://law.wustl.edu/Journal/22/p145Martin.pdf
- Amnesty International Report 1998
- "Inhumane Prison Conditions Still Threaten Life, Health of Alabama Inmates Living with HIV/AIDS, According to Court Filings". Human Rights Watch. Retrieved 2006-06-13.
- Cindy Struckman-Johnson & David Struckman-Johnson (2000). "Sexual Coercion Rates in Seven Midwestern Prisons for Men" (PDF). The Prison Journal.
- Abramsky, Sasha (January 22, 2002). Hard Time Blues: How Politics Built a Prison Nation. Thomas Dunne Books.
- Hardaway, Robert (October 30, 2003). No Price Too High: Victimless Crimes and the Ninth Amendment. Praeger Publishers. ISBN 0-275-95056-5.
- "Prisoners in 2005" (PDF). United States Department of Justice: Office of Justice Programs. November 2006. Archived from the original on 2007-04-09. Retrieved 2007-06-03.
- "America's One-Million Nonviolent Prisoners". Center on Juvenile and Criminal Justice. Retrieved 2007-06-003.
- "Race, Rights and Police Brutality". Amnesty International USA. 1999. Retrieved 2007-12-22.
- "Report Charges Police Abuse in U.S. Goes Unchecked". Human Rights Watch. July 7, 1998. Retrieved 2007-12-22.
- Johnson, Kevin (2007-12-17). "Police brutality cases on rise since 9/11". USA Today. Retrieved 2007-12-22.
- Butterfield, Fox (2001-04-29). "When the Police Shoot, Who's Counting?". The New York Times. Retrieved 2007-12-22.
- "Unregulated Use of Taser Stun Guns Threatens Lives, ACLU of Northern California Study Finds". ACLU. October 6, 2005. Retrieved 2007-12-22.
- "Man dies after cop hits him with Taser 9 times". CNN. undated article. Retrieved 2008-09-06.
- "Human Rights Watch: Summary of International and U.S. Law Prohibiting Torture and Other Ill-treatment of Persons in Custody". May 24, 2004. Retrieved 2007-05-27.
- ICRC official statement: The relevance of IHL in the context of terrorism, 21 July 2005
- "CIA's Harsh Interrogation Techniques Described". 2005-11-18. Retrieved 2007-05-27.
- "Conclusions and recommendations of the Committee against Torture" (PDF). The United Nations Committee against Torture. 2006-05-19. Archived from the original on 2006-12-11. Retrieved 2007-06-02.
- "Non-standard interrogation techniques" are alleged to have at times included:
Extended forced maintenance of "stress positions" such as standing or squatting; psychological tricks and "mind games"; sensory deprivation; exposure to loud music and noises; extended exposure to flashing lights; prolonged solitary confinement; denigration of religion; withholding of food, drink, or medical care; withholding of hygienic care or toilet facilities; prolonged hooding; forced injections of unknown substances; sleep deprivation; magneto-cranial stimulation resulting in mental confusion; threats of bodily harm; threats of rendition to torture-friendly states or Guantánamo; threats of rape or sodomy; threats of harm to family members; threats of imminent execution; prolonged constraint in contorted positions (including strappado, or "Palestinian hanging"); facial smearing of real or simulated feces, urine, menstrual blood, or semen; sexual humiliation; beatings, often requiring surgery or resulting in permanent physical or mental disability; release or threat of release to attack dogs, both muzzled or un-muzzled; near-suffocation or asphyxiation via multiple detainment hoods, plastic bags, water-soaked towels or blankets, duct tape, or ligatures; gassing and chemical spraying resulting in unconsciousness; confinement in small chambers too small to fully stand or recline; underwater immersion just short of drowning (i.e. dunking); and extended exposure to extreme temperatures below freezing or above 120 °F (48 °C).
- "Human Rights First Releases First Comprehensive Report on Detainee Deaths in U.S. Custody". Human Rights First. 2006-02-22. Retrieved 2007-05-28.
- Higham, Scott; Stephens, Joe (2004-05-21). "New Details of Prison Abuse Emerge". The Washington Post. p. A01. Retrieved 2007-06-23.
- "UN Says Abu Ghraib Abuse Could Constitute War Crime", By Warren Hoge, New York Times, June 4, 2004
- "Prisoner Abuse: The Accused". ABC News. Retrieved 2007-05-28.
- The Road to Abu Ghraib, Human Rights Watch
- Price, Caitlin. "CIA chief confirms use of waterboarding on 3 terror detainees". Jurist Legal News & Research. University of Pittsburgh School of Law. Retrieved 2008-05-13.
- "CIA finally admits to waterboarding". The Australian. 2008-02-07. Retrieved 2008-02-18.
- Hirsh, Michael; John Barry, Daniel Klaidman (2004-06-21). "A tortured debate: amid feuding and turf battles, lawyers in the White House discussed specific terror-interrogation techniques like 'water-boarding' and 'mock-burials'". Newsweek. Retrieved 2007-12-20.
- "Waterboarding qualifies as torture: UN". Retrieved 2008-02-24.
- Bent Sørensen on waterboarding as torture
- Former member of UN Committee Against Torture: "Yes, waterboarding is torture" International Rehabilitation Council for Torture Victims, February 12, 2008
- Violating international law Army Official: Yes, Waterboarding Breaks International Law By Paul Kiel, Talking Points Memo, February 27, 2008
- White House defends waterboarding; CIA chief uncertain, Associated Press, February 7, 2008
- No charges against CIA officials for waterboarding: WTOP, April 16, 2009
- BBC website, CIA torture exemption 'illegal', Sunday, 19 April 2009
- The Guardian, Obama releases Bush torture memos, 16 April 2009
- "Justice Department Memos on Interrogation Techniques". The New York Times. Retrieved 2009-04-30.
- BBC Today Programme, 20 April 2009
- Despite Reports, Khalid Sheikh Mohammed Was Not Waterboarded 183 Times, Joseph Abrams, Fox News Channel, April 28, 2009
- Bob Drogin (June 8, 2010). "Physicians group accuses CIA of testing torture techniques on detainees". Los Angeles Times.
- "Evidence Indicates that the Bush Administration Conducted Experiments and Research on Detainees to Design Torture Techniques and Create Legal Cover". Physicians for Human Rights. June 7, 2010.
- Monbiot, George. One rule for them.
- In re Guantanamo Detainee Cases, 355 F.Supp.2d 443 (D.D.C. 2005).
- "Guantánamo Bay - a human rights scandal". Amnesty International. Archived from the original on 2006-02-06. Retrieved 2006-03-15.
- Julian, Finn and Julie Tate (2013-03-16). "Guantanamo Bay detainees’ frustrations simmering, lawyers and others say". The Washington Post. Retrieved 2013-03-18. "A majority of the 166 detainees remaining at Guantanamo Bay are housed in Camp 6"
- Amy, Goodman (2013-03-14). "Prisoner protest at Guantánamo Bay stains Obama's human rights record". The Guardian. Retrieved 2013-03-18. "Prisoner letters and attorney eyewitness accounts, however, support the claim that well over 100 of the 166 Guantánamo prisoners are into at least the second month of the strike."
- De Zayas, Alfred. (2003.) The Status of Guantánamo Bay and the Status of the Detainees.
- ECONOMIC, SOCIAL AND CULTURAL RIGHTS CIVIL AND POLITICAL RIGHTS Situation of detainees at Guantánamo Bay Report of the Chairperson-Rapporteur of the Working Group on Arbitrary Detention, Leila Zerrougui; the Special Rapporteur on the independence of judges and lawyers, Leandro Despouy; the Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, Manfred Nowak; the Special Rapporteur on freedom of religion or belief, Asma Jahangir; and the Special Rapporteur on the right of everyone to the enjoyment of the highest attainable standard of physical and mental health, Paul Hunt
- "Guantánamo and beyond: The continuing pursuit of unchecked executive power". Amnesty International. 2005-05-13. Archived from the original on 2007-05-09. Retrieved 2007-05-29.
- The legal situation of unlawful/unprivileged combatants (IRRC March 2003 Vol.85 No 849). See Unlawful combatant.
- "New Account of Torture by U.S. Tropps, Soldiers Say Failures by Command Led to Abuse". Human Rights Watch. 2005-09-24. Retrieved 2007-05-29.
- "Huckabee Says Guantanamo Bay Offers Better Conditions to Detainees Than Most U.S. Prisons - You Decide 2008". Fox News Channel. 2007-06-11. Retrieved 2009-05-23.
- "Guantanamo Detainees Info Sheet #1 – November 14, 2005" (PDF). White House. Retrieved 2007-11-17.
- "Hamdan v. Rumsfeld" (PDF). June 29 2006. Retrieved 2007-02-10.
- "US detainees to get Geneva rights". BBC. 2006-07-11. Retrieved January 5, 2010.
- "White House: Detainees entitled to Geneva Convention protections". CNN. 2006-07-11.[dead link]
- "White House Changes Gitmo Policy". CBS News. 2006-07-11.
- Mayer, Jane (2005-02-14). "Outsourcing Torture". The New Yorker. Retrieved 2007-05-29.
- Markon, Jerry (2006-05-19). "Lawsuit Against CIA is Dismissed". The Washington Post. Retrieved 2007-05-29.
- Georg Mascolo, Holger Stark: The US Stands Accused of Kidnapping. SPIEGEL ONLINE, February 14, 2005
- "Map of Freedom in the World". freedomhouse.org. 2004-05-10. Retrieved 2009-05-23.
- [http://www.hrw.org/news/2012/09/05/us-torture-and-rendition-gaddafi-s-libya US: Torture and Rendition to Gaddafi’s Libya Human Rights Watch September 6, 2012
- Delivered Into Enemy Hands US-Led Abuse and Rendition of Opponents to Gaddafi’s Libya Human Rights Watch 2012
- HRW: USA käytti vesikidutusta libyalaisiin yle 6.9.2012 (Finnish)
- "Supporting Human Rights and Democracy: The U.S. Record". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- "Human Rights". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-05-28.
- "International Human Rights Week". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Archived from the original on 2007-05-09. Retrieved 2007-05-28.
- "Ambassadorial Roundtable Series". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Archived from the original on 2007-05-09. Retrieved 2007-05-28.
- "Bureau of Democracy, Human Rights, and Labor". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- "2006 Human Rights and Democracy Achievement Award". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- Jo Becker, children’s rights advocate (2008-12-11). "US Limits Military Aid to Nations Using Child Soldiers | Human Rights Watch". Human Rights Watch. Retrieved 2009-05-23.
- U.S. reservations, declarations, and understandings, International Covenant on Civil and Political Rights, 138 Cong. Rec. S4781-01 (daily ed., April 2, 1992).
- Status of US international treaty ratifications
- DPI Press Kit
- "OHCHR International law". OHCHR. Retrieved 2009-06-23.
- UN OHCHR Fact Sheet No.2 (Rev.1), The International Bill of Human Rights
- OHCHR Reservations and declarations on ratificatons
- America'S Problem With Human Rights
- 138 Cong. Rec. S4781-84 (1992)
- S. Exec. Rep. No. 102-23 (1992)
- Louis Henkin, U.S. Ratification of Human Rights Treaties: The Ghost of Senator Bricker, 89 Am. J. Int’l L. 341, 346 (1995)
- 98/07/23 Amb. Scheffer on international criminal court
- Coalition for the ICC
- Human Rights Watch - The US and the International Criminal Court
- Human Rights Watch, “U.S.: 'Hague Invasion Act' Becomes Law.” 3 August 2002. Retrieved 8 January 2007.
- John Sutherland, “Who are America's real enemies?” The Guardian, 8 July 2002. Retrieved 8 January 2007.
- BBC News
- Vienna Convention on the Law of Treaties between States and International Organizations or between International Organizations 1986 Article 18.
- Declaration on the Rights of Indigenous Peoples
- Basic Documents - Ratifications of the Convention
- Organization of American States
- "All the News That's Fit to Print? New York Times Coverage of Human-Rights Violations". The Harvard International Journal of Press 4 (4, Fall 1999): 48–69. Retrieved 2007-05-28.
- "Paper presented at the annual meeting of the American Political Science Association". All Academic, Inc. 2006-08-31. Retrieved 2007-05-28.
- Satter, Raphael (2007-05-24). "Report hits US on human rights". Associated Press (published on Globe). Retrieved 2007-05-29.
- Polity IV data sets
- "North Korea, Eritrea and Turkmenistan are the world’s "black holes" for news". Reporters Without Borders. October 2005. Retrieved 2007-05-29.
- "East Asia and Middle East have worst press freedom records". Reporters Without Borders. October 2004. Retrieved 2007-05-29.
- "Cuba second from last, just ahead of North Korea". Reporters Without Borders. October 2003. Retrieved 2007-05-29.
- "Reporters Without Borders publishes the first worldwide press freedom index". Reporters Without Borders. October 2002. Retrieved 2007-05-29.
- [347=x-347-559597 "Leading surveillance societies in the EU and the World 2007"]. Privacy International. December 2007. Retrieved 2009-01-01.
- "Universal Human Rights?". Gallup International. 1999.
- Klapper, Bradley (2006-07-28). "U.N. Panel Takes U.S. to Task Over Katrin". The America's Intelligence Wire. Associated Press.
- 26. The Committee, while taking note of the various rules and regulations prohibiting discrimination in the provision of disaster relief and emergency assistance, remains concerned about information that poor people and in particular African-Americans, were disadvantaged by the rescue and evacuation plans implemented when Hurricane Katrina hit the United States of America, and continue to be disadvantaged under the reconstruction plans. (articles 6 and 26) The State party should review its practices and policies to ensure the full implementation of its obligation to protect life and of the prohibition of discrimination, whether direct or indirect, as well as of the United Nations Guiding Principles on Internal Displacement, in the areas of disaster prevention and preparedness, emergency assistance and relief measures. In the aftermath of Hurricane Katrina, it should increase its efforts to ensure that the rights of poor people and in particular African-Americans, are fully taken into consideration in the reconstruction plans with regard to access to housing, education and health care. The Committee wishes to be informed about the results of the inquiries into the alleged failure to evacuate prisoners at the Parish prison, as well as the allegations that New Orleans residents were not permitted by law enforcement officials to cross the Greater New Orleans Bridge to Gretna, Louisiana. See: "Concluding Observations of the Human Rights Committee on the Second and Third U.S. Reports to the Committee (2006).". Human Rights Committee. University of Minnesota Human Rights Library. 2006-07-28.
- "Hurricane Katrina and the Guiding Principles on Internal Displacement" (PDF). Institute for Southern Studies. January 2008. pp. 18–19. Retrieved 2009-05-18. See also: Sothern, Billy (2006-01-02). "Left to Die". The Nation. pp. 19–22.
- "Report says U.S. Katrina response fails to meet its own human rights principles". New Orleans CityBusiness. 2008-01-16. See also: "Hurricane Katrina and the Guiding Principles on Internal Displacement" (PDF). Institute for Southern Studies. January 2008. Retrieved 2009-05-18.
- "Report of the Special Rapporteur". United Nations Human Rights Council. April 28, 2009. p. 30. Retrieved 2009-05-24.
- "U.S. Elected To U.N. Human Rights Council". ACLU. Retrieved 2009-06-06.
- "Daily Press Briefing". United States Department of State. 2007-05-06. Retrieved 2006-06-24.[dead link]
- United Nations General Assembly Verbotim Report meeting 72 session 60 page 5, Mr. Toro Jiménez Venezuela on 15 March 2006 at 11:00 (retrieved 2007-09-19)
- "U.N. Torture Committee Critical of U.S.". Human Rights Watch. 2006-05-19. Retrieved 2007-06-14.
- "Conclusions and recommendations of the Committee" (PDF).
- Leopold, Evelyn (2007-05-25). "U.N. expert faults U.S. on human rights in terror laws". Reuters. Retrieved 2007-06-03. Also published on The Boston Globe, on Yahoo News, and on ABC News.
- Rizvi, Haider. Racial Poverty Gaps in U.S. Amount to Human Rights Violation, Says U.N. Expert. OneWorld.net (published on CommonDreams). 2005-11-30. Retrieved on 2007-08-13. (archived link)
- Ignatieff, Michael (2005). American Exceptionalism and Human Rights. Princeton University Press. ISBN 0-691-11648-2.
- Ishay, Micheline (2008). The History of Human Rights: From Ancient Times to the Globalization Era (Second ed.). University of California Press. ISBN 0-520-25641-7.
- Lauren, Paul Gordon (2003). The Evolution of International Human Rights: Visions Seen (Second ed.). University of Pennsylvania Press. ISBN 0-8122-1854-X.
- Olyan, Saul M.; Martha C. Nussbaum (1998). Sexual Orientation and Human Rights in American Religious Discourse. New York: Oxford University Press. ISBN 0-19-511942-8.
- Rhoden, Nancy Lee; Ian Kenneth Steele (2000). The Human Tradition in the American Revolution. Rowman & Littlefield. ISBN 0-8420-2748-3.
- Shapiro, Steven R.; Human Rights Watch, American Civil Liberties Union (1993). Human Rights Violations in the United States: A Report on U.S. Compliance with The International Covenant on Civil and Political Rights. Human Rights Watch. ISBN 1-56432-122-3.
- ed. by Cynthia Soohoo ... (2007). In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: A History of Human Rights in the United States I. Praeger Publishers. ISBN 0-275-98822-8.
- ed. by Cynthia Soohoo ... (2007). In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: From Civil Rights to Human Rights II. Praeger Publishers. ISBN 0-275-98823-6.
- Quigley, William; Sharda Sekaran (2007). "A Call for the Right to Return in the Gulf Coast". In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: Portraits of the Movement III. Praeger Publishers. pp. 291–304. ISBN 0-275-98824-4.
- Weissbrodt, David; Connie de la Vega (2007). International Human Rights Law: An Introduction. University of Pennsylvania Press. ISBN 0-8122-4032-4.
- Yount, David (2007). How the Quakers Invented America. Rowman & Littlefield. ISBN 0-7425-5833-9.
- Human Rights in the US and the International Community, UNL Initiative on Human Rights & Human Diversity site—research and study source directed at secondary and post-secondary students
- Freedom in the World 2006: United States from Freedom House
- Human Rights from United States Department of State
- United States: Human Rights World Report 2006 from Human Rights Watch | http://en.mobile.wikipedia.org/wiki/Human_rights_in_the_United_States | 13 |
Germany
Europe's most populous state, whose instability caused two World Wars and whose stability since 1949 has greatly contributed to the solution of the German question and the success of European integration.
The German Empire (1871–1918)
Before 1900 the German Empire, unified by Bismarck in 1866/71, experienced a process of singularly rapid industrial transformation which created unusually large social and political tensions. In addition, in the federal nation-state created by Bismarck the constituent states, rather than the Empire, controlled most of the revenues. Without their consent, the Empire could not increase its own revenue substantially. As a result, Germany could not keep up with the arms race that developed, partly as a result of Germany's own imperialist ambitions and its consequent building of a large navy at the instigation of Admiral Tirpitz. This sense of crisis explains why Emperor Wilhelm II and his Chancellor, Bethmann Hollweg, felt compelled to incite World War I: they were convinced that if there had to be war, the chances of winning it would be higher the sooner it broke out.
World War I (1914–18)
The initial consensus in favour of supporting the war was relatively short-lived, and soon the political parties began to demand domestic reform, particularly in Prussia. In 1917, parliament passed a motion demanding peace negotiations. By then, however, the country was run in a virtual dictatorship by Generals Hindenburg and Ludendorff, who instilled in most Germans a belief and confidence in ultimate victory. Defeat in World War I came as a complete shock to most Germans, as did the territorial losses and the reparation payments imposed upon them by the Versailles Treaty, under the humiliating charge that Germany had been the sole aggressor.
The Weimar Republic (1918–1933)
These factors became a great burden for the new democracy that began to form after the popular unrest of 1918 had forced the Emperor to abdicate. This democracy is known as the Weimar Republic, so named after the city of the German poets Goethe and Schiller in which the National Assembly convened to write the new Constitution. The Assembly established universal suffrage and unrestricted proportional representation. Given the social, cultural and political fragmentation of the population in subsequent years, this made it increasingly difficult to establish stable parliamentary majorities.
Another problem for the Weimar Republic was that it depended on the bureaucratic and military elites of the former Empire, neither of whom felt any commitment to the democracy. After years of crisis, marked by the decommissioning of millions of soldiers, the payments of reparations, the Kapp Putsch in 1920, the murder of Erzberger and Rathenau, and the events of 1923 (which saw the occupation of the Ruhr, hyperinflation, and the Hitler Putsch), the Republic gained some stability during the Stresemann era. However, after the world economic crisis caused by the Wall Street Crash in 1929, millions of disaffected people voted for Hitler's Nazi Party from 1930, and Germany became ungovernable, with Chancellors Brüning and von Papen governing against parliament, under emergency laws that made them responsible to the President only. In the end, President von Hindenburg appointed Hitler Chancellor in the misguided confidence that Hitler could be controlled by others in the Cabinet.
Table 10. German leaders since 1900
Presidents of the Weimar Republic: Friedrich Ebert (1919–25); Paul von Hindenburg (1925–34)
Leader of the Third Reich: Adolf Hitler (1933–45)
German Democratic Republic (East Germany), communist party leaders: Walter Ulbricht (1950–71); Erich Honecker (1971–89); last Prime Minister: Lothar de Maizière (1990)
Federal Republic of Germany (West Germany), Chancellors: Konrad Adenauer (1949–63); Ludwig Erhard (1963–66); Kurt Georg Kiesinger (1966–69); Willy Brandt (1969–74); Helmut Schmidt (1974–82); Helmut Kohl (1982–98); Gerhard Schröder (1998–2005); Angela Merkel (since 2005)
The Third Reich (1933–45)
Hitler's appointment as Chancellor on 30 January 1933 and his acquisition of dictatorial powers through the Enabling Law marked the beginning of the Third Reich. Hitler sustained and increased his popularity by ending record unemployment (largely through a massive programme of rearmament), and restoring order and security on the streets. Hitler achieved a succession of major foreign policy triumphs, such as the return of the Saarland to the Reich in 1935, the Anschluss with Austria (1938), and the annexation of the Sudetenland (1938) and of the Czech lands (1939). These successes led most Germans to overlook the fact that this sense of national unity was acquired at the expense of minorities such as gypsies, homosexuals, Jehovah's Witnesses, priests, the mentally ill, and especially Jews, who were officially degraded to second-class citizens by the Nuremberg Laws. Jews and other opponents of the regime were thrown into concentration camps, while the persecution of Jews reached new heights with the Kristallnacht of 1938.
Following the conclusion of the Hitler–Stalin Pact, Hitler started his pursuit of more ‘living space’ (Lebensraum) with the invasion of Poland on 1 September 1939, which unleashed World War II. In 1941, Hitler started an all-out offensive against the Soviet Union, and in the wake of the initial Barbarossa campaign the Nazis embarked upon the Holocaust, the extermination of some six million Jews, as well as millions of others, in camps such as Auschwitz. Despite sporadic resistance to Hitler, most notably through the July Plot of 1944, the Third Reich could only be overcome by the subjection of Germany through the Allied invasion, which forced German capitulation on 8 May 1945.
The division of Germany (1945–9)
Germany was divided into four zones, governed by the Soviet Union in the east, Britain in the north, France in the west and the US in the south. All German territories east of the Rivers Oder and Neisse were placed under Polish and Soviet administration. Despite initial endeavours at cooperation, which only succeeded in a few circumstances such as the Nuremberg Trials, the Soviet zone became administered increasingly separately from the other three.
East Germany (1949–90)
The German Democratic Republic (GDR) was founded in the Soviet eastern zone on 7 October 1949, in response to the foundation of the FRG (see below). Led by Ulbricht, who transformed it into a Communist satellite state of the Soviet Union, the GDR's economy suffered from its transformation into a centrally planned economy, and from the dismantling of industries by the Soviet Union. Disenchantment with the dictatorial regime and the slow economic recovery compared to West Germany sparked off an uprising of over 300,000 workers on 17 June 1953, which was crushed by Soviet tanks. However, the country's viability continued to be challenged by the exodus of hundreds of thousands of East Germans to West Berlin every year. To enable East Germany's continued existence, the Berlin Wall was built on 13 August 1961 as a complement to the existing impenetrable border between East and West Germany. In the following decades, authoritarian Communist rule, severe travel restrictions, the world's largest secret police apparatus (the Stasi), and economic prosperity relative to its eastern neighbours provided the GDR with a relative degree of stability.
The Soviet-style rule of Honecker, Ulbricht's successor, from 1971, became undermined by the advent of Gorbachev as Soviet leader, when the East German leadership became more orthodox than the Soviet original. Matters came to a head in the summer of 1989, when Hungary opened its borders with Austria, thus enabling thousands of East German tourists to escape to the West. Meanwhile, Gorbachev's visit to the 40th anniversary celebrations of the GDR sparked off weekly mass protests, first in Leipzig and Berlin, and then throughout East Germany. Honecker had to resign, and in the confusion that followed, the Berlin Wall was opened by the GDR authorities on 9 November 1989. As East Germans used the opportunity to flee to West Germany in droves, the continuance of the GDR as a separate state became untenable. On 22 July 1990 the East German parliament reintroduced the five states that had existed 1945–52, each of which acceded to West Germany. On 3 October 1990 the GDR ceased to exist and Germany was unified.
West Germany (since 1949)
The Federal Republic of Germany (FRG) was founded on 23 May 1949, and after a narrow election victory Adenauer became its first Chancellor. Aided by a rapid economic recovery masterminded by Erhard, the new democracy won general acceptance and support. This stability was further strengthened by Adenauer's policy of integration into the Western alliance, e.g. through European integration and joining NATO, which enabled the speedy gain of full sovereignty for the new state from the Western allies. Adenauer was succeeded by Erhard in 1963, but following disagreements with his coalition partner, the Liberal Party (FDP), Erhard resigned in 1966 in favour of Kiesinger, who headed a ‘grand coalition’ between the SPD and CDU.
After the 1969 elections the Liberals decided to support the SPD for the first time, which enabled its party leader, Brandt, to become Chancellor. He inaugurated a new, conciliatory approach towards East Germany of dialogue and compromise, which henceforth became the basis of inner-German relations. This policy was maintained even by Kohl after he took over from Brandt's successor, Schmidt, in 1982, as a result of which relations with East Germany, though always fragile, improved markedly during those years. Indeed, Kohl recognized the opportunity presented by the disintegration of East Germany for German unification more clearly than most other West Germans, many of whom had abandoned the goal of reunification long before.
German political unification was completed by 1990. For the rest of the decade, the economic effects of unification continued to loom large. Transfer payments from western to eastern Germany augmented already high burdens of debt and taxation without achieving their goal of economic recovery in the eastern states. These factors contributed to sluggish economic growth and persistently high structural unemployment, especially in the eastern states. In 1998, Schröder was elected Chancellor to head the country's first coalition between Social Democrats and the Green Party.
Contemporary politics (since 1998)
Following the move of the capital from Bonn to Berlin, the red-green coalition epitomized a transformation of the FRG. It pursued social and cultural change, such as the legalization of same-sex unions. In foreign policy, the government committed troops to multinational military peacekeeping operations in Bosnia, Kosovo (1999), and Afghanistan (2002). At the same time, Schröder's refusal to participate in any Iraq War soured relations with the US, but brought him great domestic popularity. Schröder presided over a flagging economy, and from 2003 carried out far-reaching social reforms. These split his own party, forcing him to call an early election in 2005, which he lost. The new government was formed by a grand coalition between the CDU and the SPD, under the Chancellorship of Merkel. The economy finally began to improve, with Merkel becoming a respected stateswoman in foreign policy and within the EU.
Lynching in the United States
Lynching, the practice of killing people by extrajudicial mob action, occurred in the United States chiefly from the late 18th century through the 1960s. Lynchings took place most frequently in the Southern United States from 1890 to the 1920s, with a peak in the annual toll in 1892. However, lynchings were also very common in the Old West.
It is associated with the re-imposition of white supremacy in the South after the Civil War. The granting of U.S. Constitutional rights to freedmen in the Reconstruction era (1865–77) aroused anxieties among white Southerners, who came to blame African Americans for their own wartime hardship, economic loss, and forfeiture of social privilege. Black Americans, and whites active in the pursuit of civil rights, were sometimes lynched in the South during Reconstruction. Lynchings reached a peak in the late 19th and early 20th centuries, when Southern states changed their constitutions and electoral rules to disfranchise blacks and, having regained political power, enacted a series of segregation and Jim Crow laws to reestablish white supremacy. Notable lynchings of civil rights workers during the 1960s in Mississippi contributed to galvanizing public support for the Civil Rights Movement and civil rights legislation.
The Tuskegee Institute has recorded that 3,446 blacks and 1,297 whites were lynched between 1882 and 1968. Southern states created new constitutions between 1890 and 1910, with provisions that effectively disfranchised most blacks, as well as many poor whites. People who did not vote were excluded from serving on juries, and most blacks were shut out of the official political system.
African Americans mounted resistance to lynchings in numerous ways. Intellectuals and journalists encouraged public education, actively protesting and lobbying against lynch mob violence and government complicity in that violence. The National Association for the Advancement of Colored People (NAACP), as well as numerous other organizations, organized support from white and black Americans alike and conducted a national campaign to get a federal anti-lynching law passed; in 1920 the Republican Party promised at its national convention to support passage of such a law. In 1921 Leonidas C. Dyer sponsored an anti-lynching bill; it was passed in January 1922 in the United States House of Representatives, but a Senate filibuster by Southern white Democrats defeated it in December 1922. With the NAACP, Representative Dyer spoke across the country in support of his bill in 1923, but Southern Democrats again filibustered it in the Senate. He tried once more but was again unsuccessful.
Women's clubs, including African-American clubs and groups such as the Association of Southern Women for the Prevention of Lynching, raised funds to support the work of public campaigns, including anti-lynching plays. Their petition drives, letter campaigns, meetings and demonstrations helped to highlight the issues and combat lynching. From 1882 to 1968, "...nearly 200 anti-lynching bills were introduced in Congress, and three passed the House. Seven presidents between 1890 and 1952 petitioned Congress to pass a federal law." No bill was approved by the Senate because of the powerful opposition of the Southern Democratic voting bloc. In the Great Migration, extending in two waves from 1910 to 1970, 6.5 million African Americans left the South, primarily for northern and mid-western cities, for jobs and to escape the risk of lynchings.
Name origin
The term "Lynch's Law" – subsequently "lynch law" and "lynching" – apparently originated during the American Revolution when Charles Lynch, a Virginia justice of the peace, ordered extralegal punishment for Loyalists. In the South, members of the abolitionist movement and other people opposing slavery were also targets of lynch mob violence before the Civil War.
Social characteristics
One motive for lynchings, particularly in the South, was the enforcement of social conventions – punishing perceived violations of customs, later institutionalized as Jim Crow laws, mandating segregation of whites and blacks.
Financial gain and the ability to establish political and economic control provided another motive. For example, after the lynching of an African-American farmer or an immigrant merchant, the victim's property would often become available to whites. In much of the Deep South, lynchings peaked in the late 19th and early 20th centuries, as white racists turned to terrorism to dissuade blacks from voting. In the Mississippi Delta, lynchings of blacks increased beginning in the late 19th century, as white planters tried to control former slaves who had become landowners or sharecroppers. Lynchings were more frequent at the end of the year, when accounts had to be settled.
Lynchings would also occur in frontier areas where legal recourse was distant. In the West, cattle barons took the law into their own hands by hanging those whom they perceived as cattle and horse thieves.
African-American journalist and anti-lynching crusader Ida B. Wells wrote in the 1890s that black lynching victims were accused of rape or attempted rape only about one-third of the time. The most prevalent accusation was murder or attempted murder, followed by a list of infractions that included verbal and physical aggression, spirited business competition and independence of mind. White lynch mobs formed to restore the perceived social order. Lynch mob "policing" usually led to murder of the victims by white mobs. Law-enforcement authorities sometimes participated directly or held suspects in jail until a mob formed to carry out the murder.
Frontier and Civil War
There is much debate over the violent history of lynchings on the frontier, obscured by the mythology of the American Old West. In unorganized territories or sparsely settled states, security was often provided only by a U.S. Marshal who might, despite the appointment of deputies, be hours, or even days, away by horseback.
Lynchings in the Old West were often carried out against accused criminals in custody. Lynching did not so much substitute for an absent legal system as provide an alternative system that favored a particular social class or racial group. Historian Michael J. Pfeifer writes, "Contrary to the popular understanding, early territorial lynching did not flow from an absence or distance of law enforcement but rather from the social instability of early communities and their contest for property, status, and the definition of social order."
The San Francisco Vigilance Movement, for example, has traditionally been portrayed as a positive response to government corruption and rampant crime, though revisionists have argued that it created more lawlessness than it eliminated. It also had a strongly nativist tinge, initially focused against the Irish and later evolving into mob violence against Chinese and Mexican immigrants. In 1871, at least 18 Chinese-Americans were killed by a mob rampaging through Old Chinatown in Los Angeles, after a white businessman was inadvertently caught in the crossfire of a tong battle.
During the California Gold Rush, at least 25,000 Mexicans had been longtime residents of California. The Treaty of Guadalupe Hidalgo (1848) expanded American territory by one-third. To settle the war, Mexico ceded all or parts of Arizona, California, Colorado, Kansas, New Mexico, Nevada, Oklahoma, Texas, Utah and Wyoming to the United States. In 1850, California became a state within the United States.
Many of the Mexicans who were native to what would become a state within the United States were experienced miners and had had great success mining gold in California. Their success aroused animosity by white prospectors who intimidated Mexican miners with the threat of violence and committed violence against some. Between 1848 and 1860, at least 163 Mexicans were lynched in California alone. One particularly infamous lynching occurred on July 5, 1851, when a Mexican woman named Juana Loaiza was lynched by a mob in Downieville, California. She was accused of killing a white man who had attempted to assault her after breaking into her home.
Another well-documented episode in the history of the American West is the Johnson County War, a dispute over land use in Wyoming in the 1890s. Large-scale ranchers, with the complicity of local and federal Republican politicians, hired mercenaries and assassins to lynch the small ranchers, mostly Democrats, who were their economic competitors and whom they portrayed as "cattle rustlers".
During the Civil War, Southern Home Guard units sometimes lynched white Southerners whom they suspected of being Unionists or deserters. One example of this was the hanging of Methodist minister Bill Sketoe in the southern Alabama town of Newton in December 1864. Other (fictional) examples of extrajudicial murder are portrayed in Charles Frazier's novel Cold Mountain.
Reconstruction (1865–1877)
The first heavy period of violence in the South was between 1868 and 1871. White Democrats attacked black and white Republicans. This was less the result of the mob violence characteristic of later lynchings, however, than of insurgent secret vigilante actions by groups such as the Ku Klux Klan. To prevent ratification of new constitutions formed during Reconstruction, the opposition used various means to harass potential voters. Terrorist attacks culminated around the 1868 elections, when insurgents systematically murdered about 1,300 voters across southern states ranging from South Carolina to Arkansas.
After this partisan political violence had ended, lynchings in the South focused more on race than on partisan politics. They could be seen as a latter-day expression of the slave patrols, the bands of poor whites who policed the slaves and pursued escapees. The lynchers sometimes murdered their victims but sometimes whipped them to remind them of their former status as slaves. White terrorists often made nighttime raids of African-American homes in order to confiscate firearms. Lynchings to prevent freedmen and their allies from voting and bearing arms can be seen as extralegal ways of enforcing the Black Codes and the previous system of social dominance. The Freedmen's Bureau and the later Reconstruction Amendments had overridden the Slave Codes.
Although some states took action against the Klan, the South needed federal help to deal with the escalating violence. President Ulysses S. Grant and Congress passed the Force Acts of 1870 and the Civil Rights Act of 1871, also known as the Ku Klux Klan Act, because it was passed to suppress the vigilante violence of the Klan. This enabled federal prosecution of crimes committed by groups such as the Ku Klux Klan, as well as use of federal troops to control violence. The administration began holding grand juries and prosecuting Klan members. In addition, it used martial law in some counties in South Carolina, where the Klan was the strongest. Under attack, the Klan dissipated. Vigorous federal action and the disappearance of the Klan had a strong effect in reducing the numbers of murders.
From the mid-1870s on in the Deep South, violence rose. In Mississippi, Louisiana, the Carolinas and Florida especially, the Democratic Party relied on paramilitary "White Line" groups such as the White Camelia to terrorize, intimidate and assassinate African American and white Republicans in an organized drive to regain power. In Mississippi it was the Red Shirts, and in Louisiana the White League, that carried out the goals of the Democratic Party to suppress black voting. Insurgents targeted politically active African Americans and unleashed violence in general community intimidation. Grant's desire to keep Ohio in the Republican column and his attorney general's maneuvering led to a failure to support the Mississippi governor with Federal troops. The campaign of terror worked. In Yazoo County, for instance, with a Negro population of 12,000, only seven votes were cast for Republicans. In 1875, Democrats swept into power in the state legislature.
Once Democrats regained power in Mississippi, Democrats in other states adopted the Mississippi Plan to control the election of 1876, using informal armed militias to assassinate political leaders, hunt down community members, intimidate and turn away voters, effectively suppressing African American suffrage and civil rights. In state after state, Democrats swept back to power. From 1868 to 1876, most years had 50–100 lynchings.
White Democrats passed laws and constitutional amendments making voter registration more complicated, to further exclude black voters from the polls.
Disfranchisement (1877–1917)
Following white Democrats' regaining political power in the late 1870s, legislators gradually increased restrictions on voting, chiefly through statute. From 1890 to 1908, most of the Southern states, starting with Mississippi, created new constitutions with further provisions: poll taxes, literacy and understanding tests, and increased residency requirements, that effectively disfranchised most blacks and many poor whites. Forcing them off voter registration lists also prevented them from serving on juries, whose members were limited to voters. Although challenges to such constitutions made their way to the Supreme Court in Williams v. Mississippi (1898) and Giles v. Harris (1903), the states' provisions were upheld.
Most lynchings from the late 19th through the early 20th century were of African Americans in the South, with other victims including white immigrants, and, in the southwest, Latinos. Of the 468 victims in Texas between 1885 and 1942, 339 were black, 77 white, 53 Hispanic, and 1 Indian. They reflected the tensions of labor and social changes, as the whites imposed Jim Crow rules, legal segregation and white supremacy. The lynchings were also an indicator of long economic stress due to falling cotton prices through much of the 19th century, as well as financial depression in the 1890s. In the Mississippi bottomlands, for instance, lynchings rose when crops and accounts were supposed to be settled.
The late 19th- and early 20th-century history of the Mississippi Delta showed both frontier influence and actions directed at repressing African Americans. After the Civil War, 90% of the Delta was still undeveloped. Both whites and African Americans migrated there for a chance to buy land in the backcountry. It was frontier wilderness, heavily forested and without roads for years. Before the start of the 20th century, lynchings often took the form of frontier justice directed at transient workers as well as residents. Thousands of workers were brought in to do lumbering and work on levees. Whites were lynched at a rate 35.5% higher than their proportion in the population, most often accused of crimes against property (chiefly theft). During the Delta's frontier era, blacks were lynched at a rate lower than their proportion in the population, unlike the rest of the South. They were most often accused of murder or attempted murder in half the cases, and rape in 15%.
There was a clear seasonal pattern to the lynchings, with the colder months being the deadliest. As noted, cotton prices fell during the 1880s and 1890s, increasing economic pressures. "From September through December, the cotton was picked, debts were revealed, and profits (or losses) realized... Whether concluding old contracts or discussing new arrangements, [landlords and tenants] frequently came into conflict in these months and sometimes fell to blows." During the winter, murder was most cited as a cause for lynching. After 1901, as economics shifted and more blacks became renters and sharecroppers in the Delta, with few exceptions, only African-Americans were lynched. The frequency increased from 1901 to 1908, after African-Americans were disenfranchised. "In the twentieth century Delta vigilantism finally became predictably joined to white supremacy."
After their increased immigration to the US in the late 19th century, Italian Americans also became lynching targets, chiefly in the South, where they were recruited for laboring jobs. On March 14, 1891, eleven Italian Americans were lynched in New Orleans after a jury acquitted them in the murder of David Hennessy, an ethnic Irish New Orleans police chief. The eleven were falsely accused of being associated with the Mafia. This incident was one of the largest mass lynchings in U.S. history. A total of twenty Italians were lynched in the 1890s. Although most lynchings of Italian Americans occurred in the South, Italians had not immigrated there in great numbers. Isolated lynchings of Italians also occurred in New York, Pennsylvania, and Colorado.
Particularly in the West, Chinese immigrants, East Indians, Native Americans and Mexicans were also lynching victims. The lynching of Mexicans and Mexican Americans in the Southwest was long overlooked in American history, attention being chiefly focused on the South. The Tuskegee Institute, which kept the most complete records, noted the victims as simply black or white. Mexican, Chinese, and Native American lynching victims were recorded as white.
Researchers estimate 597 Mexicans were lynched between 1848 and 1928. Mexicans were lynched at a rate of 27.4 per 100,000 of population between 1880 and 1930. This statistic was second only to that of the African American community, which endured an average of 37.1 per 100,000 of population during that period. Between 1848 and 1879, Mexicans were lynched at an unprecedented rate of 473 per 100,000 of population.
Henry Smith, a troubled ex-slave accused of murdering a policeman's daughter, was one of the most famous lynched African-Americans. He was lynched at Paris, Texas, in 1893 for killing Myrtle Vance, the three-year-old daughter of a Texas policeman, after the policeman had assaulted Smith. Smith was not tried in a court of law. A large crowd followed the lynching, as was common then, in the style of public executions. Henry Smith was fastened to a wooden platform, tortured for 50 minutes by red-hot iron brands, then finally burned alive while over 10,000 spectators cheered.
Enforcing Jim Crow
After 1876, the frequency of lynching decreased somewhat as white Democrats had regained political power throughout the South. The threat of lynching was used to terrorize freedmen and whites alike to maintain the re-asserted dominance of whites. Southern Republicans in Congress sought to protect black voting rights by using Federal troops for enforcement. A congressional deal to elect Ohio Republican Rutherford B. Hayes as President in 1876 (in spite of his losing the popular vote to New York Democrat Samuel J. Tilden) included a pledge to end Reconstruction in the South. The Redeemers, white Democrats who often included members of paramilitary groups such as the White Cappers, White Camellia, White League and Red Shirts, had used terrorist violence and assassinations to reduce the political power that black and white Republicans had gained during Reconstruction.
Lynchings both supported the power reversal and were public demonstrations of white power. Racial tensions had an economic base. In attempting to reconstruct the plantation economy, planters were anxious to control labor. In addition, agricultural depression was widespread and the price of cotton kept falling after the Civil War into the 1890s. There was a labor shortage in many parts of the Deep South, especially in the developing Mississippi Delta. Southern attempts to encourage immigrant labor were unsuccessful, as immigrants would quickly leave field labor. Lynchings erupted when farmers tried to terrorize laborers, especially when the time came to settle accounts and they were unable to pay wages but still tried to keep laborers from leaving.
More than 85 percent of the estimated 5,000 lynchings in the post-Civil War period occurred in the Southern states. 1892 was a peak year when 161 African Americans were lynched. The creation of the Jim Crow laws, beginning in the 1890s, completed the revival of white supremacy in the South. Terror and lynching were used to enforce both these formal laws and a variety of unwritten rules of conduct meant to assert white domination. In most years from 1889 to 1923, there were 50–100 lynchings annually across the South.
The ideology behind lynching, directly connected with denial of political and social equality, was stated forthrightly by Benjamin Tillman, a South Carolina governor and senator, speaking on the floor of the U.S. Senate in 1900:
We of the South have never recognized the right of the negro to govern white men, and we never will. We have never believed him to be the equal of the white man, and we will not submit to his gratifying his lust on our wives and daughters without lynching him.
Often victims were lynched by a small group of white vigilantes late at night. Sometimes, however, lynchings became mass spectacles with a circus-like atmosphere because they were intended to emphasize majority power. Children often attended these public lynchings. A large lynching might be announced beforehand in the newspaper. There were cases in which a lynching was timed so that a newspaper reporter could make his deadline. Photographers sold photos for postcards to make extra money. The event was publicized so that the intended audience, African Americans and whites who might challenge the society, was warned to stay in their places.
Fewer than one percent of lynch mob participants were ever convicted by local courts. By the late 19th century, trial juries in most of the southern United States were all white because African Americans had been disfranchised, and only registered voters could serve as jurors. Often juries never let the matter go past the inquest.
Such cases happened in the North as well. In 1892, a police officer in Port Jervis, New York, tried to stop the lynching of a black man who had been wrongfully accused of assaulting a white woman. The mob responded by putting the noose around the officer's neck as a way of scaring him. Although at the inquest the officer identified eight people who had participated in the lynching, including the former chief of police, the jury determined that the murder had been carried out "by person or persons unknown."
In Duluth, Minnesota, on June 15, 1920, three young African American traveling circus workers were lynched after having been jailed and accused of having raped a white woman. A physician's examination subsequently found no evidence of rape or assault. The alleged "motive" and action by a mob were consistent with the "community policing" model. A book titled The Lynchings in Duluth documented the events.
Although the rhetoric surrounding lynchings included justifications about protecting white women, the actions basically erupted out of attempts to maintain domination in a rapidly changing society and fears of social change. Victims were the scapegoats for people's attempts to control agriculture, labor and education, as well as disasters such as the boll weevil.
According to an article, April 2, 2002, in Time:
- "There were lynchings in the Midwestern and Western states, mostly of Asians, Mexicans, and Native Americans. But it was in the South that lynching evolved into a semiofficial institution of racial terror against blacks. All across the former Confederacy, blacks who were suspected of crimes against whites—or even "offenses" no greater than failing to step aside for a white man's car or protesting a lynching—were tortured, hanged and burned to death by the thousands. In a prefatory essay in Without Sanctuary, historian Leon F. Litwack writes that between 1882 and 1968, at least 4,742 African Americans were murdered that way.
At the start of the 20th century in the United States, lynching was photographic sport. People sent picture postcards of lynchings they had witnessed. The practice was so base, a writer for Time noted in 2000, "Even the Nazis did not stoop to selling souvenirs of Auschwitz, but lynching scenes became a burgeoning subdepartment of the postcard industry. By 1908, the trade had grown so large, and the practice of sending postcards featuring the victims of mob murderers had become so repugnant, that the U.S. Postmaster General banned the cards from the mails."
- "The photographs stretch our credulity, even numb our minds and senses to the full extent of the horror, but they must be examined if we are to understand how normal men and women could live with, participate in, and defend such atrocities, even reinterpret them so they would not see themselves or be perceived as less than civilized. The men and women who tortured, dismembered, and murdered in this fashion understood perfectly well what they were doing and thought of themselves as perfectly normal human beings. Few had any ethical qualms about their actions. This was not the outburst of crazed men or uncontrolled barbarians but the triumph of a belief system that defined one people as less human than another. For the men and women who composed these mobs, as for those who remained silent and indifferent or who provided scholarly or scientific explanations, this was the highest idealism in the service of their race. One has only to view the self-satisfied expressions on their faces as they posed beneath black people hanging from a rope or next to the charred remains of a Negro who had been burned to death. What is most disturbing about these scenes is the discovery that the perpetrators of the crimes were ordinary people, not so different from ourselves – merchants, farmers, laborers, machine operators, teachers, doctors, lawyers, policemen, students; they were family men and women, good churchgoing folk who came to believe that keeping black people in their place was nothing less than pest control, a way of combating an epidemic or virus that if not checked would be detrimental to the health and security of the community."
African Americans emerged from the Civil War with the political experience and stature to resist attacks, but disenfranchisement and the decrease in their civil rights restricted their power to do much more than react after the fact by compiling statistics and publicizing the atrocities. From the early 1880s, the Chicago Tribune reprinted accounts of lynchings from the newspapers with which it exchanged copy, and published annual statistics. These provided the main source for the compilations by the Tuskegee Institute to document lynchings, a practice it continued until 1968.
In 1892 journalist Ida B. Wells-Barnett was shocked when three friends in Memphis, Tennessee were lynched because their grocery store competed successfully with a white-owned store. Outraged, Wells-Barnett began a global anti-lynching campaign that raised awareness of the social injustice. As a result of her efforts, black women in the US became active in the anti-lynching crusade, often in the form of clubs which raised money to publicize the abuses. When the National Association for the Advancement of Colored People (NAACP) was formed in 1909, Wells became part of its multi-racial leadership and continued to be active against lynching.
In 1903 leading writer Charles Waddell Chesnutt published his article "The Disfranchisement of the Negro", detailing civil rights abuses and the need for change in the South. Numerous writers appealed to the literate public.
In 1904 Mary Church Terrell, the first president of the National Association of Colored Women, published an article in the influential magazine North American Review to respond to Southerner Thomas Nelson Page. She took apart and refuted his attempted justification of lynching as a response to assaults on white women. Terrell showed how apologists like Page had tried to rationalize what were violent mob actions that were seldom based on true assaults.
Great Migration
In what has been viewed as multiple acts of resistance, tens of thousands of African Americans left the South annually, especially from 1910–1940, seeking jobs and better lives in industrial cities of the North and Midwest, in a movement that was called the Great Migration. More than 1.5 million people went North during this phase of the Great Migration. They refused to live under the rules of segregation and continual threat of violence, and many secured better educations and futures for themselves and their children, while adapting to the drastically different requirements of industrial cities. Northern industries such as the Pennsylvania Railroad and others, and stockyards and meatpacking plants in Chicago and Omaha, vigorously recruited southern workers. For instance, 10,000 men were hired from Florida and Georgia by 1923 by the Pennsylvania Railroad to work at their expanding yards and tracks.
Federal action limited by Solid South
President Theodore Roosevelt made public statements against lynching in 1903, following George White's murder in Delaware, and in his sixth annual State of the Union message on December 4, 1906. When Roosevelt suggested that lynching was taking place in the Philippines, southern senators (all white Democrats) demonstrated power by a filibuster in 1902 during review of the "Philippines Bill". In 1903 Roosevelt refrained from commenting on lynching during his Southern political campaigns.
Despite concerns expressed by some northern Congressmen, Congress had not moved quickly enough to strip the South of seats as the states disfranchised black voters. The result was a "Solid South" with the number of representatives (apportionment) based on its total population, but with only whites represented in Congress, essentially doubling the power of white southern Democrats.
In 1903, Roosevelt wrote to Governor Durbin of Indiana:
My Dear Governor Durbin, ... permit me to thank you as an American citizen for the admirable way in which you have vindicated the majesty of the law by your recent action in reference to lynching... All thoughtful men... must feel the gravest alarm over the growth of lynching in this country, and especially over the peculiarly hideous forms so often taken by mob violence when colored men are the victims – on which occasions the mob seems to lay more weight, not on the crime but on the color of the criminal.... There are certain hideous sights which when once seen can never be wholly erased from the mental retina. The mere fact of having seen them implies degradation.... Whoever in any part of our country has ever taken part in lawlessly putting to death a criminal by the dreadful torture of fire must forever after have the awful spectacle of his own handiwork seared into his brain and soul. He can never again be the same man.
Durbin had successfully used the National Guard to disperse the lynchers. Durbin publicly declared that the accused murderer—an African American man—was entitled to a fair trial. Roosevelt's efforts cost him political support among white people, especially in the South. In addition, threats against him increased so that the Secret Service increased the size of his detail.
World War I to World War II
African-American writers used their talents in numerous ways to publicize and protest against lynching. In 1914, Angelina Weld Grimké had already written her play Rachel to address racial violence. It was produced in 1916. In 1915, W. E. B. Du Bois, noted scholar and head of the recently formed NAACP, called for more black-authored plays.
African-American women playwrights were strong in responding. They wrote ten of the fourteen anti-lynching plays produced between 1916 and 1935. The NAACP set up a Drama Committee to encourage such work. In addition, Howard University, the leading historically black college, established a theater department in 1920 to encourage African-American dramatists. Starting in 1924, the NAACP's major publications Crisis and Opportunity sponsored contests to encourage black literary production.
New Klan
The Klan revived and grew because of white people's anxieties and fear over the rapid pace of change. Both white and black rural migrants were moving into rapidly industrializing cities of the South. Many Southern white and African-American migrants also moved north in the Great Migration, adding to greatly increased immigration from southern and eastern Europe in major industrial cities of the Midwest and West. The Klan grew rapidly and became most successful and strongest in those cities that had a rapid pace of growth from 1910 to 1930, such as Atlanta, Birmingham, Dallas, Detroit, Indianapolis, Chicago, Portland, Oregon; and Denver, Colorado. It reached a peak of membership and influence about 1925. In some cities, leaders' actions to publish names of Klan members provided enough publicity to sharply reduce membership.
The 1915 lynching near Atlanta, Georgia, of factory manager Leo Frank, an American Jew, was especially notorious. Initially, sensationalist newspaper accounts stirred up anger about Frank, who was found guilty of the murder of Mary Phagan, a girl employed by his factory, after a flawed trial in Georgia. His appeals failed. Supreme Court justice Oliver Wendell Holmes's dissent condemned the intimidation of the jury as failing to provide due process of law. After the governor commuted Frank's sentence to life imprisonment, a mob calling itself the Knights of Mary Phagan kidnapped Frank from the prison farm at Milledgeville, and lynched him.
Georgia politician and publisher Tom Watson used sensational coverage of the Frank trial to create power for himself. By playing on people's anxieties, he also built support for revival of the Ku Klux Klan. The new Klan was inaugurated in 1915 at a mountaintop meeting near Atlanta, and was composed mostly of members of the Knights of Mary Phagan. D. W. Griffith's 1915 film The Birth of a Nation glorified the original Klan and garnered much publicity.
Continuing resistance
The NAACP mounted a strong nationwide campaign of protests and public education against the movie The Birth of a Nation. As a result, some city governments prohibited release of the film. In addition, the NAACP publicized production and helped create audiences for the 1919 releases The Birth of a Race and Within Our Gates, African-American directed films that presented more positive images of blacks.
On April 1, 1918, Missouri Rep. Leonidas C. Dyer of St. Louis introduced the Dyer Anti-Lynching Bill to the House. Rep. Dyer was concerned over increased lynching and mob violence, and over the disregard for the "rule of law" in the South. The bill made lynching a federal crime, with those who participated in a lynching to be prosecuted by the federal government.
In 1920 the black community succeeded in getting its most important priority into the Republican Party's platform at the National Convention: support for an anti-lynching bill. The black community supported Warren G. Harding in that election, but was disappointed as his administration moved slowly on a bill.
Dyer revised his bill and re-introduced it to the House in 1921. It passed the House on January 22, 1922, due to "insistent country-wide demand", and was favorably reported out by the Senate Judiciary Committee. Action in the Senate was delayed, and ultimately the Democratic Solid South filibuster defeated the bill in the Senate in December. In 1923, Dyer went on a midwestern and western state tour promoting the anti-lynching bill; he praised the NAACP's work for continuing to publicize lynching in the South and for supporting the federal bill. Dyer's anti-lynching motto was "We have just begun to fight," and he helped generate additional national support. His bill was twice more defeated in the Senate by Southern Democratic filibuster. The Republicans were unable to pass a bill in the 1920s.
African-American resistance to lynching carried substantial risks. In 1921 in Tulsa, Oklahoma, a group of African American citizens attempted to stop a lynch mob from taking 19-year-old assault suspect Dick Rowland out of jail. In a scuffle between a white man and an armed African-American veteran, the white man was killed. Whites retaliated by rioting, during which they burned 1,256 homes and as many as 200 businesses in the segregated Greenwood district, destroying what had been a thriving area. Confirmed dead were 39 people: 26 African Americans and 13 whites. Recent investigations suggest the number of African-American deaths may have been much higher. Rowland was saved, however, and was later exonerated.
The growing networks of African-American women's club groups were instrumental in raising funds to support the NAACP public education and lobbying campaigns. They also built community organizations. In 1922 Mary Talbert headed the Anti-Lynching Crusade, to create an integrated women's movement against lynching. It was affiliated with the NAACP, which mounted a multi-faceted campaign. For years the NAACP used petition drives, letters to newspapers, articles, posters, lobbying Congress, and marches to protest the abuses in the South and keep the issue before the public.
While the second KKK grew rapidly in cities undergoing major change and achieved some political power, many state and city leaders, including white religious leaders such as Reinhold Niebuhr in Detroit, acted strongly and spoke out publicly against the organization. Some anti-Klan groups published Klan members' names, which quickly sapped the energy from the Klan's efforts. As a result, in most areas, after 1925 KKK membership and organizations rapidly declined. Cities passed laws against the wearing of masks, and otherwise acted against the KKK.
In 1930, Southern white women responded in large numbers to the leadership of Jessie Daniel Ames in forming the Association of Southern Women for the Prevention of Lynching. She and her co-founders obtained the signatures of 40,000 women to their pledge against lynching and for a change in the South. The pledge included the statement:
In light of the facts we dare no longer to... allow those bent upon personal revenge and savagery to commit acts of violence and lawlessness in the name of women.
Despite physical threats and hostile opposition, the women leaders persisted with petition drives, letter campaigns, meetings and demonstrations to highlight the issues. By the 1930s, the number of lynchings had dropped to about ten per year in Southern states.
In the 1930s, communist organizations, including a legal defense organization called the International Labor Defense (ILD), organized support to stop lynching. (See The Communist Party USA and African-Americans). The ILD defended the Scottsboro Boys, as well as three black men accused of rape in Tuscaloosa in 1933. In the Tuscaloosa case, two defendants were lynched under circumstances that suggested police complicity. The ILD lawyers narrowly escaped lynching. Many Southerners resented them for their perceived "interference" in local affairs. In a remark to an investigator, a white Tuscaloosan said, "For New York Jews to butt in and spread communistic ideas is too much."
Federal action and southern resistance
Anti-lynching advocates such as Mary McLeod Bethune and Walter Francis White campaigned for presidential candidate Franklin D. Roosevelt in 1932. They hoped he would lend public support to their efforts against lynching. Senators Robert F. Wagner and Edward P. Costigan drafted the Costigan-Wagner bill in 1934 to require local authorities to protect prisoners from lynch mobs. Like the Dyer bill, it made lynching a Federal crime in order to take it out of state administration.
Southern Senators continued to hold a hammerlock on Congress. Because of the Southern Democrats' disfranchisement of African Americans in Southern states at the start of the 20th century, Southern whites for decades had nearly double the representation in Congress that their voting population alone would have warranted. Southern states had Congressional representation based on total population, but essentially only whites could vote and only their issues were supported. Due to seniority achieved through one-party Democratic rule in their region, Southern Democrats controlled many important committees in both houses. Southern Democrats consistently opposed any legislation that would put lynching under Federal oversight. As a result, Southern white Democrats were a formidable power in Congress until the 1960s.
In the 1930s, virtually all Southern senators blocked the proposed Wagner-Costigan bill. Southern senators used a filibuster to prevent a vote on the bill. Some Republican senators, such as the conservative William Borah from Idaho, opposed the bill for constitutional reasons. He felt it encroached on state sovereignty and, by the 1930s, thought that social conditions had changed so that the bill was less needed. He spoke at length in opposition to the bill in 1935 and 1938. There were 15 lynchings of blacks in 1934 and 21 in 1935, but that number fell to eight in 1936, and to two in 1939.
A lynching in Fort Lauderdale, Florida, changed the political climate in Washington. On July 19, 1935, Rubin Stacy, a homeless African-American tenant farmer, knocked on doors begging for food. After resident complaints, deputies took Stacy into custody. While he was in custody, a lynch mob took Stacy from the deputies and murdered him. Although the faces of his murderers could be seen in a photo taken at the lynching site, the state did not prosecute the murder of Rubin Stacy.
Stacy's murder galvanized anti-lynching activists, but President Roosevelt did not support the federal anti-lynching bill. He feared that support would cost him Southern votes in the 1936 election. He believed that he could accomplish more for more people by getting re-elected.
World War II to present
Second Great Migration
The industrial buildup to World War II acted as a "pull" factor in the Second Great Migration, which started in 1940 and lasted until 1970. Altogether in the first half of the 20th century, 6.5 million African Americans migrated from the South to leave lynchings and segregation behind, improve their lives, and get better educations for their children. Unlike the first wave, composed chiefly of rural farm workers, the second wave included more educated workers and their families who were already living in southern cities and towns. In this migration, many moved west from Louisiana, Mississippi, and Texas to California, in addition to northern and midwestern cities, as defense industries recruited thousands to higher-paying, skilled jobs. They settled in Los Angeles, San Francisco, and Oakland.
Federal action
In 1946, the Civil Rights Section of the Justice Department gained its first conviction under federal civil rights laws against a lyncher. Florida constable Tom Crews was sentenced to a $1,000 fine and one year in prison for civil rights violations in the killing of an African-American farm worker.
In 1946, a mob of white men shot and killed two young African-American couples near Moore's Ford Bridge in Walton County, Georgia, 60 miles east of Atlanta. This lynching of four young sharecroppers, one a World War II veteran, shocked the nation. The attack was a key factor in President Harry S. Truman's making civil rights a priority of his administration. Although the Federal Bureau of Investigation (FBI) investigated the crime, it was unable to prosecute. It was the last documented lynching of so many people at one time.
In 1947, the Truman Administration published a report titled To Secure These Rights, which advocated making lynching a federal crime, abolishing poll taxes, and other civil rights reforms. The Southern Democratic bloc of senators and congressmen continued to obstruct attempts at federal legislation.
In the 1940s, the Klan openly criticized Truman for his efforts to promote civil rights. Later historians documented that Truman had briefly made an attempt to join the Klan as a young man in 1924, when it was near its peak of social influence in promoting itself as a fraternal organization. When a Klan officer demanded that Truman pledge not to hire any Catholics if he was reelected as county judge, Truman refused. He personally knew the worth of Catholics from serving with them during World War I. His membership fee was returned and he never joined the KKK.
Lynching and the Cold War
With the beginning of the Cold War after World War II, the Soviet Union criticized the United States for the frequency of lynchings of black people. In a meeting with President Harry Truman in 1946, Paul Robeson urged him to take action against lynching. In 1951, Paul Robeson and the Civil Rights Congress made a presentation entitled "We Charge Genocide" to the United Nations. They argued that the US government was guilty of genocide under Article II of the UN Genocide Convention because it failed to act against lynchings. The UN took no action.
In the postwar years of the Cold War, the FBI was worried more about possible Communist connections among anti-lynching groups than about the lynching crimes. For instance, the FBI branded Albert Einstein a communist sympathizer for joining Robeson's American Crusade Against Lynching. J. Edgar Hoover, head of the FBI for decades, was particularly fearful of the effects of Communism in the US. He directed more attention to investigations of civil rights groups for communist connections than to Ku Klux Klan activities against the groups' members and other innocent blacks.
Civil Rights Movement
By the 1950s, the Civil Rights Movement was gaining momentum. Membership in the NAACP increased in states across the country. In 1954 the NAACP achieved a significant victory when the US Supreme Court ruled that segregated public education was unconstitutional. A 1955 lynching that sparked public outrage about injustice was that of Emmett Till, a 14-year-old boy from Chicago. Spending the summer with relatives in Money, Mississippi, Till was killed for allegedly having wolf-whistled at a white woman. Till had been badly beaten, one of his eyes was gouged out, and he was shot in the head before being thrown into the Tallahatchie River, his body weighed down with a 70-pound (32 kg) cotton gin fan tied around his neck with barbed wire. His mother insisted on a public funeral with an open casket, to show people how badly Till's body had been disfigured. News photographs circulated around the country and drew intense public reaction. People across the nation were horrified that a boy could have been killed over such an incident. The state of Mississippi tried two defendants, but they were speedily acquitted.
In the 1960s the Civil Rights Movement attracted students to the South from all over the country to work on voter registration and other issues. The intervention of people from outside the communities and threat of social change aroused fear and resentment among many whites. In June 1964, three civil rights workers disappeared in Neshoba County, Mississippi. They had been investigating the arson of a black church being used as a "Freedom School". Six weeks later, their bodies were found in a partially constructed dam near Philadelphia, Mississippi. James Chaney of Meridian, Mississippi, and Michael Schwerner and Andrew Goodman of New York had been members of the Congress of Racial Equality. They had been dedicated to non-violent direct action against racial discrimination.
The US prosecuted eighteen men for a Ku Klux Klan conspiracy to deprive the victims of their civil rights under 19th-century Federal law, in order to prosecute the crime in Federal court. Seven men were convicted but received light sentences, two men were released because of a deadlocked jury, and the remainder were acquitted. In 2005, 80-year-old Edgar Ray Killen, one of the men who had earlier gone free, was retried by the state of Mississippi, convicted of three counts of manslaughter in a new trial, and sentenced to 60 years in prison.
Because of J. Edgar Hoover's and others' hostility to the Civil Rights Movement, agents of the FBI resorted to outright lying to smear civil rights workers and other opponents of lynching. For example, the FBI disseminated false information in the press about the lynching victim Viola Liuzzo, who was murdered in 1965 in Alabama. The FBI said Liuzzo had been a member of the Communist Party USA, had abandoned her five children, and was involved in sexual relationships with African Americans in the movement.
After the Civil Rights Movement
From 1882 to 1968, "...nearly 200 anti-lynching bills were introduced in Congress, and three passed the House. Seven presidents between 1890 and 1952 petitioned Congress to pass a federal law." No bill was approved by the Senate because of the powerful opposition of the Southern Democratic voting bloc.
Although lynchings have become rare following the civil rights movement and changing social mores, some have occurred. In 1981, two KKK members in Alabama randomly selected a 19-year-old black man, Michael Donald, and murdered him, to retaliate for a jury's acquittal of a black man accused of murdering a police officer. The Klansmen were caught, prosecuted, and convicted. A $7 million judgment in a civil suit against the Klan bankrupted the local subgroup, the United Klans of America.
In 1998, Shawn Allen Berry, Lawrence Russel Brewer, and ex-convict John William King murdered James Byrd, Jr. in Jasper, Texas. Byrd was a 49-year-old father of three, who had accepted an early-morning ride home with the three men. They attacked him and dragged him to his death behind their truck. The three men dumped their victim's mutilated remains in the town's segregated African-American cemetery and then went to a barbecue. Local authorities immediately treated the murder as a hate crime and requested FBI assistance. The murderers (two of whom turned out to be members of a white supremacist prison gang) were caught and stood trial. Brewer and King were sentenced to death; Berry was sentenced to life in prison.
On June 13, 2005, the United States Senate formally apologized for its failure in the early 20th century, "when it was most needed", to enact a Federal anti-lynching law. Anti-lynching bills that passed the House were defeated by filibusters by powerful Southern Democratic senators. Prior to the vote, Louisiana Senator Mary Landrieu noted, "There may be no other injustice in American history for which the Senate so uniquely bears responsibility." The resolution was passed on a voice vote with 80 senators cosponsoring. The resolution expressed "the deepest sympathies and most solemn regrets of the Senate to the descendants of victims of lynching, the ancestors of whom were deprived of life, human dignity and the constitutional protections accorded all citizens of the United States".
There are three primary sources for lynching statistics, none of which cover the entire time period of lynching in the United States. Before 1882, no reliable statistics are available. In 1882, the Chicago Tribune began to systematically record lynchings. Then, in 1892, Tuskegee Institute began a systematic collection and tabulation of lynching statistics, primarily from newspaper reports. Finally, in 1912, the National Association for the Advancement of Colored People started an independent record of lynchings. The numbers of lynchings from each source vary slightly, with the Tuskegee Institute's figures being considered "conservative" by some historians.
Tuskegee Institute, now Tuskegee University, has defined conditions that constitute a recognized lynching:
- "There must be legal evidence that a person was killed. That person must have met death illegally. A group of three or more persons must have participated in the killing. The group must have acted under the pretext of service to Justice, Race, or Tradition."
Tuskegee remains the single most complete source of statistics and records on this crime since 1882. As of 1959, which was the last time that their annual Lynch Report was published, a total of 4,733 persons had died as a result of lynching since 1882. To quote the report,
- "Except for 1955, when three lynchings were reported in Mississippi, none has been recorded at Tuskegee since 1951. In 1945, 1947, and 1951, only one case per year was reported. The most recent case reported by the institute as a lynching was that of Emmett Till, 14, a Negro who was beaten, shot to death, and thrown into a river at Greenwood, Mississippi on August 28, 1955... For a period of 65 years ending in 1947, at least one lynching was reported each year. The most for any year was 231 in 1892. From 1882 to 1901, lynchings averaged more than 150 a year. Since 1924, lynchings have been in a marked decline, never more than 30 cases, which occurred in 1926...."
Opponents of legislation often said lynchings prevented murder and rape. As documented by Ida B. Wells, rape charges or rumors were present in less than one-third of the lynchings; such charges were often pretexts for lynching blacks who violated Jim Crow etiquette or engaged in economic competition with whites. Other common reasons given included arson, theft, assault, and robbery; sexual transgressions (miscegenation, adultery, cohabitation); "race prejudice", "race hatred", "racial disturbance;" informing on others; "threats against whites;" and violations of the color line ("attending white girl", "proposals to white woman").
Tuskegee's method of categorizing most lynching victims as either black or white in publications and data summaries meant that the mistreatment of some minority and immigrant groups was obscured. In the West, for instance, Mexicans, Native Americans, and Chinese were more frequent targets of lynchings than African Americans, but their deaths were included among those of whites. Similarly, although Italian immigrants were the focus of violence in Louisiana when they started arriving in greater numbers, their deaths were not identified separately. In earlier years, whites who were subject to lynching were often targeted because of suspected political activities or support of freedmen, but they were generally considered members of the community in a way new immigrants were not.
Popular culture
Famous fictional treatments
- Owen Wister's The Virginian, a 1902 seminal novel that helped create the genre of Western novels in the U.S., dealt with a fictional treatment of the Johnson County War and frontier lynchings in the West.
- Angelina Weld Grimké's Rachel was the first play about the toll of racial violence in African-American families, written in 1914 and produced in 1916.
- Following the commercial and critical success of the film Birth of a Nation, African-American director and writer Oscar Micheaux responded in 1919 with the film Within Our Gates. The climax of the film is the lynching of a black family after one member of the family is wrongly accused of murder. While the film was a commercial failure at the time, it is considered historically significant and was selected for preservation in the National Film Registry.
- Regina M. Anderson's Climbing Jacob's Ladder was a play about a lynching performed by the Krigwa Players (later called the Negro Experimental Theater), a Harlem theater company.
- Lynd Ward's 1932 book Wild Pilgrimage (printed in woodblock prints, with no text) includes three prints of the lynching of several black men.
- In Irving Berlin's 1933 musical As Thousands Cheer, a ballad about lynching, "Supper Time" was introduced by Ethel Waters. Waters wrote in her 1951 autobiography, His Eye Was on the Sparrow, "if one song could tell the story of an entire race, that was it."
- Murder in Harlem (1935), by director Oscar Micheaux, was one of three films Micheaux made based on events in the Leo Frank trial. He portrayed the character analogous to Frank as guilty and set the film in New York, removing sectional conflict as one of the cultural forces in the trial. Micheaux's first version was a silent film, The Gunsaulus Mystery (1921). Lem Hawkins' Confession (1935) was also related to the Leo Frank trial.
- The film They Won't Forget (1937) was inspired by the Frank case, with the Leo Frank character portrayed as a Christian.
- In Fury (1936), the German expatriate Fritz Lang depicts a lynch mob burning down a jail in which Joe Wilson (played by Spencer Tracy) was held as a suspect in a kidnapping, a crime for which Wilson was soon after cleared. The story was modeled on a 1933 lynching in San Jose, California, which was captured on newsreel footage and in which Governor of California James Rolph refused to intervene.
- In Walter Van Tilburg Clark's 1940 novel The Ox-Bow Incident, two drifters are drawn into a posse formed to find the murderer of a local man. After suspicion centers on three innocent men, the posse lynches them, an event that deeply affects the drifters. The novel was filmed in 1943 as a wartime defense of United States values, in contrast to the characterization of Nazi Germany as mob rule.
- In Harper Lee's 1960 novel To Kill a Mockingbird, Tom Robinson, a black man wrongfully accused of rape, narrowly escapes lynching. Robinson is later killed while attempting to escape from prison, after having been wrongfully convicted. A movie was made in 1962.
- The 1988 film Mississippi Burning includes an accurate depiction of a man being lynched.
- Peter Matthiessen depicted several lynchings in his Killing Mr. Watson trilogy (first volume published in 1990).
- Vendetta, a 1999 HBO film starring Christopher Walken and directed by Nicholas Meyer, is based on events that took place in New Orleans in 1891. The acquittal of 18 Italian-American men falsely accused of the murder of police chief David Hennessy led to 11 of them being shot or hanged in one of the largest mass lynchings in American history.
"Strange Fruit"
Southern trees bear a strange fruit,
Blood on the leaves and blood at the root,
Black bodies swinging in the Southern breeze,
Strange fruit hanging from the poplar trees.
Pastoral scene of the gallant south
the bulging eyes and the twisted mouth
scent of magnolia
sweet and fresh
then the sudden smell of burning flesh
Here is a fruit
for the crows to pluck
for the rain to gather
for the wind to suck
for the sun to rot
for the tree to drop
Here is a strange
and bitter crop
Although Billie Holiday's regular label, Columbia, declined to record the song, Holiday recorded it with Commodore. The song became identified with her and was one of her most popular. It became an anthem for the anti-lynching movement and also contributed to the activism of the American civil rights movement. Strange Fruit (2002), a Public Broadcasting Service documentary about a lynching and the effects of protest songs and art, was aired on U.S. television.
For most of the history of the United States, lynching was rarely prosecuted, as the officials who would have had to prosecute were generally on the side of the mob. When it was prosecuted, it was under state murder statutes. In one example, in 1907–09 the U.S. Supreme Court tried its only criminal case in history, United States v. Shipp, 203 U.S. 563. Sheriff Shipp was found guilty of criminal contempt for doing nothing to stop the mob in Chattanooga, Tennessee, that lynched Ed Johnson, who was in jail on a rape conviction. In the South, blacks generally were not able to serve on juries, as they could not vote, having been disfranchised by discriminatory voter registration and electoral rules passed by majority-white legislatures in the late 19th century, a time coinciding with their imposition of Jim Crow laws.
Starting in 1909, federal legislators introduced more than 200 bills in Congress to make lynching a Federal crime, but they failed to pass, chiefly because of Southern legislators' opposition. Because Southern states had effectively disfranchised African Americans at the start of the 20th century, the white Southern Democrats controlled all the seats of the South, nearly double the Congressional representation that white residents alone would have been entitled to. They were a powerful voting bloc for decades. The Senate Democrats formed a bloc that filibustered for a week in December 1922, holding up all national business, to defeat the Dyer Anti-Lynching Bill. It had passed the House in January 1922 with broad support except from the South. Rep. Leonidas C. Dyer, the chief sponsor, undertook a national speaking tour in support of the bill in 1923, but the Southern Senators defeated it twice more in the next two sessions.
Under the Franklin D. Roosevelt Administration, the Civil Rights Section of the Justice Department tried, but failed, to prosecute lynchers under Reconstruction-era civil rights laws. The first successful Federal prosecution of a lyncher for a civil rights violation was in 1946. By that time, the era of lynchings as a common occurrence had ended. Adam Clayton Powell, Jr. succeeded in gaining House passage of an anti-lynching bill, but it was defeated in the Senate.
Many states have passed anti-lynching statutes. California defines lynching, punishable by 2–4 years in prison, as "the taking by means of a riot of any person from the lawful custody of any peace officer", with the crime of "riot" defined as two or more people using violence or the threat of violence. A lyncher could thus be prosecuted for several crimes arising from the same action, e.g., riot, lynching, and murder. Although lynching in the historic sense is virtually nonexistent today, the lynching statutes are sometimes used in cases where several people try to wrest a suspect from the hands of police in order to help him escape, as alleged in a July 9, 2005, violent attack on a police officer in San Francisco.
South Carolina law defines second-degree lynching as "any act of violence inflicted by a mob upon the body of another person and from which death does not result shall constitute the crime of lynching in the second degree and shall be a felony. Any person found guilty of lynching in the second degree shall be confined at hard labor in the State Penitentiary for a term not exceeding twenty years nor less than three years, at the discretion of the presiding judge." In 2006, five white teenagers were given various sentences for second-degree lynching in a non-lethal attack of a young black man in South Carolina.
In 2005, by a resolution sponsored by Senators Mary Landrieu of Louisiana and George Allen of Virginia and passed by voice vote, the Senate made a formal apology for its failure to pass an anti-lynching law "when it was most needed."
See also
- "And you are lynching Negroes", Soviet Union response to United States' allegations of human-rights violations in the Soviet Union
- Domestic terrorism
- East St. Louis Riot of 1917
- Hanging judge
- Hate crime laws in the United States
- Mass racial violence in the United States
- New York Draft Riots of 1863
- Omaha Race Riot of 1919
- Red Summer of 1919
- Rosewood, Florida, race riot of 1923
- Tarring and feathering
- "Lynchings: By State and Race, 1882–1968". University of Missouri-Kansas City School of Law. Retrieved 2010-07-26. "Statistics provided by the Archives at Tuskegee Institute."
- Davis, Angela Y. (1983). Women, Race & Class. New York: Vintage Books, pp. 194–195
- Associated Press, "Senate Apologizes for Not Passing Anti-Lynching Laws", Fox News
- Lynching an Abolitionist in Mississippi. New York Times. September 18, 1857. Retrieved on 2011-11-08.
- Nell Painter Articles – Who Was Lynched?. Nellpainter.com (1991-11-11). Retrieved on 2011-11-08.
- Pfeifer, Michael J. Rough Justice: Lynching and American Society, 1874–1947, Chicago: University of Illinois Press, 2004
- Carrigan, William D. "The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928". Retrieved 2011-11-07.
- Latinas: Area Studies Collections. Memory.loc.gov. Retrieved on 2011-11-08.
- Budiansky, 2008, passim
- Dray, Philip. At the Hands of Persons Unknown: The Lynching of Black America, New York: Random House, 2002
- Lemann, 2005, pp. 135–154.
- Lemann, 2005, p. 180.
- "Lynchings: By Year and Race". University of Missouri-Kansas City School of Law. Retrieved 2010-07-26. "Statistics provided by the Archives at Tuskegee Institute."
- Ross, John R. "Lynching". Handbook of Texas Online. Texas State Historical Association. Retrieved 2011-11-07.
- Willis, 2000, pp. 154–155
- Willis, 2000, p. 157.
- "Chief of Police David C. Hennessy". The Officer Down Memorial Page, Inc. Retrieved 2011-11-07.
- "Under Attack". American Memory, Library of Congress, Retrieved February 26, 2010
- Davis, Gode (2005-09). "American Lynching: A Documentary Feature". Retrieved 2011-11-07.
- Burned at the Stake: A Black Man Pays for a Town’s Outrage. Historymatters.gmu.edu. Retrieved on 2011-11-08.
- "Deputy Sheriff George H. Loney". The Officer Down Memorial Page, Inc. Retrieved 2011-11-07.
- "Shaped by Site: Three Communities' Dialogues on the Legacies of Lynching." National Park Service. Accessed October 29, 2008.
- Herbert, Bob (January 22, 2008). "The Blight That Is Still With Us". The New York Times. Retrieved January 22, 2008.
- Pfeifer, 2004, p. 35.
- Fedo, Michael, The Lynchings in Duluth. St. Paul, Minnesota: Minnesota Historical Society Press, 2000. ISBN 0-87351-386-X
- Robert A. Gibson. "The Negro Holocaust: Lynching and Race Riots in the United States, 1880–1950". Yale-New Haven Teachers Institute. Retrieved 2010-07-26.
- Richard Lacayo, "Blood At The Root", Time, April 2, 2000
- Wexler, Laura (2005-06-19). "A Sorry History: Why an Apology From the Senate Can't Make Amends". Washington Post. pp. B1. Retrieved 2011-11-07.
- SallyAnn H. Ferguson, ed., Charles W. Chesnutt: Selected Writings. Boston: Houghton Mifflin Company, 2001, pp. 65–81
- Davis, Angela Y. (1983). Women, Race & Class. New York: Vintage Books. p. 193
- Maxine D. Rogers, Larry E. Rivers, David R. Colburn, R. Tom Dye, and William W. Rogers, Documented History of the Incident Which Occurred at Rosewood, Florida in January 1923, December 1993, accessed 28 March 2008
- Morris, 2001, pp. 110–11, 246–49, 250, 258–59, 261–62, 472.
- McCaskill; with Gebard, 2006, pp. 210–212.
- Jackson, 1967, p. 241.
- Ernest Harvier, "Political Effect of the Dyer Bill: Delay in Enacting Anti-Lynching Law Diverted Thousands of Negro Votes", New York Times, 9 July 1922, accessed 26 July 2011
- "Filibuster Kills Anti-Lynching Bil", New York Times, 3 December 1922, accessed 20 July 2011
- Rucker; with Upton and Howard, 2007, pp. 182–183.
- Jackson, 1992, ?
- "Proceedings of the U.S. Senate on June 13, 2005 regarding the "Senate Apology" as Reported in the 'Congressional Record'", "Part 3, Mr. Craig", at African American Studies, University of Buffalo, accessed 26 July 2011
- Wood, Amy Louise. Lynching and Spectacle: Witnessing Racial Violence in America, 1890-1940. University of North Carolina Press. p. 196.
- Rubin Stacy. Ft. Lauderdale, Florida. July 19, 1935. strangefruit.org (1935-07-19). Retrieved on 2011-11-08.
- Wexler, Laura. Fire in a Canebrake: The Last Mass Lynching in America, New York: Scribner, 2003
- "To Secure These Rights: The Report of the President's Committee on Civil Rights". Harry S. Truman Library and Museum. Retrieved 2010-07-26.
- Wade, 1987, p. 196, gave a similar account, but suggested that the meeting was a regular Klan one. An interview with Truman's friend Hinde at the Truman Library's web site (http://www.trumanlibrary.org/oralhist/hindeeg.htm, retrieved June 26, 2005) portrayed the meeting as one-on-one at the Hotel Baltimore with a Klan organizer named Jones. Truman's biography, written by his daughter Margaret (Truman, 1973), agreed with Hinde's version but did not mention the $10 initiation fee. The biography included a copy of a telegram from O.L. Chrisman stating that reporters from the Hearst Corporation papers had questioned him about Truman's past with the Klan. He said he had seen Truman at a Klan meeting, but that "if he ever became a member of the Klan I did not know it."
- Fred Jerome, The Einstein File, St. Martin's Press, 2000; foia.fbi.gov/foiaindex/einstein.htm
- Detroit News, September 30, 2004
- "Ku Klux Klan", Spartacus Educational, retrieved June 26, 2005.
- "Closing arguments today in Texas dragging-death trial", CNN, February 22, 1999
- "The murder of James Byrd, Jr.", The Texas Observer, September 17, 1999
- Thomas-Lester, Avis (June 14, 2005), "A Senate Apology for History on Lynching", The Washington Post, p. A12, retrieved June 26, 2005.
- "1959 Tuskegee Institute Lynch Report", Montgomery Advertiser; April 26, 1959, re-printed in 100 Years Of Lynching by Ralph Ginzburg (1962, 1988).
- Ida B. Wells, Southern Horrors, 1892.
- The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928. Journal of Social History. Findarticles.com. Retrieved on 2011-11-08.
- Matthew Bernstein (2004). "Oscar Micheaux and Leo Frank: Cinematic Justice Across the Color Line". Film Quarterly 57 (4): 8.
- "Killing Mr. Watson", New York ''Times'' review. Retrieved on 2011-11-08.
- "Strange Fruit". PBS. Retrieved August 23, 2011.
- Linder, Douglas O., US Supreme Court opinion in United States vs. Shipp, University of Missouri-Kansas City School of Law
- South Carolina Code of Laws section 16-3-220 Lynching in the second degree
- Guilty: Teens enter pleas in lynching case. The Gaffney Ledger. 2006-01-11. retrieved June 29, 2007.
Books and references
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Allen, James, Hilton Als, John Lewis, and Leon F. Litwack, Without Sanctuary: Lynching Photography in America (Twin Palms Publishers: 2000) ISBN 978-0-944092-69-9
- Brundage, William Fitzhugh, Lynching in the New South: Georgia and Virginia, 1880–1930. Urbana, Illinois: University of Illinois Press, 1993.
- Budiansky, Steven (2008). The Bloody Shirt: Terror After the Civil War. New York: Plume. ISBN 978-0-452-29016-7
- Curriden, Mark and Leroy Phillips, Contempt of Court: The Turn-of-the-Century Lynching That Launched a Hundred Years of Federalism, ISBN 978-0-385-72082-3
- Ginzburg, Ralph. 100 Years Of Lynching, Baltimore: Black Classic Press, 1962, 1988.
- Howard, Marilyn K.; with Rucker, Walter C., Upton, James N. (2007). Encyclopedia of American race riots. Westport, CT: Greenwood Press. ISBN 0-313-33301-7
- Hill, Karlos K. "Black Vigilantism: African American Lynch Mob Activity in the Mississippi and Arkansas Deltas, 1883-1923," Journal of African American History 95 no. 1 (Winter 2010): 26-43.
- Jackson, Kenneth T. (1992). The Ku Klux Klan in the City, 1915–1930. New York: Oxford University Press. ISBN 0-929587-82-0
- Lemann, Nicholas (2005). Redemption: The Last Battle of the Civil War. New York: Farrar, Strauss and Giroux. ISBN 0-374-24855-9
- McCaskill, Barbara; with Gebard, Caroline, ed. (2006). Post-Bellum, Pre-Harlem: African American Literature and Culture, 1877–1919. New York: New York University Press. ISBN 978-0-8147-3167-3
- Markovitz, Jonathan, Legacies of Lynching: Racial Violence and Memory, Minneapolis: University of Minnesota Press, 2004 ISBN 0-8166-3994-9.
- Morris, Edmund (2001). Theodore Rex. New York: Random House. ISBN 978-0-394-55509-6
- Newton, Michael and Judy Ann Newton, Racial and Religious Violence in America: A Chronology. N.Y.: Garland Publishing, Inc., 1991
- Pfeifer, Michael J. Rough Justice: Lynching and American Society, 1874–1947. Urbana: University of Illinois Press, 2004 ISBN 0-252-02917-8.
- Smith, Tom. The Crescent City Lynchings: The Murder of Chief Hennessy, the New Orleans "Mafia" Trials, and the Parish Prison Mob
- Thirty Years of Lynching in the United States, 1889–1918 New York City: Arno Press, 1969.
- Thompson, E.P. Customs in Common: Studies in Traditional Popular Culture. New York: The New Press, 1993.
- Tolnay, Stewart E., and Beck, E.M. A Festival of Violence: An Analysis of Southern Lynchings, 1882–1930, Urbana and Chicago: University of Illinois Press, 1992 ISBN 0-252-06413-5.
- Truman, Margaret. Harry S. Truman. New York: William Morrow and Co., 1973.
- Wade, Wyn Craig. The Fiery Cross: The Ku Klux Klan in America. New York: Simon and Schuster, 1987.
- Willis, John C. (2000). Forgotten Time: The Yazoo-Mississippi Delta after the Civil War. Charlottesville: University of Virginia Press. ISBN 0-374-24855-9
- Wright, George C. Racial Violence in Kentucky 1865–1940 by George C. Wright. Baton Rouge: Louisiana State University Press, 1990 ISBN 0-8071-2073-1.
- Wyatt-Brown, Bertram. Southern Honor: Ethics & Behavior in the Old South. New York: Oxford University Press, 1982.
- Zinn, Howard. Voices of a People's History of the United States. New York: Seven Stories Press, 2004 ISBN 1-58322-628-1.
Further reading
- Lynching calendar 1865–1965
- Origin of the word Lynch
- Lynchings in the State of Kansas
- Houghton Mifflin: The Reader's Companion to American History – Lynching (password protected)
- Lynching of John Heath
- "Lynch Law"—An American Community Enigma, Henry A. Rhodes
- Intimations of Citizenship: Repressions and Expressions of Equal Citizenship in the Era of Jim Crow, James W. Fox, Jr., Howard Law Journal, Volume 50, Issue 1, Fall 2006
- Lynching of Detective Carl Etherington in 1910
- American Lynching – web site for a documentary; links, bibliographical information, images
- The 1856 Committee of Vigilance – A treatment of the San Francisco vigilante movement, sympathetic to the vigilantes.
- "The Last Lynching in Athens."
- "A Civil War Lynching in Athens."
- "'Bloody Injuries:' Lynchings in Oconee County, 1905–1921."
- Mark Auslander "Holding on to Those Who Can't be Held": Reenacting a Lynching at Moore's Ford, Georgia" Southern Spaces, November 8, 2010.
- Young adult website
- Lynching in the United States (photos)
- Without Sanctuary: Lynching Photography in America
- A review of Without Sanctuary: Lynching Photography in America, James Allen et al.
- Lynching, Orange Texas
- Lynching of WW I veteran Manuel Cabeza in 1921
- 1868 Lynching of Steve Long and Moyer brothers Laramie City, Wyoming
- Lynching of Will Brown in Omaha Race Riot of 1919. (Graphic)
Opponents of equal marriage rights for same-sex couples say that marriage has always been between a man and a woman and must remain so. They argue from tradition. Counter to their claims is an argument from history: a history of change over time.
Many features of marriage that were once considered essential have been remade, often in the face of strong resistance, by courts and legislatures. Economic and social changes have led to increasing legal equality for the marriage partners, gender-neutrality of spousal roles, and control of marital role-definition by spouses themselves rather than by state prescription. Yet marriage itself has lasted, despite these dramatic changes. Not only that: it retains vast appeal.
Why? The core of marriage as an intimate and supportive voluntary bond has been preserved. Today constitutional law sees marriage as a fundamental right. Most Americans are legally allowed to marry as they see fit. But same-sex couples remain excluded in most jurisdictions. This exclusion stands at odds with the direction of historical change toward gender equality and neutrality in the legal treatment of marital roles.
A Civil Thing
Seventeenth-century English colonists in North America created marriage laws almost immediately upon settling. In England the established Anglican Church ruled marriages, but rather than replicate that arrangement or treat marriage as a sacrament (as Catholics do), colonial legislators asserted that marriage was "a civil thing" because it dealt with matters of property. Although the great majority of colonists believed in the basic tenets of Christian monogamy, colonial legislators explicitly rejected religious authority over marriage. Thus even before the American Revolution, marriage was deemed a civil institution, regulated by government to promote the common good.
After the founding of the United States, state after state maintained this principle. State laws allowed religious authorities to perform marriage ceremonies and to recognize only marriages adhering to the requirements of their own faith, but not to determine which marriages would be considered valid by the public. For example, California's first state Constitution stipulated, "No contract of marriage, if otherwise duly made, shall be invalidated for want of conformity to the requirements of any religious sect," a provision now retained in the state's Family Code. To be sure, many people, then as now, invested marriage with religious significance, but that had no bearing on any marriage's legality.
The several states' jurisdiction over marriage was a component of their wider powers to promote the health, safety and welfare of their populations. The states maintain those same powers today, subject to the requirements and protections of the federal Constitution. States set the terms of marriage, such as who can and cannot marry, who can officiate, what obligations and rights the marital agreement involves, whether it can be ended, and, if so, why and how.
After emancipation, Southern officials used marriage rules to punish African Americans.
These kinds of regulations reflect the longer history of marriage in its Western (and specifically Anglo-American) form. In the fifteenth through eighteenth centuries, as this form was being forged, marriage itself was understood to be a vehicle of governance. British and Continental European monarchs were happy to see their subjects marry and create households under male heads. Each married man served as the king's delegate, in effect ruling over his wife, children, servants, and apprentices, and assuming economic responsibility for them all.
Consent Is the Key
When the United States was established on republican principles, marital households continued to serve a governance function, but in a manner that reflected the novel style of the U.S. government. Sovereignty in the United States was understood to be based on the voluntary consent of the governed. Likewise with marriage: the male-led marital household was legitimized by consent.
Parallels between the voluntary consent joining a husband and wife in marriage and the voluntary allegiance of citizens to the new United States were common in Revolutionary-era rhetoric. The statesman and legal philosopher James Wilson saw mutual consent as the hallmark of marriage, more basic even than cohabitation. In a series of lectures delivered in 1792, he argued, "The agreement of the parties, the essence of every rational contract, is indispensably required."
Because free consent, the mark of a free person, was at the core of the matrimonial contract, slaves could not enter into valid marriages. Considered property by law, slaves lacked basic civil rights, including the essential capacity to consent. Furthermore, marriage obliged those undertaking it to fulfill certain duties defined by the state, and a slave's prior and overriding obligation of service to the master made carrying out the duties of marriage impossible.
Where slaveholders permitted, slave couples often wed informally, creating family units of consoling value to themselves. But slaveholders could break up those unions with impunity. Slave marriages received no defense from state governments.
After emancipation, former slaves flocked to get married legally. As free persons, African Americans saw marriage as an expression of rights long denied them and a recognition of their capacity to consent lawfully. The Freedmen's Bureau, in charge of the transition of former slaves to citizenship in the occupied South after the Civil War, avidly fostered marriages among the freed people and welcomed the creation of male-headed households among the African American population.
But sometimes Southern officials balked, appreciating, too, that legal marriage was an emblem of the basic rights of a free person. After the Bureau's departure, some Southern white officials refused to grant African Americans marriage licenses or charged prohibitive fees for them. More often Southern authorities used marriage rules punitively by prosecuting African Americans for minor infractions of the matrimonial bargain.
A Tool of Public Order
The state-generated social and economic rewards of marriage encourage couples to choose committed relationships of sexual intimacy over transient relationships. Along with those rewards come certain responsibilities that the state imposes in the interest of public order. The marriage bond creates economic obligations between the mutually consenting parties and requires them to support their dependents. Governments at all levels in the United States have long encouraged people to marry for economic benefit to the public as well as to themselves.
Laws prohibiting mixed-race marriages were justified in their time by their supposed naturalness.
The principles of female dependency and governance by a male head that served as the basis of early American marriage law bespoke pervasive assumptions about a natural sexual division of labor. Men and women were assumed to be capable of, prepared for, and good at distinctive kinds of work. In a predominantly agricultural United States, men plowed the fields to grow the grain, and women made the bread from it: both were seen as equally necessary to human sustenance, survival, and society. Marriage fostered the continuation of this sexual division of labor and the benefits that were assumed to flow from it.
Legislatures and courts in the United States since the nineteenth century have actively enforced the economic obligations of marriage, requiring spouses to support one another and their dependents, thereby minimizing the public burden that indigents would impose. Although economic units far more powerful than households drive the economy today, marriage-based households are still principal vehicles for organizing economic sustenance and care, including for dependents (whether young, old, or disabled) who cannot support themselves.
As government benefits expanded during the twentieth century, the economic dimensions of marriage gained new features. Today the United States is emphatic in its public policy of channeling economic benefits through marriage-based family relationships. Social Security payments, benefits for the surviving family of deceased veterans, intestate succession rights, and pension income are all extended to legally married spouses, but not to unmarried partners.
Molding the People
Race-based differentiation in marriage laws originated in the American colonies in the late seventeenth century. Most often, these laws banned and/or criminalized marriages between whites and "negroes" or "mulattoes," but they sometimes also extended to Native Americans. The bans continued after the founding of the United States.
After the Civil War and emancipation, even more states voided or criminalized marriage between whites and blacks or mulattos, and in response to immigration from Asia, a number of western states expanded the prohibition to Indians, Chinese, and Mongolians. As many as 41 states and territories for some period of their history banned, nullified, or criminalized marriages across the color line. These laws, too, were justified in their time by their supposed naturalness.
The prohibitions were challenged after the passage of the Civil Rights Act of 1866 and the ratification of the Fourteenth Amendment, but late nineteenth-century courts usually defended the laws by claiming that there was no discrimination involved: whites and persons of color were equally forbidden from marrying each other. No one was excluded from marriage; individuals were merely constrained, equally (so it was said), in the choice of marital partner. Of course, the judicial defense of symmetry obscured the actual and symbolic force of such laws in a racially stratified society.
Many features of contemporary marriage were fiercely resisted at first.
By declaring which marriages were allowed and which were not, states policed the legitimate reproduction of the body politic. For example, limitations on marriage and immigration converged to exclude, emphatically, people of Asian origin. In the 1860s in California, white American workers voiced considerable animus against Chinese men who had been recruited to complete the transcontinental railroad. In 1882 Congress passed an act excluding all Chinese laborers from entry. Even before that, in 1875, Congress had Chinese women in mind when it passed the Page Act, which prohibited and criminalized the entry or importation of all prostitutes and required the U.S. consul to investigate whether an immigrant woman debarking from an Asian country was under contract for "lewd and immoral purposes." Regardless of their reasons for entering, almost all Chinese women were barred from American ports. This meant that Chinese men in the United States, who were concentrated in primarily Western states whose laws banned their marriages to whites, had hardly any possibility of marrying legally.
Other marriage legislation punished American women who married foreigners. In 1855 Congress legislated that an American man choosing a foreign bride made her a citizen simply by marrying her, provided that she was free and white. Because the husband's headship of his wife was assumed to determine her nationality, American women who married foreigners were placed in legal limbo. For the next half-century, the law remained unclear as to the effect of a marriage between an American woman and a foreigner, but doubt was erased in 1907, when Congress declared that "any American woman who marries a foreigner shall take the nationality of her husband."
In the wake of women's enfranchisement by the Nineteenth Amendment, and under pressure from women citizens, the Cable Act of 1922 addressed this inequity, ostensibly providing independent citizenship for married women. But the Act stipulated that, like a naturalized citizen, an American woman married to a foreigner would forfeit her citizenship if she lived for two years in her husband's country (or five years in any foreign country).
The Cable Act also maintained a racial prejudice: an American woman who married a foreigner ineligible for citizenship, as all Asians were, would still lose her citizenship. This requirement seemed aimed at, or at the very least had the effect of, further minimizing marriage opportunities for older Asian men living in bachelor communities. The small number of younger-generation Asian American women born on American soil faced loss of their American citizenship by marriage to any Asian-born man. Even if the marriage ended in divorce or the husband's death, the wife could never regain her citizenship.
Effective lobbying by women's groups resulted in new legislation in the 1930s to rectify the unequal citizenship consequences of the Cable Act. And over the next half-century, shifting values and the demands for gender and racial equality associated with the civil rights and women's movements translated into transformations in marriage rules.
Many features of contemporary marriage that we take for granted were fiercely resisted at first. Yet they did eventually win out. Three of the most important such features have been in the areas of spouses' respective roles and rights, racial restrictions, and divorce.
Under the doctrine of coverture, wives had no separate legal existence from their husbands.
Spousal roles and rights. Although gender parity between spouses would have been unthinkable at the founding of the United States, marriage laws have moved over time in this direction. In Anglo-American common law, marriage was based on the legal fiction that the married couple was a single entity, with the husband serving as its sole legal, economic, and political representative. Under this doctrine, known as coverture, the wife's identity merged into her husband's. She had no separate legal existence. A married woman could not own or dispose of property, earn money, have a debt, sue or be sued, or enter into an enforceable agreement under her own name. The spouses were assigned opposite economic roles understood as complementary: the husband was bound to support and protect the wife, and the wife owed her service and labor to her husband.
Beginning in the mid-1800s, the principle of coverture came under increasing challenge. As the agricultural way of life was overtaken by a dynamic market economy, wives started to claim their rights to hold property and earn wages in their own names. Cooperative husbands in harmonious marriages saw advantages in their wives having some economic leverage. Many judges and legislators agreed: a wife's separate property could keep a family solvent if a husband's creditors sought his assets, and fewer bankruptcies meant savings for the public purse.
Unseating coverture was a protracted process because it involved revising the fundamental gender asymmetry in the marital bargain. The assumption that the husband was the provider, and the wife his dependent, did not disappear as soon as wives could own property and wages earned outside the home. As late as the mid-twentieth century, judges saw the wife's household service as a necessary corollary to the husband's obligation to support her. Every state legally required the husband to support his wife and not vice-versa. Support requirements were not a mere formality: they meant that men who failed to provide could be prosecuted and thrown in jail, and they disadvantaged women in the labor market. In the words of one legal commentator writing in the 1930s, "the courts . . . jealously guarded the right of the husband to the wife's service in the household" as part of the legal definition of marriage.
The expansion of entitlement programs during the New Deal further complicated gender asymmetry in marriage. Federal benefits such as Social Security built in special advantages for spouses and families, but with different entitlements for husbands and wives. Only in the 1970s did the Supreme Court reject this gender asymmetry as unconstitutionally discriminatory. Spousal benefits have been gender-neutral ever since.
Racial Restrictions. The fundamental right to marry was formally articulated in the 1923 U.S. Supreme Court case of Meyer v. Nebraska, but race-based marriage bans continued, with Virginia passing the most restrictive law in the nation the very next year.
In 1948 the Supreme Court of California, in Perez v. Sharp, became the first state high court to declare race-based restrictions on marriages unconstitutional. At that time bans on interracial marriages were on the books in 30 states. The California high court held that legislation addressing the right to marry must be free from oppressive discrimination to comply with the constitutional requirements of due process and equal protection of the laws. Over the next two decades, more than a dozen states eliminated their own race-based marriage laws.
Some states' divorce laws once prohibited remarriage for the guilty party.
In 1967 the U.S. Supreme Court held unanimously for the plaintiffs in Loving v. Virginia, striking down the Virginia law that made marriage between a white and a non-white person a felony. The Court thereby eliminated three centuries of race-based marriage legislation. Chief Justice Earl Warren's opinion called such laws "measures designed to maintain White Supremacy", which were insupportable in view of the Fourteenth Amendment's guarantee of equal protection of the laws.
The Court's opinion in Loving reiterated that marriage was a fundamental freedom, and affirmed that freedom of choice of one's partner is basic to each person's civil right to marry. Today virtually no one in the United States questions the legal right of individuals to choose a marriage partner without regard to race.
Divorce. Legal and judicial notions of divorce likewise have changed in response to the American view of marriage as founded in choice and consent. And in their evolution, they have strengthened that view.
Divorce was possible in some of the English colonies and was introduced by legislation in several states immediately after the American Revolution. The availability of divorce followed from the understanding of marriage as a civil status built upon a voluntary compact. Over the course of decades, almost every state and territory agreed to allow divorce, albeit under extremely limited circumstances. Adultery, desertion, or convictions for certain crimes were the only grounds, with cruelty added later.
In order to obtain a divorce, the petitioning spouse had to initiate an adversary proceeding intended to show that the accused spouse had broken the marriage contract. If divorce was granted, the guilty party's fault was not only against his or her spouse, but against the state as well. Many states' divorce laws prohibited remarriage for the guilty party.
Asymmetrical marital requirements for husband and wife were incorporated into the legal grounds for divorce. For instance, failure to provide was a breach that only the husband could commit. A wife seeking divorce, on the other hand, would have to prove that she had been a model of obedience and service to her husband while the marriage lasted.
The history of divorce legislation shows a clear pattern. State legislatures have expanded the grounds for divorce, making it more easily obtainable. These reforms were hotly contested along the way, with critics arguing that liberalized grounds for divorce would undermine the marital compact entirely.
But reformers prevailed - with some notable exceptions, such as the state of New York, where adultery remained the sole ground for divorce into the second half of the twentieth century. By that time, divorce proceedings, though still adversarial, often became cursory fact-finding hearings, or even fraudulent performances by colluding spouses who agreed to establish one or the other's fault. Pressure, principally from the bar, led to a new stage in divorce reform.
The ability of married partners to procreate has never been required to make a marriage legal or valid.
In 1969 California enacted the nation's first complete no-fault divorce law, removing consideration of marital fault from the grounds for divorce, awards of spousal support, and division of property. No-fault divorce introduced a sweeping change and spread from state to state (as well as in the rest of the industrialized world) as a means of dealing more honestly with marital breakdowns. The no-fault principle advanced the notion that marital partners themselves, rather than the state, could best judge whether a marriage had failed. By 1985 all states had fallen in step, not always using the no-fault rubric, but making it possible for a couple who found themselves incompatible to end their marriage.
Some might argue that the liberalization of divorce had a greater transformative impact on marriage than even the elimination of racial limitations or legally enforced gender asymmetry, though none of these features of marriage - free choice of partner regardless of race, gender parity, no-fault divorce - would be recognizable to eighteenth- and nineteenth-century Americans. States today do retain a strong role in the termination of marriages: post-divorce terms of support must gain court approval to be valid. But the move to no-fault divorce, perhaps more than the other changes, demonstrates the state's acknowledgment of the idiosyncrasy of individual marriages, and the right of the partners to set their own standards for marital satisfaction and decide whether these standards are being met.
The Weight of History
Marriage has evolved into a civil institution through which the state formally recognizes and ennobles individuals' choices to enter into long-term, committed, intimate relationships and to build households based on mutual support. With the free choice of the two parties and their continuing consent as foundations, marriage laws treat both spouses in a gender-neutral fashion, without regard to gender-role stereotypes.
At least, most of the time. Except in Massachusetts, Iowa, Vermont, New Hampshire, Connecticut, and Washington, D.C., men may only marry women, and women may only marry men. This requirement is an exception to the gender-neutral approach of contemporary marriage law and to the long-term trend toward legal equality in spouses' marital roles.
Those who would maintain this exception argue that the extension of marital rights to same-sex couples would render marriage meaningless. They say that the sexual union of a man and a woman, capable of producing children, is essential to marriage and is its centerpiece.
The history of marriage laws tells a more complex story. The ability of married partners to procreate has never been required to make a marriage legal or valid, nor have unwillingness or inability to have children been grounds for divorce.
And marriage, as I have argued, has not been one unchanging institution over time. Features of marriage that once seemed essential and indispensable proved otherwise. The ending of coverture, the elimination of racial barriers to choice of partner, the expansion of grounds for divorce - though fiercely resisted by many when first introduced - have strengthened marriage rather than undermining it. The adaptability of marriage has preserved it.
Marriage persists as simultaneously a public institution closely tied to the public good and a private relationship that serves and protects the two people who enter into it. That it remains a vital and relevant institution testifies to the law's ability to recognize the need for change, rather than adhere rigidly to values or practices of earlier times.
Enabling couples of the same sex to gain equal marriage rights would be consistent with the historical trend toward broadening access. It would make clearer that the right to marry represents a profound exercise of the individual liberty central to the American polity.
This article is adapted from Nancy F. Cott's expert report submitted in the case of Perry v. Schwarzenegger in the U.S. District Court for the Northern District of California. | http://bostonreview.net/BR36.1/cott.php | 13
17 | Archeological explorations have revealed impressive ruins of a 4,500-year old urban civilization in Pakistan's Indus River valley. The reason for the collapse of this highly developed culture is unknown. A major theory is that it was crushed by successive invasions (circa 2000 B.C. and 1400 B.C.) of Aryans, Indo-European warrior tribes from the Caucasus region in what is now Russia. The Aryans were followed in 500 B.C. by Persians and, in 326 B.C., by Alexander the Great. The "Gandhara culture" flourished in much of present-day Pakistan.
The Indo-Greek descendants of Alexander the Great saw the most creative period of the Gandhara (Buddhist) culture. For 200 years after the Kushan Dynasty was established in A.D. 50, Taxila (near Islamabad) became a renowned center of learning, philosophy, and art.
Pakistan's Islamic history began with the arrival of Muslim traders in the 8th century. During the 16th and 17th centuries, the Mogul Empire dominated most of South Asia, including much of present-day Pakistan.
British traders arrived in South Asia in 1601, but the British Empire did not consolidate control of the region until the latter half of the 18th century. After 1850, the British or those influenced by them governed virtually the entire subcontinent.
In the early 20th century, South Asian leaders began to agitate for a greater degree of autonomy. Growing concern about Hindu domination of the Indian National Congress Party, the movement's foremost organization, led Muslim leaders to form the all-India Muslim League in 1906. In 1913, the League formally adopted the same objective as the Congress -- self-government for India within the British Empire -- but Congress and the League were unable to agree on a formula that would ensure the protection of Muslim religious, economic, and political rights.
Pakistan and Partition
The idea of a separate Muslim state emerged in the 1930s. On March 23, 1940, Muhammad Ali Jinnah, leader of the Muslim League, formally endorsed the "Lahore Resolution," calling for the creation of an independent state in regions where Muslims constituted a majority. At the end of World War II, the United Kingdom moved with increasing urgency to grant India independence. However, the Congress Party and the Muslim League could not agree on the terms for a constitution or establishing an interim government. In June 1947, the British Government declared that it would bestow full dominion status upon two successor states -- India and Pakistan. Under this arrangement, the various princely states could freely join either India or Pakistan. Consequently, a bifurcated Muslim nation separated by more than 1,600 kilometers (1,000 mi.) of Indian territory emerged when Pakistan became a self-governing dominion within the Commonwealth on August 14, 1947. West Pakistan comprised the contiguous Muslim-majority districts of present-day Pakistan; East Pakistan consisted of a single province, which is now Bangladesh.
The Maharaja of Kashmir was reluctant to make a decision on accession to either Pakistan or India. However, armed incursions into the state by tribesmen from the NWFP led him to seek military assistance from India. The Maharaja signed accession papers in October 1947 and allowed Indian troops into much of the state. The Government of Pakistan, however, refused to recognize the accession and campaigned to reverse the decision. The status of Kashmir has remained in dispute.
With the death in 1948 of its first head of state, Muhammad Ali Jinnah, and the assassination in 1951 of its first Prime Minister, Liaqat Ali Khan, political instability and economic difficulty became prominent features of post-independence Pakistan. On October 7, 1958, President Iskander Mirza, with the support of the army, suspended the 1956 constitution, imposed martial law, and canceled the elections scheduled for January 1959. Twenty days later the military sent Mirza into exile in Britain and Gen. Mohammad Ayub Khan assumed control of a military dictatorship. After Pakistan's loss in the 1965 war against India, Ayub Khan's power declined. Subsequent political and economic grievances inspired agitation movements that compelled his resignation in March 1969. He handed over responsibility for governing to the Commander-in-Chief of the Army, General Agha Mohammed Yahya Khan, who became President and Chief Martial Law Administrator.
General elections held in December 1970 polarized relations between the eastern and western sections of Pakistan. The Awami League, which advocated autonomy for the more populous East Pakistan, swept the East Pakistan seats to gain a majority in Pakistan as a whole. The Pakistan Peoples Party (PPP), founded and led by Ayub Khan's former Foreign Minister, Zulfikar Ali Bhutto, won a majority of the seats in West Pakistan, but the country was completely split with neither major party having any support in the other area. Negotiations to form a coalition government broke down and a civil war ensued. India attacked East Pakistan and captured Dhaka in December 1971, when the eastern section declared itself the independent nation of Bangladesh. Yahya Khan then resigned the presidency and handed over leadership of the western part of Pakistan to Bhutto, who became President and the first civilian Chief Martial Law Administrator.
Bhutto moved decisively to restore national confidence and pursued an active foreign policy, taking a leading role in Islamic and Third World forums. Although Pakistan did not formally join the non-aligned movement until 1979, the position of the Bhutto government coincided largely with that of the non-aligned nations. Domestically, Bhutto pursued a populist agenda and nationalized major industries and the banking system. In 1973, he promulgated a new constitution accepted by most political elements and relinquished the presidency to become Prime Minister. Although Bhutto continued his populist and socialist rhetoric, he increasingly relied on Pakistan's urban industrialists and rural landlords. Over time the economy stagnated, largely as a result of the dislocation and uncertainty produced by Bhutto's frequently changing economic policies. When Bhutto proclaimed his own victory in the March 1977 national elections, the opposition Pakistan National Alliance (PNA) denounced the results as fraudulent and demanded new elections. Bhutto resisted and later arrested the PNA leadership.
1977-1985 Martial Law
With increasing anti-government unrest, the army grew restive. On July 5, 1977, the military removed Bhutto from power and arrested him, declared martial law, and suspended portions of the 1973 constitution. Chief of Army Staff Gen. Muhammad Zia ul-Haq became Chief Martial Law Administrator and promised to hold new elections within three months.
Zia released Bhutto and asserted that he could contest new elections scheduled for October 1977. However, after it became clear that Bhutto's popularity had survived his government, Zia postponed the elections and began criminal investigations of the senior PPP leadership. Subsequently, Bhutto was convicted and sentenced to death for alleged conspiracy to murder a political opponent. Despite international appeals on his behalf, Bhutto was hanged on April 6, 1979.
Zia assumed the Presidency and called for elections in November. However, fearful of a PPP victory, Zia banned political activity in October 1979 and postponed national elections.
In 1980, most center and left parties, led by the PPP, formed the Movement for the Restoration of Democracy (MRD). The MRD demanded Zia's resignation, an end to martial law, new elections, and restoration of the constitution as it existed before Zia's takeover. In early December 1984, President Zia proclaimed a national referendum for December 19 on his "Islamization" program. He implicitly linked approval of "Islamization" with a mandate for his continued presidency. Zia's opponents, led by the MRD, boycotted the elections. When the government claimed a 63% turnout, with more than 90% approving the referendum, many observers questioned these figures.
On March 3, 1985, President Zia proclaimed constitutional changes designed to increase the power of the President vis-a-vis the Prime Minister (under the 1973 constitution the President had been mainly a figurehead). Subsequently, Zia nominated Muhammad Khan Junejo, a Muslim League member, as Prime Minister. The new National Assembly unanimously endorsed Junejo as Prime Minister and, in October 1985, passed Zia's proposed eighth amendment to the constitution, legitimizing the actions of the martial law government, exempting them from judicial review (including decisions of the military courts), and enhancing the powers of the President.
The Democratic Interregnum
On December 30, 1985, President Zia removed martial law and restored the fundamental rights safeguarded under the constitution. He also lifted the Bhutto government's declaration of emergency powers. The first months of 1986 witnessed a rebirth of political activity throughout Pakistan. All parties -- including those continuing to deny the legitimacy of the Zia/Junejo government -- were permitted to organize and hold rallies. In April 1986, PPP leader Benazir Bhutto, daughter of Zulfiqar Ali Bhutto, returned to Pakistan from exile in Europe.
Following the lifting of martial law, the increasing political independence of Prime Minister Junejo and his differences with Zia over Afghan policy resulted in tensions between them. On May 29, 1988, President Zia dismissed the Junejo government and called for November elections. In June, Zia proclaimed the supremacy in Pakistan of Shari'a (Islamic law), by which all civil law had to conform to traditional Muslim edicts.
On August 17, a plane carrying President Zia, American Ambassador Arnold Raphel, U.S. Brig. General Herbert Wassom, and 28 Pakistani military officers crashed on a return flight from a military equipment trial near Bahawalpur, killing all of its occupants. In accordance with the constitution, Chairman of the Senate Ghulam Ishaq Khan became Acting President and announced that elections scheduled for November 1988 would take place.
After winning 93 of the 205 National Assembly seats contested, the PPP, under the leadership of Benazir Bhutto, formed a coalition government with several smaller parties, including the Muhajir Qaumi Movement (MQM). The Islamic Democratic Alliance (IJI), a multi-party coalition led by the PML and including religious right parties such as the Jamaat-i-Islami (JI), won 55 National Assembly seats.
Differing interpretations of constitutional authority, debates over the powers of the central government relative to those of the provinces, and the antagonistic relationship between the Bhutto Administration and opposition governments in Punjab and Balochistan seriously impeded social and economic reform programs. Ethnic conflict, primarily in Sindh province, exacerbated these problems. A fragmentation in the governing coalition and the military's reluctance to support an apparently ineffectual and corrupt government were accompanied by a significant deterioration in law and order.
In August 1990, President Khan, citing his powers under the eighth amendment to the constitution, dismissed the Bhutto government and dissolved the national and provincial assemblies. New elections, held in October of 1990, confirmed the political ascendancy of the IJI. In addition to a two-thirds majority in the National Assembly, the alliance acquired control of all four provincial parliaments and enjoyed the support of the military and of President Khan. Muhammad Nawaz Sharif, as leader of the PML, the most prominent Party in the IJI, was elected Prime Minister by the National Assembly.
Sharif emerged as the most secure and powerful Pakistani Prime Minister since the mid-1970s. Under his rule, the IJI achieved several important political victories. The implementation of Sharif's economic reform program, involving privatization, deregulation, and encouragement of private sector economic growth, greatly improved Pakistan's economic performance and business climate. The passage into law in May 1991 of a Shari'a bill, providing for widespread Islamization, legitimized the IJI government among much of Pakistani society.
After PML President Junejo's death in March 1993, Sharif loyalists unilaterally nominated Sharif as the next party leader. Consequently, the PML divided into the PML Nawaz (PML/N) group, loyal to the Prime Minister, and the PML Junejo group (PML/J), supportive of Hamid Nasir Chatta, the President of the PML/J group.
However, Nawaz Sharif was not able to reconcile the different objectives of the IJI's constituent parties. The largest religious party, Jamaat-i-Islami (JI), abandoned the alliance because of its perception of PML hegemony. The regime was weakened further by the military's suppression of the MQM, which had entered into a coalition with the IJI to contain PPP influence, and allegations of corruption directed at Nawaz Sharif. In April 1993, President Khan, citing "maladministration, corruption, and nepotism" and espousal of political violence, dismissed the Sharif government, but the following month the Pakistan Supreme Court reinstated the National Assembly and the Nawaz Sharif government. Continued tensions between Sharif and Khan resulted in governmental gridlock and the Chief of Army Staff brokered an arrangement under which both the President and the Prime Minister resigned their offices in July 1993.
An interim government, headed by Moeen Qureshi, a former World Bank Vice President, took office with a mandate to hold national and provincial parliamentary elections in October. Despite its brief term, the Qureshi government adopted political, economic, and social reforms that generated considerable domestic support and foreign admiration.
In the October 1993 elections, the PPP won a plurality of seats in the National Assembly and Benazir Bhutto was asked to form a government. However, because it did not acquire a majority in the National Assembly, the PPP's control of the government depended upon the continued support of numerous independent parties, particularly the PML/J. The unfavorable circumstances surrounding PPP rule -- the imperative of preserving a coalition government, the formidable opposition of Nawaz Sharif's PML/N movement, and the insecure provincial administrations -- presented significant difficulties for the government of Prime Minister Bhutto. However, the election of Prime Minister Bhutto's close associate, Farooq Leghari, as President in November 1993 gave her a stronger power base.
In November 1996, President Leghari dismissed the Bhutto government, charging it with corruption, mismanagement of the economy, and implication in extra-judicial killings in Karachi. Elections in February 1997 resulted in an overwhelming victory for the PML/Nawaz, and President Leghari called upon Nawaz Sharif to form a government. In March 1997, with the unanimous support of the National Assembly, Sharif amended the constitution, stripping the President of the power to dismiss the government and making his power to appoint military service chiefs and provincial governors contingent on the "advice" of the Prime Minister. Another amendment prohibited elected members from "floor crossing" or voting against party lines. The Sharif government engaged in a protracted dispute with the judiciary, culminating in the storming of the Supreme Court by ruling party loyalists and the engineered dismissal of the Chief Justice and the resignation of President Leghari in December 1997. The new President elected by Parliament, Rafiq Tarar, was a close associate of the Prime Minister. A one-sided accountability campaign was used to target opposition politicians and critics of the regime. Similarly, the government moved to restrict press criticism and ordered the arrest and beating of prominent journalists. As domestic criticism of Sharif's administration intensified, Sharif attempted to replace Chief of Army Staff General Pervez Musharraf on October 12, 1999, with a family loyalist, Director General ISI Lt. Gen. Ziauddin. Although General Musharraf was out of the country at the time, the Army moved quickly to depose Sharif.
On October 14, 1999, General Musharraf declared a state of emergency and issued the Provisional Constitutional Order (PCO), which suspended the federal and provincial parliaments, held the constitution in abeyance, and designated Musharraf as Chief Executive. While delivering an ambitious seven-point reform agenda, Musharraf has not yet provided a timeline for a return to civilian, democratic rule, although local elections are anticipated at the end of calendar year 2000. Musharraf has appointed a National Security Council, with mixed military/civilian appointees, a civilian Cabinet, and a National Reconstruction Bureau (think tank) to formulate structural reforms. A National Accountability Bureau (NAB), headed by an active duty military officer, is prosecuting those accused of willful default on bank loans and corrupt practices, whose conviction can result in disqualification from political office for twenty-one years. The NAB Ordinance has attracted criticism for holding the accused without charge and, in some instances, without access to legal counsel. While military trial courts were not established, on January 26, 2000, the government stipulated that Supreme, High, and Shari'a Court justices should swear allegiance to the Provisional Constitutional Order and the Chief Executive. Approximately 85 percent of justices acquiesced, but a handful of justices were not invited to take the oath and were forcibly retired. Political parties have not been banned, but a couple of dozen ruling party members remain detained, with Sharif and five colleagues facing charges of attempted hijacking.
Pakistan is a country located in South Asia and the Greater Middle East. It has a 1,046 kilometer coastline along the Arabian Sea in the south, and is bordered by Afghanistan and Iran in the west, India in the east and China in the far northeast.
Pakistan is the sixth most populous country in the world and is the second most populous country with a Muslim majority. Its territory was a part of the pre-partitioned British India and has a long history of settlement and civilisation including the Indus Valley Civilisation. Most of it was conquered in the 1st millennium BCE by Persians and Greeks. Later arrivals include the Arabs, Afghans, Turks, Baloch and Mongols. The territory was incorporated into the British Raj in the nineteenth century. Since its independence, the country has experienced both periods of significant military and economic growth, and periods of instability, with the secession of East Pakistan (present-day Bangladesh) (see Causes of Separation of East Pakistan). Pakistan is a declared nuclear weapons state.
The city of Islamabad lies against the backdrop of the Margalla Hills. On the basis of archaeological discoveries, archaeologists believe that a distinct culture flourished on this plateau as far back as 3,000 years ago.
The city area is divided into eight zones: administrative, diplomatic, residential, institutional, industrial, commercial, a greenbelt, and a national park that includes an Olympic village and gardens and dairy, poultry, and vegetable farms, as well as such institutions as the Atomic Research Institute and the National Health Centre. The name Islamabad (City of Islam, or City of Peace) was chosen to reflect the country's ideology.
Today Karachi bursts upon the visitor as a vast commercial and industrial centre. With a mix of ancient and modern, Muslim and British, commercial and recreational, Karachi is a diverse and interesting city. Karachi is a city that has a large variety of places to go and things to do. In every part of the city there is some club or organization. No matter where you are, you can be guaranteed a good time.
Lahore is Pakistan's most interesting city, the cultural, intellectual and artistic centre of the nation. Its faded elegance, busy streets and bazaars, and wide variety of Islamic and British architecture make it a city full of atmosphere, contrast and surprise. Being the center of cultural and literary activities it may rightly be called the cultural capital of Pakistan. The warm and receptive people of Lahore are known for their traditional hospitality.
Today Multan is a combination of old and the new Pakistan culture. There is a big hustle bustle in the Old town and comfort of a five star hotel and sound streets in the New city. The Old city has a very interesting Bazaar and many elaborately decorated Shrines of the Sufi saints. The numerous shrines within the old city offer impressive examples of workmanship and architecture. | http://www.dodekanissaweb.gr/world/pakistan.html | 13
34 | The energy sector is the biggest contributor to man-made climate change. Energy use is responsible for about three-quarters of mankind's carbon dioxide (CO2) emissions, one-fifth of our methane (CH4), and a significant quantity of our nitrous oxide (N2O). It also produces nitrogen oxides (NOx), hydrocarbons (HCs), and carbon monoxide (CO), which, though not greenhouse gases (GHGs) themselves, influence chemical cycles in the atmosphere that produce or destroy GHGs, such as tropospheric ozone.

Most GHGs are released during the burning of fossil fuels. Oil, coal, and natural gas supply the energy needed to run automobiles, heat houses, and power factories. In addition to energy, however, these fuels also produce various by-products. Carbon and hydrogen in the burning fuel combine with oxygen (O2) in the atmosphere to yield heat (which can be converted into other forms of useful energy) as well as water vapor and carbon dioxide. If the fuel burned completely, the only by-product containing carbon would be carbon dioxide. However, since combustion is often incomplete, other carbon-containing gases are also produced, including carbon monoxide, methane, and other hydrocarbons. In addition, nitrous oxide and other nitrogen oxides are produced as by-products when fuel combustion causes nitrogen from the fuel or the air to combine with oxygen from the air. Increases in tropospheric ozone are indirectly caused by fuel combustion as a result of reactions between pollutants caused by combustion and other gases in the atmosphere.

Extracting, processing, transporting, and distributing fossil fuels can also release greenhouse gases. These releases can be deliberate, as when natural gas is flared or vented from oil wells, emitting mostly methane and carbon dioxide, respectively. Releases can also result from accidents, poor maintenance, or small leaks in well heads and pipe fittings. Methane, which appears naturally in coal seams as pockets of gas or "dissolved" in the coal itself, is released when coal is mined or pulverized. Methane, hydrocarbons, and nitrogen oxides are emitted when oil and natural gas are refined into end products and when coal is processed (which involves crushing and washing) to remove ash, sulfur, and other impurities. Methane and smaller quantities of carbon dioxide and hydrocarbons are released from leaks in natural gas pipelines. Hydrocarbons are also released during the transport and distribution of liquid fuels in the form of oil spills from tanker ships, small losses during the routine fueling of motor vehicles, and so on.

Some fuels produce more carbon dioxide per unit of energy than do others. The amount of carbon dioxide emitted per unit of energy depends on the fuel's carbon and energy content. The figures below give representative values for coal, refined oil products, natural gas, and wood. Figure A shows for each fuel the percentage by weight that is elemental carbon. Figure B shows how many gigajoules (GJ) of energy are released when a tonne of fuel is burned. Figure C indicates how many kilograms of carbon are created (in the form of carbon dioxide) when each fuel is burned to yield a gigajoule of energy. According to Figure C, coal emits around 1.7 times as much carbon per unit of energy when burned as does natural gas and 1.25 times as much as oil. Although it produces a large amount of carbon dioxide, burning wood (and other biomass) contributes less to climate change than does burning fossil fuel.
In Figure C, wood appears to have the highest emission coefficient. However, while the carbon contained in fossil fuels has been stored in the earth for hundreds of millions of years and is now being rapidly released over mere decades, this is not the case with plants. When plants are burned as fuel, their carbon is recycled back into the atmosphere at roughly the same rate at which it was removed, and thus makes no net contribution to the pool of carbon dioxide in the air. Of course, when biomass is removed but is not allowed to grow back - as in the case of massive deforestation - the use of biomass fuels can yield net carbon dioxide emissions.

It is difficult to make precise calculations of the energy sector's greenhouse gas emissions. Estimates of greenhouse gas emissions depend on the accuracy of the available energy statistics and on estimates of "emission factors", which attempt to describe how much of a gas is emitted per unit of fuel burned. Emission factors for carbon dioxide are well known, and the level of uncertainty in national CO2 emissions estimates is thus fairly low, probably around 10 percent. For the other gases, however, the emission factors are not so well understood, and estimates of national emissions may deviate from reality by a factor of two or more. Estimates of emissions from extracting, processing, transport, and so on are similarly uncertain.

See also Fact Sheet 240: "Reducing greenhouse gas emissions from the energy sector".

For further reading:
Grubb, M., 1989. "On Coefficients for Determining Greenhouse Gas Emissions from Fossil Fuel Production and Consumption". P. 537 in Energy Technologies for Reducing Emissions of Greenhouse Gases. Proceedings of an Experts' Seminar, Volume 1, OECD, Paris, 1989.
ORNL, 1989. Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing. Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data. G. Marland et al, Oak Ridge National Laboratory, May 1989. ORNL/CDIAC-25. This is a useful source for data.
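To make the arithmetic behind Figure C concrete, the short sketch below derives kilograms of carbon (and of CO2) per gigajoule from a fuel's carbon fraction and energy content. The fuel values are illustrative assumptions chosen only to reproduce the rough ratios quoted above (coal emitting about 1.7 times as much carbon per unit of energy as natural gas and about 1.25 times as much as oil); they are not the exact numbers behind the figures.

```python
# Illustrative sketch: deriving kg of carbon (as CO2) per GJ of energy from a
# fuel's carbon fraction (Figure A) and its energy content (Figure B).
# The fuel values below are rough assumptions for illustration only.

FUELS = {
    # fuel: (carbon fraction by weight, energy content in GJ per tonne)
    "coal":        (0.70, 29.0),
    "oil product": (0.84, 42.0),
    "natural gas": (0.75, 52.0),
}

CO2_PER_CARBON = 44.0 / 12.0  # kg of CO2 formed per kg of carbon burned

def carbon_per_gj(carbon_fraction: float, gj_per_tonne: float) -> float:
    """kg of carbon emitted (as CO2) per GJ of energy released."""
    kg_carbon_per_tonne = carbon_fraction * 1000.0
    return kg_carbon_per_tonne / gj_per_tonne

for fuel, (fraction, energy_content) in FUELS.items():
    c = carbon_per_gj(fraction, energy_content)
    print(f"{fuel:12s}: {c:5.1f} kg C/GJ  ({c * CO2_PER_CARBON:5.1f} kg CO2/GJ)")
```

With these assumed values the coal-to-gas ratio comes out near 1.7 and the coal-to-oil ratio near 1.2, in line with the comparison drawn from Figure C; the 44/12 factor simply converts a mass of carbon into the corresponding mass of carbon dioxide.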
Most of the combustible fuels in common use contain carbon. Coal, oil, natural gas, and biomass fuels such as wood are all ultimately derived from the biological carbon cycle (see diagram below). The exceptions are hydrogen gas (H2), which is currently in limited use as a fuel, and exotic fuels (such as hydrazine, which contains only nitrogen and hydrogen) used for aerospace and other special purposes.

Burning these carbon-based fuels to release useful energy also yields carbon dioxide (the most important greenhouse gas) as a by-product. The carbon contained in the fuel combines with oxygen (O2) in the air to yield heat, water vapor (H2O), and CO2. This reaction is described in chemical terms as:

2CH2 + 3O2 -> heat + 2H2O + 2CO2,

where "CH2" represents about one carbon unit in the fossil fuel. Other by-products, such as methane (CH4), can also result when fuels are not completely burned.

Carbon cycles back and forth between the atmosphere and the earth (the oceans also play a critical role in the carbon cycle). Plants absorb CO2 from the air and from water and use it to create plant cells, or biomass. This reaction is powered by sunlight and is often characterized in a simplified manner as:

CO2 + (solar energy) + H2O -> O2 + CH2O,

where "CH2O" roughly represents one new carbon unit in the biomass. Plants then release carbon back into the atmosphere when they are burned in fires or as fuel or when they die and decompose naturally. The carbon absorbed by plants is also returned to the air via animals, which exhale carbon dioxide when they breathe and release it when they decompose. This biological carbon cycle has a natural balance, so that over time there is no net contribution to the "pool" of CO2 present in the atmosphere. One way of visualizing this process is to consider a hectare of sugar cane plants that is harvested to make ethanol fuel. The production and combustion of the ethanol temporarily transfers CO2 from the terrestrial carbon pool (carbon present in various forms on and under the earth's surface) to the atmospheric pool. A year or two later, as the hectare of cane grows back to maturity, the CO2 emitted earlier is recaptured in plant biomass.

Mankind's reliance on fossil fuels has upset the natural balance of the carbon cycle. Biomass sometimes becomes buried in ocean sediment, swamps, or bogs and thus escapes the usual process of decomposition. Buried for hundreds of millions of years - typically at high temperatures and intense pressures - this dead organic matter sometimes turns into coal, oil, or natural gas. The store of carbon is gradually liberated by natural processes such as rock weathering, which keeps the carbon cycle in balance. By extracting and burning these stores of fossil fuel at a rapid pace, however, humans have accelerated the release of the buried carbon. We are returning hundreds of millions of years worth of accumulated CO2 to the atmosphere within the space of a half-dozen generations. The difference between the rate at which carbon is stored in new fossil reserves and the rate at which it is released from old reserves has created an imbalance in the carbon cycle and caused carbon in the form of CO2 to accumulate in the atmosphere.

With careful management, biomass fuels can be used without contributing to net CO2 emissions. This can only occur when the rate at which trees and other biomass are harvested for fuel is balanced by the rate at which new biomass is created.
This is one reason why planting forests is often advocated as an important policy for addressing the problem of mankind's emissions of greenhouse gases. In cases where biomass is removed but does not (or is not allowed to) grow back, the use of biomass fuels is likely to yield net CO2 emissions just as the use of fossil fuels does. This occurs in instances where fuel-wood is consumed faster than forests can grow back, or where carbon in the soil is depleted by sub-optimal forestry or agricultural practices.

For further reading:
Ehrlich, P.R., A.H. Ehrlich, and J.P. Holdren, 1977. ECOSCIENCE. W.H. Freeman and Co., San Francisco.
Carbon dioxide and methane - the two most important greenhouse gases - are emitted during the extraction and distribution of fossil fuels. Fossil fuels surrender most of their carbon when burned, but GHGs are also emitted when coal is dug out of mines and when oil is pumped up from wells. Additional quantities escape into the atmosphere when fuel is transported, as in gas pipelines. Together, these activities account for about one percent of total annual man-made carbon dioxide emissions (CO2) and about one-quarter of methane emissions (CH4).

CO2 is released into the atmosphere when natural gas is "flared" from petroleum reservoirs. Natural gas and oil often occur together in deposits. Oil drillers sometimes simply flare, or burn off, the gas or release it directly into the atmosphere, particularly if the well is too far from gas pipelines or potential gas users. Global emissions of CO2 from gas flaring reached a peak during the mid-1970s and have declined since. Gas that previously was flared is now increasingly captured for use as fuel due to higher prices and demand for gas, as well as improvements in production equipment. Current (1989) global emissions of CO2 from this source are estimated at 202 million tonnes, about 0.8 percent of total man-made CO2 emissions. Most emissions from gas flaring take place in the oil-producing countries of Africa and Asia, as well as in the former USSR.

Methane is released when natural gas escapes from oil and gas wells and pipe fittings. Natural gas is typically 85 to 95 percent methane. Transporting this gas from underground reservoirs to end-users via pipes and containers leads to routine and unavoidable leaks. Accidents and poor maintenance and equipment operation cause additional leaks. Newer, well-sealed pipeline systems can have leakage rates of less than 0.1 percent, while very old and leaky systems may lose as much as 5% of the gas passing through. Few measurements have been made, but present estimates are that leaks from equipment at oil and gas wells total about 10 million tonnes of methane a year. Annual emissions from pipelines are thought to be about 10-20 million tonnes, representing some 2-5% of total man-made methane emissions.

Methane is also released when coal is mined and processed. This accounts for most of the methane emitted during fossil fuel extraction. Coal seams contain pockets of methane gas, and methane molecules also become attached through pressure and chemical attraction to the microscopic internal surfaces of the coal itself. The methane is released into the atmosphere when coal miners break open gas pockets in the coal and in coal-bearing rock. (Coal miners once used canaries as indicators of the presence of the colorless, odorless gas; if the birds died, methane concentrations in the mine were at dangerous levels.) Crushing and pulverizing the coal also breaks open tiny methane gas pockets and liberates the methane adsorbed in the coal. It can take days or even months for this adsorbed methane to escape from the mined coal.

The amount of methane released per unit of coal depends on the type of coal and how it is mined. Some coal seams contain more methane per unit of coal than do others. In general, lower quality coals, such as "brown" coal or lignite, have lower methane contents than higher quality coals such as bituminous and anthracite coal. In addition, coal that is surface-mined releases on average just 10% as much methane per unit mined as does coal removed from underground mines.
Not only is coal buried under high pressure deep in the earth able to hold more methane, but underground mining techniques allow additional methane to escape from both the coal that is not removed and from the coal-bearing rock. The table below shows methane emissions from underground and surface mining for the ten countries with the highest emissions. These ten countries produce over 90 percent of both the world's coal and coal-related methane emissions. Three countries - China, the (former) Soviet Union, and the United States - together produce two-thirds of the world's methane emissions from coal.

For further reading:
Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989. Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25. Estimates of CO2 from gas flaring were taken from this source.
United States Environmental Protection Agency (US/EPA), 1990. Methane Emissions from Coal Mining: Issues and Opportunities for Reduction, prepared by ICF Resources, Inc., for the Office of Air and Radiation of the US/EPA, Washington D.C. US/EPA Report # EPA/400/9-90/008. Data from this source were used for the table in the text.
Intergovernmental Panel on Climate Change (IPCC), 1990. Methane Emissions and Opportunities for Control. Published by the US Environmental Protection Agency, US/EPA Report # 400/9-90/007
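A national inventory of coal-mine methane is essentially coal production by mining type multiplied by an emission factor. The sketch below shows the structure of that calculation; the per-tonne factors are placeholder assumptions, and the only relationship taken from the text is that surface mining releases roughly one-tenth as much methane per tonne as underground mining.

```python
# Rough sketch of a coal-mine methane estimate. The emission factors are
# placeholder assumptions (kg CH4 per tonne of coal mined); the only
# relationship taken from the text is that surface mining releases roughly
# one-tenth as much methane per tonne as underground mining.

UNDERGROUND_KG_PER_TONNE = 10.0                        # assumed
SURFACE_KG_PER_TONNE = 0.1 * UNDERGROUND_KG_PER_TONNE  # ~10% of underground

def coal_methane_tonnes(underground_mt: float, surface_mt: float) -> float:
    """Annual CH4 emissions in tonnes, given coal production in million tonnes."""
    underground_kg = underground_mt * 1e6 * UNDERGROUND_KG_PER_TONNE
    surface_kg = surface_mt * 1e6 * SURFACE_KG_PER_TONNE
    return (underground_kg + surface_kg) / 1000.0

# Hypothetical country mining 400 Mt underground and 600 Mt at the surface:
print(f"{coal_methane_tonnes(400, 600):,.0f} tonnes of CH4 per year")
```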
4. Global energy use during the Industrial Age
Almost all of mankind's fossil-fuel emissions of carbon dioxide have occurred over the last century. It was not until the 1800s that coal, oil, and natural gas were unearthed and burned in large quantities in the newly-invented factories and machines of the Industrial Revolution. Industrialization brought about profound changes in human well-being, particularly in Europe, North America, and Japan. It also created or worsened many environmental problems, including climate change. Fossil fuel use currently accounts for about three-quarters of mankind's emissions of so-called greenhouse gases.

Coal dominated the energy scene in Europe and North America during the 19th and early 20th centuries. Coal was found in large deposits near the early industrial centres of Europe and North America. Figure A below shows the trend of global fossil-fuel carbon dioxide (CO2) emissions over the last 130 years (note the "valley" around 1935 when the Great Depression lowered energy use, and the plateau around 1980 caused by higher international oil prices).

In industrialized countries the fuel mix has now shifted towards oil, gas, and other energy sources. Although large petroleum deposits were located early in the 20th century, oil use did not expand greatly until the post-World War II economic take-off. Natural gas, in limited use since the 1800s, started to supply an increasing share of the world's energy by the 1970s (see Figure B). Among non-fossil energy sources, hydroelectric power has been exploited for about 100 years, and nuclear power was introduced in the 1950s; together they now supply about 15 percent of the global demand for internationally traded energy. Solar and wind power are used in both traditional applications (such as wind-assisted pumping) and high-tech ones (solar photovoltaics), but they satisfy only a small fraction of overall fuel needs.

The fuel mix in developing countries includes a higher percentage of biomass fuels and, in some cases, coal. Biomass fuels continue to be widely used in many countries, particularly in homes. As India, China, and other developing countries have industrialized over the past decade, coal's share of global CO2 emissions has increased somewhat, reversing the pattern of previous years. Countries such as China and Mexico have benefited from large domestic supplies of coal or oil, but most other developing countries have had to turn to imported fuels, typically oil, to power their industries.

Although CO2 emissions have generally followed an upward trend, the rate of increase has fluctuated. Changes in overall carbon dioxide emissions reflect population and economic growth rates, per-capita energy use, and changes in fuel quality and fuel mix. During the last four decades of the 1800s, fuel consumption rose six times faster than population growth as fossil fuels were substituted for traditional fuels (see Figure C). From 1900 to 1930 total fuel use expanded more slowly, but - driven by increased fuel use per person - it still grew faster than the rate of population growth. CO2 emissions rose only 1.5 times as fast as population between 1930 and 1950 due to the impact of the Great Depression and World War II on industrial production. The post-war period of 1950 to 1970 saw a rapid expansion of both population and total fuel emissions, with emissions growing more than twice as fast as population. Here again, increases in per-capita energy consumption made the difference.
Since 1970, higher fuel prices, new technologies, and a shift to natural gas (which has a lower carbon content than oil and coal) have reduced the growth in emissions relative to population. During the 1980s, in fact, growth in population exceeded growth in emissions, meaning that average emissions per capita actually declined.

Regional patterns of per-capita energy use will continue to change. Over the last 40 years, the strongest absolute growth in per-capita carbon dioxide emissions has been in the industrialized countries, while the developing countries have provided (and continue to provide) most of the world's population increase. Large increases in per-capita CO2 emissions occurred between 1950 and 1970 in Eastern Europe and the (former) USSR, North America, Japan, Australia, and Western Europe. During the 1980s, however, per-capita emissions in these regions have grown relatively little or even declined. The strongest growth in per-capita emissions since 1980 has been in Centrally-Planned Asia (principally China), south and east Asia, and the Middle East.

See also Fact Sheet 240: "Reducing greenhouse gas emissions from the energy sector"

For further reading:
Ehrlich, P.R., A.H. Ehrlich, and J.P. Holdren, 1977. ECOSCIENCE. W.H. Freeman and Co., San Francisco.
Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989. Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25. Data from this source were used for Figures A and C.
Ogawa, Yoshiki, "Economic Activity and the Greenhouse Effect", in "Energy Journal", Vol. 1, no. 1 (Jan. 1991), pp. 23-26.
United States Environmental Protection Agency (US/EPA), 1990. Policy Options For Stabilizing Global Climate, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.1, December, 1991. US/EPA Office of Planning and Evaluation, Washington D.C. Data from this source were used for Figures A and B.
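The comparisons in this fact sheet (emissions rising "six times faster" or "more than twice as fast" as population) come from comparing compound growth rates over a period. A minimal sketch of that arithmetic, using purely hypothetical start and end values:

```python
# Sketch of the growth-rate comparison used in this section: compound annual
# growth of emissions versus population over a period. The start and end
# values below are hypothetical, chosen only to show the arithmetic.

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1.0 / years) - 1.0

emissions_growth = cagr(start=3.0e9, end=6.0e9, years=20)   # tonnes of carbon
population_growth = cagr(start=3.0e9, end=4.0e9, years=20)  # people

ratio = emissions_growth / population_growth
print(f"emissions grew {ratio:.1f} times as fast as population")
```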
Cement manufacturing is the third largest cause of man-made carbon dioxide emissions. While fossil fuel combustion and deforestation each produce significantly more carbon dioxide (CO2), cement-making is responsible for approximately 2.5% of total worldwide emissions from industrial sources (energy plus manufacturing sectors).

Cement is a major industrial commodity. Manufactured commercially in at least 120 countries, it is mixed with sand and gravel to make concrete. Concrete is used in the construction of buildings, roads, and other structures, as well as in other products and applications. Its use as a residential building material is particularly important in countries where wood is not traditionally used for building or is in short supply. Annual CO2 emissions from cement production in nine major regions of the world are shown in Figure A below.

Large quantities of CO2 are emitted during the production of lime, the key ingredient in cement. Lime, or calcium oxide (CaO), is created by heating calcium carbonate (CaCO3) in large furnaces called kilns. Calcium carbonate is derived from limestone, chalk, and other calcium-rich materials. The process of heating calcium carbonate to yield lime is called calcination or calcining and is written chemically as:

CaCO3 + Heat -> CaO + CO2

Lime combines with other minerals in the hot kiln to form cement's "active ingredients". Like the CO2 emitted during the combustion of coal, oil, and gas, the carbon dioxide released during cement production is of fossil origin. The limestone and other calcium-carbonate-containing minerals used in cement production were created ages ago primarily by the burial in ocean sediments of biomass (such as sea shells, which have a high calcium carbonate content). Liberation of this store of carbon is normally very slow, but it has been accelerated many times over by the use of carbonate minerals in cement manufacturing.

The lime content of cement does not vary much. Most of the structural cement currently produced is of the "Portland" cement type, which contains 60 to 67 percent lime by weight. There are specialty cements that contain less lime, but they are typically used in small quantities. While research is underway into suitable cement mixtures that have less lime than does Portland cement, options for significantly reducing CO2 emissions from cement are currently limited.

Carbon dioxide emissions from cement production are estimated at 560 million tonnes per year. This estimate is based on the amount of cement that is produced, multiplied by an average emission factor. By assuming that the average lime content of cement is 63.5%, researchers have calculated an emission factor of 0.498 tonnes of CO2 to one tonne of cement.1

CO2 emissions from cement production have increased about eight-fold in the last 40 years. Figure B below shows the estimated global emissions from this source since the 1950s. Note that these figures do not include the CO2 emissions from fuels used in the manufacturing process. Cement production and related emissions of CO2 have risen at roughly three times the rate of population growth over the entire period, and at twice the rate of population growth since 1970. Cement-related CO2 emissions by region are shown in the right-hand figure. Emissions from the industrialized world and China dominate, but emissions from all regions are significant, reflecting the global nature of cement production.

For further reading:
Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989.
Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25. Data from this source were used in preparing Figures A and B.
Tresouthick, S.W., and A. Mishulovich, 1990. "Energy and Environment Considerations for the Cement Industry", pp. B-110 to B-123 in Energy and Environment in the 21st Century, proceedings of a conference held March 26-28, 1990 at Massachusetts Institute of Technology, Cambridge, Massachusetts.
U.S. Department of the Interior, Bureau of Mines, 1992. Cement: Annual Report 1990, authored by Wilton Johnson. United States Department of the Interior, Washington D.C. The cement production figures in Figure A were derived from this source.

Notes:
1. Marland, et al.
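The 0.498 emission factor quoted above follows directly from the calcination stoichiometry and the assumed 63.5% lime content: each tonne of CaO (molar mass about 56) implies the release of 44/56 of a tonne of CO2. A minimal sketch of that derivation; the global production figure used in the example is illustrative only:

```python
# Minimal sketch reproducing the cement emission factor quoted in the text:
# calcination releases one molecule of CO2 (molar mass ~44) for each CaO
# (molar mass ~56), and cement is assumed to be 63.5% lime by weight.

M_CO2 = 44.01          # g/mol
M_CAO = 56.08          # g/mol
LIME_FRACTION = 0.635  # average CaO content of cement, from the text

EMISSION_FACTOR = LIME_FRACTION * (M_CO2 / M_CAO)   # t CO2 per t cement
print(f"emission factor: {EMISSION_FACTOR:.3f} t CO2 / t cement")  # ~0.498

def cement_process_co2(cement_tonnes: float) -> float:
    """Process CO2 (tonnes) from calcination only; fuel emissions excluded."""
    return cement_tonnes * EMISSION_FACTOR

# Illustrative: global production of roughly 1.1 billion tonnes of cement
# implies process emissions in the neighborhood of the 560 Mt quoted above.
print(f"{cement_process_co2(1.1e9) / 1e6:,.0f} million tonnes of CO2")
```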
Chlorofluorocarbons (CFCs) and other halocarbons are extremely potent greenhouse gases. They are released in relatively small quantities, but one kilogram of the most commonly used CFCs may have a direct effect on climate thousands of times larger than that of one kilogram of carbon dioxide. In addition, over the last two decades the percentage increase in CFCs in the atmosphere has been higher than that of other greenhouse gases (GHG); by 1990 concentrations of the different varieties of CFC were increasing by 4-12 percent per year. But because CFCs also destroy ozone -- itself a greenhouse gas -- their net effect on the climate is unclear. The strength of this "indirect" effect of ozone depletion depends on variables such as the temperature of the upper atmosphere and cannot yet be measured with any confidence. According to new research, however, it is possible that the indirect effect of CFCs cancels out some or all of the direct effect of their being powerful GHGs.

CFCs are a family of man-made gases used for various industrial purposes. First developed in the 1920s in the United States, CFCs have been used in large quantities only since about 1950. The industrialized countries still account for well over 80 percent of CFC use, although newly-industrializing and developing countries are rapidly increasing their consumption levels. CFC-11 is used principally as a propellant in aerosol cans, although this use has been phased out in many countries, and in the manufacture of plastic foams for cushions and other products. CFC-12 is also used for foam manufacturing as well as in the cooling coils of refrigerators and air conditioners. HCFC-22 was recently introduced as a replacement for CFC-12 because it has a shorter lifetime in the atmosphere and is thus a much less powerful ozone-depleting agent. Halons (or bromofluorocarbons) are used as fire extinguishing materials. CFC-113, methyl chloroform, and carbon tetrachloride are used as solvents for cleaning (carbon tetrachloride is also a feed stock for the production of CFC-11 and CFC-12). There are other types of halocarbons, but they are used in small quantities.

CFCs are generally colorless, odorless, and non-toxic. They also do not react chemically with other materials, and as a result they remain in the atmosphere for a long time -- often 50 to 100 years -- before they are destroyed by reactions catalyzed by sunlight. CFCs are composed of carbon, chlorine, and fluorine. Together with other manufactured gases that contain either fluorine or chlorine, and with the bromine-containing Halons, CFCs are referred to collectively as halogenated compounds, or halocarbons.

There is often a significant lag time between the production of CFCs and their escape into the atmosphere. Some CFCs, such as those used in spray cans or as solvents for washing electronic parts, are emitted within just a few months or years of being produced. Others, such as those contained in durable equipment such as air conditioners, refrigerators, and fire extinguishers, may not be released for decades. Consequently, the annual use figures in the table do not, for many compounds, reflect annual emissions. So even if the manufacture of CFCs were to stop today, it would take many years for emissions to fall to zero, unless stringent measures were adopted for the recycling or capture of CFCs in old equipment.

Although they are important greenhouse gases, CFCs are better known for their role in damaging the earth's ozone layer.
CFCs first came to public attention in the mid-1980s after an "ozone hole" was discovered over the Antarctic. Scientists now know that a complex series of chemical reactions involving CFCs occurs during the Arctic and Antarctic springtimes and leads to the depletion of ozone (O3). Stratospheric ozone forms a shield that prevents most of the sun's ultraviolet (UV) light from reaching the earth's surface and causing skin cancer and other cell damage. In response to the weakening of this shield, most of the world's CFC users adopted the "Montreal Protocol" in 1987. This treaty commits signatory nations to phase out their use of CFCs and some other halocarbons by the year 2000. In November 1992, growing fears of ozone depletion led to the Copenhagen agreement, which commits governments to a total phase-out of the most destructive CFCs by the year 1996. This will help to protect the ozone layer and reduce the role of CFCs in climate change -- although the benefits of these agreements will not be felt for several years due to the long life-span of CFCs.

Alternatives are being developed to replace CFCs. Some of these substitutes are halocarbons, such as the compound HCFC-22, which can replace CFC-12 in refrigeration and air conditioning systems. These substitute halocarbons are also greenhouse gases, but because they are shorter-lived than the CFCs used now, they will have a more limited long-term effect on the climate. Other substitutes that are less harmful than HCFC-22 have been developed and tested and are now being rapidly introduced for various applications. Additional solutions involve changes in industrial processes to avoid the need for halocarbons entirely. For example, water-based cleaners are increasingly being substituted for CFCs in the electronics industry, and non-pressurized, or "pump," spray bottles are being sold instead of CFC-driven spray cans.

For further reading:
IPCC, "Scientific Assessment of Climate Change", Cambridge University Press, 1990.
IPCC, "The Supplementary Report to the IPCC Scientific Assessment", Cambridge University Press, 1992.
US Environmental Protection Agency, 1990. "Policy Options for Stabilizing Global Climate", eds. D.A. Lashof and D. Tirpak. Report no. 21P-2003.1, December 1991. EPA Office of Planning and Evaluation, Washington, DC.
WMO/UNEP/NASA, "Scientific Assessment of Ozone Depletion", 1991.
About one-quarter of the methane emissions caused by human activities comes from domesticated animals. The second-most important greenhouse gas after carbon dioxide, methane (CH4) is released by cattle, dairy cows, buffalo, goats, sheep, camels, pigs, and horses. It is also emitted by the wastes of these and other animals. Total annual methane emissions from domesticated animals are thought to be about 100 million tonnes.

Animals produce methane through "enteric fermentation". In this process plant matter is converted by bacteria and other microbes in the animal’s digestive tract into nutrients such as sugars and organic acids. These nutrients are used by the animal for energy and growth. A number of by-products, including methane, are also produced, but they are not used by the animal; some are released as gas into the atmosphere. (Although carbon dioxide (CO2) is produced in quantities similar to methane, it is derived from sustainably produced plant matter and thus makes no net addition to the atmosphere.) The carbon in the plant matter is converted into methane through this general, overall reaction:

    Organic Plant Matter + H2O --(microbial metabolism)--> CO2 + CH4 + (nutrients and other products)

The amount of methane that an individual animal produces depends on many factors. The key variables are the species, the animal’s age and weight, its health and living conditions, and the type of feed it eats. Ruminant animals - such as cows, sheep, buffalo, and goats - have the highest methane emissions per unit of energy in their feed, but emissions from some non-ruminant animals, such as horses and pigs, are also significant. National differences in animal-farming are particularly important. Dairy cows in developing nations, for example, produce about 35 kg of methane per head per year, while those in industrialized nations, where cows are typically fed a richer diet and are physically confined, produce about 2.5 times as much per head. (A simple version of this per-head estimate is sketched after the reading list below.)

There is a strong link between human diet and methane emissions from livestock. Nations where beef forms a large part of the diet, for example, tend to have large herds of cattle. As beef consumption rises or falls, the number of livestock will, in general, also rise or fall, as will the related methane emissions. Similarly, the consumption of dairy goods, pork, mutton, and other meats, as well as non-food items such as wool and draft labor (by oxen, camels, and horses), also influences the size of herds and methane emissions.

Recent estimates break methane emissions down by type of animal and by region. Due to their large numbers, cattle and dairy cows produce the bulk of total emissions. In addition, certain regions - both developing and industrialized - produce significant percentages of the global total. Emissions in South and East Asia are high principally because of large human populations; emissions per capita are slightly lower than the world average. Latin America has the highest regional emissions per capita, due primarily to large cattle populations in the beef-exporting countries (notably Brazil and Argentina). Centrally-planned Asia (mainly China) has by far the lowest per-capita emissions due to a diet low in meat and dairy products. See also Fact Sheet 271: "Reducing methane emissions from livestock farming".

For further reading:
Intergovernmental Panel on Climate Change (IPCC), 1990. Greenhouse Gas Emissions from Agricultural Systems.
Proceedings of a workshop on greenhouse gas emissions from agricultural systems held in Washington, D.C., December 12-14, 1989. Published as US/EPA report 20P-2005, September 1990 (2 volumes).
United States Environmental Protection Agency (US/EPA), 1990. Policy Options for Stabilizing Global Climate, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.1, December 1991. US/EPA Office of Planning and Evaluation, Washington, D.C. Data from computer files used for this report were used to create the tables.
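The fact sheet above describes livestock emissions as the product of herd sizes and per-head emission rates that vary by species and region. A minimal sketch of that inventory-style calculation: the herd sizes and most factors below are illustrative assumptions; only the dairy-cow figures (35 kg per head per year in developing nations, roughly 2.5 times that in industrialized nations) come from the text.

```python
# total emissions = sum over animal types of (population x per-head emission factor)

EMISSION_FACTORS_KG_PER_HEAD = {            # kg CH4 per head per year
    ("dairy_cow", "developing"): 35.0,      # quoted in the fact sheet
    ("dairy_cow", "industrialized"): 35.0 * 2.5,
    ("non_dairy_cattle", "developing"): 45.0,       # placeholder value
    ("non_dairy_cattle", "industrialized"): 55.0,   # placeholder value
}

def livestock_methane_tonnes(herds: dict) -> float:
    """herds: dict mapping (animal, region) -> number of head."""
    total_kg = sum(count * EMISSION_FACTORS_KG_PER_HEAD[key]
                   for key, count in herds.items())
    return total_kg / 1000.0  # convert kg to tonnes

# Example: a hypothetical region with 10 million dairy cows and 40 million other cattle.
example = {
    ("dairy_cow", "developing"): 10_000_000,
    ("non_dairy_cattle", "developing"): 40_000_000,
}
print(f"{livestock_methane_tonnes(example):,.0f} tonnes CH4/year")
```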
About one-quarter of the total methane emissions caused by human activities comes from domesticated animals. The animals that emit this methane (CH4) include cattle, dairy cows, buffalo, goats, sheep, camels, pigs, and horses. Most livestock-related methane is produced by "enteric fermentation" of food in the animals’ digestive tracts. About one-quarter to one-third of it, however, or a total of 25 million tonnes per year, is released later from decomposing manure. Decomposition occurs as organic wastes in moist, oxygen-free (anaerobic) environments are broken down by bacteria and other microbes into methane, carbon dioxide, and trace amounts of small organic molecules, nitrogen compounds, and other products.

The amount of methane released from animal manure in a particular region depends on many variables. The key variables are the number and types of animals present, the amount of manure produced by each animal, the amount of moisture and fiber in the animals’ wastes, the waste management system used, and the local climate. Eastern Europe, Western Europe, and North America have the largest emissions, primarily due to their use of liquid waste storage systems and anaerobic lagoons for treating cattle and swine wastes. Compared regionally, both as a share of global emissions and on a per-person basis, emissions per person in the industrialized regions are two to eight times those in developing regions.

Larger animals, not surprisingly, produce more manure per individual. Most domestic animals produce between 7 and 11 kilograms of "volatile solids" (VS) per tonne of animal per day. VS is the amount of organic matter present in the manure after it has dried. Each type of animal waste has its characteristic content of degradable organic matter (material that can be readily decomposed), moisture, nitrogen, and other compounds. As a consequence, the maximum methane-producing potential of the different manures varies both across species and, in instances where feeding practices vary, within a single species. Dairy and non-dairy cattle account for the largest part of global methane emissions from livestock manures. After cattle, swine wastes make the second largest contribution.

Waste disposal methods help to determine how much methane is emitted. If manure is left to decompose on dry soil, as typically happens with free-roaming animals in developing countries, it will be exposed to oxygen in the atmosphere and probably decompose aerobically. Relatively little methane will be produced, perhaps just 5-10 percent of the maximum possible. (This is particularly true in dry climates, where the manure dries out before extensive methane production can take place; very low temperatures also inhibit fermentation and methane production.) However, when animal wastes are collected and dumped into artificial or natural lagoons or ponds - a common practice in developed countries - they lose contact with the air because, as wastes decompose in these small bodies of water, the oxygen is quickly depleted. As a result, most of the waste is likely to decompose anaerobically, producing a substantial amount of methane - as much as 90 percent of the theoretical maximum. Other waste disposal practices fall in between these two extremes. (A simple version of this estimate is sketched after the reading list below.) See also Fact Sheet 271: "Reducing methane emissions from livestock farming".

For further reading:
L.M. Safley et al., 1992. Global Methane Emissions from Livestock and Poultry.
United States Environmental Protection Agency (US/EPA) Report # EPA/400/1-91/048, February 1992. US/EPA, Washington, D.C.
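The manure discussion above implies a simple estimate: the volatile solids produced, times a maximum methane potential, times the fraction of that potential realized under a given disposal practice (roughly 5-10 percent for dry, aerobic decomposition and up to about 90 percent for anaerobic lagoons). A minimal sketch, with an assumed maximum-potential value since the text does not give one:

```python
# CH4 = volatile solids produced x maximum CH4 potential x fraction realized.
# The VS rate (7-11 kg per tonne of animal per day) and the realized fractions
# (5-10% aerobic, ~90% anaerobic lagoon) come from the text; the maximum CH4
# potential per kg of VS is an assumed placeholder that varies by species and diet.

def manure_methane_kg_per_year(
    animal_mass_tonnes: float,       # live mass of the herd, in tonnes
    vs_kg_per_tonne_per_day: float,  # 7-11 kg VS per tonne of animal per day
    max_ch4_kg_per_kg_vs: float,     # assumed maximum potential, e.g. ~0.2 kg CH4/kg VS
    fraction_realized: float,        # 0.05-0.10 dry/aerobic, ~0.90 anaerobic lagoon
) -> float:
    vs_per_year = animal_mass_tonnes * vs_kg_per_tonne_per_day * 365
    return vs_per_year * max_ch4_kg_per_kg_vs * fraction_realized

# Same hypothetical herd, two disposal practices: dry soil vs. anaerobic lagoon.
dry = manure_methane_kg_per_year(1000, 9, 0.2, 0.08)
lagoon = manure_methane_kg_per_year(1000, 9, 0.2, 0.90)
print(f"dry soil: {dry:,.0f} kg CH4/yr, lagoon: {lagoon:,.0f} kg CH4/yr")
```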
Rice fields produce about 60 million tonnes of methane per year. This represents about 17% of total methane (CH4) emissions resulting from human activities. Virtually all of this methane comes from "wetland" rice farming. Rice can be produced either by wetland, paddy rice farming or by upland, dry rice farming. Wetland rice is grown in fields that are flooded for much of the growing season with natural flood- or tide-waters or through irrigation. Upland rice, which accounts for just 10 percent of global rice production, is not flooded, and it is not a significant source of methane.

Methane is produced when organic matter in the flooded rice paddy is decomposed by bacteria and other micro-organisms. When soil is covered by water, it becomes anaerobic, or lacking in oxygen. Under these conditions, methane-producing bacteria and other organisms decompose organic matter in or on the soil, including rice straw, the cells of dead algae and other plants that grow in the paddy, and perhaps organic fertilizers such as manure. The result of this reaction is methane, carbon dioxide (CO2 - but not in quantities significant for climate change), and other products:

    Plant Organic Matter + H2O --(microbial metabolism)--> CO2 + CH4 + (other products)

Methane is transported from the paddy soil to the atmosphere in three different ways. The primary method is through the rice plant itself, with the stem and leaves of the plant acting rather like pipelines from the soil to the air. This mode of transport probably accounts for 90-95 percent of emissions from a typical field. Methane also bubbles up directly from the soil through the water or is released into the air after first becoming dissolved in the water.

Calculating how much methane is released from a particular field or region is difficult. Important variables include the number of acres under cultivation, the number of days that the paddy is submerged under water each year, and the rate of methane emission per acre per day. The uncertainty is caused by this last variable, which is complex and poorly understood. The methane emission rate is determined by soil temperature, the type of rice grown, the soil type, the amount and type of fertilizer applied, the average depth of water in the paddy, and other site-specific variables. Measurements at a fairly limited number of paddy sites have yielded a wide range of methane production rates. As a result, estimates of global methane production from rice paddies are considered uncertain. One recent estimate gives a range of 20-150 million tonnes of methane per year.1 (A simple version of this area-based calculation is sketched after the reading list below.)

Asia produces most of the world’s rice. Since rice is the staple food throughout much of Asia, nearly 90 percent of the world’s paddy area is found there. China and India together have nearly half of the world’s rice fields and probably contribute a similar fraction of the global methane emissions from rice production.

The options for reducing methane emissions from rice cultivation are limited. Reducing the area of rice under cultivation is unlikely to happen given the already tenuous food supply in many rice-dependent countries. Other options include replacing paddy rice with upland rice, developing strains of rice plant that need less time in flooded fields, and using different techniques for applying fertilizers. Each of these options will require much more research to become widely practical.

For further reading:
Intergovernmental Panel on Climate Change (IPCC), 1990. Greenhouse Gas Emissions from Agricultural Systems.
Proceedings of a workshop on greenhouse gas emissions from agricultural systems held in Washington, D.C., December 12-14, 1989. Published as US/EPA report 20P-2005, September 1990 (2 volumes).
The Organization for Economic Co-operation and Development (OECD), 1991. Estimation of Greenhouse Gas Emissions and Sinks. Final Report from the Experts' Meeting, 18-21 February 1991. Prepared for the Intergovernmental Panel on Climate Change. OECD, Paris, 1991.
United States Environmental Protection Agency (US/EPA), 1990. Policy Options for Stabilizing Global Climate, Technical Appendices, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.3, December 1991. US/EPA Office of Planning and Evaluation, Washington, D.C.
Notes:
1 IPCC, 1992 Supplement.
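The calculation described above - cultivated area, times days flooded per year, times an emission rate per acre per day - can be sketched directly. All numbers below are illustrative assumptions, and the low/high rates stand in for the wide measured range the text emphasizes:

```python
# regional CH4 = cultivated area x days flooded per year x emission rate per acre per day

def rice_methane_tonnes(acres: float, flooded_days: float, kg_per_acre_per_day: float) -> float:
    return acres * flooded_days * kg_per_acre_per_day / 1000.0  # kg -> tonnes

acres = 5_000_000          # hypothetical regional paddy area
flooded_days = 120         # hypothetical flooding-season length
for rate in (0.5, 3.0):    # assumed low and high daily rates, kg CH4 per acre per day
    total = rice_methane_tonnes(acres, flooded_days, rate)
    print(f"rate {rate} kg/acre/day -> {total:,.0f} tonnes CH4/yr")
```

The spread between the two outputs illustrates why the per-acre emission rate, rather than area or season length, dominates the uncertainty in regional estimates.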
Climate change would strongly affect agriculture, but scientists still don’t know exactly how. Most agricultural impacts studies are based on the results of general circulation models (GCMs). These climate models indicate that rising levels of greenhouse gases are likely to increase the global average surface temperature by 1.5-4.5 C over the next 100 years, raise sea levels (thus inundating farmland and making coastal groundwater saltier), amplify extreme weather events such as storms and hot spells, shift climate zones poleward, and reduce soil moisture. Impacts studies consider how these general trends would affect agricultural production in specific regions. To date, most studies have assumed that agricultural technology and management will not improve and adapt. New studies are becoming increasingly sophisticated, however, and "adjustments experiments" now incorporate assumptions about the human response to climate change.

Increased concentrations of CO2 may boost crop productivity. In principle, higher levels of CO2 should stimulate photosynthesis in certain plants; a doubling of CO2 may increase photosynthesis rates by as much as 30-100%. Laboratory experiments confirm that when plants absorb more carbon they grow bigger and more quickly. This is particularly true for C3 plants (so called because the product of their first biochemical reactions during photosynthesis has three carbon atoms). Increased carbon dioxide tends to suppress photo-respiration in these plants, making them more water-efficient. C3 plants include such major mid-latitude food staples as wheat, rice, and soya bean. The response of C4 plants, on the other hand, would not be as dramatic (although at current CO2 levels these plants photosynthesize more efficiently than do C3 plants). C4 plants include such low-latitude crops as maize, sorghum, sugar-cane, and millet, plus many pasture and forage grasses.

Climate and agricultural zones would tend to shift towards the poles. Because average temperatures are expected to increase more near the poles than near the equator, the shift in climate zones will be more pronounced in the higher latitudes. In the mid-latitude regions (45 to 60 degrees latitude), the shift is expected to be about 200-300 kilometres for every degree Celsius of warming. Since today’s latitudinal climate belts are each optimal for particular crops, such shifts could have a powerful impact on agricultural and livestock production. Crops for which temperature is the limiting factor may experience longer growing seasons. For example, in the Canadian prairies the growing season might lengthen by 10 days for every 1 C increase in average annual temperature.

While some species would benefit from higher temperatures, others might not. A warmer climate might, for example, interfere with germination or with other key stages in their life cycle. It might also reduce soil moisture; evaporation rates increase in mid-latitudes by about 5% for each 1 C rise in average annual temperature. Another potentially limiting factor is that soil types in a new climate zone may be unable to support intensive agriculture as practised today in the main producer countries. For example, even if sub-Arctic Canada experiences climatic conditions similar to those now existing in the country’s southern grain-producing regions, its poor soil may be unable to sustain crop growth. Mid-latitude yields may be reduced by 10-30% due to increased summer dryness.
Climate models suggest that today’s leading grain-producing areas - in particular the Great Plains of the US - may experience more frequent droughts and heat waves by the year 2030. Extended periods of extreme weather conditions would destroy certain crops, negating completely the potential for greater productivity through "CO2 fertilization". During the extended drought of 1988 in the US corn belt region, for example, corn yields dropped by 40% and, for the first time since 1930, US grain consumption exceeded production.

The poleward edges of the mid-latitude agricultural zones - northern Canada, Scandinavia, Russia, and Japan in the northern hemisphere, and southern Chile and Argentina in the southern one - may benefit from the combined effects of higher temperatures and CO2 fertilization. But the problems of rugged terrain and poor soil suggest that this would not be enough to compensate for reduced yields in the more productive areas.

The impact on yields of low-latitude crops is more difficult to predict. While scientists are relatively confident that climate change will lead to higher temperatures, they are less sure of how it will affect precipitation - the key constraint on low-latitude and tropical agriculture. Climate models do suggest, however, that the intertropical convergence zones may migrate poleward, bringing the monsoon rains with them. The greatest risks for low-latitude countries, then, are that reduced rainfall and soil moisture will damage crops in semi-arid regions, and that additional heat stress will damage crops and especially livestock in humid tropical regions.

The impact on net global agricultural productivity is also difficult to assess. Higher yields in some areas may compensate for decreases in others - but again they may not, particularly if today’s major food exporters suffer serious losses. In addition, it is difficult to forecast to what extent farmers and governments will be able to adopt new techniques and management approaches to compensate for the negative impacts of climate change. It is also hard to predict how relationships between crops and pests will evolve.

For further reading:
Martin Parry, "Climate Change and World Agriculture", Earthscan Publications, 1990.
Intergovernmental Panel on Climate Change, "The IPCC Scientific Assessment" and "The IPCC Impacts Assessment", WMO/IPCC, 1990. | http://www.usask.ca/agriculture/caedac/dbases/ghgprimer2b.html | 13
35 | Rationing is the controlled distribution of scarce resources, goods, or services. Rationing controls the size of the ration, one's allotted portion of the resources being distributed on a particular day or at a particular time.
In economics
In economics, rationing is an artificial restriction of demand. It is done to keep the price below the equilibrium (market-clearing) price determined by supply and demand in an unfettered market. Thus, rationing can be complementary to price controls. One example of rationing in the face of rising prices is the gasoline rationing imposed in various countries during the 1973 energy crisis.
A reason for setting the price lower than would clear the market may be that there is a shortage, which would drive the market price very high. High prices, especially in the case of necessities, are undesirable with regard to those who cannot afford them. Traditionalist economists argue, however, that high prices act to reduce waste of the scarce resource while also providing incentive to produce more (this approach requires assuming no horizontal inequality).
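The mechanism described above can be made concrete with a toy model: hold the price below the market-clearing level, and the gap between quantity demanded and quantity supplied is the shortage that coupons, queues, or other non-price schemes must then allocate. A minimal sketch with hypothetical linear demand and supply curves (all numbers are illustrative):

```python
def demand(price: float) -> float:
    return max(0.0, 100 - 2 * price)   # illustrative demand curve

def supply(price: float) -> float:
    return max(0.0, 4 * price - 20)    # illustrative supply curve

# Market clears where 100 - 2p = 4p - 20, i.e. p = 20 and quantity = 60.
equilibrium_price = 20.0

ceiling = 12.0                          # a controlled price held below equilibrium
shortage = demand(ceiling) - supply(ceiling)
print(f"At the ceiling price {ceiling}: demand={demand(ceiling)}, "
      f"supply={supply(ceiling)}, shortage={shortage}")
# demand=76.0, supply=28.0, shortage=48.0 units that must be rationed by some non-price rule
```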
Rationing using coupons is only one kind of non-price rationing. For example, scarce products can be rationed using queues. This is seen, for example, at amusement parks, where one pays a price to get in and then need not pay any price to go on the rides. Similarly, in the absence of road pricing, access to roads is rationed in a first come, first served queueing process, leading to congestion.
Health care rationing
Shortages of organs for donation forced the rationing of hearts, livers, lungs and kidneys in the United States. During the 1940s, a limited supply of iron lungs for polio victims forced physicians to ration these machines. Dialysis machines for patients in kidney failure were rationed between 1962 and 1967. More recently, Tia Powell led a New York State Workgroup that set up guidelines for rationing ventilators during a flu pandemic. Among those who have argued in favor of health-care rationing are moral philosopher Peter Singer and former Oregon governor John Kitzhaber.
Credit rationing
The concept in economics and banking of credit rationing describes the situation when a bank limits the supply of loans, although it has enough funds to loan out, and the supply of loans has not yet equalled the demand of prospective borrowers. Changing the price of the loans (interest rate) does not equilibrate the demand and supply of the loans.
Military rationing
Civilian rationing
Rationing is often instituted during wartime for civilians as well. For example, each person may be given "ration coupons" allowing him or her to purchase a certain amount of a product each month. Rationing often includes food and other necessities for which there is a shortage, including materials needed for the war effort such as rubber tires, leather shoes, clothing and gasoline.
Military sieges often result in shortages of food and other essential consumables.
The rations allocated to an individual are often determined based on age, sex, race or social standing. During the Siege of Lucknow in 1857 women received three-quarters of the food ration men received, and children received only half.:71 During the Siege of Ladysmith in 1900 white adults received the same food rations as soldiers, while children received half that. Food rations for Indian people and Black people were significantly smaller.:266-272
Civilian peacetime rationing of food may also occur, especially after natural disasters, during contingencies, or after failed governmental economic policies regarding production or distribution, the latter happening especially in highly centralized planned economies. Examples include the United Kingdom for almost a decade after the end of World War II, North Korea, China during the 1970s and 1980s, Communist Romania during the 1980s, the Soviet Union in 1990-1991, and Cuba today, as well as the period of austerity in Israel.
United States
The United States did not have food rationing in World War I. Through slogans such as "Food Will Win the War", "Meatless Mondays", and "Wheatless Wednesdays", the United States Food Administration under Herbert Hoover reduced national consumption by 15%. In summer 1941 the British appealed to Americans to conserve food so that more could go to Britons fighting in World War II. The Office of Price Administration warned Americans of potential gasoline, steel, aluminum, and electricity shortages. It believed that with factories converting to military production and consuming many critical supplies, rationing would become necessary if the country entered the war. It established a rationing system after the attack on Pearl Harbor.:133 Of concern for all parts of the country was a shortage of rubber for tires since the Japanese quickly conquered the rubber-producing regions of Southeast Asia. Although synthetic rubber had been invented in the years preceding the war, it had been unable to compete with natural rubber commercially, so the USA did not have enough manufacturing capacity at the start of the war to make synthetic rubber. Throughout the war, rationing of gasoline was motivated by a desire to conserve rubber as much as by a desire to conserve gasoline.
“We discovered that the American people are basically honest and talk too much.”
—A ration board member:136
Tires were the first item to be rationed by the OPA, which ordered the temporary end of sales on 11 December 1941 while it created 7,500 unpaid, volunteer three-person tire ration boards around the country. By 5 January 1942 the boards were ready. Each received a monthly allotment of tires based on the number of local vehicle registrations, and allocated them to applicants based on OPA rules.:133 The War Production Board (WPB) ordered the temporary end of all civilian automobile sales on 1 January 1942, leaving dealers with one half million unsold cars. Ration boards grew in size as they began evaluating automobile sales in February (only certain professions, such as doctors and clergymen, qualified to purchase the remaining inventory of new automobiles), typewriters in March, and bicycles in May.:124,133-135 Automobile factories stopped manufacturing civilian models by early February 1942 and converted to producing tanks, aircraft, weapons, and other military products, with the United States government as the only customer. By June 1942 companies also stopped manufacturing metal office furniture, radios, phonographs, refrigerators, vacuum cleaners, washing machines, and sewing machines for civilians.:118,124,126-127
Civilians first received ration books—War Ration Book Number One, or the "Sugar Book"—on 4 May 1942, through more than 100,000 schoolteachers, PTA groups, and other volunteers.:137 A national speed limit of 35 miles per hour was imposed to save fuel and rubber for tires. Later that month volunteers again helped distribute gasoline cards in 17 Atlantic and Pacific Northwest states.:138 To get a classification and rationing stamps, one had to appear before a local War Price and Rationing Board which reported to the OPA (which was jokingly said to stand for "Only a Puny A-card"). Each person in a household received a ration book, including babies and small children who qualified for canned milk not available to others. To receive a gasoline ration card, a person had to certify a need for gasoline and ownership of no more than five tires. All tires in excess of five per driver were confiscated by the government, because of rubber shortages. An "A" sticker on a car was the lowest priority of gasoline rationing and entitled the car owner to 3 to 4 gallons of gasoline per week. B stickers were issued to workers in the military industry, entitling their holders to up to 8 gallons of gasoline per week. C stickers were granted to persons deemed very essential to the war effort, such as doctors. T rations were made available for truckers. Lastly, X stickers on cars entitled the holder to unlimited supplies and were the highest priority in the system. Ministers of religion, police, firemen, and civil defense workers were in this category. A scandal erupted when 200 Congressmen received these X stickers.
As of 1 March 1942, dog food could no longer be sold in tin cans, and manufacturers switched to dehydrated versions. As of 1 April 1942, anyone wishing to purchase a new toothpaste tube, then made from metal, had to turn in an empty one.:129-130 Sugar was the first consumer commodity rationed, with all sales ended on 27 April 1942 and resumed on 5 May with a ration of one half pound per person per week, half of normal consumption. Bakeries, ice cream makers, and other commercial users received rations of about 70% of normal usage. Coffee was rationed nationally on 29 November 1942 to one pound every five weeks, about half of normal consumption, in part because of German U-boat attacks on shipping from Brazil. By the end of 1942, ration coupons were used for nine other items.:138 Typewriters, gasoline, bicycles, footwear, silk, nylon, fuel oil, stoves, meat, lard, shortening and oils, cheese, butter, margarine, processed foods (canned, bottled, and frozen), dried fruits, canned milk, firewood and coal, jams, jellies, and fruit butter were rationed by November 1943. Many retailers welcomed rationing because they were already experiencing shortages of many items—such as flashlights and batteries after Pearl Harbor—due to rumors and panic buying.:133
Many levels of rationing went into effect. Some items, such as sugar, were distributed evenly based on the number of people in a household. Other items, like gasoline or fuel oil, were rationed only to those who could justify a need. Restaurant owners and other merchants were accorded more availability, but had to collect ration stamps to restock their supplies. In exchange for used ration stamps, ration boards delivered certificates to restaurants and merchants to authorize procurement of more products.
The work of issuing ration books and exchanging used stamps for certificates was handled by some 5,500 local ration boards of mostly volunteer workers selected by local officials.
Each ration stamp had a generic drawing of an airplane, gun, tank, aircraft carrier, ear of wheat, fruit, etc. and a serial number. Some stamps also had alphabetic lettering. The kind and amount of rationed commodities were not specified on most of the stamps and were not defined until later when local newspapers published, for example, that beginning on a specified date, one airplane stamp was required (in addition to cash) to buy one pair of shoes and one stamp number 30 from ration book four was required to buy five pounds of sugar. The commodity amounts changed from time to time depending on availability. Red stamps were used to ration meat and butter, and blue stamps were used to ration processed foods.
To enable making change for ration stamps, the government issued "red point" tokens to be given in change for red stamps, and "blue point" tokens in change for blue stamps. The red and blue tokens were about the size of dimes (16 mm) and were made of thin compressed wood fiber material, because metals were in short supply.
There was a black market in stamps. To prevent this, the OPA ordered vendors not to accept stamps that they themselves did not tear out of books. Buyers, however, circumvented this by saying (sometimes accurately, since the books were not well made) that the stamps had "fallen out." In actuality, they may have acquired stamps from other family members or friends, or from the black market.
As a result of the rationing, all forms of automobile racing, including the Indianapolis 500, were banned. Sightseeing driving was also banned.
Rationing was ended in 1946.
USA Ration Book No. 3 circa 1943, front
Back of ration book
Fighter plane ration stamp
Artillery ration stamp
Tank ration stamp
Aircraft Carrier ration stamp
United Kingdom
The British Ministry of Food refined the rationing process in the early 1940s to ensure the population did not starve when food imports were severely restricted and local production limited due to the large number of men fighting the war. Rationing was in some respects more strict after the war than during it—two major foodstuffs that were never rationed during the war, bread and potatoes, went on ration after it (bread from 1946 to 1948, and potatoes for a time from 1947). Tea was still on ration until 1952. In 1953 rationing of sugar and eggs ended, and in 1954, all rationing finally ended when cheese and meats came off ration.
Poland
Food rationing for the first time appeared in Poland after World War I, and ration stamps were in use until the end of the Polish–Soviet War. They were introduced again by the Germans in the General Government, during World War II.
In the immediate post-war period, rationing was in place until 1948. Shortages of food products were common in Poland at that time, but food rations also served another purpose. Cards were unevenly distributed by the Communist authorities - leading udarniks, known in Poland as przodownicy pracy, were entitled to as much as 3700 calories daily, while some white-collar workers received as little as 600 calories a day. Rationing covered more than food products. In different periods of Communist Poland, especially in early 1980s, stamps were introduced for shoes, cigarettes, sugar, sweets, liquor, soap, even baby diapers, tires and cars.
On August 30, 1951, Trybuna Ludu announced that “temporary monthly stamps are introduced for meat and fat products in selected enterprises”. At the same time, Communist propagandists tried to assure the nation that the announcement did not mean rationing. One year later, on April 24, 1952, “temporary” stamps were introduced for sugar and soap. The program was abandoned at the beginning of 1953.
Food stamps returned on July 25, 1976, when the so-called “goods tickets” (bilety towarowe) were introduced. They entitled every citizen to buy two kilograms of sugar a month. In 1977 Poland was struck by a deep crisis, which resulted in the creation of Solidarity in 1980. Food shortages were so common that citizens themselves demanded rationing. On February 28, 1981, the Polish Press Agency announced the introduction of meat stamps, due on April 1. One month later, stamps were introduced for cereals, flour and rice. Society was divided into eight groups, and several kinds of cards were introduced. A good example is the monthly rationing of butter, valid from May 1, 1981:
- owners of M-1 cards (laborers and children under three) were entitled to 0.5 kilograms of butter a month,
- owners of M-2 cards (pregnant women and children between the ages of 4 and 18) were entitled to 0.75 kg of butter,
- owners of M-R cards (farmers) were entitled to 0.25 kg of butter.
On August 1, 1981, stamps were introduced for cleaning products (300 grams of washing powder a month), chocolates (100 grams a month) and liquor. After the introduction of martial law (December 13, 1981), monthly allotments for stamps were reduced - for example, instead of two kilograms of sugar, only one kilogram per month per citizen was allowed. Basic meat allotment was reduced from 3.5 kilograms to 2.5 kilograms monthly, except for coal miners, who were entitled to up to 7 kilograms. Further products were covered by rationing - whole milk, lard or margarine (250 grams monthly), flour (one kilogram monthly), cigarettes (10 packages monthly), and liquor (one half-liter bottle of vodka a month or one bottle of imported wine; in some cases, such as weddings, citizens were allowed by local authorities to buy more alcohol).
On February 1, 1982, gasoline rationing was introduced. Depending on the size of a car, 24 to 45 liters of gasoline were available monthly. On July 1, 1982, a wide range of products was rationed, from sweets to school exercise books. On June 1, 1983, rationing of butter, margarine and lard was cancelled, only to be reintroduced in November 1982. Finally, on July 1, 1989, rationing of meat was officially cancelled.
Another form of rationing employed during World War II was the ration stamp, a redeemable stamp or coupon. Every family was issued a set number of each kind of stamp based on the size of the family, the ages of children, and income. This allowed the Allies, and mainly America, to supply huge amounts of food to the troops, and later provided a surplus that aided the rebuilding of Europe, including aid to Germany after its food supplies were destroyed.
Ration stamp for a German person on holiday/vacation during World War II (5-day-stamp)
Israel
From 1949 to 1959, Israel was under a regime of austerity, during which a state of rationing was enforced. At first, only staple foods such as oil, sugar, and margarine were rationed, but it was later expanded, and eventually included furniture and footwear. Every month, each citizen would get food coupons worth 6 Israeli pounds, and every family would be allotted a given amount of food. The average Israeli diet was 2,800 calories a day, with additional calories for children, the elderly, and pregnant women.
Following the 1952 reparations agreement with West Germany, and the subsequent influx of foreign capital, Israel's struggling economy was bolstered, and in 1953, most restrictions were cancelled. In 1958, the list of rationed goods was narrowed to just eleven, and in 1959, it was narrowed to only jam, sugar, and coffee.
During the period of rationing, a black market emerged, where goods smuggled from the countryside were sold at prices higher than their original worth. Government attempts to stop the black market failed.
Emergency rationing
Rationing of food and water may become necessary during an emergency, such as a natural disaster or terror attack. The Federal Emergency Management Agency (FEMA) has established guidelines for civilians on rationing food and water supplies when replacements are not available. According to FEMA standards, every person should have a minimum of one quart per day of water, and more for children, nursing mothers, and the ill.
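As a rough planning aid, the one-quart-per-person-per-day guideline quoted above can be turned into a household supply estimate. The 50 percent extra allowance for higher-need individuals and the example household are assumptions for illustration only, not part of the FEMA guidance:

```python
QUARTS_PER_PERSON_PER_DAY = 1.0   # FEMA minimum quoted in the text
EXTRA_FACTOR = 1.5                # assumed allowance for children, nursing mothers, the ill

def water_needed_gallons(days: int, standard_people: int, higher_need_people: int) -> float:
    quarts = days * (standard_people * QUARTS_PER_PERSON_PER_DAY
                     + higher_need_people * QUARTS_PER_PERSON_PER_DAY * EXTRA_FACTOR)
    return quarts / 4.0  # 4 quarts per gallon

# Example: two adults and one small child, planning a two-week supply.
print(f"{water_needed_gallons(days=14, standard_people=2, higher_need_people=1):.1f} gallons")
```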
Carbon rationing
Personal carbon trading refers to proposed emissions trading schemes under which emissions credits are allocated to adult individuals on a (broadly) equal per capita basis, within national carbon budgets. Individuals then surrender these credits when buying fuel or electricity. Individuals wanting or needing to emit at a level above that permitted by their initial allocation would be able to engage in emissions trading and purchase additional credits. Conversely, those individuals who emit at a level below that permitted by their initial allocation have the opportunity to sell their surplus credits. Thus, individual trading under Personal Carbon Trading is similar to the trading of companies under EU ETS.
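A minimal sketch of how such a scheme could be represented in code: equal per-capita allocations, credits surrendered when buying fuel or electricity, and surplus credits traded between individuals. All names, numbers, and units here are illustrative assumptions, not a description of any actual trading platform:

```python
class CarbonAccount:
    def __init__(self, holder: str, allocation: float):
        self.holder = holder
        self.credits = allocation  # kg CO2-equivalent the holder may still emit

    def surrender(self, kg_co2: float):
        """Surrender credits at the point of buying fuel or electricity."""
        if kg_co2 > self.credits:
            raise ValueError(f"{self.holder} must buy credits before emitting {kg_co2} kg")
        self.credits -= kg_co2

    def sell_to(self, buyer: "CarbonAccount", kg_co2: float):
        """Trade surplus credits to someone emitting above their allocation."""
        if kg_co2 > self.credits:
            raise ValueError("cannot sell more credits than currently held")
        self.credits -= kg_co2
        buyer.credits += kg_co2

# Equal per-capita allocation within a hypothetical national carbon budget.
alice = CarbonAccount("Alice", allocation=2000.0)
bob = CarbonAccount("Bob", allocation=2000.0)

alice.surrender(1800.0)        # a high-emissions year: little allocation left
bob.surrender(900.0)           # a low-emissions year: surplus to sell
bob.sell_to(alice, 500.0)      # Bob sells surplus; Alice can emit more
print(alice.credits, bob.credits)   # 700.0 600.0
```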
Personal carbon trading is sometimes confused with carbon offsetting due to the similar notion of paying for emissions allowances, but is a quite different concept designed to be mandatory and to guarantee that nations achieve their domestic carbon emissions targets (rather than attempting to do so via international trading or offsetting).
See also
- 10-in-1 food parcel
- 2007 Gas Rationing Plan in Iran
- Combat Ration One Man
- Juntas de Abastecimientos y Precios, rationing in Chile under Allende
- Rationing in the United Kingdom
- Road space rationing (Vehicle travel restriction based on license plate number)
- Salt lists
- Siege of Leningrad
- United States military ration
- Carbon rationing
- Allocation of Ventilators in an Influenza Pandemic, Report of New York State Task Force on Life and the Law, 2007.
- Matt Gouras. "Frist Defends Flu Shots for Congress." Associated Press. October 21, 2004.
- Stiglitz, J. & Weiss, A. (1981). Credit Rationing in Markets with Imperfect Information, American Economic Review, vol. 71, pages 393-410.
- Cornelia Dean, Guidelines for Epidemics: Who Gets a Ventilator?, The New York Times, March 25, 2008
- Why We Must Ration Health Care, The New York Times, July 15, 2009
- Inglis, Julia Selina (1892). The siege of Lucknow: a diary. London: James R. Osgood, McIlvaine & Co.
- Nevinson, Henry Wood (1900). Ladysmith: The Diary of a Siege. New Amsterdam Book Co.
- "U.S. Food Administrator". Biographical Sketch of Herbert Hoover, 1874-1964. Herbert Hoover Presidential Library and Museum, National National Archives and Records Administration. 2001-06-20. Retrieved September 28, 2012.
- ""Creamless Days?" / The Pinch". Life. 1941-06-09. p. 38. Retrieved December 5, 2012.
- Kennett, Lee (1985). For the duration...: the United States goes to war, Pearl Harbor-1942. New York: Scribner. ISBN 0-684-18239-4.
- World War II on the Home Front
- "U.S. Auto Plants are Cleared for War". Life. p. 19. Retrieved November 16, 2011.
- "Sugar: U. S. consumers register for first ration books". Life. 1942-05-11. p. 19. Retrieved November 17, 2011.
- fuel ration stickers
- Maddox, Robert James. The United States and World War II. Page 193
- "Coffee Rationing". Life. 1942-11-30. p. 64. Retrieved November 23, 2011.
- rationed items
- Joseph A. Lowande, U.S. Ration Currency & Tokens 1942-1945.
- Are You Ready?: An In-depth Guide to Citizen Preparedness - FEMA
- short descriptions of World War I rationing - Spartacus Educational
- a short description of World War II rationing - Memories of the 1940s
- Ration Coupons on the Home Front, 1942-1945 - Duke University Libraries Digital Collections
- World War II Rationing on the U.S. homefront, illustrated - Ames Historical Society
- Links to 1940s newspaper clippings on rationing, primarily World War II War Ration Books - Genealogy Today
- Tax Rationing
- Recipe for Victory:Food and Cooking in Wartime
- war time rationing in UK | http://en.wikipedia.org/wiki/Rationing | 13 |
15 | FIFTH GRADE UNITED STATES HISTORY, CANADA, MEXICO, AND CENTRAL AMERICA
The fifth grade study extends the focus to geographic regions of the United States, Canada, Mexico, and Central America. Students learn about the people of these nations and the physical environments in which they live. As they examine social, economic, and political institutions, students analyze similarities and differences among societies. Concepts for this study are drawn from history and the social sciences, but the primary discipline is cultural geography. Given the swiftness of change and our global information systems, students' examinations of these concepts must require continuous reference to current events and trends.
Strands: Individual Development and Identity, Cultures and Diversity, Historical Perspectives, Geographic Relationships, Economics and Development, Global Connections, Technological Influences, Government and Active Citizenship
Competency Goal 1: The learner will apply key geographic concepts to the United States and other countries of North America.
1.01 Describe the absolute and relative location of major landforms, bodies of water, and natural resources in the United States and other countries of North America.
1.02 Analyze how absolute and relative location influence ways of living in the United States and other countries of North America.
1.03 Compare and contrast the physical and cultural characteristics of regions within the United States, and other countries of North America.
1.04 Describe the economic and social differences between developed and developing regions in North America.
1.05 Explain how and why population distribution differs within and between countries of North America.
1.06 Explain how people of the United States and other countries of North America adapt to, modify, and use their physical environment.
1.07 Analyze the past movement of people, goods, and ideas within and among the United States, Canada, Mexico, and Central America and compare it to movement today.
Competency Goal 2: The learner will analyze political and social institutions in North America and examine how these institutions respond to human needs, structure society, and influence behavior.
2.01 Analyze major documents that formed the foundations of the American idea of constitutional government.
2.02 Describe the similarities and differences among the local, state, and national levels of government in the United States and explain their legislative, executive, and judicial functions.
2.03 Recognize how the United States government has changed over time.
2.04 Compare and contrast the government of the United States with the governments of Canada, Mexico, and selected countries of Central America.
2.05 Assess the role of political parties in society.
2.06 Explain the role of public education in the United States.
2.07 Compare and contrast the educational structure of the United States to those of Canada, Mexico, and selected countries of Central America.
2.08 Describe the different types of families and compare and contrast the role the family plays in the societal structures of the United States, Canada, Mexico, and selected countries of Central America.
Competency Goal 3: The learner will examine the roles various ethnic groups have played in the development of the United States and its neighboring countries.
3.01 Locate and describe people of diverse ethnic and religious cultures, past and present, in the United States.
3.02 Examine how changes in the movement of people, goods, and ideas have affected ways of living in the United States.
3.03 Identify examples of cultural interaction within and among the regions of the United States.
3.04 Hypothesize how the differences and similarities among people have produced diverse American cultures.
3.05 Describe the religious and ethnic impact of settlement on different regions of the United States.
3.06 Compare and contrast the roles various religious and ethnic groups have played in the development of the United States with those of Canada, Mexico, and selected countries of Central America.
3.07 Describe art, music, and craft forms in the United States and compare them to various art forms in Canada, Mexico, and selected countries of Central America.
Competency Goal 4: The learner will trace key developments in United States history and describe their impact on the land and people of the nation and its neighboring countries.
4.01 Define the role of an historian and explain the importance of studying history.
4.02 Explain when, where, why, and how groups of people settled in different regions of the United States.
4.03 Describe the contributions of people of diverse cultures throughout the history of the United States.
4.04 Describe the causes and effects of the American Revolution, and analyze their influence on the adoption of the Articles of Confederation, Constitution, and the Bill of Rights.
4.05 Describe the impact of wars and conflicts on United States citizens, including but not limited to, the Civil War, World War I, World War II, the Korean War, the Vietnam War, Persian Gulf War, and the twenty-first century war on terrorism.
4.06 Evaluate the effectiveness of civil rights and social movements throughout United States' history that reflect the struggle for equality and constitutional rights for all citizens.
4.07 Compare and contrast changes in rural and urban settlement patterns in the United States, Canada, Mexico, and selected countries of Central America.
4.08 Trace the development of the United States as a world leader and analyze the impact of its relationships with Canada, Mexico, and selected countries of Central America.
Competency Goal 5: The learner will evaluate ways the United States and other countries of North America make decisions about the allocation and use of economic resources.
5.01 Categorize economic resources found in the United States and neighboring countries as human, natural, or capital and assess their long-term availability.
5.02 Analyze the economic effects of the unequal distribution of natural resources on the United States and its neighbors.
5.03 Assess economic institutions in terms of how well they enable people to meet their needs.
5.04 Describe the ways in which the economies of the United States and its neighbors are interdependent and assess the impact of increasing international economic interdependence.
5.05 Evaluate the influence of discoveries, inventions, and innovations on economic interdependence.
5.06 Examine the different economic systems such as traditional, command, and market developed in selected countries of North America and assess their effectiveness in meeting basic needs.
5.07 Describe the ways the United States and its neighbors specialize in economic activities, and relate these to increased production and consumption.
5.08 Cite examples of surplus and scarcity in the American market and explain the economic effects.
Competency Goal 6: The learner will recognize how technology has influenced change within the United States and other countries in North America.
6.01 Explore the meaning of technology as it encompasses discoveries from the first primitive tools to today's personal computer.
6.02 Relate how certain technological discoveries have changed the course of history and reflect on the broader social and environmental changes that can occur from the discovery of such technologies.
6.03 Forecast how technology can be managed to have the greatest number of people enjoy the benefits.
6.04 Determine how citizens in the United States and the other countries of North America can preserve fundamental values and beliefs in a world that is rapidly becoming more technologically oriented.
6.05 Compare and contrast the changes that technology has brought to the United States to its impact in Canada, Mexico, and Central America.
6.06 Predict future trends in technology management that will benefit the greatest number of people. | http://www.dpi.state.nc.us/curriculum/socialstudies/scos/2003-04/033fifthgrade | 13 |
36 | While research continues to shed light on the environmental effects of shale gas development, much more remains unknown about the risks that the process known as “fracking” could pose for the Chesapeake Bay watershed.
According to a report released this week by a panel of scientific experts, additional research and monitoring—on sediment loads, on forest cover, on the best management practices that might lessen fracking’s environmental impact and more—must be done to determine how hydraulic fracturing might affect land and water resources in the region.
Hydraulic fracturing is a process that works to extract natural gas and oil from beneath the earth’s surface. During the process, a mixture of water, sand and additives is pumped at high pressure into underground rock formations—in the watershed, this formation is known as the Marcellus Shale—breaking them apart to allow the gas and oil to flow into wells for collection.
The process can impact the environment in a number of ways. According to the report, installing shale gas wells requires clearing forests and building roads, which can impact bird and fish habitat and increase the erosion of sediment into local rivers and streams. Withdrawing water from area sources—an essential part of gas extraction, unless water is brought in from off-site—can alter aquatic habitat and river flow. And the drilling process may result in the accumulation of trace metals in stream sediment.
Read more about the environmental effects of shale gas development in the watershed.
Clean air, clean water and healthy communities: the benefits of forests are vast. But as populations rise and development pressure expands, forests across the Chesapeake Bay watershed are fragmented and cut down.
In an effort to slow the loss of Chesapeake forests, the U.S. Forest Service has released a restoration strategy that outlines how officials and individuals alike can improve the environment and their communities by planting and caring for native trees.
According to the strategy, which has been endorsed by each of the watershed's seven State Foresters, expanding forest cover is critical to improving our air and water, restoring wildlife habitat, sequestering carbon and curbing home energy use.
To ensure we get the most “bang” for our tree-planting buck, the strategy targets restoration efforts toward those places in which forests would provide the greatest benefits, from wildlife corridors along streams and rivers to towns, cities and farms.
Trees along the edges of streams and rivers—called a riparian forest buffer—can keep nutrients and sediment out of our waters and nurture critters with vital habitat and food to eat. Trees in towns and cities—called an urban tree canopy—can clean and cool the air, protect drinking water and boost property values, improving the well-being of an entire neighborhood at a low cost. And trees on farms—in the form of wind breaks, forest buffers or large stands of trees—can protect crops, livestock and local wildlife while providing a farmer with a new form of sustainable income.
Other areas targeted for forest restoration include abandoned mine lands in headwater states and contaminated sites where certain tree species could remove toxic metals from the soil.
Learn more about the Chesapeake Forest Restoration Strategy.
From shopping bags and gift wrap to the train, plane and car trips that we take to visit family and friends, our carbon footprints get a little larger during the holidays. So when it comes to choosing a Christmas tree, why not do so with the environment in mind? While the "real" versus "fake" debate rages on, we have sifted through the arguments to find four tips that will make your Christmas tree "green."
1. Avoid artificial. As deforestation becomes a global concern, an artificial tree might seem like a green choice. But some researchers disagree. Most of the artificial Christmas trees sold in the United States are made in China using polyvinyl chloride or PVC, a kind of plastic whose petroleum-dependent manufacturing, processing and shipping emit large amounts of greenhouse gases. And while one study did find that reusing an artificial tree can be greener than purchasing a fresh-cut fir each December, that artificial tree would have to be used for more than two decades—and most end up in a landfill after just six to nine years.
2. Don’t be a lumberjack. While going artificial might not be the greenest choice, neither is hiking up a local mountain with an axe in hand. When a tree is removed and not replaced, its ecosystem is robbed of the multiple benefits that even a single tree can provide. Trees clean our water and air, provide habitat for wildlife and prevent soil erosion. Instead of chopping down your own Christmas tree, visit a farm where trees are grown, cut and replanted just like any other crop.
3. Choose a tree farm wisely. Millions of Christmas trees are grown on farms across the United States, emitting oxygen, diminishing carbon dioxide and carrying some of the same benefits of a natural forest. And some of these tree farms are sustainable, offering locally-grown, pesticide-free trees and wreaths. Find a tree farm near you.
4. Go “balled and burlapped.” Real Christmas trees are often turned into mulch once the season is over. But some farmers are making Christmas trees even more sustainable! Instead of cutting down a tree at its trunk, a tree’s roots are grown into a ball and wrapped in a burlap sack. Once the tree is used, it can be replanted! If your yard doesn’t have room for another evergreen, look for a company that will return for its tree after the holidays.
Sometimes, even a single tree can make a difference. And it helps when that tree is a big one.
For six seasons, Baltimore County has held a Big Trees sale in an effort to put big, native trees in Maryland backyards. Since its inception in 2009, the program has sold more than 750 trees to Maryland residents, augmenting the state’s existing forests and moving Baltimore County closer to its pollution reduction goals.
Big trees are integral to the health of the Chesapeake Bay. Forests clean polluted air and water and offer food, shelter and rest stops to a range of wildlife.
But big trees can be hard to find. To provide homeowners with the native trees that have high habitat value and the heft that is needed to trap polluted runoff, species like pin oak, sugar maple and pitch pine are grown in a Middle River, Md., reforestation nursery. The one-acre nursery, managed by Baltimore County’s Department of Environmental Protection and Sustainability (EPS), began as a staging ground for large-scale plantings but soon expanded to meet a noticeable residential need.
“We used to give incentives to homeowners to buy large trees at retail nurseries,” said Katie Beechem, Environmental Projects Worker with the EPS Forest Sustainability Program. “But we found that homeowners were buying smaller species—flowering dogwood, crape myrtle—that didn’t achieve the same benefits…that large native trees like oaks and maples and river birch can provide. We were able to fill this big tree niche.”
Emails, signs and word-of-mouth spread news of the sale to homeowners. Some travel from the next town over, while others come from as far as Gettysburg, Pa., to walk among rows of seedlings in black plastic pots.
Staff like Jon-Michael Moore, who supervises the Baltimore County Community Reforestation Program, help residents choose a tree based on growth rate and root pattern, soil drainage and sunlight, and even “urban tolerance”—a tree’s resistance to air pollution, drought, heat, soil compaction and road salt.
One Maryland resident picked up 15 trees to line a fence and replace a few that had fallen. Another purchased two trees to soak up stormwater in his one-acre space. And another chose a chestnut oak simply because she had one when she was a kid.
Out of the 12 tree species that are up for sale, oaks remain the favorite.
Whether red, black, white or pin, oaks are often celebrated as the best big tree. Oaks thrive in a range of soils, drop acorns that feed squirrels, woodpeckers and raccoons and create a home for thousands of insects.
Discussing the oak, Moore mentions University of Delaware professor Doug Tallamy. The entomologist once wrote that a single oak tree can support more than 500 species of caterpillars, which will in turn feed countless insect-loving animals.
But can one big tree make a difference for the Bay? Moore nodded: “Every little bit helps.”
The pace of restoring forested areas along creeks and streams in the Chesapeake Bay watershed continues to decline.
Called riparian forest buffers, these streamside shrubs and trees are critical to environmental restoration. Forest buffers stabilize shorelines, remove pollutants from contaminated runoff and shade streams for the brook trout and other fish species that thrive in cooler temperatures and the cleanest waters.
While more than 7,000 miles of forest buffers have been planted across the watershed since 1996, the planting rate has declined sharply. Between 2003 and 2006, Maryland, Virginia and Pennsylvania planted an average of 756 miles of forest buffer each year. But in 2011, the entire watershed planted just 240 miles—less than a third of that earlier average.
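To put that slowdown in perspective, here is a minimal Python sketch (purely illustrative, using only the figures quoted above) that compares the 2011 total against the earlier three-state average:

```python
# Riparian forest buffer planting in the Chesapeake Bay watershed (figures from the text)
average_2003_2006 = 756   # miles per year, average for MD, VA and PA
planted_2011 = 240        # miles planted across the entire watershed in 2011

share_of_average = planted_2011 / average_2003_2006
print(f"2011 planting was {share_of_average:.0%} of the earlier average")   # about 32%
print(f"a decline of roughly {1 - share_of_average:.0%}")                   # about 68%
```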
Farmers and agricultural landowners have been the watershed’s driving force behind forest buffer plantings, using the conservation practice to catch and filter nutrients and sediment washing off their land. But a rise in commodity prices has made it more profitable for some farmers to keep their stream buffers planted not with trees, but with crops. This, combined with an increase in funding available for other conservation practices, has meant fewer forest buffers planted each year.
But financial incentives and farmer outreach can keep agricultural landowners planting.
The Chesapeake Bay Foundation (CBF), for instance, has partnered with the U.S. Department of Agriculture and others to implement conservation practices on Pennsylvania farms. Working to put the state’s Conservation Reserve Enhancement Program (CREP) funds to use, CBF provides farmers across the Commonwealth with technical assistance and financial incentives to plant forest buffers, often on the marginal pastureland that is no longer grazed or the less-than-ideal hayland that is rarely cut for hay.
The CBF Buffer-Bonus Program has encouraged Amish and Mennonite farmers to couple CREP-funded forest buffers with other conservation practices, said Dave Wise, Pennsylvania Watershed Restoration Manager with CBF. The reason, according to Wise? “Financial incentives … make it attractive for farmers to enroll.”
Image courtesy Chesapeake Bay Foundation
For each acre of forest buffer planted, CBF will provide Buffer-Bonus Program participants with up to $4,000 in the form of a “best management practice voucher” to fund conservation work. This comes in addition to CREP cost-share incentives, which fund forest buffer planting, post-planting care and annual rental fees that run from $40 to $350 per acre.
While Wise has witnessed what he called a “natural decline” in a program that has been available for more than a decade, he believes cost-share incentives can keep planting rates up, acting as “the spoonful of sugar" that encourages farmers to conserve in a state with the highest forest buffer planting rates in the watershed.
“There are few counties [in the Commonwealth] where buffer enrollments continue to be strong, and almost without exception, those are counties that have the Buffer-Bonus Program,” Wise said.
In 2007, the six watershed states committed to restoring forest buffers at a rate of 900 miles per year. This rate was incorporated into the Chesapeake Bay Executive Order, which calls for 14,400 miles of forest buffer to be restored by 2025. The Chesapeake Forest Restoration Strategy, now out in draft form, outlines the importance of forests and forest buffers and the actions needed to restore them.
Farmers, foresters and an active coalition of landowners and citizens have been honored for their efforts to conserve, restore and celebrate Chesapeake forests.
From planting native trees and shrubs to engaging students in forest conservation, winners from across the watershed were named Chesapeake Forest Champions in an annual contest sponsored by the U.S. Forest Service and the Alliance for the Chesapeake Bay.
Image courtesy Piestrack Forestlands LLC
Three farmers were named Exemplary Forest Stewards: Ed Piestrack of Nanticoke, Pa., and Nelson Hoy and Elizabeth Biggs of Williamsville, Va. Ed Piestrack and his wife, Wanda, manage 885 acres of forestland and certified Tree Farm in Steuben County, N.Y. The Piestracks have controlled invasive plants and rebuilt vital habitat on their property, installing nest boxes, restoring vernal pools and planting hundreds of trees on land that will remain intact and managed when it is transferred to their children.
Image courtesy Berriedale Farms
Close to 400 miles south in the Cowpasture River Valley sits Berriedale Farms, where Nelson Hoy and Elizabeth Biggs manage land that forms a critical corridor between a wildlife refuge and a national forest. Hoy and Biggs have integrated their 50-acre Appalachian hardwood forest into their farm operation, protecting the landscape while finding a sustainable source of income in their low-impact horse-powered forest products business.
Image courtesy Zack Roeder
Forest Resource Planner Zack Roeder was named Most Effective at Engaging the Public for his work as a forester in Pennsylvania’s largely agricultural Franklin and Cumberland counties. There, Roeder helped farmers manage and implement conservation practices on their land and helped watershed groups plant streamside forest buffers. Roeder also guided a high school in starting a “grow out” tree nursery and coordinated Growing Native events in local communities, using volunteers to collect native hardwood and shrub seeds for propagation.
Image courtesy Savage River Watershed Association
The Savage River Watershed Association in Frostburg, Md., was commended for the Greatest On-the-Ground Impact. In a watershed whose streamside trees have shaded waterways and provided critical habitat to Maryland’s rare reproducing brook trout fisheries, the organization has worked to conserve area forests, removing invasive plants and putting more than 4,000 red spruce seedlings into the ground.
It’s easy to see why the Iroquois once called Pine Creek Tiadaghton, or “the river of pines.” A mix of hardwoods, including the eastern white pine and the eastern hemlock, now line its banks more than a century after the region was clear cut by Pennsylvania’s once-booming lumber industry.
Image courtesy fishhawk/Flickr
At close to 90 miles long, Pine Creek is the longest tributary to the West Branch of the Susquehanna River. But Pine Creek once flowed in the opposite direction—until a surge of glacial meltwater reversed the creek to its current southerly flow, creating the driving force behind Pine Creek Gorge. Named a National Natural Landmark by the National Park Service in 1968, the gorge is better known as the Grand Canyon of Pennsylvania.
At its deepest point, Pine Creek Gorge is 1,450 feet deep and almost one mile wide. Visitors can view the gorge (along with dramatic rock outcrops and waterfalls) from the east rim of the canyon in Leonard Harrison State Park. On the west rim of the canyon is Colton Point State Park, which features five stone and timber pavilions built in the 1930s by the Civilian Conservation Corps. And in the Tioga State Forest, approximately 165,000 acres of trees, streams and awe-inspiring views await hikers, bikers, hunters and more. Pine Creek is paralleled by the 65-mile Pine Creek Rail Trail, which a 2001 article in USA Today named one of the top ten places in the world to take a bike tour.
Image courtesy Travis Prebble/Flickr
More from Pine Creek:
Fall brings with it cooler weather and a rainbow of red, orange and yellow foliage, making it the perfect time to get outside for a hike.
From the coastal marshes of the Chesapeake Bay to the rocky hills of the Appalachian Mountains, scenic vistas and mountaintops await.
Tip: To plan your outing, find out when "peak fall foliage" occurs in your region with this map from the Weather Channel.
Here are some of our favorite sites to take in the changing colors of fall:
1. Old Rag Mountain Hike, Shenandoah National Park, Va. (7 miles)
Image courtesy David Fulmer/Flickr
Be prepared for a challenging rock scramble and a crowd of tourists, but know that it will all be worth it in the end. Some consider this hike to have the best panoramic vistas in Northern Virginia, and it remains one of the most popular hikes in the mid-Atlantic.
2. Loudoun Heights Trails, Harpers Ferry National Historic Park, W.Va. (7.5 miles)
Harpers Ferry National Historic Park is located along the C&O Canal—a hot spot for those looking to find fall foliage. But if you're tired of the canal's flat views as it runs along the Potomac River, check out the trails in Loudoun Heights. It may be an uphill battle, but you'll find yourself overlooking the Shenandoah and Potomac rivers from what seems to be the highest point around. This is certainly a good hike for a cool fall day (this blogger took to the trails in the heat of summer and was drained!). Be sure to grab ice cream in town afterwards!
3. Flat Top Hike, Peaks of Otter Trails, Bedford, Va. (3.5 miles)
Image courtesy Jim Liestman/Flickr
The Peaks of Otter are three mountain peaks that overlook the foothills of Virginia's Blue Ridge Mountains. While a hike to Sharp Top is an intriguing one with stunning views, a hike to Flat Top promises to be less crowded. Keep in mind, there are many other trails and lakes near the Peaks of Otter worth exploring!
4. Wolf Rock and Chimney Rock Loop, Catoctin Mountain Park, Thurmont, Md. (5 miles)
Image courtesy TrailVoice/Flickr
Give yourself plenty of time to take in the unique rock formations and two outstanding viewpoints found along this hardwood forest trail. If you're not up for a long hike, visit the park's more accessible viewpoints and make a stop at the nearby Cunningham Falls State Park to see a scenic waterfall just below the mountains.
5. Chesapeake & Ohio Canal Trail, Washington, D.C., to Cumberland, Md. (184 miles)
Image courtesy sandcastlematt/Flickr
This trail follows the Potomac River from Washington, D.C., to Cumberland, Md. While bikers and hikers often tackle the entire trail, the canal path can also be enjoyed as a leisurely day hike.
From Great Falls to Harpers Ferry to Green Ridge State Forest—the second largest in Maryland—a walk along this rustic trail traces our nation's transportation history with sightings of brick tunnels, lock houses and the beautiful scenery that surrounds it all.
If you plan on making a multi-day journey, watch the color of the leaves change as you move north along with peak foliage.
6. Pocomoke River State Forest, Snow Hill, Md. (1 mile)
Image courtesy D.C. Glovier/Flickr
Whether you explore the 15,500 acres of this forest from land or from water, you are sure to find breathtaking scenes of fall—in stands of loblolly pine, in bald-cypress forests and swamps and even in a five-acre remnant of old-growth forest. Take a one-mile self-guided trail or opt for an afternoon fall colors paddle in the nearby Pocomoke River State Park, sponsored by the Maryland Department of Natural Resources.
7. Waggoner's Gap Hawk Watch Hike, Cumberland County, Pa.
Image courtesy Audubon Pennsylvania
This rocky site is located along an autumn raptor migration flyway, making it popular among bird-watchers. During the fall, however, it is a must-visit for birders and non-birders alike. From the top of Kittatinny Ridge, also known as Blue Mountain, you can see South Mountain and Cumberland, Perry, York and Franklin counties. The land is cared for by Audubon Pennsylvania.
8. Pole Steeple Trail, Pine Grove Furnace State Park, Cumberland County, Pa. (0.75 mile)
Image courtesy Shawnee17241/Flickr
This trail offers a great view for a short climb. While the trail is less than one mile long, it is steep! From the top, you can see Laurel Lake in Pine Grove Furnace State Park and all 2,000 feet of South Mountain. Plan this hike around sunset to see fall colors in a different light.
Do you know an individual or group that is working hard to help our forests stay healthy? Nominate them to be a Chesapeake Forest Champion!
The Forest Champion contest was launched by the Alliance for the Chesapeake Bay and the U.S. Forest Service in 2011. Now in its second year, the contest hopes to recognize additional exemplary forest stewards in the Chesapeake Bay watershed. With 100 acres of the region's forest lost to development each day, the need for local champions of trees and forests has never been greater!
The contest is open to schools and youth organizations, community groups and nonprofits, businesses and forestry professionals. If you know a professional or volunteer who is doing outstanding work for forests, you can nominate them, too!
Awards will be given for:
Nomination forms can be found at the Forestry for the Bay website and are due August 6, 2012.
Winners will be recognized at the 2012 Chesapeake Watershed Forum in Shepherdstown, West Virginia, in late September.
For more information about Forest Champions:
When most people talk about forests, they mention hunting, or the timber market, or environmental conservation. But when Susan Benedict discusses her forest – a 200,000-acre property in Centre County, Pennsylvania – she talks about family.
“We all work together. This is a family operation,” she says as we drive to her property along a Pennsylvania State Game Lands road that winds through the Allegheny Mountains from Black Moshannon to Pennsylvania-504.
(Image courtesy Susan Benedict)
A desire to keep the mountaintop property in the hands of her children and grandchildren motivated Benedict to implement sustainable forestry practices, participate in Pennsylvania’s Forest Stewardship Program and certify the property under the American Tree Farm System. By managing her forest in an environmentally conscious way, Benedict ensures that stands of ash, red oak and beech will be around in a hundred years for her great-grandchildren to enjoy.
But Benedict’s involvement in forest conservation doesn’t mean that she’s rejecting the land’s economic and recreation potential. The property’s plethora of hardwoods allows the family to participate in the timber market. As a large and secluded mountaintop property, it has attracted wind farms seeking to turn wind into energy. Its location along the Marcellus Shale makes it a desirable location for natural gas developers. This multitude of interested parties, each with its own vision, can be overwhelming for any property owner.
Since different stakeholders preach different benefits and drawbacks of extracting these natural resources, Benedict took charge and carefully investigated the issues herself, knowing her family’s land was at stake. Her decisions balance the property’s economic potential with her desire to keep her family forest as pristine as it was when she explored it as a child.
We talk so much about the environmental benefits of trees that it’s easy to forget that they’re also a business.
(Image courtesy Susan Benedict)
“My forester assures me that your woods are like your stock portfolio,” Benedict explains. “You don’t want to cut out more annual growth than what you’re generating, and in fact, you want to shoot for (cutting) less than what you’re generating. Right now, we are good; what we are taking out, we are generating.”
Before any logging is done, a county forester walks the property and designates which trees can be removed. Then it’s time to cut. Benedict has one logger, a Vietnam veteran whose wife occasionally accompanies him. “He cuts whatever the mills are wanting,” says Benedict.
The challenge occurs when mills want something that shouldn’t be cut. “It’s a little more problematic because we have to market what we want to get rid of, instead of the lumber mills telling us what they want,” Benedict explains.
But Benedict won’t let natural resource markets sway her forest management decisions. She’s taking charge by telling lumber mills that she’ll give them what she wants to give them – no more, no less. Of course, the economic incentives of sustainable forest management make saying “no” easier.
One of these economic rewards is the Department of Agriculture’s Environmental Quality Incentives Program (EQIP), which provides financial and technical assistance to landowners seeking to “promote agricultural production and environmental quality as compatible national goals.”
Benedict’s EQIP project will enhance the growth of mast-producing trees such as hickory, oak, cherry, hazelnut and beech, whose nuts and fruits feed wildlife. “Basically, we want to get the trees to grow quicker, and re-generate better.”
Family health problems put Benedict's EQIP project on hold. Since it needed to be completed by the end of summer, Benedict’s brothers and her three sons (ages 15, 24 and 27) held mandatory family work days each weekend from the Fourth of July to the end of September.
“It’s a 200,000-acre property, which translates to a lot of work. But I think that’s good,” Benedict assures me, even though she, too, sweated through the work during the height of summer’s humidity. “When you have concentrated time like that, you actually talk to each other. If you meet for an hour meeting, no one ever gets around to saying what they want. You get down to what’s real.”
Using the forest as a mechanism to unite her family has been Benedict’s goal since she and her brothers inherited the property after her father’s death.
Benedict tells me that her three boys “have to help out, whether they want to or not.” Their involvement – even if it is forced sometimes – allows the family to connect to the property. Benedict hopes the hard work will inspire them to adopt sustainable forestry management practices when they inherit the land.
We’ve all experienced times when nature takes over and there’s nothing we can do about it – whether we’re a farmer that’s experienced a devastating drought or a commuter who’s had to pull over in a heavy rainstorm because we couldn’t see the road in front of us.
This happened to Benedict and her team six years ago, when a three-year gypsy moth infestation destroyed 80 percent of a red oak stand. The damage cost her more than one million dollars in timber profits on a 2,000-acre lot.
“Al (Benedict's logger) had worked so hard on the stand. And it’s not a fun place to work – rocky and snake-infested. We were all so proud of how it came out. And then three years worth of caterpillars, and it was destroyed.”
Biological sprays of fungi can sometimes prevent gypsy moth infestations. The caterpillars die after ingesting the fungi for a few days.
Benedict could have sprayed the fungi, but it may not have worked. It’s a big risk to take when you’re paying $25 per acre (that’s $50,000 in total). Not only do you need the money, but you must have three consecutive rain-free days in May, the only time of year you can spray.
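As a rough illustration of that trade-off (a sketch only: the acreage, per-acre cost and timber loss come from this post, but the success probability is a made-up placeholder, since no figure is given):

```python
# Gypsy moth spray decision on the 2,000-acre red oak stand (figures from the text)
acres = 2_000
cost_per_acre = 25                 # dollars per acre for the biological spray
timber_value_at_risk = 1_000_000   # dollars, roughly the loss the stand actually suffered

spray_cost = acres * cost_per_acre
print(f"Up-front spray cost: ${spray_cost:,}")          # $50,000

# Hypothetical success rate (assumed for illustration only, not from the text):
assumed_success_rate = 0.6
expected_loss_avoided = assumed_success_rate * timber_value_at_risk
print(f"Expected loss avoided: ${expected_loss_avoided:,.0f}")  # $600,000 under that assumption
```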
So when the emerald ash borer – the invasive green insect that has destroyed between 50 and 100 million ash trees in the United States – made its first appearance in Pennsylvania, Benedict began cutting down her ash trees. “We got them to market before they got killed.”
By paying attention to both environmental and market pressures, Benedict’s forest is both sustainable and profitable.
Benedict’s property is isolated. For wind-power developers, that means fewer people will complain about the loud noise and shadows that make living near wind turbines burdensome. The land is also atop a mountain, which, of course, means it experiences high winds.
“It’s very hard to decide to have that much development on your property, but honestly, it will provide a nice retirement for my brothers and me,” Benedict says. “Everyone I talk to assures me that once the construction phase is over, it doesn’t hurt the trees, it doesn’t hurt the wildlife. The wildlife could care less, which has been my observation on most things that we do. After it gets back to normal, they don’t care and they adjust.”
Environmental surveys, which are required by law before construction, affirm Benedict’s insights. A group hired to do a migratory bird study constructed a high tower atop the mountain. “They stayed up there every evening and morning in March,” Benedict says with a shiver.
Another contractor is delineating wetlands on the property: identifying and marking wetland habitat and making sure construction does not affect these areas.
Benedict and her family even had the opportunity to learn what kinds of endangered and threatened animals live on their property. “They found seven timber rattlesnake dens, and had to relocate one of the turbines because it was too close to the den,” Benedict explains. The teams also surveyed Allegheny woodrats and northern bulrushes, a critical upland wetland plant.
“I decided to [lease property to the wind farm] because the only way we are ever going to know if wind is a viable technology is if we get some turbines up, see what works, see what doesn’t work, and allow that process of invention to move. And we have to have someone to host it.”
And according to the surveys, Benedict’s property is the perfect host.
As Benedict drives her pickup around the property, she points out the site of her father's former saw mill, where she once worked, and shows me to the cabin that the family built after her grandfather died in 1976. Nearby, there's a section of forest that the family is converting to grouse habitat, which will support her brother's love of grouse hunting.
(Image courtesy Susan Benedict)
The uses of the property fluctuate as family members' interests change. Benedict affirms that managing the property sustainably will give her grandchildren the freedom to pursue their interests in the years to come.
"A lot of people go the route of having a conservation easement, but who knows what the best use of that property is going to be in 100 years. If my dad did that, we would have very little use of the property now, and certainly very little flexibility with these things, especially the wind and natural gas."
Benedict is a member of the Centre County Natural Gas Task Force. "You hear all sorts of things about natural gas development and water resources, and in order to make sure it wasn’t going to be horrible, I joined the task force," she explains.
Benedict also allows 15 or so individuals to hunt and fish on her property for a small annual fee. Control of the deer population in particular is essential for her timber operations.
But no matter what happens, Benedict insists, the forest will stay in the family.
"We made a pact that everyone will have to sell all of their belongings before we sold this," she says. "There's some things, you know, you got to make work out."
Benedict’s forest management practices and involvement in the sustainable forestry community have earned her recognition as a 2011 Forest Steward Champion by the Alliance for the Chesapeake Bay.
Four projects and individuals in Maryland, Pennsylvania and Virginia have been recognized as Chesapeake Forest Champions for their contribution to Chesapeake Bay restoration through the promotion of trees and forests.
The inaugural Chesapeake Forest Champion contest honored recipients in four categories: most innovative, most effective at engaging the public, greatest on-the-ground impact and exceptional forest steward/land owner.
The "most innovative" award went to Adam Downing and Michael LaChance of Virginia Cooperative Extension and Michael Santucci of the Virginia Department of Forestry for their Virginia Family Forestland Short Course program. The team tackled a critical land conservation challenge: intergenerational transfers of family farms and forests, and the need to educate land owners on how to protect their land. Through the land transfer plans developed in this program, more than 21,000 acres of Virginia forests are expected to remain intact, family-owned and sustainably managed.
The "most effective at engaging the public" champion was ecologist Carole Bergmann from Montgomery County, Maryland. Bergmann created the Weed Warrior program in response to a significant invasive plant problem in the county's forests. To date, approximately 600 Weed Warriors have logged more than 25,000 hours of work removing and monitoring invasive weeds.
The "greatest on-the-ground impact" award went to David Wise of the Chesapeake Bay Foundation for his leadership in restoring riparian forest buffers through the Pennsylvania Conservation Reserve Enhancement Program (CREP) partnership. Since 2000, Pennsylvania CREP has restored more than 22,000 acres of forest buffers -- more than all the other Chesapeake Bay states combined.
The "exceptional forest steward/land owner" champion was Susan Benedict of Centre County, Pennsylvania, for her work running a sustainable tree farm. Benedict has implemented many conservation projects on her family's land, such as planting habitat to encourage pollination in a forested ecosystem.
The Chesapeake Forest Champion contest was sponsored by the U.S. Forest Service and the Alliance for the Chesapeake Bay as part of the International Year of Forests. The four Chesapeake Forest Champions were honored earlier this month at the 2011 Chesapeake Watershed Forum in Shepherdstown, W.Va.
Visit the Alliance for the Chesapeake Bay's website to learn more about the Chesapeake Forest Champions.
Image: (from left to right) Sally Claggett, U.S. Forest Service; David Wise, Chesapeake Bay Foundation; Michael LaChance, Virginia Cooperative Extension; Susan Benedict, land owner, Centre County, Pa.; Carole Bergmann, Montgomery County, Md.; and Al Todd, Alliance for the Chesapeake Bay. Image courtesy Alliance for the Chesapeake Bay.
The Potomac Conservancy is looking for individuals, educators and community groups to help collect native tree seeds during the annual Growing Native season, which begins Sept. 17.
Volunteers participate in Growing Native by collecting native tree seeds across the Potomac River region. The seeds are donated to state nurseries in Maryland, Pennsylvania, Virginia and West Virginia, where they are planted and used to restore streamside forests throughout the 15,000-square-mile Potomac River watershed.
Since Growing Native’s inception in 2001, nearly 56,000 volunteers have collected more than 164,000 pounds of acorns, walnuts and other hardwood tree and shrub seeds. In addition to providing native tree stock, Growing Native builds public awareness of the important connection between healthy, forested lands and clean waters, and what individuals can do to protect them.
Visit growingnative.org to learn more about how you can get involved with Growing Native.
Image courtesy Jennifer Bradford/Flickr.
Do you know an exemplary person or group who is a champion for forests in the Chesapeake Bay region? Nominate them to be a Chesapeake Forest Champion!
To help celebrate International Year of Forests, the U.S. Forest Service and its partners are launching a new annual contest to recognize forest champions throughout the Chesapeake Bay watershed. With around 100 acres of the region's forests lost to development each day, the need for local forest champions has never been greater!
The Chesapeake Forest Champion awards recognize the outstanding efforts of groups and individuals to conserve, restore and celebrate Chesapeake forests in 2011. The contest is open to schools and youth organizations, community groups and nonprofits, businesses and forestry professionals. If you know a professional or volunteer who is doing outstanding work for forests, you can nominate them, too!
The award has three categories:
Nominations are due by Friday, September 2. Winners will be recognized at the Chesapeake Watershed Forum in Shepherdstown, W.Va., in September.
Visit the Forestry for the Bay website to learn more about the awards and submit a nomination.
There are several different kinds of habitats found in the Bay’s watershed. Each one is important to the survival of the watershed’s diverse wildlife. Habitats also play important roles in Bay restoration.
Chesapeake Bay habitats include:
Forests covered approximately 95 percent of the Bay’s 64,000-square-mile watershed when Europeans arrived in the 17th century. Now, forests only cover about 58 percent of the watershed.
Forests are important because they provide vital habitat for wildlife. Forests also filter pollution, keeping nearby waterways cleaner. Forests act as huge natural sponges that absorb and slowly release excess stormwater runoff, which often contains harmful pollutants. Forests also absorb airborne nitrogen that might otherwise pollute our land and water.
Wetlands are transitional areas between land and water. There are two general categories of wetlands in the Chesapeake Bay watershed: tidal and non-tidal. Tidal wetlands, found along the Bay's shores, are filled with salt or brackish water when the tide rises. Non-tidal wetlands contain fresh water.
Just like forests, wetlands act as important buffers, absorbing and slowing the flow of polluted runoff to the Bay and its tributaries.
Streams and rivers not only provide the Chesapeake Bay with its fresh water, they also provide many aquatic species with critical habitat. Fish, invertebrates, amphibians and other wildlife species all depend on the Bay’s tributaries for survival.
When the Bay’s streams and rivers are in poor health, so is the Bay, and the great array of wildlife it harbors is put in danger.
Shallow waters are the areas of water from the shoreline to about 10 feet deep. Shallow waters are constantly changing with the tides and weather throughout the year. The shallows support plant life, fish, birds and shellfish.
Tidal marshes in the Bay's shallows connect shorelines to forests and wetlands. Marshes provide food and shelter for the wildlife that lives in the Bay's shallow waters. Freshwater marshes are found in the upper Bay, brackish marshes in the middle Bay and salt marshes in the lower Bay.
Aquatic reefs are solid three-dimensional habitats made up of densely packed oysters. The reefs form when oyster larvae attach to larger oysters at the bottom of the Bay.
Reefs provide habitat and communities for many aquatic species in the Bay, including fish and crabs. The high concentration of oysters in aquatic reefs improves water quality by filtering algae and pollutants from the water.
Open waters lie beyond the shoreline and the shallows. There, aquatic reefs replace underwater bay grasses, which cannot grow where sunlight fails to penetrate the deeper water. Open water provides vital habitat for pelagic fish, birds and invertebrates.
Each of these habitats is vital to the survival of the Chesapeake Bay’s many different species of wildlife. It's important to protect and restore habitats to help promote the overall health of the Bay. So do your part to save the Bay by protecting habitats near you – find out how.
Do you have a question about the Chesapeake Bay? Ask us and we might choose your question for the next Question of the Week! You can also ask us a question via Twitter by sending a reply to @chesbayprogram! Be sure to follow us there for all the latest in Bay news and events.
The rain was falling heavy all through Tuesday night and things had not changed much when the alarm went off the next morning, signaling the new day. The Chesapeake Bay Forestry Workgroup had a meeting scheduled at Banshee Reeks Nature Preserve in Loudoun County, Virginia.
Hearing and seeing the rain and knowing the schedule of the day brought back memories from my past life. For years, the month of April had a pretty profound impact on my life. One of the duties as an employee working for the Virginia Department of Forestry was to plant tree seedlings with volunteer groups. The best planting months are March, April, November and December, but April was extremely busy with plantings because of Earth Day and Arbor Day. You can plant trees during other months, but for “bare root” seedlings with no soil on their roots, months with high precipitation and cooler temperatures are the best.
The Banshee Reeks Manor House sits on the top of a hill and Goose Creek winds through the rolling farmland and forest. The “Banshee” was with us that Wednesday because of the pouring rain; the misty spirit hung over the reeks (rolling hills and valley). But hardy as the Forestry Workgroup members are, they hopped on a wagon and rode down the hills -- in the pouring rain -- to Goose Creek to see the task before them.
The heavily grassed floodplain had bare areas that were prepared for a riparian buffer planting. Our hosts from the Virginia Department of Forestry had planting bars, tree seedlings, gloves, tree shelters and all of the equipment needed to get the trees in the ground; the Workgroup members were the muscle. The group planted approximately 125 sycamore, black walnut, river birch, hackberry and dogwood shrub seedlings -- again, in the pouring rain -- in a little over an hour.
As we rode the wagon back up the hill -- still in the pouring rain -- and looked back at the newly planted floodplain, the enthusiasm was hard to contain. There was a special warm feeling that drifted over me, reminiscent of my days of planting with volunteers: the feeling of knowing you just did something special that will last far into the future. For the Forestry Workgroup members who promote riparian forest buffer plantings in the Bay watershed, this was a “lead by example” exercise.
As everyone got into their cars to return to their home states of Maryland, Pennsylvania, West Virginia and other parts of Virginia, yes, they were cold, they were wet, but they were proud of their work.
In early October the search was on for a site in the Bay watershed for the November 18 Bay Program Forestry Workgroup meeting. Educational workgroup meetings are good because members can get out of their offices and visit the fields and forests of the Chesapeake Bay watershed. After a few calls, the Virginia Tech Mare Equine Center in Middleburg, Virginia, separated itself from other choices. It was a perfect location for the forestry workgroup meeting because it has a 23-acre riparian forest buffer, and forest buffers would be the focus of the meeting.
Riparian forest buffers are a topic near and dear to my everyday life. People often tell me I live in “buffer land” because my job is very specific to that area of forestry. I really am very interested in watersheds as holistic ecosystems and think of forest buffers as the integral link between what happens on the land and how those actions are reflected in the water quality of streams and rivers.
Along with other Bay goals, the riparian forest buffer goal will fall short of the 10,000-mile commitment made for the 2010 deadline. The number of riparian buffer miles achieved annually has dropped off from 1,122 miles in 2002 to 385 miles in 2007. Since Forestry Workgroup members represent state forestry agencies, NGOs, and other groups interested in Bay forests, they are the logical group to come up with ways to address barriers that stand in the way of achieving state riparian forest buffer commitments. We spent the afternoon of the Forestry Workgroup meeting discussing the barriers to riparian forest buffer plantings and ways to eliminate those barriers.
The Forestry Workgroup meeting also featured two presentations on new riparian forest buffer tools intended for use by local governments, watershed groups, and local foresters. The first presentation, given by Fred Irani from the U.S. Geological Survey team at the Bay Program office, was about the RB Mapper, a new tool developed for assessing riparian forest buffers along shorelines and streambanks. The other presentation, given by Rob Feldt from Maryland DNR, was about a tool for targeting the placement of riparian forest buffers for more effective nutrient removal. (You can read all of the briefing papers and materials from the Forestry Workgroup meeting at the Bay Program’s website.)
After all the business, it was time to experience the Mare Center, its streamside forest buffer and the rolling hills of Virginia. A tractor and wagon provided transportation to the pasture to see the buffer, which was planted in 2000 with 2,500 tree seedlings. It was a cold and windy day, and there were actually snowflakes in the air. We had planned to ride the wagon out and walk back; however, with a little bit of a bribe, the wagon driver waited while we checked the forest buffer for survival, growth and general effectiveness for stream protection.
The Forestry Workgroup meeting was productive, educational, and enjoyable. How often can we say that about group meetings? Sometimes it is worth the extra effort to provide a meeting place with an outdoor component that conveys the endeavors that the Bay Program workgroups are all about.
I get a thrill whenever I see forests on equal billing with farm lands in the Chesapeake region. Especially when it comes to something BIG like carbon sequestration. Of course, one acre of forest land can sequester much more carbon than one acre of agricultural land -- 1-2 tons of carbon per acre per year for forest, compared to roughly 0.3-0.5 ton per acre per year for farmland. But when it comes to best management practices for water quality, and well, eating, agriculture is king.
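For a rough sense of what that per-acre difference means, here is a minimal sketch (illustrative only: the rates are the approximate ranges quoted above, and the 100-acre parcel is hypothetical):

```python
# Approximate annual carbon sequestration, in tons of carbon per acre per year (from the text)
forest_rate_low, forest_rate_high = 1.0, 2.0   # forest land
farm_rate_low, farm_rate_high = 0.3, 0.5       # agricultural land

acres = 100  # hypothetical parcel size

print(f"100 acres of forest:   {forest_rate_low * acres:.0f}-{forest_rate_high * acres:.0f} tons of carbon per year")
print(f"100 acres of farmland: {farm_rate_low * acres:.0f}-{farm_rate_high * acres:.0f} tons of carbon per year")
# At the midpoints (150 vs. 40 tons), the forested parcel stores roughly 3 to 4 times as much.
```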
Kudos to Delaware, which is now only 30% forested (the smallest percentage of forest of any of the six Bay states), for taking on carbon as part of its role in the Chesapeake clean-up. When it comes to carbon, it’s all about taking advantage of existing voluntary markets, such as the Regional Greenhouse Gas Initiative (RGGI) and the Chicago Climate Exchange, and potential regulatory markets in the United States’ future.
From a global perspective, the U.S. is playing catch-up with carbon. Our nation did not ratify Kyoto in 1997 when 84 other countries signed on. These countries are legally bound to reduce carbon emissions, with the average target being to reduce emissions by 5% below 1990 levels. Here in the U.S., the states have largely taken the leadership on reducing greenhouse gases, with some big regional programs such as RGGI, the Western Climate Initiative and the Midwestern Greenhouse Gas Reduction Accord taking off. Last year, Congress got serious with the Lieberman-Warner Climate Security Act, but it didn’t pass. Both of the prospective new administrations have promised to enact climate legislation. Most likely only after the economy settles down -- I mean up. It’s an exciting time for many who have talked for nearly two decades about the need.
Back to the symposium …
How will the markets actually reduce greenhouse gases? It’s not by shuffling money around; it has to do with being cost-effective, promoting innovation and, indirectly, encouraging better land-use decisions. Big questions abound, however: will it work? The top six issues are certainty, baseline, leakage, permanence, additionality and double counting.
Once some of the issues start being resolved, there’s great potential for forestry, since 80% of the forest land in this region is privately owned. The Bay Bank has moved from concept to design and will be up and running in fall 2009. The Bay Bank will facilitate both farm and forest landowner access to multiple ecosystem markets (not just carbon) and conservation programs through an easy-to-use online marketplace. Supporting aspects of the Bay Bank, such as the Spatial Lands Registry, will be up sooner. The Spatial Lands Registry is one of those tools that will help reduce issues such as certainty, baseline and permanence. When a tool does this, it also reduces the make-it or break-it transaction costs.
The all-important new regulations will determine the direction of these burgeoning markets. There need to be more drivers to direct more businesses and people to invest in carbon sequestering practices. The target reductions and rules need to be reasonable so a variety of private landowners can take part in the market and get a worthwhile return on their investment. The Delaware symposium is helping with the outreach and understanding that will be needed for any market to succeed.
What’s good for carbon is good for water quality. Fewer cars, more forests and farms, better-managed farms and forests, and hopefully, hopefully, a postponement of sea level rise. That would be very good for the Chesapeake. For that matter, good for the world.
Frederick, Maryland's urban tree canopy covers just 12 percent of the city, but an additional 72 percent could possibly be covered by trees in the future, according to a recent study by the Maryland Department of Natural Resources, the University of Vermont and the U.S. Forest Service.
Urban tree canopy—the layer of trees covering the ground when viewed from above—is a good indicator of the amount and quality of forests in cities, suburbs and towns. Healthy trees in these urban and suburban areas help improve water quality in local waterways—and eventually the Bay—by reducing polluted runoff. Urban forests also provide wildlife habitat, absorb carbon dioxide from the air and enhance quality of life for residents.
With 12 percent tree canopy, Frederick has less urban forest cover than several other cities in the region, including Annapolis (41 percent urban tree canopy), Washington, D.C. (35 percent) and Baltimore (20 percent). The report finds that 9,500 acres in Frederick, or 72 percent of the city's land area, could possibly support tree canopy because it is not covered by a road or structure (such as a building).
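A quick back-of-the-envelope check (a sketch using only the figures reported above) shows roughly how much land those percentages represent:

```python
# Frederick, Md. urban tree canopy figures cited in the study above
possible_canopy_acres = 9_500    # acres not covered by a road or structure
possible_canopy_share = 0.72     # 72% of the city's land area
existing_canopy_share = 0.12     # 12% of the city's land area

city_land_area = possible_canopy_acres / possible_canopy_share
existing_canopy_acres = existing_canopy_share * city_land_area

print(f"Implied city land area:    {city_land_area:,.0f} acres")        # about 13,200
print(f"Existing canopy (roughly): {existing_canopy_acres:,.0f} acres")  # about 1,600
```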
Thirty-eight urban and suburban Maryland communities, including Annapolis, Baltimore, Bowie, Cumberland, Greenbelt, Hyattsville, Rockville and 29 communities in Baltimore County, are involved in setting tree canopy cover goals. Washington, D.C., and communities in Virginia and Pennsylvania have also set urban tree canopy goals.
Under the 2007 Forest Conservation Initiative, the Bay Program committed to accelerating reforestation and conservation in urban and suburban areas by increasing the number of communities with tree canopy expansion goals to 120 by 2020.
At its annual meeting in early December, the Chesapeake Executive Council (EC) signed the Forestry Conservation Initiative, committing the Bay states to permanently conserve an additional 695,000 acres of forested land throughout the watershed by 2020.
Chesapeake forests are crucial to maintaining water quality in the Bay and its tributaries. They also safeguard wildlife habitat, contribute billions of dollars to the economy, protect public health, provide recreation opportunities and enhance quality of life for the watershed's 17 million residents.
Despite these benefits, forests in the Bay watershed are at risk. In the Bay region alone, some 750,000 acres - equivalent to 20 Washington, D.C.s - have been felled since the early 1980s, a rate of 100 acres per day. By 2030, 9.5 million more acres of forest will see increased development pressure.
There are four overarching goals to the Forestry Conservation Initiative:
By 2020, permanently protect an additional 695,000 acres of forest from conversion to other land uses such as development, targeting forests in areas of highest water quality value. As part of this goal, 266,400 acres of forest land under threat of conversion will be protected by 2012.
By 2020, accelerate reforestation and conservation in:
In addition, each state and the federal agencies will implement strategies and actions to:
Forests are defined by the FAO Forestry Department as `all vegetation formations with a minimum of 10 percent crown cover of trees and/or bamboo with a minimum height of 5 m and generally associated with wild flora, fauna and natural soil conditions'. In many countries, coastal areas such as beaches, dunes, swamps and wildlands - even when they are not covered with trees - are officially designated as `forested' lands and thus fall under the management responsibility of the Forestry Department or similar agency.
Forest resources (including wildlife) of coastal areas are frequently so different from their inland counterparts as to require different and special forms of management and conservation approaches. Mangroves and tidal forests for example have no parallels in terrestrial uplands. As a result, the information, policy and management requirements concerning integrated coastal area management (ICAM) for forestry are also different.
In each of the climatic regions of the world, inland forests and woodlands may extend to the sea and thus form part of the coastal area. In addition to such formations, controlled by climatic factors, special forest communities, primarily controlled by edaphic factors and an extreme water regime, are found in coastal areas and along inland rivers. Such forest communities include: mangroves, beach forests, peat swamps, periodic swamps (tidal and flood plain forests), permanent freshwater swamps and riparian forests. Of these, the first three types are confined to the coastal area, whereas the remaining types can also be found further inland.
Mangroves are the most typical forest formations of sheltered coastlines in the tropics and subtropics. They consist of trees and bushes growing below the high water level of spring tides. Their root systems are regularly inundated with saline water, although it may be diluted by freshwater surface runoff. The term `mangrove' is applied to both the ecosystem as such and to individual trees and shrubs.
Precise data on global mangrove resources are scarce. Estimates are that there are some 16 million ha of mangrove forests worldwide (FAO, 1994a). The general distribution of mangroves corresponds to that of tropical forests, but extends further north and south of the equator, sometimes beyond the tropics, although in a reduced form, for instance in warm temperate climates in South Africa and New Zealand to the south and in Japan to the north.
Mangrove forests are characterized by a very low floristic diversity compared with most inland forests in the tropics. This is because few plants can tolerate and flourish in saline mud and withstand frequent inundation by sea water.
There are two distinct biogeographic zones of mangroves in the world: those of West Africa, the Caribbean and America; and those on the east coast of Africa, Madagascar and the Indo-Pacific region. While the first contain only ten tree species, mangroves of the Indo-Pacific are richer, containing some 40 tree species (excluding palms).
Most of the animal species found in mangroves also occur in other environments, such as beaches, rivers, freshwater swamps or in other forest formations near water. On the whole, animal species strictly confined to mangroves are very few (crabs have a maximum number of species in mangroves). In many countries however, the mangroves represent the last refuge for a number of rare and endangered animals such as the proboscis monkey (Nasalis larvatus) in Borneo, the royal Bengal tiger (Panthera tigris) and the spotted deer (Axix axis) in the Sundarbans mangroves in the Bay of Bengal, manatees (Trichechus spp.) and dugongs (Dugong dugon). Mangroves are also an ideal sanctuary for birds, some of which are migratory. According to Saenger et al. (1983), the total list of mangrove bird species in each of the main biogeographical regions include from 150 to 250 species. Worldwide, 65 of these are listed as endangered or vulnerable, including for instance the milky stork (Mycteria cinerea), which lives in the rivers of mangroves.
This type of forest is in general found above the high-tide mark on sandy soil and may merge into agricultural land or upland forest.
Sand dune and beach vegetations are mostly scrub-like with a high presence of stunted tree growths. These coastal forest ecosystems are adapted to growing conditions that are often difficult as a result of edaphic or climatic extremes (strong winds, salinity, lack or excess of humidity). They are very sensitive to modifications of the ecosystem. A slight change in the groundwater level for example might eliminate the existing scrub vegetation. Sand dune and beach vegetations have an important role in land stabilization and thus prevent the silting up of coastal lagoons and rivers, as well as protecting human settlements further inland from moving sand dunes.
The dominant animal species on the adjacent beaches are crabs and molluscs. The beaches are also very important as breeding sites for sea turtles and, therefore, attract predators of turtles' eggs, such as monitor lizards (Varanus sp.).
This is a forest formation defined more on its special habitat than on structure and physiognomy. Peat swamp forests are particularly extensive in parts of Sumatra, Malaysia, Borneo and New Guinea, where they were formed as the sea level rose at the end of the last glacial period about 18 000 years ago. Domed peat swamps can be up to 20 km long and the peat may reach 13 m in thickness in the most developed domes. Animals found in peat swamps include leaf-eating monkeys such as the proboscis monkey and the langurs found in Borneo.
As with peat swamp forests, these are defined mainly by habitat and contain a diverse assemblage of forest types periodically flooded by river water (daily, monthly or seasonally). Periodic swamps can be further subdivided into tidal and flood plain forests.
Tidal forests are found on somewhat higher elevations than mangroves (although the term is sometimes used to describe mangroves as well). Such forests are influenced by the tidal movements and may be flooded by fresh or slightly brackish water twice a day. Tidal amplitude varies from place to place. Where the amplitude is high, the area subject to periodic tidal flushing is large and usually gives rise to a wide range of ecological sites. The natural vegetation in tidal forests is more diverse than that of mangroves, although still not as diverse as that of dense inland forests.
Flood plains are areas seasonally flooded by fresh water, as a result of rainwater rather than tidal movements. Forests are the natural vegetation cover of riverine flood plains, except where a permanent high water-table prevents tree growth.
The Amazon, which has annual floods but which is also influenced by tides to some 600 km inland, has very extensive permanent and periodic swamp forests. The alluvial plains of Asia once carried extensive periodic swamp forests, but few now remain as these have mostly been cleared for wetland rice cultivation. The Zaire basin is about one-third occupied by periodic swamp forests, many disturbed by human interventions, and little-studied (Whitmore, 1990).
Throughout the world, flood plains are recognized as being among the most productive ecosystems with abundant and species-rich wildlife.
The term is here used for permanent freshwater swamp forests. As opposed to periodic swamps, the forest floor of these is constantly wet and, in contrast to peat swamps, this forest type is characterized by eutrophic (organo-mineral) soils, a richer assemblage of plant species and a fairly high pH (6.0 or more) (Whitmore, 1990).
Also called riverine or gallery forests. These are found adjacent to or near rivers. In the tropics, riparian forests are characterized as being extremely dense and productive, and have large numbers of climbing plants.
In addition to their aesthetic and recreational values, riparian forests are important in preserving water quality and controlling erosion and as wildlife refuges especially for amphibians and reptiles, beavers, otters and hippopotamus. Monkeys and other tree-dwelling mammals and birds are often abundant in riparian forests.
Other coastal forest ecosystems include: savannah woodlands, dry forests, lowland rain forests, temperate and boreal forests and forest plantations. Many of the natural coastal forests are under severe threat. Most of the lowland rain forests have vanished as a result of the ease with which commercial trees, standing on slopes facing the sea or other accessible coastal waters, could be harvested merely by cutting them down and letting them fall into the nearby water. Most coastal dry forests and savannah woodlands, meanwhile, have been seriously degraded by overexploitation for fuelwood and construction poles and by conversion to agriculture or grazing lands through the practice of repeated burning.
Coastal plantations have often been established for both production and protection purposes. As an example of the latter, coastal plantations were established in Denmark as far back as the 1830s to stabilize sand dunes which were moving inland and which had already covered several villages.
The total economic value of coastal forests stems from use values (direct uses, indirect uses and option values) and non-use values (existence and bequest values). Table C.1 gives examples of the different values as related to coastal forests. Table C.4 gives examples of valuation approaches applicable to the various types of forest products or services.
| Direct uses (use values) | Indirect uses (use values) | Option values (use values) | Existence and bequest values (non-use values) |
|---|---|---|---|
| Timber | Nutrient cycling (including detritus for aquatic food web) | Premium to preserve future direct and indirect uses (e.g. future drugs, genes for plant breeding, new technology complement) | Forests as objects of intrinsic value, or as a responsibility (stewardship) |
| Non-timber forest products (including fish and shellfish) | Watershed protection | | Endangered species |
| Recreation | Coastal protection | | Charismatic species |
| Nature tourism | Air and water pollution reduction | | Threatened or rare habitats/ecosystems |
| Genetic resources | Microclimate function | | Cherished landscapes |
| Education and research | Carbon store | | Cultural heritage |
| Human habitat | Wildlife habitat (including birds and aquatic species) | | |
Source: adapted from Pearce, 1991.
Direct use values, in particular the commercial value of timber and other forest products, often dominate land-use decisions. The wider social and environmental values are often neglected, partly as a result of the difficulty in obtaining an objective estimate of these, even though in many cases these values exceed the value of traded and untraded forest products.
Indirect use values correspond to `ecological functions' and are at times referred to as environmental services. Some of these occur off-site, i.e. they are economic externalities and are therefore likely to be ignored when forest management decisions are made.
The option existence and bequest values are typically high for coastal forests - especially for tropical rain forests or forests containing endangered or charismatic animal species.
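The decomposition just described can be written out explicitly. Here is a minimal sketch (the category names follow Table C.1; the figures passed in are made-up placeholders, not data from the text):

```python
# Total economic value (TEV) of a coastal forest, following the breakdown above:
#   TEV = use values (direct + indirect + option) + non-use values (existence and bequest)

def total_economic_value(direct_use, indirect_use, option, existence_bequest):
    """Sum the value categories of Table C.1 (all expressed in the same monetary unit)."""
    use_values = direct_use + indirect_use + option
    non_use_values = existence_bequest
    return use_values + non_use_values

# Hypothetical figures, purely to show the structure:
tev = total_economic_value(direct_use=120, indirect_use=300, option=40, existence_bequest=80)
print(tev)  # 540
```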
In addition to the activities carried out within the coastal forests (see below), small- and large-scale forest industries are also often found in coastal areas, taking advantage of the supply of raw materials and the ease of transport by waterways and roads, the existence of ports for export, etc. In addition to sawmills and pulp and paper mills, these forest industries may include veneer and particle board factories, charcoal kilns (particularly near mangrove areas), furniture makers and commercial handicraft producers.
There is little information available on the value of marketed goods from coastal forests. In general, their contribution to national gross domestic product (GDP) is small and this fact may lead to their being neglected. Commercial wood production from coastal forests ranges from timber, poles and posts to fuelwood, charcoal and tannin. Non-wood products include thatch, fruits, nuts, honey, wildlife, fish, fodder and medicinal plants. A list of forest-based products obtainable from mangroves is shown in Box C.1.
Products obtainable from mangroves
A. Mangrove forest products
Food, drugs and beverages
to preserve leather and tobacco
Paper - various
B. Other natural products
Source: adapted from FAO, 1984a.
Accounts of government forest revenues are often a poor indication of the value of the forest products. As an example, in 1982/83, in the Sundarbans mangroves of Bangladesh, some of the royalties collected by the forestry department were exceedingly low: for sundri (Heritiera fomes) fuelwood for instance, the market rate was nearly 40 times the royalty rate; and for shrimps the minimum market rate to royalty rate ratio at the time was 136:1 (FAO, 1994a).
Frequently, the value of untraded production (e.g. traditional fishing, hunting and gathering) in mangrove forest areas is substantial, the value often exceeding that from cultivated crops and from formal-sector wage income (Ruitenbeek, 1992).
Other direct use values of the coastal forests include their social functions. Coastal forests provide habitat, subsistence and livelihood, to forest dwellers, thereby supplying the means to hold these communities together, as well as opportunities for education, scientific research, recreation and tourism. Worldwide, the lives of millions of people are closely tied to productive flood plains, the associated periodic river floods and subsequent recessions. The socio-economic importance of these areas is especially evident in the more arid regions of the developing world. The seasonal ebb and flood of river waters determines the lifestyles and agricultural practices of the rural communities depending on these ecosystems.
Examples of the educational value of coastal forests are found in peninsular Malaysia, where more than 7 000 schoolchildren annually visit the Kuala Selangor Nature Park, a mangrove area with a boardwalk, education centre, etc. (MNS, 1991). In nearby Kuantan, along the Selangor river, a main tourist attraction is the evening cruise on the river to watch the display of fireflies and, along the Kinabatangan river in Sabah, cruises are undertaken to watch the proboscis monkeys as they settle in for the night in the riparian forest.
In terms of employment opportunities in coastal forests, ESCAP (1987) estimated the probable direct employment offered by the Sundarbans mangrove forest in Bangladesh to be in the range of 500 000 to 600 000 people for at least half of the year, added to which the direct industrial employment generated through the exploitation of the forest resources alone equalled around 10 000 jobs.
A prominent environmental role of mangroves, tidal, flood plain and riparian forests is the production of leaf litter and detrital matter which is exported to lagoons and the near-shore coastal environment, where it enters the marine food web. Mangroves and flood plains in particular are highly productive ecosystems and the importance of mangrove areas as feeding, breeding and nursery grounds for numerous commercial fish and shellfish (including most commercial tropical shrimps) is well established (Heald and Odum, 1970; MacNae, 1974; Martosubroto and Naamin, 1977). Since many of these fish and shellfish are caught offshore, the value is not normally attributed to mangroves. However, over 30 percent of the fisheries of peninsular Malaysia (about 200 000 tonnes) are reported to have some association with the mangrove ecosystem. Coastal forests also provide a valuable physical habitat for a variety of wildlife species, many of them endangered.3
Shoreline forests are recognized as a buffer against the actions of wind, waves and water currents. In Viet Nam, mangroves are planted in front of dykes situated along rivers, estuaries and lagoons under tidal influence, as a protection measure (Løyche, 1991). Where mangroves have been removed, expensive coastal defences may be needed to protect the agricultural resource base. In arid zones, sand dune fixation is an important function of coastal forests, benefiting agricultural and residential hinterland.
In addition, mangrove forests act as a sediment trap for upland runoff sediments, thus protecting sea grass beds, near-shore reefs and shipping lanes from siltation, and reducing water turbidity. They also function as nutrient sinks and filter some types of pollutants.
The option value of coastal forests - the premium people would be prepared to pay to preserve an area for future use by themselves and/or by others, including future generations - may be expected to be positive in the case of most forests and other natural ecosystems where the future demand is certain and the supply, in many cases, is not.
An example of how mangrove values are estimated is given in Box C.2.
Net present value of mangrove forestry and fisheries in Fiji
Using data on the amounts of wood and fish actually obtained from mangrove areas and their market value and harvesting costs, the net present value (NPV) of forestry and fisheries were estimated for three mangrove areas in Fiji, using the incomes or productivity approach with a 5 percent social discount rate and a 50-year planning horizon.
Forestry net benefits
Commercial net benefits were calculated as wood harvested multiplied by market value, minus harvesting costs.
Subsistence net benefits were calculated using the actual amount of wood harvested multiplied by the shadow value in the form of the price for inland or mangrove fuelwood sold by licensed wood concessionaires.
Taking the species composition of the mangrove area into account, the weighted average NPV was estimated for each of the three main mangrove areas yielding the following:
NPV: US$164 to $217 per hectare.
Fisheries net benefits
In only one of the three areas was the fisheries potential judged to be fully utilized and the data are based on this area.
Annual catch (commercial and subsistence): 3 026 tonnes. Area of mangroves: 9 136 ha, thus averaging 331 kg per hectare, equalling $864 per hectare in market value annually.
By taking harvesting costs into account, the following result was obtained:
NPV: $5 468 per hectare, or approx. $300 per hectare per year.
This is assuming a proportionate decline in the fisheries. With only a 50 percent decline (as some of the fish are not entirely dependent on the mangroves) the figure for the NPV is $2 734 per hectare.
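The arithmetic behind these figures can be reproduced in a few lines of code. The sketch below is only illustrative: it assumes a constant annual net benefit per hectare and uses the 5 percent discount rate, 50-year planning horizon and per-hectare figures quoted above (the US$300 net benefit is the approximate annual figure implied by the text).

```python
# Minimal sketch of the incomes (productivity) approach used in Box C.2.
# Assumes a constant annual net benefit per hectare over a 50-year horizon
# with a 5 percent social discount rate; figures are those quoted above.

def npv_constant_benefit(annual_net_benefit, rate=0.05, years=50):
    """Net present value of a constant annual per-hectare net benefit."""
    return sum(annual_net_benefit / (1.0 + rate) ** t for t in range(1, years + 1))

catch_kg_per_ha = 3_026_000 / 9_136        # 3 026 t over 9 136 ha, roughly 331 kg/ha/year
gross_value_per_ha = 864                   # US$ per hectare per year at market value
net_benefit_per_ha = 300                   # US$ per hectare per year after harvesting costs

print(round(catch_kg_per_ha))                            # ~331
print(round(npv_constant_benefit(net_benefit_per_ha)))   # ~5 477, close to the $5 468 quoted
```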
The value of mangroves for nutrient filtering has been estimated, using the alternative cost or shadow project method, by Green (1983), who compared the costs of a conventional waste water treatment plant with the use of oxidation ponds covering 32 ha of mangroves. An average annual benefit of $5 820 per hectare was obtained. This figure is, however, only valid for small areas of mangroves and, as it represents the average, not the marginal value, it should be treated with caution.
The option value and the existence value of mangroves are not captured using the above incomes approach and an attempt to include these values was made by using the compensation approach, as the loss of fishing rights in Fiji caused by the reclamation of mangroves has been compensated by the developers. The recompense sum is determined by an independent arbitrator within a non-market institution. Large variations in recompense sums were however recorded ($49 to $4 458 per hectare) according to the end use and the bargaining power of the owner of the fishing rights. Using 1986 prices the following results were obtained:
Average: $30 per hectare for non-industrial use and $60 per hectare for industrial use.
Maximum: $3 211 per hectare.
By adding the benefits foregone in forestry and fisheries, it can be concluded that the minimum NPV of the mangroves of Fiji is $3 000 per hectare under present supply and demand and existing market and institutional organizations.
Source: Lal, 1990.
The term coastal forests covers a wide range of different ecosystems many of which can still be classified as natural ecosystems, although - particularly in the temperate region - they may have been modified through human interventions over the years. However, they still generally contain a greater biological diversity (at genetic, species and/or ecosystem levels) than most agricultural land.
The most important characteristics of coastal forests are probably their very strong links and interdependence with other terrestrial and marine ecosystems.
Mangroves exemplify such links, existing at the interface of sea and land, and relying, as do tidal and flood plain forests, on fresh water and nutrients supplied by upland rivers to a much larger extent than more commonly found inland forest types. Figure C.1 illustrates the mangrove-marine food web.
Source: CV-CIRRD, 1993.
In the arid tropics, there may be no permanent flow of fresh water to the sea, and the leaf litter and detritus brought to the marine ecosystem by tidal flushing of coastal mangrove areas, where these exist, is the only source of nutrients from the terrestrial zone during the dry season. This further magnifies the role of mangroves in the marine food web. In the Sudan, for example, such a role is considered to be a crucial function of the narrow mangrove fringe found along parts of the Red Sea coast (Løyche-Wilkie, 1995).
As for the wildlife species found in coastal forests, most are dependent on other ecosystems as well. Mammals may move between different ecosystems on a daily or seasonal basis, water birds are often migratory, and many commercial shrimps and fish use the mangroves as spawning grounds and nursery sites but move offshore in later stages of their life cycle. Anadromous species, such as salmon, spawn in freshwater rivers but spend most of their life cycle in marine waters; catadromous species, on the other hand, spawn at sea but spend most of their life in freshwater rivers. These species are thus likely to pass through coastal forests at some point in their life cycle.
A variety of natural or human-incurred risks and uncertainties affect the sustainable management of coastal forest resources. Some natural risks may be exacerbated by human activities. Uncertainty arises from: the natural variability inherent in coastal forest ecosystems; the incomplete knowledge of the functioning of complex natural ecosystems; the long time-frame needed in forest management; and the inability to predict accurately the future demands for goods and services provided by natural and cultivated forests.
Natural risks. These include strong winds, hurricanes and typhoons, floods (including tidal waves) and droughts, which can all cause considerable damage to coastal forests.
Global climate change caused by human actions may, through a rise in temperatures, result in `natural' risks such as a rise in sea level, changes in ocean currents, river runoff and sediment loads, and increases in the frequency and severity of floods, drought, storms and hurricanes/typhoons.
Human-incurred threats. Human-incurred threats to coastal forests stem mainly from the competition for land, water and forest resources. These include conversion of coastal forest to other uses, building of dams and flood control measures, unsustainable use of forest resources both within the coastal area and further upland, and pollution of air and water.
In many developing countries, deforestation continues to be significant; the annual loss of natural forests resulting from human pressures amounted to an estimated 13.7 million ha in the 1990 to 1995 period (FAO, 1997d). Human-incurred threats to forests are often more pronounced in coastal areas as a result of the relatively high population density of such areas caused by the availability of fertile soils, fishery resources and convenient trade links with other domestic and foreign markets.
Natural variability. One particular uncertainty faced by forest managers relates to the natural variability exhibited by the coastal forest and wildlife resources. Such natural variability can be found at two levels:
The above risks and uncertainty caused by incomplete knowledge are compounded by the long time-frame needed in forest and wildlife management. Trees, and some animals, need a long time to mature: 30 years for mangrove forests used for poles and charcoal; and 150 years for oak (Quercus) grown for timber in temperate forests. This long period between regeneration and harvesting makes the selection of management objectives more difficult because of further uncertainty regarding future market preferences for specific forest and wildlife products or services, future market prices, labour costs, etc.
An important characteristic of natural ecosystems (including natural coastal forests) is that once a natural ecosystem has been significantly altered, through unsustainable levels or inappropriate methods of use, it may be impossible to restore it to its original state. Conversion of natural coastal forests to other uses is an extreme example.
It may be possible to replant mangrove trees in degraded areas or in abandoned shrimp ponds, but the resulting plantation will have far fewer plant and animal species than the original natural mangrove ecosystem.
Acid sulphate soils. A particular cause of concern with regard to irreversibility is the high pyrite (FeS2) content in many mangrove and tidal forest soils, which renders them particularly susceptible to soil acidification when subject to oxidation. This is probably the most acute problem faced by farmers and aquaculture pond operators when converting such forests and other wetlands to rice cultivation or aquaculture ponds, and it makes restoration of degraded areas almost impossible.
Reclamation of acid sulphate soils requires special procedures such as saltwater leaching alternating with drying out, or the establishment and maintenance of a perennially high, virtually constant groundwater-table, through a shallow, intensive drainage system. These may be technically difficult or economically unfeasible.
Coastal forests tend to be owned by the state. The inability of many state agencies in the tropics to enforce property rights, however, often means that a de facto open access regime exists, which frequently results in overexploitation of forest resources.4 This problem is only partly overcome by awarding concessions and usufructuary rights as these are often short-term in nature and not transferable and, therefore, fail to provide incentives for investments and prudent use of the resources.
Where the state agency has the ability to enforce laws and regulations and the government has a policy of promoting multipurpose management of state-owned forests, sustainable forest management can be achieved (Box C.3).
Mangrove stewardship agreement in the Philippines
One example of successful multipurpose management of a state-owned coastal forest using a participatory approach and aiming to restore the more traditional communal ownership of forests, is the issuing of `Mangrove Stewardship Agreements' in the Philippines. Local communities (or private individuals) obtain a 25-year usufruct lease over a given mangrove area with the right to cut trees selectively, establish new mangrove plantations and collect the fish and shellfish of the area based on a mutually agreed mangrove forest management plan. The Department of Environment and Natural Resources (DENR), which implements this scheme, will assist the local communities and individuals in preparing this management plan if needed. Local NGOs are also contracted by DENR to assist in the initial `Community Organizing' activities, which include an awareness campaign of the benefits obtainable from mangrove areas and an explanation of the steps involved in obtaining a Stewardship Agreement.
As a result of the variety of goods and services provided by coastal forests and their links with other ecosystems, a large number of institutions often have an interest in, and sometimes jurisdiction over parts of, the coastal forest ecosystems. This raises the risk of conflict between institutions, even within a single ministry.
The forestry department or its equivalent generally has jurisdiction over the coastal forest resources. However, the parks and wildlife department, where it exists, may have jurisdiction over the forest wildlife, and the fisheries department almost certainly has jurisdiction over the fisheries resources found in the rivers within coastal forests, and may regulate the use of mangrove areas for cage and pond culture. Other institutions with an interest in coastal forests include those related to tourism, land-use planning, mining, housing, ports and other infrastructure.
In many countries, there is often little public awareness of the variety of benefits provided by coastal forests, and campaigns should be conducted to overcome this. Mangroves and other swamp forests in particular have often been regarded as wastelands with little use except for conversion purposes. As a result of the low commercial value of wood products compared with the potential value of agriculture or shrimp production, conversion has often been justified, in the past, on the basis of a financial analysis of only the direct costs and benefits. Such analyses, however, do not take into account the value of the large number of unpriced environmental and social services provided by coastal forests, which in many cases far outweigh the value of any conversion scheme.5
The ecological links between coastal forests and other terrestrial and marine ecosystems and the institutional links between the forestry sector and other sectors, must be addressed through an area-based strategy that takes a holistic approach to sustainable development. An ICAM strategy provides the appropriate framework for such an approach.
The nature of coastal forests as described above calls for a precautionary approach6 to the management of their resources and the adoption of flexible strategies and management plans drawing on the knowledge of the local communities.
The precautionary principle can be incorporated into coastal forest management by imposing sustainability constraints on the utilization of coastal forest ecosystems. Other measures include environmental impact assessments, risk assessments, pilot projects and regular monitoring and evaluation of the effects of management. Research, in particular on the interdependence of coastal forests and other ecosystems and on the quantification and mitigation of negative impacts between sectors, is also needed.
Environmental impact assessments7 should be undertaken prior to conversions or other activities that may have a significant negative impact on coastal forest ecosystems. Such activities may arise within the forest (e.g. major tourism development) or in other sectors outside the forest (e.g. flood control measures). Where there is insufficient information on the impact of proposed management actions, applied research and/or pilot projects should be initiated.
Public participation in the management of coastal forest resources will increase the likelihood of success of any management plan and should be accompanied by long-term and secure tenure/usufruct.
1 See Glossary.
2 For a description of these concepts, see Part A, Box A.24.
3 See Section 1.1.
4 See Part A, Section 1.6.1 and Box A.2. Also Part E, Box E.7.
5 See Part A, Section 1.6.1 and Boxes A.22 and A.24.
6 See Part A, Section 1.6.3 and Boxes A.3 and A.5.
7 See Part A, Box A.6. | http://www.fao.org/docrep/W8440e/W8440e11.htm | 13 |
40 | The Roman economy, from the city's founding and its establishment as a Republic through to the fifth and fourth centuries BC, was a system of barter and community trade. All manner of trade goods, farm products, livestock and services were used as a means of exchange. As Rome grew, and the need for a system other than barter with it, lumps of bronze and other base metals began to be used in lieu of the exchange of one good for another. These lumps, called Aes Rude (raw bronze), could be used not only as coinage but, in large enough quantities, could also be melted down for the manufacture of various metal tools and objects.
As time passed and the circulation of Aes Rude became more common, the Romans, and their neighbors, began to rely on this simple system of economic transfer. The first true Roman coin, the Aes Signatum (signed bronze), replaced the Aes Rude sometime around the start of the 3rd century BC. These were more than lumps of metal, in that they were cast, had a regular and discernible rectangular shape and were stamped with raised designs. The Aes Signatum carried a particular value and were cast with marks indicating the government authority. Each was cast at a weight standard of 1600 grams so that weighing by traders would not have to be done at each transaction. This rather hefty weight, however, along with the single denomination, made making change a fragment-cutting affair. This system obviously carried inherent problems and would very quickly be in need of a replacement.
Within only a few years of the introduction of the Aes Signatum, a new more clearly defined, and easily traded form of coin replaced it. Aes Grave (heavy bronze), appearing sometime around 269 BC, came in several denominations, making them more functional and popular. Allowing for several varieties surely increased the circulation of coinage in ancient Rome and also made trading with other civilizations more practical. The Aes were molded with carvings of exotic animals or gods, and later were commonly issued with a ship's prow. This coinage was likely the primary issue in Rome until about 215 BC. It would eventually evolve into the base unit of Roman currency, the As.
Overlapping the circulation of the Aes Grave, was the introduction of silver coinage. During the 3rd century BC, Roman moneyers were forced to become more compliant with other cultures for ease in trade. The Greeks had been producing silver coins since the 7th century BC, and silver was the basis of their system. The Romans imported Greek artisans and began minting silver coins of their own, albeit with a style heavily influenced by Greece. The first of these silver produced for Rome were a series of didrachms (called quadrigati for the inclusion of the four horsed chariot imagery) minted during the outbreak of war with Pyrrhus. These coins were struck in Neapolis and were most likely made to be compliant with the trading specification of the Greek colonies in southern Italy. These were later replaced by a coin of roughly half the size (3.4 grams) called the victoriatus to commemorate the defeat of Carthage in the Punic Wars.
The denarius, the silver coin that would become the mainstay of the Roman economy, was first struck in 211 BC and was valued originally at 10 asses (As). Approximately a century later, in 118 BC, it was revalued at 16 asses to reflect the shrinking size of the bronze and copper As. In the Republic, highly valued gold coinage was issued only in times of dire need. The aureus was the primary gold coin of the Roman empire and was introduced in the late Republic during the time of the imperators. The aureus carried a fixed value of 25 denarii and its larger value would ease the burden of money transfers during times of war.
While the denarius remained the backbone of the Roman economy for 5 centuries, the silver content and accompanying value slowly decreased over time. This debasement of the metal purity in coins fluctuated with the strength of the Empire and was mainly an indication of the state lacking precious metals, a reduced treasury, and inflation. When first introduced the denarius contained nearly 4.5 grams of pure silver and remained that way throughout most of the Republican period. With the establishment of the Imperial system the denarius remained fairly constant under the Julio-Claudians at 4 grams of silver. With the accession of Nero, however, the content was debased to 3.8 grams, perhaps as a reflection of the high cost of rebuilding the city and his palace after the fires.
By the reign of Caracalla in the early 3rd century AD, the denarius had dipped to less than 50% purity. In 215 AD he introduced the Antoninianus, commonly referred to as the "radiate" due to the obverse images of the emperors with a radiate crown. The 60% pure silver Antoninianus was valued at two denarii, but contained no more than 1.6 times the amount of silver of the denarius. The savings for the treasury from issuing a double-value coin with less than double the silver content are obvious. As antoniniani increased, the minting of denarii decreased, until the denarius ceased to be issued in significant quantities by the middle of the third century AD.
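To make the scale of that saving concrete, here is a rough, illustrative calculation that uses only the two figures quoted above (a face value of two denarii against at most 1.6 times the silver content); it is a sketch, not precise metallurgical data.

```python
# Silver per denarius of face value for the antoninianus, relative to the denarius,
# using only the ratios quoted in the text.
face_value_in_denarii = 2
silver_relative_to_denarius = 1.6    # "no more than 1.6 times" the silver of a denarius

silver_per_unit_face_value = silver_relative_to_denarius / face_value_in_denarii
print(silver_per_unit_face_value)    # 0.8
print(f"treasury saving: {1 - silver_per_unit_face_value:.0%} less silver per denarius of face value")
```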
The mid third century saw the outbreak of anarchy. After the reign of Gordian III (238-244 AD), Persians and Germanics began to invade the frontier of the empire. A succession of Legionary Legates fought a progressive fifty-year civil war and large armies were raised. The treasury needed increasing amounts of silver to fund them. Mints were set up close to the armies so that the soldiers could be paid, but the demand for silver debased the coinage once again. By the reign of Valerian (253-260 AD), the antoninianus was only 20 - 40% silver. When Valerian was captured by the Sassanians, his son, Gallienus, issued bronze antoninianii with a silver coating. His need of coinage was so desperate that he was minting up to one million coins per day.
This constant debasement of Roman coins was finally countered by Aurelian in 274 AD. He set the minting standard for silver in the antoninianus at twenty parts copper to one part silver, and the coins were actually stamped as containing that amount. Aurelian's reform had little effect, however, and coins continued to be minted with a lesser level of purity. In 301 AD, true reform came to the minting process with the ascension of Diocletian. He developed a strict system of purity standards with the gold Aureus struck at 60 to the pound, a new silver coin struck at the old rates of the reign of Nero, and a new large bronze coin that contained two percent silver. He eliminated the Antoninianus and replaced it with several new denominations such as the Argenteus and the Follis.
Within a couple of decades, Constantine would come to power and the empire would see its final changes in the monetary system, before its fall. The gold Solidus and silver Siliquae were introduced at this time and themes on coinage slowly began to take on a new dimension. Coins were minted with idealistic portraits and not the customary true imagery of the emperor. With the moving of the capital to Byzantium, a Greek influence returned to many issues, and even slight references to Christianity were made. The inclusion of the Christogram, while not completely replacing the images of the Roman pantheon, marked a distinct change in the religion of the state. By the fall of the west in 476 AD, the distinction between Roman and Barbaric issues is non-existent, and Byzantine coinage replaces Roman as the currency of the Mediterranean. | http://www.unrv.com/economy/roman-coins.php | 13 |
32 | Some things never change — the first depression in U.S. history was very similar to all future economic downturns, including today’s.
The Panic of 1819 sparked a crisis that featured price drops, bank failures, mortgage foreclosures, and mass unemployment. The main culprit in the economic downturn was the Second Bank of the United States (forerunner to today’s Federal Reserve System). The Bank offered bad (and sometimes fraudulent) loans and printed paper money that fueled overspeculation and inflation.
When the Bank tried returning to a sound monetary policy through deflation, prices dropped, home and land values plummeted, and overextended banks and homeowners went broke. This affected farming and manufacturing, which in turn caused unemployment. Not only did the panic demonstrate the boom-and-bust business cycle in U.S. economics for the first time, but it also bred suspicion of a national bank that had virtually unlimited power to manipulate the currency.
Politicians offered many proposals to relieve the depression. Northern manufacturers proposed raising tariffs (i.e., taxes) on imported goods to coerce consumers into buying cheaper U.S. goods. However, higher tariffs would have likely made the crisis worse by compelling foreign trading partners to retaliate by either increasing their own tariffs or closing their ports to U.S. goods.
Others proposed ending deflation by printing more paper money, which would aid debtors by raising prices and lowering interest rates. However, such aid would have only encouraged more irresponsible borrowing. Some proposed initiating government-controlled public works projects, but others objected because such projects were unconstitutional. Most acknowledged that true recovery could only come by liquidating unsound conditions and returning to sound money, industry (i.e., working harder), and economy (i.e., spending less).
President James Monroe ultimately adhered to the Constitution by limiting government intervention; industry and economy applied not only to the people but to their government as well. Salaries of government officials were cut, along with overall government spending. These measures, combined with an influx of Mexican silver, helped to end the depression within three years.
The panic heightened interest in economic issues, giving them new dimensions and spawning new theories and ideas that have evolved to this day. The depression caused by the Panic of 1819 was similar to modern economic crises, including that of 2008. The boom of banks and printed money fueled a real estate boom that was ultimately corrected by a market bust, much like 2008.
The panic also generated intense hostility toward the Second Bank of the United States, just as many free market economists blame the Federal Reserve System for the 2008 economic downturn. The ability of a government-backed national bank to manipulate the national currency for political or financial gain has been the subject of a longstanding debate over central banking in the U.S. that continues today. | http://www.teapartytribune.com/2012/03/23/the-panic-of-1819-and-today/ | 13 |
15 | About West Nile Virus
The West Nile virus belongs to a family of viruses called Flaviviridae and was first isolated in 1937 in the West Nile province of Uganda.
The first West Nile virus infection in North America occurred in the New York City area in the summer of 1999.
In Canada, the virus was first found in birds in Ontario in 2001 and the first human case of West Nile virus occurred in Ontario in September 2002. During 2002, more than 4,000 people in North America became ill after being infected with West Nile virus.
How do people get infected with West Nile virus?
Most people infected with West Nile virus got it from the bite of an infected mosquito. A mosquito becomes
infected when it feeds on a bird that is infected with the virus. The mosquito can then pass the virus to
people and animals by biting them. Probably less than 1% of mosquitoes in any given area are infected with West
Nile virus. This means the risk of being bitten by an infected mosquito is low. But, it could happen to anyone in areas where West Nile virus is active.
There have been cases in Canada and the United States of West Nile virus being spread through blood transfusions and organ transplants.
However, there is no evidence to suggest that people can get West Nile virus by touching or kissing someone
who is infected, or from being around a health care worker who has treated an infected person. There
is no evidence that the virus can pass directly from infected animals (horses, pets, etc.) to people.
Who is most at risk?
Many people infected with West Nile virus have mild symptoms, or no symptoms at all. Although anybody can have serious health effects, it is people with weaker immune systems that are at greater risk for serious complications. This higher risk group includes:
People over the age of 40
People with chronic diseases, such as cancer, diabetes or heart disease
People that require medical treatment that may weaken the immune system (such as chemotherapy or corticosteroids)
Although individuals with weaker immune systems are at greater risk, West Nile virus can cause severe complications for people of any age and any health status. This is why it is so important to reduce the risk of getting bitten by mosquitoes.
The symptoms of West Nile virus infection
Symptoms usually appear within 2 to 15 days. The type and severity of symptoms varies from person to person.
Symptoms of mild disease include:
Persons with weaker immune systems or chronic disease are at greater risk of developing more serious complications, including meningitis (infection of the covering of the brain) and encephalitis (infection of the brain itself). Tragically, these conditions can be fatal.
Symptoms of more severe disease include:
Loss of consciousness
Lack of coordination
Muscle weakness and paralysis
Is there a treatment for West Nile virus infection?
Unfortunately, as with most viruses, there is no specific treatment or medication for West Nile virus. Serious cases are treated with supportive treatments such as intravenous fluids, close monitoring and other medications that help fight the complications of the infection.
Obviously such cases may require hospitalization. Currently, there is no vaccine available to protect against West Nile virus, although there is a lot of research going on in this area.
How is West Nile virus infection confirmed?
If a doctor suspects that a person may have West Nile virus, based on the history of symptoms, especially
in an area where West Nile virus is present, there are specific blood tests
which can confirm the infection.
Mosquito bite prevention
The best way to reduce the risk of infection is to try to prevent mosquito bites. If West Nile virus activity is detected in your area:
Limit time spent outdoors at dawn and dusk, when many mosquitoes are most active
Wear light-colored protective clothing such as long-sleeved shirts, long pants, and a hat when outdoors in areas where mosquitoes are present
A long sleeve shirt with snug collar and cuffs is best. The shirt should be
tucked in at the waist. Socks should be tucked over pants, hiking shoes or boots.
Light colored clothing is best because mosquitoes tend to be more attracted to dark colors
When going outdoors use insect repellents that contain DEET or other approved ingredients
Make sure that door and window screens fit tightly and have no holes that may allow mosquitoes indoors
To avoid insect bites, do not use scented soaps, perfumes or hair sprays on your children
For young babies, mosquito netting is very effective in areas where exposure to mosquitoes is likely. Netting may be used over infant carriers or other areas where young children are placed
DEET (N,N-diethyl-m-toluamide or N,N-diethyl-3-methylbenzamide) was approved as a repellent for public use in 1957.
Since then, it has appeared in more than 200 products, and is used by about 21% of households, 30% of adults and 34% of children.
Studies have shown that:
A product containing 23.8% DEET provides about 5 hours of protection from mosquito bites.
A product containing 20% DEET provides almost 4 hours of protection
A product with 6.65% DEET provides almost 2 hours of protection
Products with 4.75% DEET provides about 1 and a half hour of protection
DEET safety concerns
DEET is generally used without any problems. There have been rare reports of side effects, usually as a result of over-use. The American Academy of Pediatrics has recommended a concentration of 10% or less for children aged 2 - 12. Most experts agree that DEET can be safely used in children over 2 years of age, but if there is a risk of West Nile virus then DEET can be used in children 6 months or older.
Alternatives to DEET?
Unfortunately, there is no evidence that non-DEET repellents are as effective as, or safer than, those containing DEET.
Citronella has mild repellent properties, but DEET is
significantly more effective. Therefore, when repellent is being used to
prevent West Nile virus, DEET should be used. Certain products containing citronella have a
limit on the number of applications allowed per day.
Read the product label before using. Products containing citronella and lavender are
currently under re-evaluation.
Products with 2% soybean oil are able to provide about 1 and a half hour of protection(the equivalent of 4.75% DEET).
The manufacturers of a recently released natural based product(non-DEET)containing lemon eucalyptus plant claim that this protects up to 2 hours against mosquitoes.
Electromagnetic and ultrasound devices are not effective in preventing mosquito bites.
IN CANADA THE FOLLOWING NON-DEET PRODUCTS ARE RECOGNIZED AS REPELLENTS:
(taken from the Health Canada web site):
P-menthane 3,8-diol: A product containing this active ingredient was recently registered in Canada and thus meets all the modern safety standards. It provides up to two hours of protection against mosquitoes. This product cannot be used on children under three years of age. It can be applied two times per day.
Soybean oil: Registered products containing soybean oil provide between one to
3.5 hours of protection against mosquitoes, depending on the product. Products containing soybean oil were recently registered and thus meet all the modern safety standards.
Citronella and lavender: Registered products containing citronella protect people against mosquito bites from 30 minutes to two hours. The registered lavender product repels mosquitoes for approximately 30 minutes. These products cannot be used on infants and toddlers under two years of age. Based on animal studies, citronella-based products appear to be potential skin sensitizers. Therefore, allergic reactions may occur in some individuals.
Summary of DEET use based on age:
Children under 6 months of age
DO NOT use personal insect repellents containing DEET on infants.
Children aged 6 months to 2 years
In situations where a high risk of complications from insect bites exist, the use of one application per day of DEET may be considered for this age group.
The least concentrated product (10% DEET or less) should be used.
As with all insect repellents, the product should be applied sparingly and not be applied to the face and hands.
Prolonged use should be avoided.
Children between 2-12 years of age
The least concentrated product (10% DEET or less) should be used.
Do not apply more than three times per day. Prolonged use should be avoided.
Adults and Individuals 12 Years of Age or Older:
Studies show that products with lower than 30% concentrations of DEET are as effective as the
high concentration products, but they remain so for shorter periods of time.
Products containing no more than a 30% concentration of DEET will provide adults
with sufficient protection. Re-apply after these protection times have elapsed if necessary.
Bottom line.. as low a concentration as possible.
DEET combined with a sunscreen?
Sunscreen preparations are usually applied repeatedly to the skin
to prevent sun exposure. This "repeated" application
may result in overexposure to DEET, which should be applied sparingly. So these combined preparations should not be used.
Is DEET safe for pregnant or nursing women?
According to the CDC, there are no reported adverse events following use of repellents containing DEET in pregnant or breastfeeding women.
Precautions when using DEET:
Read and carefully follow all directions before using the product
Young children should not apply DEET to themselves
Wear long sleeves and pants when possible and apply repellent to clothing
Apply DEET sparingly only to exposed skin and avoid over-application. Do not use DEET underneath clothing
Do not use DEET on the hands of young children and avoid the eye and mouth
Do not apply DEET over cuts, wounds, or irritated skin
Wash treated skin with soap and water upon returning indoors. Also, wash treated clothing before wearing it again
Avoid using sprays in enclosed areas
Do not apply aerosol or pump products directly to the face. Spray your hands and then rub them carefully over the face(avoid eyes and mouth)
Do not use DEET near food
Keep repellents out of reach of children.
Do not apply to infants under 6 months of age
Repellent use at schools
Should parents spray insect repellent on their children before they go to school?
According to the CDC, whether children spend time outside during the school day should determine the need for applying repellent. If children will be spending time outdoors (for example, in recreational activities, walking to and from school), parents may wish to apply repellent.
Should children be given repellent to use during the day?
This depends on the age and maturity of the child. As with many other chemicals, care should be taken that DEET is not
misused or swallowed. Also, parents should be aware of school policies and procedures regarding
bringing/using repellents to school.
Reducing the mosquito population around your home:
Mosquitoes lay eggs in standing water and it takes about 4 days for the eggs to grow into adults that are ready to fly. Even a small amount of water, like in a saucer under a flower pot, is enough to act as a breeding ground.
As a result, it is important to eliminate standing water as much as possible around your property. Here are some tips:
Regularly drain standing water from items like pool covers, saucers under flower pots, pet bowls, pails, recycle bins, garbage cans etc.
Drill holes in the bottom of recycling bins
Change(or empty) the water in wading pools, bird baths, pet bowls and livestock watering tanks twice a week
Turn over plastic wading pools and wheelbarrows when not in use
Clean and chlorinate your swimming pools. A pool left unattended can produce a large number of mosquitoes
Landscape your garden as necessary to eliminate stagnant water (mosquitoes can breed even in puddles of water that last for more than 4 days)
Get rid of unused items that have a tendency to collect water including old tires
Cover rain barrels with screens
Clean eaves troughs (roof gutters) regularly to prevent clogs that can trap water
If you have an ornamental pond, consider getting fish that will eat mosquito larvae
SOURCES: CDC, HEALTH CANADA, AAP
AMERICAN ACADEMY OF PEDIATRICS
DR.PAUL's FACT SHEET ON WEST NILE VIRUS
DR.PAUL's FACT SHEET ON MOSQUITO BITE PREVENTION
The information provided in this site is designed to be an educational aid only. It is not intended to replace the advice and care of your child's physician, nor is it intended to be used for medical diagnosis or treatment. If you suspect that your child has a medical condition, always consult a physician.
© Autograph Communications Inc.,
All rights reserved | http://www.drpaul.com/illnesses/westnile.html | 13 |
19 | The Southland region is at risk from a number of natural hazards which have the potential to cause property damage, loss of life or injury.
Avalanches typically occur in mountainous terrain and are primarily composed of flowing snow. They only occur in a standing snow pack and are always caused by an external stress on the pack. It is difficult to predict with absolute certainty when an avalanche will be triggered. Avalanches are among the most serious hazards to life and property, with their destructive capability resulting from their potential to carry an enormous mass of snow over large distances at considerable speed.
Avalanches are classified by their morphological characteristics, and are rated by either their destructive potential, or the mass of the downward flowing snow. Avalanche size, mass, and destructive potential are rated on a logarithmic scale, in New Zealand this scale is made up of five categories; low, moderate, considerable, high and extreme.
To keep the Milford Road safe and open as much as possible during the avalanche season (June to November), the New Zealand Transport Agency contracts a specialist avalanche team to conduct a control programme using high-tech equipment to predict and manage avalanches. A crucial part of the programme is controlling the avalanche hazard by either not allowing traffic to stop inside the avalanche area or closing the road and using controlled explosives to release avalanches before they occur naturally.
There are a large range of biological hazards that if not controlled or avoided, could cause significant loss of life or severely affect New Zealand's economy, agricultural and fishery industries, health (human & animal), and infrastructure (e.g. water supply and treatment networks). Due to our economic dependence on horticultural, agricultural and forestry industries, and limited historical exposure to disease, New Zealand is very susceptible to biological hazards.
Climate changes have occurred naturally in the past, and some regional changes have been significant. But globally, our climate has been relatively stable for the past 10,000 years. Human activity is increasing the natural level of greenhouse gases in the atmosphere causing Earth to warm up and the climate to change. If the world does not take action to reduce greenhouse gas emissions, the global average temperature is very likely to change more rapidly during the 21st century than during any natural variations over the past 10,000 years.
Climate change is not just rising temperatures - Under climate change New Zealand can also expect to see changes in wind patterns, storm tracks, the occurrence of droughts and frosts, and the frequency of heavy rainfall events. The impacts of climate change in New Zealand will become more pronounced as time goes on.
These changes will result in both positive and negative effects. For example:
- agricultural productivity is expected to increase in some areas but there is the risk of drought and spreading pests and diseases. It is likely that there would be costs associated with changing land-use activities to suit a new climate
- people are likely to enjoy the benefits of warmer winters with fewer frosts, but hotter summers will bring increased risks of heat stress and subtropical diseases
- forests and vegetation may grow faster, but native ecosystems could be invaded by exotic species
- drier conditions in some areas are likely to be coupled with the risk of more frequent extreme events such as floods, droughts and storms
- rising sea levels will increase the risk of erosion and saltwater intrusion, increasing the need for coastal protection
- snowlines and glaciers are expected to retreat and change water flows in major South Island rivers.
Coastal erosion is the wearing away of the coastline usually associated with the removal of sand from a beach to deeper water offshore or alongshore into inlets, tidal shoals and bays. Coastlines constantly move, building up and eroding in response to waves, winds, storms and relative sea level rise. The effect of climate change (rising sea levels, increased severity and possibly the frequency of harsh storms) may exacerbate coastal erosion. Since the height of the land and the sea are changing, we use "relative sea level rise" to describe the rise of the ocean compared to the height of land in a particular location.
Coastal erosion and flooding can cause significant problems if people build too close to the coast. Coastal erosion is generally occurring in Southland estuaries and harbours as extensively as it is on the open coast. In some areas e.g. Colac Bay and Halfmoon Bay, the issue is becoming increasingly significant.
Southland has abundant water resources in snowfields, groundwater aquifers, rivers and lakes but despite this the region is facing growing challenges in managing its water resources. Agriculture continues to increase demand for water, and water supplies are not always in the right place at the right time.
A drought can be defined as an:
- 'Agricultural drought' where there is soil moisture deficit which impacts on agricultural and horticultural industries, and / or
- 'Water supply drought' which results in a water supply shortage
Drought events are one of New Zealand's most damaging and perhaps the most costly natural hazards. Droughts are a natural part of climate variability in Southland and have a significant impact on agricultural production, particularly in Northern Southland and the Te Anau Basin. Droughts can have a major impact on property, livelihoods and society in general. Dry periods are most common from December to March.
Impacts of drought include: crop failure and lack of stock feed, lower production, economic loss locally and regionally (generally a lag effect), poor stock condition and lower reproductive performance, psychological and social effects on farming communities, water shortages leading to restrictions and irrigation limitations, and increased numbers and severity of rural fires.
The 2003-04 drought event, although of relatively limited duration, was calculated to have taken $63 to $72 million out of the regional economy.
Southland lies adjacent to the boundary of the Pacific and Australian tectonic plates. The movement of the subducting Australian plate and the overlying Pacific plate, together with movement along the fault line, is the source of frequent earthquakes in Fiordland. The Alpine Fault, one of the world’s major active faults, has a predicted ~35 per cent likelihood of strong movement within the next 50 years. This would cause catastrophic earthquake damage along the fault and the severity of the shaking and damage would be felt to varying degrees across Southland.
Southland's most significant recent earthquake activity occurred 100 km north-west of Tuatapere on 15 July 2009 and had a magnitude of 7.8. It must be noted this earthquake did not originate from the Alpine Fault. Fortunately, because of the remote location and minimal built and social environments in this area, the impact on the community was minimal.
Southland has on average 2,100 incidents per year that require a response from the New Zealand Fire Service. These include a wide range of events, e.g. Fires Relating to Structures, Mobile Property Fires, Vegetation Fires, Chemical, Flammable Liquid or Gas Fires, Miscellaneous Fires, Hazardous Emergencies, Overpressure, Rupture, Explosives, Over Heating, Rescue, Emergency Medical Calls, Medical/Assist Ambulance, Special Service Calls, Natural Disasters, Good Intent Calls, and False Alarms.
Injuries to the Southland public from fire are on average 16 per year. Fatalities from fire over the last five years are on average of one per year, with no multiple fatalities from fire in the last five years.
Even though Southland typically has high rainfall levels, wild fire is still a real risk. On average Southland experiences one large wild fire every summer and these fires cost between $500,000 and $100,000,000.
Things that increase the risk of wild fire are:
- Leaving a burn-off or campfire unattended or not properly extinguishing them.
- Not recognising the level of risk involved with fire (e.g. too close to other vegetation, don't predict the weather changing).
- Underestimating the speed that a fire can spread.
- Poor machinery maintenance that can lead to emission of sparks or heat build-up causing combustion.
- Periods of dry weather.
- Wind, particularly warm north-westerlies.
- Large areas of continuous fuels such as native or exotic forests, peat soils.
- Burning large deep burning materials such as logs and peat.
You can obtain more information on fire bans and how to obtain fire permits at http://www.southernruralfire.org.nz or by calling 0800 773 363.
What Should I do In A Fire?
Be Prepared! Read about What to Do In a Fire.
River/stream flooding as a result of sustained or short duration-high intensity rainfall can have a significant effect both directly and indirectly on a large part of the region. With four major river systems running through the region, many of them close to or passing through highly populated areas, flooding has long been recognised as a significant hazard. A substantial amount of work has been undertaken over the last century to both mitigate and monitor this hazard. Timely and effective flood warning is an integral component of emergency management arrangements for Southland communities at risk from flooding.
Some flooding events of note over the previous fifty years in Southland include but are not limited to; April 1968, Sept 1972, Oct 1978, Jan 1980, Jan 1984, Nov 1999, April 2010.
What Should I do In A Flood?
Be Prepared! Read about What to Do In a Flood.
Frost is the formation of ice crystals relatively close to the ground on solid surfaces and occurs when air temperatures fall below freezing and the water vapour in the air solidifies as ice crystals. The size of the ice crystals depends on the time, temperature, and the amount of water vapour available.
Southland experiences the majority of its frosts between May and September, when seasonal temperatures fluctuate most.
Frost damage can be detrimental to a variety of different crops because water inside the cells of a plant can freeze, breaking the cell wall and destroying the integrity of the plant.
The heaving of soil produced by frosts can cause structural damage, in the form of cracks, on roadways, buildings and foundations. Road, rail and air transport can also be hindered by freezing conditions.
Frosts are frequent inland during winter - Gore averages 114 each year. In coastal regions they are less common and less severe (Invercargill averages 94).
Snow in July 1996 was followed by ten consecutive frosts and was labelled the 'big freeze'; it caused many frozen or burst pipes and tree deaths. The -14.2°C frost on 3 July 1996 was the coldest on record since 1910.
Southland's latitude of between 46 and 47 degrees south, and an absence of shelter from the unsettled weather that moves over the sea from the west, south-west and south, place it within the zone for strong air movement or winds. Wind is the movement of the air or atmosphere that surrounds the earth when it warms up or cools down. Winds move moisture and heat around the world and also generate much of the weather. Winds are often described by strength and direction. Wind speed is measured by anemometers and reported using the Beaufort wind force scale. There are general terms that differentiate winds of different average speeds, such as a breeze, a gale, a storm, a tornado or a hurricane. Within the Beaufort scale, gale-force winds are between 28 knots (52 kilometres per hour) and 55 knots (102 kph); a storm has winds of 56 knots (104 kph) to 63 knots (117 kph).
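The knot-to-kilometre conversions quoted above can be checked with a short script; the sketch below simply applies the standard factor of 1.852 km/h per knot to the Beaufort thresholds mentioned in the text.

```python
KNOT_TO_KPH = 1.852   # one international knot in kilometres per hour

def knots_to_kph(knots):
    """Convert a wind speed in knots to kilometres per hour."""
    return knots * KNOT_TO_KPH

# Beaufort-based thresholds quoted in the text
for label, knots in [("gale, lower bound", 28), ("gale, upper bound", 55),
                     ("storm, lower bound", 56), ("storm, upper bound", 63)]:
    print(f"{label}: {knots} kn = {knots_to_kph(knots):.0f} km/h")
# Output: 52, 102, 104 and 117 km/h, matching the figures above.
```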
With an average of 98 windy days per year, Invercargill is New Zealand’s second windiest city after Wellington. Examples of wind events include; 143 kph wind gusts on 9 June 1993 and 16 May 1994, August 2000 a week of strong easterlies at Oban, September 2006 record winds at Athol, 184 kph wind gust at Stewart Island on 4 November 2009. High winds can cause damage to buildings, trees/vegetation and electricity lines as well as accelerate erosion. It can also have a consequential effect for services and infrastructure supporting telecommunications, transport, water and sewage collection and treatment.
Erosion and slope failure is not uncommon in Southland. Land instability is often made worse by human activities. For more information see the section on Coastal Erosion.
What Should I do In A Landslide?
Be Prepared! Read about What to Do In a Landslide.
Winter snowfalls are common above 1000 metres in the Southern Alps. Southland's distance from the equator makes its weather conditions cooler than other parts of New Zealand. Snowflakes are formed when water vapour changes directly to ice without becoming a liquid. Snowflakes are made of ice crystals which form around tiny bits of dirt carried into the atmosphere by the wind, as the ice crystals grow they become heavier resulting in snow.
Snowfall often arises from slow moving deep depressions embedded in cold south-westerly, or south-easterly airstreams, the higher coastal land in western and eastern Southland is more vulnerable than elsewhere with snow occasionally occurring at sea level. Snow generated by southerly airstreams is relatively light, dry, snow whereas snow generated from warm, moist airstreams from the north and pushed up over colder, denser airstreams from the south is wetter, heavier and thicker.
The effects of snow depend on depth, weight and persistence, but can impact electricity and telecommunication delivery; road, rail and air transport; farm animal welfare and feed; forestry plantations; and building structures.
Although historical snow records are incomplete and accurate measurements are lacking, it would appear from local newspapers that in July and August 1939 widespread and very deep snow fell around Southland and took six weeks to clear – at least 0.5 metres in the north, with drifts up to two metres, and up to 150mm in coastal (including Invercargill) and eastern areas. In September 2010, Southland witnessed a severe six-day storm that swept across a coastal belt from Colac Bay through Invercargill, the Catlins, Owaka and Clinton, dumping heavy, wet snow that caused structural damage to some buildings and massive stock losses. Road, rail and air transport can be hindered by lying snow.
What Should I do In The Snow?
Be Prepared! Read about What to Do In The Snow.
Storm surge is the name given to the elevation of the sea above predicted levels by wind, low barometric pressure or both. When surges coincide with high spring tides, the sea may spread out over land or over stop banks adjoining the coast or tidal rivers. Some stop banks on rivers, e.g. the Waihopai River next to Prestonville, Gladstone, Avenal and Collingwood, more frequently protect areas from marine inundation than from riverine inundation. While sea levels are generally elevated in any one storm surge event, there are significant differences from place to place, most probably because of wind direction. The most obvious areas at risk of marine inundation are those low-lying areas of former estuary and estuary fringe that have been reclaimed through the construction of barriers. Generally, these areas have not been filled and they remain at their natural level. Less obvious are areas of low-lying land, both filled and natural, that border harbour, estuarine or tidal waters.
What Should I do In A Storm?
Be Prepared! Read about What to Do In a Storm.
Tsunami may be tens of metres in height in shallow water; however, most tsunami are less than 1 metre in height at the shore. Historical information suggests that the most hazardous tsunami events for Southland are likely to be caused by earthquakes off the west coast of Peru or in the Puysegur Subduction Zone, off the south-west coast of Fiordland.
What Should I do In A Tsunami?
Be Prepared! Read about What to Do In a Tsunami. | http://civildefence.co.nz/index.php?p=natural-hazards | 13 |
22 | A carefully crafted lesson is structured with a well-defined focus and a clearly-stated purpose. The lesson should present the class with an issue that is phrased in the form of a problem to be solved or a question to be analyzed and assessed by the class. Effective lessons do not merely cover information; they present students with major concepts and ideas and challenge students to think critically and take positions on open-ended essential questions. Here are some examples of essential questions for students of American history:
- Is America a land of opportunity?
- Did geography greatly affect the development of colonial America?
- Does a close relationship between church and state lead to a more moral society?
- Has Puritanism shaped American values?
- Was colonial America a democratic society?
- Was slavery the basis of freedom in colonial America?
- Did Great Britain lose more than it gained from its victory in the French and Indian War?
- Were the colonists justified in resisting British policies after the French and Indian War?
- Was the American War for Independence inevitable?
- Would you have been a revolutionary in 1776?
- Did the Declaration of Independence establish the foundation of American government?
- Was the American Revolution a “radical” revolution?
- Did the Articles of Confederation provide the United States with an effective government?
- Could the Constitution be written without compromise?
- Does state or federal government have a greater impact on our lives? (federalism)
- Does the system of checks and balances provide us with an effective and efficient government? Do separation of powers and checks and balances make our government work too slowly?
- Is a strong federal system the most effective government for the United States? Which level of government, federal or state, can best solve our nation’s problems?
- Is the Constitution a living document? (amendment process, elastic clause, judicial interpretation, legislative modifications, etc.)
- Was George Washington’s leadership indispensable in successfully launching the new federal government?
- Should the United States fear a national debt? (financial problems of the new nation and Hamilton’s financial plan)
- Whose ideas were best for the new nation, Hamilton’s or Jefferson’s?
- Are political parties good for our nation? (Federalists v. Democratic-Republicans)
- Should the United States seek alliances with other nations?
- Should the political opposition have the right to criticize a president’s foreign policy?
- Is the suppression of public opinion during times of crisis ever justified?
- Should we expect elections to bring about revolutionary changes? (election of 1800)
- Is economic coercion an effective method of achieving our national interest in world affairs?
- Should the United States fight to preserve the right of its citizens to travel and trade overseas?
- Does war cause national prosperity?
- Was the Monroe Doctrine a policy of expansion or self-defense? Or: Was the Monroe Doctrine a “disguise” for American imperialism?
- Should presidents’ appointees to the Supreme Court reflect their policies?
- Did the Supreme Court under John Marshall give too much power to the federal government (at the expense of the states)?
- Does an increase in the number of voters make a country more democratic?
- Should the United States have allowed American Indians to retain their tribal identities?
- Does a geographic minority have the right to ignore the laws of a national majority?
- Did Andrew Jackson advance or retard the cause of democracy? (autocrat v. democrat)
- Was the age of Jackson an age of democracy?
- Should the states have the right to ignore the laws of the national government?
- Does the United States have a mission to expand freedom and democracy?
- Have reformers had a significant impact on the problems of American society?
- Does militancy advance or retard the goals of a protest movement? (abolitionists) Or: Were the abolitionists responsible reformers or irresponsible agitators?
- Was slavery a benign or evil institution?
- Can legislative compromises solve moral issues?
- Can the Supreme Court settle moral issues? (Dred Scott decision)
- Was slavery the primary cause of the Civil War?
- Was the Civil War inevitable?
- Does Abraham Lincoln deserve to be called the “Great Emancipator”?
- Was the Civil War worth its costs?
- Was it possible to have a peace of reconciliation after the Civil War?
- Should the South have been treated as a defeated nation or as rebellious states? (a comparison of the presidential and congressional reconstruction programs)
- Did the Reconstruction governments rule the South well?
- Can political freedom exist without an economic foundation?
- When should a president be impeached and removed from office?
- Does racial equality depend upon government action?
- Should African Americans have more strongly resisted the government’s decision to abandon the drive for equality? (Booker T. Washington’s “accommodation” v. W.E.B. Du Bois’s “agitation” approaches)
- Has rapid industrial development been a blessing or a curse for Americans?
- Were big business leaders “captains of industry” or “robber barons?”
- Should business be regulated closely by the government?
- Should business be allowed to combine and reduce competition?
- Can workers attain economic justice without violence?
- Did America fulfill the dreams of immigrants?
- Has immigration been the key to America’s success?
- Has the West been romanticized?
- Can the “white man’s conquest” of Native Americans be justified?
- Have Native Americans been treated fairly by the United States government?
- Who was to blame for the problems of American farmers after the Civil War? Or: Was the farmers’ revolt of the 1890s justified?
- Did populism provide an effective solution to the nation’s problems?
- Is muckraking an effective tool to reform American politics and society?
- Can reform movements improve American society and politics? (Progressivism)
- Were the Progressives successful in making government more responsive to the will of the people?
- Does government have a responsibility to help the needy?
- To what extent had African Americans attained the “American Dream” by the early twentieth century?
- Is a strong president good for our nation? (Theodore Roosevelt) Or: Did Theodore Roosevelt further the goals of Progressivism?
- Was the “New Freedom” an effective solution to the problems of industrialization?
- Was American expansion overseas justified?
- Did the press cause the Spanish-American War?
- Was the United States justified in going to war against Spain in 1898?
- Should the United States have acquired possessions overseas?
- Was the acquisition of the Panama Canal Zone an act of justifiable imperialism?
- Does the need for self-defense give the US the right to interfere in the affairs of Latin America? (Roosevelt Corollary, “Dollar Diplomacy,” “Watchful Waiting”)
- Was the United States imperialistic in the Far East?
- Was world war inevitable in 1914?
- Was it possible for the US to maintain neutrality in World War I?
- Should the United States fight wars to make the world safe for democracy? Or: Should the United States have entered World War I?
- Should a democratic government tolerate dissent during times of war and other crises? (Schenck v. United States, Abrams v. United States)
- Was the Treaty of Versailles a fair and effective settlement for lasting world peace?
- Should the United States have approved the Treaty of Versailles?
- Was American foreign policy during the 1920s isolationist or internationalist?
- Was the decade of the 1920s a decade of innovation or conservatism?
- Did the Nineteenth Amendment radically change women’s role in American life?
- Did women experience significant liberation during the 1920s? Or: Did the role of women in American life significantly change during the 1920s?
- Should the United States limit immigration?
- Should the United States have enacted the Prohibition Amendment?
- Does economic prosperity result from tax cuts and minimal government?
- Was the Great Depression inevitable?
- Was the New Deal an effective response to the depression?
- Did Franklin Roosevelt’s New Deal weaken or save capitalism?
- Did Franklin Roosevelt’s New Deal undermine the constitutional principles of separation of powers and checks and balances?
- Did minorities receive a New Deal in the 1930s?
- Do labor unions and working people owe a debt to the New Deal?
- Did the New Deal effectively end the Great Depression and restore prosperity?
- Has the United States abandoned the legacy of the New Deal?
- Did United States foreign policy during the 1930s help promote World War II? Or: Could the United States have prevented the outbreak of World War II?
- Should the United States sell arms to other nations? Or: Should the United States have aided the Allies against the Axis powers? Or: Does American security depend upon the survival of its allies?
- Was war between the United States and Japan inevitable?
- How important was the home front in the United States’ victory in World War II?
- Was the treatment of Japanese Americans during World War II justified or an unfortunate setback for democracy?
- Should the US employ atomic (nuclear) weapons to defeat its enemies in war? (President Truman’s decision to drop the atom bomb on Japan)
- Could the United States have done more to prevent the Holocaust?
- Was World War II a “good war?” Or: Was World War II justified by its results?
- Was the Cold War inevitable?
- Was containment an effective policy to thwart communist expansion?
- Should the United States have feared internal communist subversion in the 1950s?
- Were the 1950s a time of great peace, progress, and prosperity for Americans?
- Did the Civil Rights Movement of the 1950s expand democracy for all Americans?
- Should the United States have fought “limited wars” to contain communism? (Korean conflict)
- Should President Kennedy have risked nuclear war to remove missiles from Cuba?
- Does the image of John F. Kennedy outshine the reality?
- Did American presidents have good reasons to fight a war in Vietnam?
- Can domestic protest affect the outcome of war?
- Did the war in Vietnam bring a domestic revolution to the United States?
- Did the “Great Society” programs fulfill their promises?
- Is civil disobedience the most effective means of achieving racial equality?
- Is violence or non-violence the most effective means to achieve social change?
- Did the Civil Rights Movement of the 1960s effectively change the nation?
- Do the ideas of the 1960s still have relevance today?
- Has the women’s movement for equality in the United States become a reality or remained a dream?
- Did the Warren Supreme Court expand or undermine the concept of civil liberties?
- Should affirmative action programs be used as a means to make up for past injustices?
- Was the Watergate scandal a sign of strength or weakness in the United States system of government? Or: Should Nixon have resigned the presidency?
- Should the president be able to wage war without congressional authorization?
- Did participation in the Vietnam War signal the return to a foreign policy of isolation for the United States?
- Did the policy of detente with communist nations effectively maintain world peace?
- Is secrecy more important than the public’s right to know in implementing foreign policy? (Bay of Pigs invasion, 1961; clandestine CIA operations; Pentagon Papers court case, 1971; Iran-Contra affair; invasion of Panama, 1989; etc.)
- Should a president be permitted to conduct a covert foreign policy?
- Did the policies of the Reagan administration strengthen or weaken the United States?
- Should human rights and morality be the cornerstones of US foreign policy? Or: Should the United States be concerned with human rights violations in other nations?
- Were Presidents Reagan and Bush responsible for the collapse of the Soviet Union and the end of the Cold War? Did the United States win the Cold War?
- Are peace and stability in the Middle East vital to the United States’ economy and national security?
- Should the United States have fought a war against Iraq to liberate Kuwait?
- Is it the responsibility of the United States today to be the world’s “policeman?”
- Can global terrorism be stopped?
- Does the United States have a fair and effective immigration policy?
- Should the United States restrict foreign trade?
- Has racial equality and harmony been achieved at the start of the twenty-first century?
- Should the United States still support the use of economic sanctions to further democracy and human rights?
- Should the federal surplus be used to repay the government’s debts or given back to the people in tax cuts?
- Should Bill Clinton be considered an effective president?
- Should a president be impeached for ethical lapses and moral improprieties?
- Should the United States use military force to support democracy in Eastern Europe? In the Middle East?
- Is it constitutional for the United States to fight preemptive wars? Was the United States justified to fight a war to remove Saddam Hussein from power?
- Can the United States maintain its unprecedented prosperity? (policies of the Federal Reserve System; balancing the Federal budget; international trade and the global economy; inflation factor; etc.)
- Is the world safer since the end of the Cold War?
- Should Americans be optimistic about the future?
- Should we change the way that we elect our presidents?
- Has the president become too powerful? Or the Supreme Court?
- Should limits be placed on freedom of expression during times of national crisis?
- Should stricter laws regulating firearms be enacted?
- Should affirmative action programs be continued to overcome the effects of past injustice and discrimination?
- Is the death penalty (capital punishment) a “cruel and unusual punishment” (and thus unconstitutional)?
- Does the media have too much influence over public opinion?
- Should lobbies and pressure groups be more strictly regulated?
- Do political parties serve the public interest and further the cause of democracy? | http://www.gilderlehrman.org/history-by-era/resources/essential-questions-teaching-american-history | 13
14 | Alongside these risks, there are also opportunities for women, especially in the increasing international and national attention to climate change. A deliberate focus on rectifying gender inequalities could interrupt patterns of discrimination and expand women’s options, while making climate mitigation and adaptation measures more far-reaching and effective. Women have enormous potential as agents of positive change, both as actors in development and stewards of the environment.
The Issue: Quick Facts and Figures
· Women comprise 43 percent of the agricultural workforce in developing countries, yet they have less access to productive resources and opportunities. Rural women are responsible for water collection in almost two-thirds of households in developing countries. Reduced or variable rainfall can increase the time required to collect water and cut down agricultural production.
· Globally, men’s landholdings average three times those of women. Women make up less than 5 percent of agricultural landholders in North Africa and Western Asia, and approximately 15 percent in Sub-Saharan Africa.
· Women account for two-thirds of 774 million adult illiterates in the world—a proportion unchanged over the past two decades. Disparities in education limit women’s access to information and vocational options, constraining their ability to adapt to climate change and environmental degradation.
· A recent study in 141 countries found that in highly gender inequitable societies, more women than men die when disasters strike.
· Women are still underrepresented in fields such as energy, industry, construction and engineering, all of which are expected to generate green jobs. The share of female employees in the energy industry is estimated at only 20 percent, most working in nontechnical fields.
· The number of women in environmental decision-making is limited, but where women are involved, better environmental management of community forestry resources and actions to improve access to education and clean water have been some of the results.
What Should We Do?
Eight Key Actions
1. Adapt for gender equality: The capacity to adapt to climate change largely depends on resources, education, technology and basic services. Since women have less access to all of these, national and local adaptation strategies will need to recognize and address these gaps. At the same time, women have existing stores of knowledge on adaptation that should be tapped. For example, rural indigenous women from the Bolivian altiplano shared knowledge on effective storage of seeds in cold weather with a local yapuchiris farmers' network and the wider community. This resulted in smaller food losses from climate-related temperature shifts.
2. Make women part of disaster risk reduction: The average number of extreme weather events – droughts, storms and floods – more than doubled over the last two decades. Disaster risk reduction lessens the vulnerability of people and property, including through advance preparedness and wise management of land and the environment. Since women face greater risks of injury and death, they must have a central role in disaster risk management, whether that involves early warning systems, climate-proofing infrastructure or other measures. For instance, women living in earthquake-prone areas near Lima, Peru, learned to produce earthquake-resistant building components. They then negotiated with the local government for safe housing policies that have benefited 55,000 families.
3. Expand access to services: Public services are critical in helping women overcome discrimination that hinders adaptation to climate change, such as through education and health care. Services that fully respond to women’s needs require women’s participation in decisions that shape them, sex-disaggregated data to pinpoint gaps, and gender-responsive budgeting to ensure financing backs equitable delivery. Services are especially important for women in their roles as housekeepers and caregivers. Worldwide, around 1.5 billion people lack access to electricity, for example. This is bad for the climate and for health: indoor smoke from burning fuels like wood and charcoal kills 2 million people a year, mostly women and children in rural areas. And for many women, impacts of climate change mean walking longer distances each day to gather fuelwood and other energy sources. In South Africa, electrification raises the likelihood that women will get jobs by 13 percent, due to labour saving in the home.
4. Ensure technology delivers for all: Investments in fuel-efficient and labour-saving technologies could achieve multiple ends, such as reducing greenhouse gas emissions, improving health, creating jobs for both women and men, and promoting women’s empowerment. But technology remains mostly the domain of men, for reasons ranging from affordability to stereotypes around male and female roles. One consequence is lost productivity: giving women the same access as men to agricultural resources, including technology, could increase production on women’s farms in developing countries by as much as 20 to 30 percent, for example. Technology could also reduce women’s time burdens and advance adaptation, such as through climate-appropriate crops and patterns of cultivation. In all cases, women, especially end-users, need to be consulted in the development of new technologies to ensure they are appropriate and sustainable. Poorly designed biogas stoves, for instance, may cut emissions but increase, rather than decrease, women’s workloads.
5. Share the benefits of a green economy: The shift to green economies must equitably benefit both women and men, including through new jobs and entrepreneurial opportunities. The energy and electricity sectors, for example, will likely generate a large share of green jobs as renewable energy takes off. Fewer women than men pursue the kind of training in science and technology that provides necessary skills for these jobs, however. This means that women provide an untapped resource for green growth. While women account for more than half of university graduates in several countries in the Organisation for Economic Co-operation and Development, they receive only 30 percent of tertiary degrees granted in science and engineering fields. More efforts are needed to ensure that women have equal opportunities in education and employment, and in access to credit and assets they can use for setting up green businesses.
6. Increase women’s access to climate change finance: Gender analysis of all budgets and financial instruments for climate change will help guide gender-sensitive investments in adaptation, emissions mitigation and technology transfer. But gender considerations are currently not systematically addressed in climate finance. This means losing out on two fronts: both in terms of women’s rights, such as access to jobs and services, and in terms of women’s potential as agents of change. The possibilities are there but need to be put into practice – emissions reduction credits under the Clean Development Mechanism could be used to expand energy access for women in poor areas, for example.
7. Uphold women’s land rights: Environmental sustainability in rural areas depends on strong legal rights to land ownership. But women have been left out – they comprise just 10 to 20 percent of landholders in developing countries. This diminishes incentives to make long-term investments in soil rehabilitation and conservation, and hinders women from accessing credit and other resources. Directed efforts can change this stark imbalance. For instance, as a result of advocacy efforts by UN Women and the Women Policy Network, the land distribution scheme for survivors of the December 2004 tsunami in Aceh, Indonesia, allowed Acehnese women to register themselves as individual or joint owners in title deeds. Without this, ownership would have gone only to the head of the family unit, usually men.
8. Close gender gaps in decision-making: Sufficient numbers of women are not yet at the tables where major decisions about climate change and the environment are made. In negotiations under the UN Framework Convention on Climate Change over the past decade, women accounted for only 30 percent of registered country delegates and 10 percent of heads of delegations. Worldwide, they occupy a minuscule portion of ministerial posts related to the environment, natural resources and energy. Affirmative action quotas are among the most proven strategies for rapidly increasing women’s participation in elected and appointed offices. This would be good for women and the climate. A study of 130 countries found that those with higher female parliamentary representation are more likely to ratify international environmental treaties. At the community level, evidence from India and Nepal suggests that women’s involvement in decision-making is associated with better local environmental management. | http://hoilhpn.org.vn/newsdetail.asp?CatId=128&NewsId=18015&lang=EN | 13
32 | The different types of demand are:
i) Direct and Derived Demands
Direct demand refers to demand for goods meant for final consumption; it is the demand for consumers’ goods like food items, readymade garments and houses. By contrast, derived demand refers to demand for goods which are needed for further production; it is the demand for producers’ goods like industrial raw materials, machine tools and equipments.
Thus the demand for an input, or what is called a factor of production, is a derived demand; its demand depends on the demand for the output where the input enters. In fact, the quantity of demand for the final output as well as the degree of substitutability/complementarity between inputs would determine the derived demand for a given input.
For example, the demand for gas in a fertilizer plant depends on the amount of fertilizer to be produced and substitutability between gas and coal as the basis for fertilizer production. However, the direct demand for a product is not contingent upon the demand for other products.
ii) Domestic and Industrial Demands
The example of the refrigerator can be restated to distinguish between the demand for domestic consumption and the demand for industrial use. In the case of certain industrial raw materials which are also used for domestic purposes, this distinction is very meaningful.
For example, coal has both domestic and industrial demand, and the distinction is important from the standpoint of pricing and distribution of coal.
iii) Autonomous and Induced Demand
When the demand for a product is tied to the purchase of some parent product, its demand is called induced or derived.
For example, the demand for cement is induced by (derived from) the demand for housing. As stated above, the demand for all producers’ goods is derived or induced. In addition, even in the realm of consumers’ goods, we may think of induced demand. Consider complementary items like tea and sugar, bread and butter, etc. The demand for butter (sugar) may be induced by the purchase of bread (tea). Autonomous demand, on the other hand, is not derived or induced. Unless a product is totally independent of the use of other products, it is difficult to talk about autonomous demand. In the present world of interdependence, there is hardly any autonomous demand. Nobody today consumes just a single commodity; everybody consumes a bundle of commodities. Even then, all direct demand may be loosely called autonomous.
iv) Perishable and Durable Goods’ Demands
Both consumers’ goods and producers’ goods are further classified into perishable/non-durable/single-use goods and durable/non-perishable/repeated-use goods. The former refers to final output like bread or raw materials like cement which can be used only once. The latter refers to items like a shirt, a car or a machine which can be used repeatedly. In other words, we can classify goods into several categories: single-use consumer goods, single-use producer goods, durable-use consumer goods and durable-use producer goods. This distinction is useful because durable products present more complicated problems of demand analysis than perishable products. Non-durable items are meant for meeting immediate (current) demand, but durable items are designed to meet current as well as future demand as they are used over a period of time. So, when durable items are purchased, they are considered an addition to the stock of assets or wealth. Because of continuous use, such assets as furniture or washing machines suffer depreciation and thus call for replacement. Thus durable goods demand has two varieties – replacement of old products and expansion of total stock. Such demands fluctuate with business conditions, speculation and price expectations. The real wealth effect influences demand for consumer durables.
v) New and Replacement Demands
This distinction follows readily from the previous one. If the purchase or acquisition of an item is meant as an addition to stock, it is a new demand. If the purchase of an item is meant for maintaining the old stock of capital/asset, it is replacement demand. Such replacement expenditure is to overcome depreciation in the existing stock.
Consider producers’ goods like machines. The demand for spare parts of a machine is replacement demand, but the demand for the latest model of a particular machine (say, the latest generation computer) is a new demand. In the course of preventive maintenance and breakdown maintenance, the engineer and his crew often express their replacement demand, but when a new process, a new technique or a new product is to be introduced, there is always a new demand.
You may now argue that replacement demand is induced by the quantity and quality of the existing stock, whereas the new demand is of an autonomous type. However, such a distinction is more of degree than of kind. For example, when demonstration effect operates, a new demand may also be an induced demand. You may buy a new VCR, because your neighbor has recently bought one. Yours is a new purchase, yet it is induced by your neighbor’s demonstration.
vi) Final and Intermediate Demands
This distinction is again based on the type of goods – final or intermediate. The demands for semi-finished products, industrial raw materials and similar intermediate goods are all derived demands, i.e., induced by the demand for final goods. In the context of input-output models, such a distinction is often employed.
vii) Individual and Market Demands
This distinction is often employed by economists to study the size of buyers’ demand, individual as well as collective. A market is visited by different consumers, with consumer differences depending on factors like income, age and sex. They all react differently to the prevailing market price of a commodity. For example, when the price is very high, a low-income buyer may not buy anything, though a high-income buyer may buy something. In such a case, we may distinguish between the demand of an individual buyer and that of the market, which is the aggregate of individuals. You may note that both individual and market demand schedules (and hence curves, when plotted) obey the law of demand. But the purchasing capacity varies between individuals. For example, A is a high-income consumer, B is a middle-income consumer and C is in the low-income group. This information is useful for personalized service or target-group planning as a part of sales strategy formulation.
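The aggregation described above can be illustrated with a small sketch. The demand schedules for buyers A, B and C below are hypothetical numbers invented for the example; the point is only that the market schedule is the sum of the individual quantities demanded at each price, and that it still obeys the law of demand.

```python
# Illustrative sketch: the market demand schedule as the sum of individual
# demand schedules. Buyers A, B and C and their quantities are hypothetical.

individual_demand = {
    "A": {10: 5, 8: 6, 6: 8},   # high-income buyer: price -> quantity demanded
    "B": {10: 2, 8: 4, 6: 6},   # middle-income buyer
    "C": {10: 0, 8: 1, 6: 3},   # low-income buyer, priced out at the top
}

prices = sorted({p for schedule in individual_demand.values() for p in schedule},
                reverse=True)

for price in prices:
    market_quantity = sum(schedule.get(price, 0)
                          for schedule in individual_demand.values())
    print(f"price {price}: market quantity demanded = {market_quantity}")
```

At a price of 10 the market quantity is 7, at 8 it is 11, and at 6 it is 17, so the aggregated schedule slopes downward just as each individual schedule does.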
viii) Total Market and Segmented Market Demands
This distinction is made mostly along the same lines as above. Different individual buyers together may represent a given market segment, and several market segments together may represent the total market. For example, Hindustan Machine Tools may compute the demand for its watches in the home and foreign markets separately, and then aggregate them to estimate the total market demand for its HMT watches. This distinction takes care of different patterns of buying behavior and consumers’ preferences in different segments of the market. Such market segments may be defined in terms of criteria like location, age, sex, income, nationality, and so on.
x) Company and Industry Demands
An industry is the aggregate of firms (companies). Thus a company’s demand is similar to an individual demand, whereas the industry’s demand is similar to aggregated total demand. You may examine this distinction from the standpoint of both output and input.
For example, you may think of the demand for cement produced by the Cement Corporation of India (i.e., a company’s demand), or the demand for cement produced by all cement manufacturing units including the CCI (i.e., an industry’s demand). Similarly, there may be demand for engineers by a single firm or demand for engineers by the industry as a whole, which is an example of demand for an input. You can appreciate that the determinants of a company’s demand may not always be the same as those of an industry’s. The inter-firm differences with regard to technology, product quality, financial position, market (demand) share, market leadership and competitiveness – all these are possible explanatory factors. In fact, a clear understanding of the relation between company and industry demands necessitates an understanding of different market structures. | http://www.mbaknol.com/managerial-economics/types-of-demand/ | 13
42 | Certain questions regarding value, or price, that should be kept separate were sometimes confused by early economists. (1) What determines the price of a good? In the language of modern economics, what determines relative prices? (2) What determines the general level of prices? (3) What is the best measure of welfare? The first and third questions are part of modern microeconomics; the second, although it defies the usually simple micro-macro dichotomy, is generally included under the broad umbrella of macroeconomics. Smith did not provide an unambiguous answer to any of these different questions. His treatment of them is, in places, confusing in this regard because he intermingled his discussion of what determines relative prices with his attempt to discover a measure of changes in welfare over time.
It is not surprising that historians of economic ideas have argued over Smith's true opinion. One group of writers holds that Smith had three theories of relative prices (labor cost, labor command, and cost of production) and a theory explaining the general level of prices. Another group maintains that he settled on a cost of production theory of relative prices, a theory measuring changes in welfare over time, and a theory of the general level of prices. The latter group denies that Smith had a labor theory of relative prices. We believe that Smith experimented with all these theories: a theory of relative prices consisting of labor cost and labor command for a primitive society and cost of production for an advanced economy; the formulation of an index measuring changes in welfare over time; and a theory explaining the general level of prices. We first consider his theory of relative prices.
Although Adam Smith explained relative prices as determined by supply or costs of production alone, he did not completely ignore the role of demand. He believed that market, or short-run, prices are determined by both supply and demand. Natural, or long-run equilibrium, prices generally depend upon costs of production, although Smith sometimes stated that natural price depends upon both demand and supply. These inconsistencies provide ample opportunity for historians of economic theory to debate Smith's real meaning.
Smith's analysis of the formation of relative prices in the economy of his time distinguishes two time periods, the short run and the long run, and two broad sectors of the economy, agriculture and manufacturing. During the short-run, or market, period, Smith found downward-sloping demand curves and upward-sloping supply curves in both manufacturing and agriculture; therefore, market prices depend upon demand and supply. Smith's analysis of the more complicated "natural price," which occurs in the long run, contains some contradictions. For the agricultural sector, natural price depends upon supply and demand because the long-run supply curve is upward-sloping, indicating increasing costs. But for the manufacturing sector, the long-run supply curve is at times assumed to be perfectly elastic (horizontal), representing constant costs, and in other parts of the analysis is downward-sloping, indicating decreasing costs. In manufacturing, when the long-run supply curve is perfectly elastic, price depends entirely on cost of production; but when it is downward-sloping, natural price depends upon both demand and supply.
There are a number of possible interpretations of Smith's statements with regard to the forces determining natural prices for manufactured goods. One may assume that he was merely inconsistent—possibly because of the long period of time it took him to write Wealth of Nations—or that he thought these issues were of minor importance. Another approach is to select one of his statements on manufacturing costs as representative of "the real Adam Smith." It makes little difference which approach is employed, because Smith consistently noted the role of demand in the formation of natural prices and in the allocation of resources among the various sectors of the economy. Nevertheless, regardless of the shape of the long-run supply curve in manufacturing, the major emphasis in the determination of natural prices is on cost of production, an emphasis that is characteristic of Smith and subsequent classical economists.
The scholastics became interested in the question of relative prices because they were concerned with the ethical aspects of exchange, and the mercantilists considered it because they thought wealth was created in the process of exchange. Even though Smith on occasion discussed prices in ethical terms, he had a more important reason for being interested in the factors determining relative prices.
Once an economy practices specialization and division of labor, exchange becomes necessary. If exchange takes place in a market such as the one existing at the time Smith wrote, certain obvious problems arise.
The Meaning of Value
Smith believed that the word value has two different meanings, and sometimes expresses the utility of some particular object, and sometimes the power of purchasing other goods which the possession of that object conveys. The one may be called "value in use"; the other, "value in exchange." The things which have the greatest value in use have frequently little or no value in exchange; and on the contrary, those which have the greatest value in exchange have frequently little or no value in use. Nothing is more useful than water: but it will purchase scarce any thing; scarce any thing can be had in exchange for it. A diamond, on the contrary, has scarce any value in use; but a very great quantity of other goods may frequently be had in exchange for it.
According to Smith, value in exchange is the power of a commodity to purchase other goods—its price. This is an objective measure expressed in the market. His concept of value in use is ambiguous; it resulted in a good part of his difficulties in explaining relative prices. On the one hand, it has ethical connotations and is therefore a return to scholasticism. Smith's own puritanical standards are particularly noticeable in his statement that diamonds have hardly any value in use. On the other hand, value in use is the want-satisfying power of a commodity, the utility received by holding or consuming a good. Several kinds of utility are received when a commodity is consumed: its total utility, its average utility, and its marginal utility. Smith's focus was on total utility—the relationship between marginal utility and value was not understood by economists until one hundred years after Smith wrote—and this obscured his understanding of how demand plays its role in price determination. It is clear that the total utility of water is greater than that of diamonds; this is what Smith was referring to when he pointed to the high use value of water as compared to the use value of diamonds. However, because a commodity's marginal utility often decreases as more of it is consumed, it is quite possible that another unit of water would give less marginal utility than another unit of diamonds. The price we are willing to pay for a commodity—the value we place on acquiring another unit—depends not on its total utility but on its marginal utility. Because Smith did not recognize this (nor did other economists until the 1870s), he could neither find a satisfactory solution to the diamond-water paradox nor see the relationship between use value and exchange value.
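A small numerical sketch may make the distinction clearer. The utility figures below are entirely hypothetical; they are chosen only to show that a good can have a very large total utility while the marginal utility of one more unit is small, which is the resolution of the diamond-water paradox that later economists supplied.

```python
# Illustrative sketch of the diamond-water paradox with made-up utility numbers.
# Total utility of water far exceeds that of diamonds, but the marginal utility
# of one more unit of water is tiny because so much is already consumed.

def marginal_utilities(total_utility):
    """Marginal utility of each successive unit = change in total utility."""
    mu, previous = [], 0
    for tu in total_utility:
        mu.append(tu - previous)
        previous = tu
    return mu


water_total = [100, 180, 240, 280, 300, 310, 315]  # hypothetical, diminishing
diamond_total = [90, 170]                          # hypothetical

print("water, marginal utility per unit:   ", marginal_utilities(water_total))
print("diamonds, marginal utility per unit:", marginal_utilities(diamond_total))
print("total utility: water", water_total[-1], "vs diamonds", diamond_total[-1])
```

In this made-up example water's total utility (315) dwarfs that of diamonds (170), yet the last unit of water adds only 5 while the last diamond adds 80, which is why the price of another diamond exceeds the price of another unit of water.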
Smith on Relative Prices
Because Smith was somewhat confused about the factors determining relative prices, he developed three separate theories relating to them: (1) a labor cost theory of value, (2) a labor command theory of value, and (3) a cost of production theory of value. He postulated two distinct states of the economy: the early and rude state, or primitive society, which is defined as an economy in which capital has not been accumulated and land is not appropriated; and an advanced economy, in which capital and land are no longer free goods (they have a price greater than zero).
Labor cost theory in a primitive society.
In the early and rude state of society which precedes both the accumulation of stock [i.e., capital] and the appropriation of land, the proportion between the quantities of labour necessary for acquiring different objects seems to be the only circumstance which can afford any rule for exchanging them for one another. If among a nation of hunters, for example, it usually costs twice the labour to kill a beaver which it does to kill a deer, one beaver should naturally exchange for or be worth two deer.
According to Smith's labor cost theory, the exchange value, or price, of a good in an economy in which land and capital are nonexistent, or in which these goods are free, is determined by the quantity of labor required to produce it. This brings us to the first difficulty with a labor cost theory of value. How are we to measure the quantity of labor required to produce a commodity? Suppose that two laborers are working without capital, that land is free, and that in one hour laborer Jones produces one unit of final product and laborer Brown produces two units. Assume that all other things are equal (or, to use the shorthand expression of theory, ceteris paribus) so that the only cause of the differences in productivity is the difference in the skills of the workers. Does a unit of output require one hour of labor or two? Smith recognized that the quantity of labor required to produce a good cannot simply be measured by clock hours, because in addition to time, the ingenuity or skill involved and the hardship or disagreeableness of the task must be taken into account.
Labor theory in an advanced economy. Smith's model for an advanced society differs from his primitive economy model in two important respects—capital has been accumulated and land appropriated. They are no longer free goods, and the final price of a good also must include returns to the capitalist as profits and to the landlord as rent. Final prices yield an income made up of the factor payments of wages, profits, and rents.
Cost of production theory of relative prices. Smith wrestled with developing a labor theory of value for an economy that included more than labor costs in the final prices of goods, but finally abandoned the idea that any labor theory of value was applicable to an economy as advanced as that of his times. Once capital has been accumulated and land appropriated, and once profits and rents as well as labor must be paid, the only appropriate explanation of prices, he seems to have found, was a cost-of-production theory. In a cost theory the value of a commodity depends on the payments to all the factors of production: land and capital in addition to labor. In Smith's system, the term profits includes both profits as they are understood today and interest. The total cost of producing a beaver is then equal to wages, profits, and rent, TCb = Wb + Pb + Rb; likewise for a deer, TCd = Wd + Pd + Rd. The relative price of beaver in terms of deer would then be given by the ratio TCb/TCd. Where Smith assumed that average costs do not increase with increases in output, this calculation gives the same relative prices whether total costs or average costs are used. Where Smith assumed that average costs change with output, prices depend upon both demand and supply. However, in his analysis of the determination of long-run natural prices, Smith emphasized supply and cost of production, even when the supply curve was not assumed to be perfectly elastic. | http://www.economictheories.org/2008/07/adam-smith-theory-of-value.html | 13
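A minimal sketch of the cost-of-production calculation just described may help. The wage, profit and rent figures below are hypothetical; the only point is that the natural (relative) price is the ratio of total costs, which reproduces the two-deer-per-beaver example when the beaver costs twice as much to produce as the deer.

```python
# Minimal sketch of a cost-of-production calculation of relative price.
# The wage, profit and rent figures are hypothetical; TCb/TCd gives the
# number of deer that exchange for one beaver.

def total_cost(wages: float, profits: float, rent: float) -> float:
    return wages + profits + rent


beaver_cost = total_cost(wages=6.0, profits=3.0, rent=1.0)   # TCb = 10
deer_cost = total_cost(wages=3.0, profits=1.5, rent=0.5)     # TCd = 5

relative_price = beaver_cost / deer_cost
print(f"one beaver exchanges for {relative_price:.1f} deer")  # 2.0 deer
```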
22 | MAC Spring 2005 Newsletter
What Is Hunger?
Hunger occurs when people don’t get enough food and nutrients to meet basic nutritional needs. Most hungry people have some food, but not enough, or enough of the right foods. They face chronic undernourishment and vitamin or mineral deficiencies. This results in stunted growth, weakness and susceptibility to illness. Hunger slows thinking, saps energy, hinders fetal development and contributes to mental retardation. It lessens productivity and sense of hope and well-being, eroding relationships.
Children especially need adequate nutrition to develop properly or risk serious health problems, including impaired cognitive development, growth failure, physical weakness, anemia and stunting. Several of these lead to irreparable damage. The Tufts University School of Science and Policy reports that recent studies show even relatively mild undernourishment produces cognitive impairments in children which can last a lifetime.
According to the 2004 Hunger Report, 852 million people around the world are hungry, up from 842 million in 2003. More than 1.2 billion people live below the international poverty line, earning less than $1 per day. It is difficult for them to obtain adequate, nutritious food for themselves and their families. Many developing countries are also poor and have no social safety nets. When a family can’t grow enough food or earn enough money for food, there is no available assistance.
Children, pregnant women and new mothers who breast-feed infants are at most risk of undernourishment. In the developing world, 153 million children under the age of five are underweight. Eleven million children younger than five die every year. More than half of these deaths are from hunger-related causes, according to the World Health Organization.
Only a small percentage of hunger deaths are caused by starvation. Most are the result of chronic under-nutrition, which weakens the body's ability to ward off diseases. When people actually starve to death because no food is available, the cause is primarily political.
Hunger at Home
The U.S. has the most abundant food supply in the world, yet hunger exists here and is too often hidden. Hunger in America looks different from hunger in other parts of the world. According to the U.S.D.A., one-in-ten households in the U.S. experiences hunger or the risk of hunger. This represents 36.3 million people, including 13 million children. Some skip meals or may eat less to make ends meet; others rely on emergency food assistance programs and charities.
In American cities, requests for emergency food assistance increased by 13 percent in 2004, according to the U.S. Conference of Mayors. More than half of the requests came from families with children, and 34 percent of adults requesting assistance were employed. Poverty and hunger do not exist in cities alone – they are also found in suburbs and rural communities across the U.S.
In Massachusetts, 600,000 people lived below the poverty level in 2002. This represents 9.5 percent of the state’s population. In low-income communities in Massachusetts, 43 percent of households with annual income below $20,000 cannot afford adequate food.
Why Are People Hungry
Hunger affects the most vulnerable in our society, especially children and the elderly. Hunger is directly related to constrained financial resources. It affects those without any income, those on fixed incomes and the working poor. Many low-income families and individuals on fixed incomes have to make choices between paying the rent, obtaining medical care, buying prescription drugs, heating the home, obtaining warm clothes and shoes or buying food.
Massachusetts is the most expensive state in the nation to rent an apartment. A full-time worker, paying no more than 30 percent of his or her income in rent, must earn $22.40 per hour to afford rent for a two-bedroom apartment. Childcare for a four-year-old in center-based care in Greater Boston averages $8,000 per year. The average cost of infant care is higher, averaging $13,000 per year.
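The $22.40 figure can be unpacked with the usual affordability arithmetic, sketched below. It assumes full-time work of 40 hours a week for 52 weeks and the common rule that housing should take no more than 30 percent of gross income; both assumptions are standard in "housing wage" calculations but are not spelled out in the paragraph above.

```python
# Sketch of the arithmetic behind a $22.40 "housing wage".
# Assumes full-time work of 40 hours/week for 52 weeks and the common rule
# that rent should take no more than 30 percent of gross income.

hourly_wage = 22.40
annual_income = hourly_wage * 40 * 52            # $46,592
affordable_annual_rent = annual_income * 0.30    # $13,977.60
affordable_monthly_rent = affordable_annual_rent / 12

print(f"implied affordable monthly rent: ${affordable_monthly_rent:,.2f}")
# roughly $1,164.80 per month for the two-bedroom apartment in question
```

Under these assumptions, the quoted hourly wage corresponds to a two-bedroom rent of roughly $1,165 per month.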
During the winter months, when utility bills are high, many more families are forced to cut their food budgets. Some people seek emergency food to get them through a short-term crisis; those without incomes need long-term assistance; and an increasing number seek food to fill in gaps their paychecks do not cover.
Hunger and Obesity
Hunger and obesity are often linked. Fresh produce and other nutrient-rich foods are often expensive. When a family cannot afford these healthy foods, they are often replaced with less expensive “filler” foods with empty calories. Some of the poorest children are also overweight.
What is Being Done
Hunger does not exist because the world does not produce enough food. The challenge is not production of food and wealth, but more equitable distribution. The key to overcoming hunger is to change the politics of hunger.
Around the world, individual governments, international organizations such as the United Nations, charities and other humanitarian organizations are working to reduce hunger and starvation through relief efforts and education. Progress has been made against hunger in China and East Asia; the majority of the world's malnourished live in China (114 million) and India (221 million). However, in Sub-Saharan Africa hunger is on the rise, with 204 million hungry.
In the United States there are a variety of social services available to those in need. The United States Department of Agriculture has several programs, including the Food Stamp, Child Nutrition, School Breakfast, School Lunch and Elder programs. Churches and charities provide bags of groceries and nutritious meals through food banks, food pantries and soup kitchens. Some programs offer assistance with heating costs.
The Massachusetts Department of Agricultural Resources provides coupons to low-income families to buy fresh produce at Farmers’ Markets. It also manages the Emergency Food Assistance Program. Some Massachusetts farms and home-gardeners grow food to donate to food banks and soup kitchens. Project Bread sponsors a Walk for Hunger to raise funds to support 400 emergency food programs in Massachusetts. Their toll-free hotline connects to a variety of federal, state and local support programs. Get involved by growing food or donating time to one of these worthy efforts.
All people everywhere require the same amount and variety of foods for energy, growth and health. Principal organic nutrients are proteins, fats and carbohydrates. Food also provides needed vitamins and minerals.
Energy is obtained primarily from carbohydrates and fats. Energy is needed for growth, maintenance, reproduction, lactation and daily activity. In addition, fats give food flavor, keep skin healthy and maintain the nervous system.
Proteins are made up of amino acids. They are essential to the building and repairing of tissue in the body. Protein is only used as a source of energy when no carbohydrates or fats are consumed.
Vitamins are food substances needed for growth and health. There are two general groups of vitamins, fat-soluble and water-soluble. Fat- soluble vitamins include Vitamins A, D, E and K. Water-soluble vitamins include Vitamin C and the B complex Vitamins.
Minerals are found in the body in small amounts; however, they are essential to life. They are necessary for the proper functioning of the nervous, endocrine, circulatory and urinary systems.
The U.S.D.A. recommends the calories in our diet consist of approximately 50% carbohydrates, less than 30% fats and 20% proteins. Grains provide the carbohydrates and B vitamins we need for energy. Vegetables give us Vitamin A and fiber. Fruits supply Vitamin C and fiber. Meats, fish, nuts and beans give our bodies protein, iron, zinc, and B 12. Dairy products provide calcium, protein, riboflavin and Vitamin D for healthy bones and teeth.
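A short sketch shows how those percentage targets translate into grams of food for a hypothetical 2,000-calorie day, using the usual energy densities of 4 calories per gram for carbohydrate and protein and 9 calories per gram for fat. The 2,000-calorie total and the exact shares are assumptions chosen only for illustration.

```python
# Sketch: converting the recommended calorie shares into grams per day for a
# hypothetical 2,000-calorie diet. Shares and the calorie total are assumed
# for illustration; energy densities are the usual 4/9/4 kcal per gram.

daily_calories = 2000
calorie_share = {"carbohydrates": 0.50, "fats": 0.30, "proteins": 0.20}
kcal_per_gram = {"carbohydrates": 4, "fats": 9, "proteins": 4}

for nutrient, share in calorie_share.items():
    kcal = daily_calories * share
    grams = kcal / kcal_per_gram[nutrient]
    print(f"{nutrient}: {kcal:.0f} kcal, about {grams:.0f} g per day")
```

For this example the split works out to about 250 g of carbohydrate, 67 g of fat and 100 g of protein per day.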
The New My Pyramid Plan
The My Pyramid Plan can help you choose the foods and amounts that are right for you. It will help you to make smart choices from every food group and estimate what and how much you need to eat based on age, sex, and physical activity level. The hope is that each person will get the most nutrition from calories eaten.
Visit the USDA My Pyramid web site at www.mypyramid.gov/ and take an animated tour of the new pyramid. Download the My Pyramid mini-poster to learn the basics about eating healthy and physical activity. Explore the pyramid to learn about the food groups and to see how much physical activity you should be getting. There are also resources and materials for use in developing education materials.
A number of farms across Massachusetts grow crops to support hunger relief through donations of healthy nutritious fruits and vegetables to local food banks, food pantries and soup kitchens. Several also provide opportunities for community service, youth training and education about agriculture for the general public and school groups. Consider getting involved at one of these farms.
Community Farms Outreach in Waltham is a non-profit organization that supports farmland preservation, hunger relief and education. The farm offers a Community Supported Agriculture opportunity where the public can buy shares of fresh healthy produce. Educational programs provide opportunities for children and adults to learn “where their food comes from.” Volunteers grow fresh produce for soup kitchens, shelters and food pantries. Find out more: Community Farms Outreach, 240 Beaver Street Waltham, MA 02452. Visit www.communityfarms.org or call 781-899-2403.
Community Harvest Project operates at both Brigham Hill Community Farm in North Grafton and Elmwood Farm in Hopkinton. The Project donates one hundred percent of its harvest to the Worcester County Food Bank, Kid's Café and other hunger-relief efforts. Volunteers, both individuals and groups, grow tomatoes, broccoli, cabbage, squash and more. The philosophy is “we're a farm with two crops. Not only do we help feed thousands of our neighbors in need, but those who come to help at the farm learn about helping others, finding meaningful fellowship and unity of purpose. Each of us leaves our volunteer work with the knowledge that we've taken action and made a difference.” Find out more: Community Harvest Project at Brigham Hill Community Farm, 37 Wheeler Road, North Grafton, MA 01536. Visit www.fftn.org or call 508-839-1037.
The Food Bank Farm is a sixty-acre organic Community Supported Agriculture Farm in Hadley. Members can volunteer or buy a farm share. For $435 they receive organic produce from June to September and also participate in pick-your-own fruit and flower activities. The Food Bank Farm donates fifty percent of the fresh produce raised at the farm to the Food Bank of Western Massachusetts. Over the past five years this donation has amounted to more than 900,000 pounds of fresh produce. Find out more: 121 Bay Road (Route 47 near Route 9), Hadley MA 01035. Call 413-582-0013 or visit www.foodbankwma.org/farm/.
The Food Project produces nearly a quarter-million pounds of organic food annually, donating half to local shelters. The rest is sold through Community Supported Agriculture crop shares, farmers’ markets and Harvest Bags. Each year more than a hundred teens and thousands of volunteers farm on 31 acres in rural Lincoln, and on several lots in urban Boston. The mission is to grow a thoughtful and productive community of youth and adults from diverse backgrounds who work together to build a sustainable food system, produce healthy food for residents of the city and suburbs and provide youth leadership opportunities. The program also strives to inspire and support others to create change in their own communities. Find out more: 3 Goose Pond Road, P.O. Box 705, Lincoln, MA 01773. Call 781-259-8621 or visit www.thefoodproject.org.
Overlook Farm in Rutland is one of three regional U.S. centers for Heifer Project International (HPI). Overlook Farm educates visitors about HPI's message to eliminate hunger and poverty through sustainable agriculture. Programs revolve around an integrated land and livestock production system which uses techniques and resources similar to those available to U.S. families who receive assistance from HPI. Visitors can participate in the operation of a working farm, volunteer in community projects or take a tour. They can experience a special one-acre plot that shows subsistence-level agricultural practices of farmers in other countries, spend time at a Central American-type farm or visit a Tibetan yurt (a yak-hair tent). Visitors can even arrange for a "Habitat Farm Hunger Seminar" that includes cooking a meal common to the world's poor. Contributions to HPI support education and gifts of livestock to poor families in the U.S. and around the world. Find out more: 216 Wachusett Street, Rutland, MA 01543. Visit www.heifer.org or call 508-886-2221.
Plant a Row for the Hungry is sponsored by the Garden Writers of America. The purpose is to create and sustain a grassroots program whereby gardeners plant an extra row of vegetables and donate the surplus to local food banks and soup kitchens. The goal is to provide more and better quality food for the hungry. Success hinges on the people-helping-people approach. PAR’s role is to provide focus, direction and support to volunteer committees who execute the programs at the local level. They assist in coordinating local food collection systems and monitor the volume of donations being conveyed to the soup kitchens and food banks. Last year, more than 1.3 million pounds of produce were donated. The garden writers utilize their position with local media to encourage readers and listeners to donate their surplus garden produce to help feed America’s hungry. Find out more: Garden Writers of America Foundation - Plant a Row for the Hungry, 10210 Leatherleaf Court Manassas, VA 20111. Visit www.gardenwriters.org or call 877-492-2727 toll-free.
The Massachusetts Farmers' Market Coupon Program provides coupons that are redeemable for fresh produce at Farmers’ Markets to participants in the Federal Supplemental Food Program for Women, Infants and Children (WIC), and also to elders. The coupons supplement regular food package assistance providing nutritious fresh fruits and vegetables. They also introduce families to farmers’ markets and support nutrition education goals.
Local farmers are reimbursed for the face value of the coupons. This enhances their earnings and supports participation in farmers’ markets. Farmers also attract a new base of customers, providing additional sales opportunities, and capture a greater share of the consumer food dollar through direct marketing. The program also promotes diversification on small farms by encouraging the production of locally grown fresh produce.
Funding is provided by the U.S. Department of Agriculture's Food and Nutrition Service, with a 30% administrative match provided by the state. This year the federal government will provide $500,000 to Massachusetts for the WIC program and $52,000 for the Elders program. The MA Dept. of Agricultural Resources will match administrative costs for the WIC program and provide $25,000 along with an additional $25,000 match from the State Elder Affairs Office to bring the elder-coupon program to $100,000.
The Farmers’ Market Coupon Program was founded in Massachusetts in 1986 by the MA Department of Agricultural Resources. In 1989, Congress adopted the program nationally by funding a three-year demonstration project in ten states. The success of this project led Congress to enact the WIC Farmers’ Market Nutrition Act of 1992, establishing it as the 14th federal food-assistance program of the USDA. The number of states participating in the program has grown significantly.
This economics activity, geared to grades six through eight, focuses on food distribution. It demonstrates that inequitable distribution of resources is a major cause of hunger.
1. Prepare ahead of time a snack food (apples or crackers) for your class. You will need one medium apple, cut into four pieces, or four crackers for each student. Divide the class into four equal groups. (A short planning sketch after step 8 works out these amounts for any class size.)
2. Place all food resources in a bowl or on a table so that all the students can see them. Tell the class that one portion is equal to one apple or four crackers. Explain that the snack foods represent the world food supply.
3. Divide the food so that ¼ of the class gets a moderate portion (four slices or crackers), ¼ gets a small portion (two slices or crackers), ¼ gets a very large portion (six slices or crackers) and ¼ splits whatever is left or gets nothing. Distribute the snack randomly so students receiving large portions are located near students who receive small portions or nothing at all. Ask the students not to eat the food until told to do so.
4. Explain to the students that the snack portions simulate how much people around the world have to eat. Ask them to look around to see how their portion compares with those of others.
5. Explain that the distribution corresponds to that in the real world. In some countries, there are a few very rich people, many middle-income people, and a smaller number of poor people who often go hungry. In some countries, almost everyone is middle-income level and no one goes hungry. In other countries, most of the people are poor and many go hungry. There are hungry people and well-fed people in virtually every country and virtually every community.
6. OPTIONAL: Allow the students to try to work out whether they would like to distribute the food more fairly or eat it as it was given. If they choose to change the distribution, ask them to try to design a fair method. Explain that it is often difficult for people of the world to negotiate fair solutions to problems, especially when some are hungry.
7. Allow the students to eat their snack. If snack allotments were not redistributed, provide portions for those students who did not receive a portion during the simulation.
8. Discuss with students that the world produces enough food to feed every man, woman and child the equivalent in calories to what the average person in the U.S. eats every day. Ask students from each portion group to tell how it felt to see how much was available and then how much they received.
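For teachers planning the activity, the Python sketch below works out the amounts from steps 1-3 for a hypothetical class; the class size of 24 is only an example and assumes the class divides evenly into four groups.

# Rough planning helper for the food-distribution activity (steps 1-3).
CLASS_SIZE = 24                      # assumed class size, chosen only for illustration
pieces_total = CLASS_SIZE * 4        # one apple (4 slices) or 4 crackers per student

group = CLASS_SIZE // 4              # four equal groups (assumes CLASS_SIZE divisible by 4)
portions = {"very large": 6, "moderate": 4, "small": 2}
handed_out = sum(size * group for size in portions.values())
leftover = pieces_total - handed_out # what the fourth group splits, or nothing if you hold some back

print(f"Prepare {pieces_total} pieces ({CLASS_SIZE} apples or {pieces_total} crackers).")
for label, size in portions.items():
    print(f"{group} students get a {label} portion of {size} pieces each.")
print(f"The remaining {group} students split whatever is left ({leftover} pieces here) or get nothing.")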
From: A Guide to Food and Fiber System Literacy, Oklahoma State University, 1998
- Investigate other countries to determine where food comes from and what percentage of people go hungry.
- Discuss the existence of hunger in your area – the possible causes of hunger and possible solutions.
- The U.S. spends little of its income on food compared to other countries. Research what percentage of income various countries spend on food.
- Contact your local food bank to set up a food drive in your school.
- Volunteer at a local soup kitchen or food pantry.
- Plant a garden at the school, and donate food to a local food pantry.
- Research local, national or international hunger relief organizations to learn how they were founded, their missions and successes. Start a fundraiser to support one or more of these efforts.
- Get involved in a local farm that grows food for the hungry. Ask students to keep a journal of the experience.
- Invite a nutritionist from a hospital or other organization to speak to the class about the new My Food Pyramid and how to plan a healthy diet.
MA Dept of Agricultural Resources
Farmers’ Market Coupon Program
251 Causeway Street Suite 500
Boston, MA 02114
617-626-1754 fax: 617-626-1850
Project Bread & Walk for Hunger
Statewide organization promoting public policies that reduce hunger & poverty.
Toll-free hunger hotline: 800-645-8333
America’s Second Harvest: national network of food banks and food rescue programs. www.secondharvest.org
Bread for the World: a nationwide Christian citizen’s movement seeking justice for the world's hungry. www.bread.org
Center on Hunger and Poverty at Brandeis University
Church World Service sponsors the CROP Walk and engages in anti-poverty work in the U.S. and around the world.
Feeding Minds, Fighting Hunger: lessons about the food system and food security. www.feedingminds.org
Food Research and Action Center www.frac.org
The Massachusetts Family Economic Self-Sufficiency (MassFESS) Project
MAZON: Jewish Response to Hunger provides food, help and hope to hungry people of all faiths and backgrounds. www.mazon.org
National Hunger Awareness Day will be held on June 7. www.hungerday.org
U.S.D.A. Economic Research Service
World Health Organization
World Population Data Sheet for 2003
Information for this newsletter was taken from the resources listed above.
Mission: Massachusetts Agriculture in the Classroom is a non-profit 501(c)(3) educational organization with the mission to foster awareness and learning in all areas related to the food and agriculture industries and the economic and social importance of agriculture to the state, the nation and the world. | http://www.aginclassroom.org/Newsletter/spring2005.html | 13 |
28 | What is the ozone hole?
The "ozone hole" is a loss of stratospheric ozone in springtime over Antarctica, peaking in September. The ozone hole area is defined as the size of the region with total ozone below 220 Dobson units (DU). Dobson Units measure the thickness of the ozone layer in a vertical column from the surface to the top of the atmosphere, a quantity called the "total column ozone amount." Prior to 1979, total column ozone values over Antarctica never fell below 220 DU. The hole has been proven to be a result of human activities--the release of huge quantities of chlorofluorocarbons (CFCs) and other ozone depleting substances into the atmosphere.
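Because the hole is defined by a simple threshold, its area can be computed directly from a gridded map of total column ozone. The Python/NumPy sketch below is a minimal illustration using made-up data on a hypothetical 1-degree grid; it weights each grid cell by its true surface area and sums the cells over the Antarctic region that fall below 220 DU.

import numpy as np

# Hypothetical 1-degree field of total column ozone (Dobson units); not real data.
lats = np.arange(-89.5, 90.0, 1.0)            # cell-centre latitudes
lons = np.arange(0.5, 360.0, 1.0)             # cell-centre longitudes
rng = np.random.default_rng(0)
ozone = rng.normal(300.0, 30.0, size=(lats.size, lons.size))
ozone[lats < -60.0, :] -= 120.0               # crude stand-in for springtime Antarctic depletion

R = 6371.0e3                                  # Earth radius in metres
dlat = dlon = np.deg2rad(1.0)
cell_area = (R**2) * dlat * dlon * np.cos(np.deg2rad(lats))[:, None]   # m^2 per grid cell

hole_mask = (ozone < 220.0) & (lats[:, None] < -40.0)   # the defining 220 DU threshold, Antarctic region only
hole_area_km2 = (cell_area * hole_mask).sum() / 1e6
print(f"Ozone hole area: {hole_area_km2 / 1e6:.1f} million km^2")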
Is the ozone hole related to global warming?
Global warming and the ozone hole are not directly linked, and the relationship between the two is complex. Global warming is primarily due to CO2, and ozone depletion is due to CFCs. Even though there is some greenhouse gas effect on stratospheric ozone, the main cause of the ozone hole is the harmful compounds (CFCs) that are released into the atmosphere.
The enhanced greenhouse effect that we're seeing due to a man-made increase in greenhouse gases is acting to warm the troposphere and cool the stratosphere. Colder than normal temperatures in this layer act to deplete ozone. So the cooling in the stratosphere due to global warming will enhance the ozone holes in the Arctic and Antarctic. At the same time, as ozone decreases in the stratosphere, the temperature in the layer cools down even more, which will lead to more ozone depletion. This is what's called a "positive feedback."
How big was the 2010 ozone hole, and is it getting bigger?
Every four years, a team of many of the top scientists researching ozone depletion puts together a comprehensive summary of the scientific knowledge on the subject, under the auspices of the World Meteorological Organization (WMO). According to their most recent assessment (WMO, 2006), monthly total column ozone amounts in September and October have continued to be 40 to 50% below pre-ozone-hole values, with up to 70% decreases for periods of a week or so. During the last decade, the average ozone hole area in the spring has increased in size, but not as rapidly as during the 1980s. It is not yet possible to say whether the area of the ozone hole has maximized. However, chlorine in the stratosphere peaked in 2000 and had declined by 3.8% from these peak levels by 2008, so the ozone hole may have seen its maximum size. Annual variations in temperature will probably be the dominant factor in determining differences in size of the ozone hole in the near future, due to the importance of cold-weather Polar Stratospheric Clouds (PSCs) that act as reactive surfaces to accelerate ozone destruction.
The 2010 hole was the tenth smallest since 1979, according to NASA. On September 25, 2010, the hole reached its maximum size of 22 million square kilometers. The 2010 hole was slightly smaller than North America, which is 25 million square kilometers. Record ozone holes were recorded in both 2000 and 2006, when the size of the hole reached 29 million square kilometers. The graph below, taken from NOAA's Climate Prediction Center, compares the 2010 ozone hole size with previous years. The smaller size of the 2010 hole compared to most years in the 2000s is due to the fact that the jet stream was more unstable than usual over Antarctica this September, which allowed very cold air in the so-called "polar vortex" over Antarctica to mix northwards, expelling ozone-deficient air and mixing in ozone-rich air. This also warmed the air over Antarctica, resulting in the formation of fewer Polar Stratospheric Clouds (PSCs) and thus fewer sites for the chemical reactions that destroy ozone.
Has there been ozone loss in places besides Antarctica?
Yes, ozone loss has been reported in the mid and high latitudes in both hemispheres during all seasons (WMO, 2006). Relative to the pre-ozone-hole abundances of 1980, the 2002-2005 losses in total column ozone were:
Other studies have shown the following ozone losses:
In 2011, the Arctic saw record ozone loss according to the World Meteorological Organization (WMO). Weather balloons launched in the Arctic measured the ozone loss at 40%—the previous record loss was 30%. Although there has been international agreement to reduce the consumption of ozone-destroying chemicals, effects from peak usage will continue because these compounds stay in the atmosphere long after they're released. The WMO estimates it will take several decades before we see these harmful compounds reach pre-1980s levels.
Ozone loss in the Arctic is highly dependent on the meteorology, due to the importance of cold-weather Polar Stratospheric Clouds (PSCs) that act as reactive surfaces to accelerate ozone destruction. Some Arctic winters see no ozone loss, and some see extreme loss like that in 2011.
A future Arctic ozone hole similar to that of the Antarctic appears unlikely, due to differences in the meteorology of the polar regions of the northern and southern hemispheres (WMO, 2002). However, a recent model study (Rex et al., 2004) indicates that future Arctic ozone depletion could be much worse than expected, and that each degree Centigrade of cooling of the Arctic may result in a 4% decrease in ozone. This heightened ozone loss is expected due to an increase in PSCs. The Arctic stratosphere has cooled 3°C in the past 20 years due to the combined effects of ozone loss, greenhouse gas accumulation, and natural variability, and may cool further in the coming decades due to the greenhouse effect (WMO, 2002). An additional major loss of Arctic (and global) ozone could occur as the result of a major volcanic eruption (Tabazadeh, 2002).
Has ozone destruction increased levels of UV-B light at the surface?
Yes, ozone destruction has increased surface levels of UV-B light (the type of UV light that causes skin damage). For each 1% drop in ozone levels, about 1% more UV-B reaches the Earth's surface (WMO, 2002). Increases in UV-B of 6-14% have been measured at many mid and high-latitude sites over the past 20 years (WMO, 2002, McKenzie, 1999). At some sites about half of this increase can be attributed to ozone loss. Changes in cloudiness, surface air pollution, and albedo also strongly influence surface UV-B levels. Increases in UV-B radiation have not been seen in many U.S. cities in the past few decades due to the presence of air pollution aerosol particles, which commonly cause 20% decreases in UV-B radiation in the summer (Wenny et al, 2001).
Source: World Meteorological Organization, Scientific Assessment of Ozone Depletion: 1998, WMO Global Ozone Research and Monitoring Project - Report No. 44, Geneva, 1998.
What are the human health effects of increased UV-B light?
From the outset it should be pointed out that human behavior is of primary importance when considering the health risks of sun exposure. Taking proper precautions, such as covering up exposed skin, using sunscreen, and staying out of the sun during peak sun hours is of far greater significance to health than the increased UV-B due to ozone loss is likely to be.
A reduction in ozone of 1% leads to increases of up to 3% in some forms of non-melanoma skin cancer (UNEP, 1998). It is more difficult to quantify a link between ozone loss and malignant melanoma, which accounts for about 4% of skin cancer cases but causes about 79% of skin cancer deaths. Current research has shown that melanoma can increase with both increased UV-B and UV-A light, but the relationship is not well understood (UNEP, 2002). In the U.S. in 2003, approximately 54,200 persons will have new diagnoses of melanoma, 7,600 will die from the disease, and more than 1 million new cases of the other two skin cancers, basal cell carcinoma and squamous cell carcinoma, will be diagnosed (American Cancer Society, 2002). Worldwide, approximately 66,000 people will die in 2003 from malignant melanoma, according to the World Health Organization. However, the significant rises in skin cancer worldwide can primarily be attributed to human behavioral changes rather than ozone depletion (Urbach, 1999; Staehelin, 1990).
On the positive side, UV light helps produce vitamin D in the skin, which may help protect against certain diseases. Multiple sclerosis has been shown to decrease in the white Caucasian population with increasing UV light levels. On the negative side, excessive UV-B exposure depresses the immune system, potentially allowing increased susceptibility to a wide variety of diseases. And in recent years, it has become apparent that UV-B damage to the eye and vision is far more insidious and detrimental than had previously been suspected (UNEP, 2002). Thus, we can expect ozone loss to substantially increase the incidence of cataracts and blindness. A study done for Environment Canada, presented to a UN meeting in 1997, estimated that because of the phase-out of CFCs and other ozone depleting substances mandated by the 1987 Montreal Protocol, there will be 19.1 million fewer cases of non-melanoma skin cancer, 1.5 million fewer cases of melanoma, 129 million fewer cases of cataracts, and 330,000 fewer skin cancer deaths worldwide.
Has ozone loss contributed to an observed increase in sunburns and skin cancer in humans?
Yes, Punta Arenas, Chile, the southernmost city in the world (53°S), with a population of 154,000, has regularly seen high levels of UV-B radiation each spring for the past 20 years, when the Antarctic ozone hole has moved over the city (Abarca, 2002). Ozone levels have dropped up to 56%, allowing UV-B radiation more typical of summertime mid-latitude intensities to affect a population unused to such levels of skin-damaging sunshine. Significant increases in sunburns have been observed during many of these low-ozone days. During the spring of 1999, a highly unusual increase in referrals for sunburn occurred in Punta Arenas during specific times when the ozone hole passed over the city. And while most of the worldwide increase in skin cancer rates the past few decades has been attributed to people spending more time outdoors, and the use of tanning businesses (Urbach, 1999), skin cancer cases increased 66% from 1994-2000 compared to 1987-1993 in Punta Arenas, strongly suggesting that ozone depletion was a significant factor.
What is the effect of increased UV-B light on plants?
UV-B light is generally harmful to plants, but sensitivity varies widely and is not well understood. Many species of plants are not UV-B sensitive; others show marked growth reduction and DNA damage under increased UV-B light levels. It is thought that ozone depletion may not have a significant detrimental effect on agricultural crops, as UV-B tolerant varieties of grains could fairly easily be substituted for existing varieties. Natural ecosystems, however, would face a more difficult time adapting. Direct damage to plants from ozone loss has been documented in several studies. For example, data from a spring, 1997 study in Tierra del Fuego, at the southern tip of Argentina, found DNA damage to plants on days the ozone hole was overhead to be 65% higher than on non-ozone-hole days (Rousseaux et. al., 1999).
What is the effect of increased UV-B light on marine life?
UV-B light is generally harmful to marine life, but again the effect is highly variable and not well understood. UV-B radiation can cause damage to the early developmental stages of fish, shrimp, crab, amphibians and other animals (UNEP, 2002). Even at current levels, solar UV-B radiation is a limiting factor in reproductive capacity and larval development, and small increases in UV-B radiation could cause significant population reductions in the animals that eat these smaller creatures. One study done in the waters off Antarctica, where increased UV-B radiation has been measured due to the ozone hole, found a 6-12% decrease in phytoplankton, the organism that forms the base of the food chain in the oceans (Smith et. al., 1990). Since the ozone hole lasts for about 10-12 weeks, this corresponds to an overall phytoplankton decrease of 2-4% for the year.
Is the worldwide decline in amphibians due to ozone depletion?
No. The worldwide decline in amphibians is just that--worldwide. Ozone depletion has not yet affected the tropics (-25° to 25° latitude), and that is where much of the decline in amphibians has been observed. It is possible that ozone depletion in mid and high latitudes has contributed to the decline of amphibians in those areas, but there are no scientific studies that have made a direct link.
Are sheep going blind in Chile?
Yes, but not from ozone depletion! In 1992, The New York Times reported ozone depletion over southern Chile had caused "an increase in Twilight Zone-type reports of sheep and rabbits with cataracts" (Nash, 1992). The story was repeated in many places, including the July 1, 1993 showing of ABC's Prime Time Live. Al Gore's book, Earth in the Balance, stated that "in Patagonia, hunters now report finding blind rabbits; fishermen catch blind salmon" (Gore, 1992). A group at Johns Hopkins has investigated the evidence and attributed the cases of sheep blindness to a local infection ("pink eye") (Pearce, 1993).
What do the skeptics say about the ozone hole?
Ever since the link between CFCs and ozone depletion was proposed in 1974, skeptics have attacked the science behind the link and the policies of controlling CFCs and other ozone depleting substances. We have compiled a detailed analysis of the arguments of the skeptics. It is interesting to note how the skeptics are using the same bag of tricks to cast doubt upon the science behind the global warming debate, and the need to control greenhouse-effect gases.
What are the costs and savings of the CFC phaseout?
The costs have been large, but not as large as initially feared. As the United Nations Environment Programme (UNEP) Economic Options Committee (an expert advisory body) stated in 1994: "Ozone-depleting substance replacement has been more rapid, less expensive, and more innovative than had been anticipated at the beginning of the substitution process. The alternative technologies already adopted have been effective and inexpensive enough that consumers have not yet felt any noticeable impacts (except for an increase in automobile air conditioning service costs)" (UNEP, 1994). A group of over two dozen industry experts estimated the total CFC phaseout cost in industrialized countries at $37 billion to business and industry, and $3 billion to consumers (Vogelsberg, 1997). A study done for Environment Canada presented to a UN meeting in 1997, estimated a total CFC phaseout cost of $235 billion through the year 2060, but economic benefits totaling $459 billion, not including the savings due to decreased health care costs. These savings came from decreased UV exposure to aquatic ecosystems, plants, forests, crops, plastics, paints and other outdoor building materials.
What steps have been taken to save the ozone layer? Are they working?
In 1987, the nations of the world banded together to draft the Montreal Protocol to phase out the production and use of CFCs. The 43 nations that signed the protocol agreed to freeze consumption and production of CFCs at 1986 levels by 1990, reduce them 20% by 1994, and reduce them another 30% by 1999. The alarming loss of ozone in Antarctica and worldwide continued into the 1990s, and additional amendments to further accelerate the CFC phase-out were adopted. With the exception of a very small number of internationally agreed essential uses, CFCs, halons, carbon tetrachloride, and methyl chloroform were all phased out by 1995 in developed countries (developing countries have until 2010 to do so). The pesticide methyl bromide, another significant ozone-depleting substance, was scheduled to be phased out in 2004 in developed countries, but a U.S.-led delaying effort led to a one-year extension until the end of 2005. At least 183 countries are now signatories to the Montreal Protocol.
The Montreal Protocol is working, and ozone depletion due to human effects is expected to start decreasing in the next 10 years. Observations show that levels of ozone-depleting gases are at their maximum now and are beginning to decline (Newchurch et al., 2003). NASA estimates that levels of ozone-depleting substances peaked in 2000, and had fallen by 3.8% by 2008. Provided the Montreal Protocol is followed, the Antarctic ozone hole is expected to disappear by 2050. The U.N. Environment Program (UNEP) said in August 2006 that the ozone layer would likely return to pre-1980 levels by 2049 over much of Europe, North America, Asia, Australasia, Latin America and Africa. In Antarctica, the agencies said ozone layer recovery would likely be delayed until 2065.
What replacement chemicals for CFCs have been found? Are they safe?
Hydrofluorocarbons (HFCs), hydrochlorofluorocarbons (HCFCs) and "Greenfreeze" chemicals (hydrocarbons such as cyclopentane and isobutane) have been the primary substitutes. The primary HFC used in automobile air conditioning, HFC-134a, costs about 3-5 times as much as the CFC-12 gas it replaced. A substantial black market in CFCs has resulted.
HCFCs are considered a "transitional" CFC substitute, since they also contribute to ozone depletion (but to a much lesser degree than CFCs). HCFCs are scheduled to be phased out by 2030 in developed nations and 2040 in developing nations, according to the Montreal Protocol. HCFCs (and HFCs) are broken down in the atmosphere into toxic chemicals, including trifluoroacetic acid (TFA) and chlorodifluoroacetic acid (CDFA). Risks to human health and the environment from these chemicals are thought to be minimal (UNEP/WMO, 2002).
HFCs do not cause ozone depletion, but do contribute significantly to global warming. For example, HFC-134a, the new refrigerant of choice in automobile air conditioning systems, is 1300 times more effective over a 100-year period as a greenhouse gas than carbon dioxide. At current rates of HFC manufacture and emission, up to 4% of greenhouse effect warming by the year 2010 may result from HFCs.
"Greenfreeze" hydrocarbon chemicals appear to be the best substitute, as they contribute to neither greenhouse warming nor ozone depletion. The hydrocarbons used are flammable, but the small amount used (equivalent to the fluid in two butane lighters) and safety engineering considerations have quieted these concerns. Greenfreeze technology has captured nearly 100% of the home refrigeration market in many countries in Europe, but has not yet been introduced in North America due to product liability concerns and industry resistance.
When was the ozone hole discovered?
Ozone depletion by human-produced CFCs was first hypothesized in 1974 (Molina and Rowland, 1974). The first evidence of ozone depletion was detected by ground-based instruments operated by the British Antarctic Survey at Halley Bay on the Antarctic coast in 1982. The results seemed so improbable that researchers collected data for three more years before finally publishing the first paper documenting the emergence of an ozone hole over Antarctica (Farman, 1985). Subsequent analysis of the data revealed that the hole began to appear in 1977. After the 1985 publication of Farman's paper, the question arose as to why satellite measurements of Antarctic ozone from the Nimbus-7 spacecraft had not found the hole. The satellite data was re-examined, and it was discovered that the computers analyzing the data were programmed to throw out any ozone values below 180 Dobson Units as impossible. Once this problem was corrected, the satellite data clearly confirmed the existence of the hole.
How do CFCs destroy ozone?
CFCs are extremely stable in the lower atmosphere; only a negligible amount is removed by the oceans and soils. However, once CFCs reach the stratosphere, UV light intensities are high enough to break apart the CFC molecule, freeing up the chlorine atoms in them. These free chlorine atoms then react with ozone to form oxygen and chlorine monoxide, thereby destroying the ozone molecule. The chlorine atom in the chlorine monoxide molecule can then react with an oxygen atom to free up the chlorine atom again, which can go on to destroy more ozone in what is referred to as a "catalytic reaction":
Cl + O3 -> ClO + O2
ClO + O -> Cl + O2
Thanks to this catalytic cycle, each chlorine atom freed from a CFC molecule can destroy up to 100,000 ozone molecules. Bromine atoms can also catalytically destroy ozone, and are about 45 times more effective than chlorine in doing so.
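A toy bookkeeping loop, sketched below in Python, makes the word "catalytic" concrete: because each pass through the two reactions returns the chlorine atom to circulation, a tiny pool of chlorine can remove a vastly larger pool of ozone. The numbers are purely illustrative and say nothing about real stratospheric reaction rates.

# Toy illustration of catalytic ozone destruction (not a real chemistry model).
cl_atoms = 10                 # a small, fixed pool of chlorine atoms
ozone = 1_000_000             # ozone molecules available
atomic_oxygen = 1_000_000     # free O atoms that regenerate Cl from ClO
destroyed = 0

while ozone > 0 and atomic_oxygen > 0:
    # Cl + O3 -> ClO + O2   (one ozone molecule destroyed per chlorine atom)
    consumed = min(cl_atoms, ozone)
    ozone -= consumed
    destroyed += consumed
    # ClO + O -> Cl + O2    (chlorine is freed to react again)
    atomic_oxygen -= consumed

print(f"{destroyed} ozone molecules destroyed by only {cl_atoms} chlorine atoms")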
For more details on ozone depletion chemistry, see the Usenet Ozone FAQ.
Are volcanos a major source of chlorine to the stratosphere?
No, volcanos contribute at most just a few percent of the chlorine found in the stratosphere. Direct measurements of the stratospheric chlorine produced by El Chichon, the most important eruption of the 1980's (Mankin and Coffey, 1983), and Pinatubo, the largest volcanic eruption since 1912 (Mankin et. al., 1991) found negligible amounts of chlorine injected into the stratosphere.
What is ozone pollution?
Ozone forms in both the upper and the lower atmosphere. Ozone is helpful in the stratosphere, because it absorbs most of the harmful ultraviolet light coming from the sun. Ozone found in the lower atmosphere (troposphere) is harmful. It is the prime ingredient for the formation of photochemical smog. Ozone can irritate the eyes and throat, and damage crops. Visit the Weather Underground's ozone pollution page, or our ozone action page for more information.
Where can I go to learn more about the ozone hole?
We found the following sources most helpful when constructing the ozone hole FAQ:
Dr. Jeff Masters' Recent Climate Change Blogs
Dr. Ricky Rood's Recent Climate Change Blogs
Abarca, J.F, and C.C. Casiccia, "Skin cancer and ultraviolet-B radiation under the Antarctic ozone hole: southern Chile, 1987-2000," Photodermatology, Photoimmunology & Photomedicine, 18, 294, 2002.
American Cancer Society. Cancer facts & figures 2002. Atlanta: American Cancer Society, 2002.
Farman, J.C., B.D. Gardner and J.D. Shanklin, "Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction", Nature, 315, 207-210, 1985.
Gore, A., "Earth in the Balance: Ecology and the Human Spirit", Houghton Mifflin, Boston, 1992.
Manins, P., R. Allan, T. Beer, P. Fraser, P. Holper, R. Suppiah, R. and K. Walsh. "Atmosphere, Australia State of the Environment Report 2001 (Theme Report)," CSIRO Publishing and Heritage, Canberra, 2001.
Mankin, W., and M. Coffey, "Increased stratospheric hydrogen chloride in the El Chichon cloud", Science, 226, 170, 1983.
Mankin, W., M. Coffey, and A. Goldman, "Airborne observations of SO2, HCl, and O3 in the stratospheric plume of the Pinatubo volcano in July 1991", Geophys. Res. Lett., 19, 179, 1992.
McKenzie, R., B. Connor, G. Bodeker, "Increased summertime UV radiation in New Zealand in response to ozone loss", Science, 285, 1709-1711, 1999.
Molina, M.J., and F.S. Rowland, Stratospheric Sink for Chlorofluoromethanes: Chlorine atom-catalyzed destruction of ozone, Nature, 249, 810-812, 1974.
Nash, N.C., "Ozone Depletion Threatening Latin Sun Worshipers", New York Times, 27 March 1992, p. A7.
Newchurch, et. al., "Evidence for slowdown in stratospheric ozone loss: First stage of ozone recovery", Journal of Geophysical Research, 108, doi: 10.1029/2003JD003471, 2003.
Pearce, F., "Ozone hole 'innocent' of Chile's ills", New Scientist, 1887, 7, 21 Aug. 1993.
Rex, M. et. al., "Arctic ozone loss and climate change", Geophys. Res. Lett., 31, L04116, 2004.
Rousseaux, M.C., C.L. Ballare, C.V. Giordano, A.L. Scopel, A.M. Zima, M. Szwarcberg-Bracchitta, P.S. Searles, M.M. Caldwell, S.B. Diaz, "Ozone depletion and UVB radiation: impact on plant DNA damage in southern South America", Proc Natl Acad Sci, 96(26):15310-5, 1999.
Smith, D.A., K. Vodden, L. Rucker, and R. Cunningham, "Global Benefits and Costs of the Montreal Protocol on Substances that Deplete the Ozone Layer", Applied Research Consultants report for Environment Canada, Ottawa, 1997.
Smith, R., B. Prezelin, K. Baker, R. Bidigare, N. Boucher, T. Coley, D. Karentz, S. MacIntyre, H. Matlick, D. Menzies, M. Ondrusek, Z. Wan, and K. Waters, "Ozone depletion: Ultraviolet radiation and phytoplankton biology in Antarctic waters", Science, 255, 952, 1992.
Staehelin, J., M. Blumthaler, W. Ambach, and J. Torhorst, "Skin cancer and the ozone shield", Lancet 336, 502, 1990.
Tabazadeh, A., K. Drdla, M.R. Schoeberl, P. Hamill, and O. B. Toon, "Arctic "ozone hole" in a cold volcanic stratosphere", Proc Natl Acad Sci, 99(5), 2609-12, Mar 5 2002.
United Nations Environmental Programme (UNEP), "1994 Report of the Economics Options Committee for the 1995 Assessment of the Montreal Protocol on Substances that Deplete the Ozone Layer", UNEP, Nairobi, Kenya, 1994.
United Nations Environmental Programme (UNEP), "Environmental Effects of Ozone Depletion: 1998 Assessment", UNEP, Nairobi, Kenya, 1998.
United Nations Environmental Programme (UNEP), "Environmental Effects of Ozone Depletion and its interactions with climate change: 2002 Assessment", UNEP, Nairobi, Kenya, 2002.
Urbach, F., "The cumulative effects of ultraviolet radiation on the skin: Photocarcinogenesis," In: Hawk J, ed., Photodermatology, Arnold Publishers, 89-102, 1999.
Vogelsberg, F.A., "An industry perspective - lessons learned and the cost of CFC phaseout", HPAC Heating/Piping/AirConditioning, January 1997, 121-128.
Wenny, B.N., V.K. Saxena, and J.E. Frederick, "Aerosol optical depth measurements and their impact on surface levels of ultraviolet-B radiation", J. Geophys. Res., 106, 17311-17319, 2001.
World Meteorological Organization (WMO), "Scientific Assessment of Ozone Depletion: 2002 Global Ozone Research and Monitoring Project - Report #47", WMO, Nairobi, Kenya, 2002.
Young, A.R., L.O. Bjorn, J. Mohan, and W. Nultsch, "Environmental UV Photobiology", Plenum, N.Y. 1993. | http://rss.wunderground.com/resources/climate/holefaq.asp | 13 |
14 | © 1996 by Karl Hahn
1) In the first expression we have the sum of two functions, each of which can be expressed as x^n, one where n = 4 and one where n = 3. We already know how to find the derivative of each of the summands. The sum rule says we can consider the summands separately, then add the derivatives of each to get the derivative of the sum. So the answer for the first expression is: f'(x) = 4x^3 + 3x^2
In the second expression we have the sum of an x^n and an mx + b. In the first summand, n = 2. In the second summand, m = -7 and b = 12. Again the sum rule says we can consider the two summands separately, then add their derivatives to get the derivative of the sum. So the answer for the second expression is: f'(x) = 2x - 7
2) This one should have been easy. The text of the problem observed that a constant function is just a straight line function with a slope of zero (i.e., m = 0). We already know that the derivative of any straight line function is exactly its slope, m. When the slope is zero, so must be the derivative.
3) A very quick argument for the first part is that:
g(x) = n f(x) = f(x) + f(x) + ... [n times] ... + f(x)

Hence, by the sum rule,

g'(x) = f'(x) + f'(x) + ... [n times] ... + f'(x) = n f'(x)

A purist, however, would do it by induction. To get onto the first rung of the ladder, we observe that for n = 1:

g(x) = (1) f(x) = f(x)

hence

g'(x) = f'(x) = (1) f'(x)

So it works for n = 1. Now we demonstrate that if it's true for the nth rung, then it must be true for the n + 1st rung. So if:

g_n'(x) = n f'(x)

whenever

g_n(x) = n f(x)

then

g_(n+1)(x) = (n + 1) f(x) = (n f(x)) + f(x)

That the derivative of n f(x) is n f'(x) is given by assumption. And we know by the sum rule that:

g_(n+1)'(x) = (n f'(x)) + f'(x) = (n + 1) f'(x)

And you have it proved by induction. But you only need to do it that way if you are performing for a stickler on formality.
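A quick symbolic spot-check of this constant-multiple result is easy with the SymPy library (the particular choice of f and n below is arbitrary and made only for illustration):

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)          # arbitrary example function
n = 7                  # arbitrary counting number

lhs = sp.diff(n * f, x)
rhs = n * sp.diff(f, x)
print(sp.simplify(lhs - rhs) == 0)   # True: (n f(x))' = n f'(x)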
As for the second part, you can observe that:
n u(x) = f(x)

We know from the first part of the problem that:

n u'(x) = f'(x)

Simply divide both sides by n, and your proof is complete.
Now, using the results from both of the parts of the problem, can you show that if:
u(x) = (n/m) f(x)

and both n and m are counting numbers, then:
u'(x) = (n/m) f'(x)
And using an argument similar to the one we used for the second part of this problem, can you prove that the derivative of the difference is always equal to the difference of the derivatives? Hint: if u(x) = f(x) - g(x), then u(x) + g(x) = f(x).
Exercise 4: In each of the functions given, you must try to break the expression up into products and sums, find the derivatives of the simpler functions, then combine them according to the sum and product rules and their derivatives.
4a) Here you are asked to find the derivative of the product of two straight line functions (both of which are in the standard mx + b form). So let f(x) = m_1 x + b_1, and let g(x) = m_2 x + b_2. Both f(x) and g(x) are straight line functions, and we know that the derivative of a straight line function is always the slope of the line. Hence f'(x) = m_1 and g'(x) = m_2. Now simply apply the product rule (equation 4.2-20b), substituting into the rule the expressions you have for f(x), g(x), f'(x), and g'(x). You should get:

u'(x) = (m_1 (m_2 x + b_2)) + (m_2 (m_1 x + b_1))

Ordinarily, I wouldn't bother to multiply this out, but I'd like to take this opportunity to demonstrate a method of cross-check. Multiplying the above expression out yields:

u'(x) = m_1 m_2 x + m_1 b_2 + m_1 m_2 x + m_2 b_1 = 2 m_1 m_2 x + m_1 b_2 + m_2 b_1

If we multiply out the original u(x), we get:

u(x) = m_1 m_2 x^2 + m_1 b_2 x + m_2 b_1 x + b_1 b_2

Try using the sum rule, the rule about x^n, and the rule about multiplying by a constant on this version of u(x) to demonstrate that the derivative obtained this way is identical to the one we obtained the other way.
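The cross-check suggested above is easy to automate. The short sketch below uses the SymPy library (an aid chosen for this illustration, not part of the original exercise) to compare the product-rule answer with the derivative of the multiplied-out product:

import sympy as sp

x, m1, b1, m2, b2 = sp.symbols('x m1 b1 m2 b2')
f = m1*x + b1
g = m2*x + b2

product_rule = sp.diff(f, x)*g + sp.diff(g, x)*f   # f'(x) g(x) + g'(x) f(x)
direct = sp.diff(sp.expand(f*g), x)                # differentiate the expanded product

print(sp.simplify(product_rule - direct) == 0)     # True: the two answers agree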
4b) Here you are given the product of two quadratics. We do this the same way as we did 4a. Let f(x) = x^2 + 2x + 1 and let g(x) = x^2 - 3x + 2. We can find the derivative of each by applying the x^n rule and the sum rule. Hence f'(x) = 2x + 2 and g'(x) = 2x - 3. Now simply apply the product rule (equation 4.2-20b), substituting in these expressions for f(x), g(x), f'(x), and g'(x). Doing that yields:

u'(x) = ((2x + 2)(x^2 - 3x + 2)) + ((2x - 3)(x^2 + 2x + 1))

which is the answer. If you want to multiply out u'(x) and u(x) to do the cross check, by all means do so. It's good exercise.
As further exercise, observe that the f(x) and g(x) we used in this problem are both, themselves, products, if you factor them:
f(x) = x^2 + 2x + 1 = (x + 1)(x + 1)
g(x) = x^2 - 3x + 2 = (x - 1)(x - 2)

Try applying the product rule to both of those and make sure you come up with the same expressions for f'(x) and g'(x) as we did in the first part of this problem (i.e. 4b).
4c) This is still the same problem as the two previous ones (i.e. 4a and 4b). Again, we find that u(x) is a product, and we set f(x) and g(x) to the two factors respectively. So f(x) = x - 1 and g(x) = 4x^4 - 7x^3 + 2x^2 - 5x + 8. We find that f'(x) = 1, which ought to be pretty easy for you by now. To take the derivative of g(x) you have to see it as the sum of a bunch of x^n terms, each multiplied by a constant. So apply the x^n rule, the rule for multiplying by a constant, and the sum rule, and you get g'(x) = 16x^3 - 21x^2 + 4x - 5. When you apply the product rule (equation 4.2-20b) to the f(x), g(x), f'(x), and g'(x) we get here, you find that the answer is:

u'(x) = ((1)(4x^4 - 7x^3 + 2x^2 - 5x + 8)) + ((x - 1)(16x^3 - 21x^2 + 4x - 5))
4d) In this one you are given the product of two functions, one, x^2, explicitly, the other, f(x), as simply a symbol for any function. So let g(x) = x^2. Using material we have already covered, we can determine that g'(x) = 2x. So now we apply the product rule (equation 4.2-20b), substituting our expressions for g(x) and g'(x). We can't substitute f(x) or f'(x) because the problem doesn't give anything to substitute. We get as an answer:

u'(x) = (2x f(x)) + (x^2 f'(x))
4e) This problem is the difference of two products. You have to apply the product rule (equation 4.2-20b) to each product individually to find the derivative of each product. We have already seen that the derivative of the difference is the same as the difference of the derivatives. So to get the answer, simply take the difference of the two derivatives that you got using the product rule. I won't work the details for you here. You should be getting better at this by now. The answer is:
u'(x) = f(x) + x f'(x) - 3x^2 g(x) - x^3 g'(x)
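The same kind of symbolic check works even when f and g are left abstract. Assuming, as the stated answer implies, that the original function was u(x) = x f(x) - x^3 g(x), the SymPy sketch below reproduces the derivative given above:

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
g = sp.Function('g')

u = x*f(x) - x**3*g(x)                      # assumed form of u(x) for exercise 4e
expected = f(x) + x*sp.diff(f(x), x) - 3*x**2*g(x) - x**3*sp.diff(g(x), x)

print(sp.simplify(sp.diff(u, x) - expected) == 0)   # True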
| http://karlscalculus.org/l4_2.html | 13 |
16 | Etiology, symptoms and treatment of cholera
Cholera is an intestinal disease caused by the bacterium Vibrio cholerae, which spreads mainly through faecal contamination of water and food by infected individuals [1]. Eating raw or undercooked seafood can also cause the infection since V. cholerae was found on phyto- and zooplankton in marine, estuarine and riverine environments independent of infected human beings [2]. Two out of ca. 240 serogroups of V. cholerae – O1 and O139 – are pathogenic. The O1 serogroup can further be subdivided into two biotypes – classical and El Tor.
After an incubation period of 18 hours to five days, infected individuals will develop acute watery diarrhoea. Large volumes of rice-water-like stool and concurrent loss of electrolytes can lead to severe dehydration and eventually death if patients are not rapidly treated. Most of the infected individuals, however, are asymptomatic or suffer only from mild diarrhoea. An inoculum of 10^8 bacteria is needed in healthy individuals to cause severe acute watery diarrhoea while a 1,000-fold lower dose is sufficient to cause the disease when gastric acid production is reduced. Other clinical features besides profuse diarrhoea (more than three loose stools per day) to establish a cholera diagnosis include abdominal and muscle cramps and frequent vomiting [4]. Without treatment the case-fatality rate (CFR) can reach 50% [1].

Treatment of cases depends on the severity and includes i) giving oral rehydration solutions (ORS) after each stool if no dehydration is apparent, ii) giving ORS in larger amounts if moderate dehydration is apparent, and iii) using intravenous drips of Ringer Lactate or saline for severely dehydrated patients [4]. Antibiotics can be administered to shorten the episode in severe cases.
Prevention of cholera
Cholera usually occurs in epidemics and can cause major disruptions in affected health systems as rigorous measures have to be taken and patients treated in camps under quarantine-like conditions. Outbreaks of cholera can easily be prevented by providing safe water, sanitation and promoting good personal hygiene behaviour and safe food handling. Regions where such control measures have not been realised, or where maintenance and monitoring of existing schemes is not guaranteed, are at greatest risk of epidemics and consequently could become endemic with cholera.
The World Health Organization (WHO) has recently started to consider the use of vaccines as an additional public health tool to control cholera in low-income countries since the implementation of the above-mentioned prevention and control measures has not had the desired impact on cholera incidence [5]. Currently only one safe and efficacious vaccine is available on the market – Dukoral® – an oral cholera vaccine (OCV) consisting of killed whole-cell V. cholerae O1 with purified recombinant B-subunit of cholera toxoid. It has to be administered in two doses about one week apart. It confers, as shown in field trials in Bangladesh, Peru and Mozambique, 60–85% protection for six months in young children and about 60% in older children and adults after two years [6]. Longini Jr. et al. [9] used data collected in 1985–1989 from a randomised controlled OCV trial in Bangladesh to calculate reductions of cholera cases. Their model indicated that a 50% coverage with OCV would lead to a 93% reduction in the entire population while a lower coverage of 30% would still reduce the cholera incidence by 76%.
Global and local burden of cholera
Cholera is mainly endemic in low-income countries in Africa, Asia, Central and South America. A total of 177,963 cases and 4,031 deaths, corresponding to a CFR of 2.3%, were reported to WHO in 2007, with Africa having the largest share of worldwide reported cholera cases (94%) and deaths (99%) [10]. This share of officially reported cases from Africa has increased considerably from 20% in the 1970s to 94% in the period 2000–2005 while the Asian share has simultaneously dropped from 80% to 5.2% over the same three decades [11]. There is a similar picture with regard to reported deaths: Africa's share has increased from 22% to 97%, and Asia's has shown a steep decline from 77% to 2.4%. It has to be noted, however, that these official figures do not reflect the true burden of cholera since serious under-reporting due to technical (surveillance system limitations, problems with case definition and lack of standard vocabulary) and political (fear of travel or trade sanctions) reasons is suspected [10]. Zuckerman et al. [12] identified mainly under-reporting from the Indian subcontinent and Southeast Asia in a review carried out in 2004.
In Zanzibar, where this study will be conducted, a cholera outbreak with 411 cases and 51 deaths was reported for the first time in 1978 from a fishermen village [13]. Thirteen outbreaks have followed since then, with almost annual episodes since the year 2000, with case-fatality rates ranging from 0% to 17% and showing a downward trend over the last two decades (Reyburn et al., unpublished data). During the last outbreaks in 2006/2007, 3,234 cases and 62 deaths were reported (CFR: 1.9%). A seasonal pattern can be observed that follows the rainy seasons (usually from March to June and from October to November) during which widespread flooding occurs frequently. Such deteriorating environmental conditions subsequently expose the majority of inhabitants on both islands to an increased risk of water-borne diseases due to the scarcity of safe drinking water supplies and a generally poor or lacking sanitation infrastructure in periurban and rural areas.
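The case-fatality rates quoted above follow directly from the reported counts. A quick check in Python, using only figures already given in the text:

def cfr(cases, deaths):
    """Case-fatality rate as a percentage of reported cases."""
    return 100.0 * deaths / cases

print(f"Global, 2007: {cfr(177_963, 4_031):.1f}%")        # about 2.3%
print(f"Zanzibar, 2006/2007: {cfr(3_234, 62):.1f}%")      # about 1.9%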
Despite all efforts in the past and the inexpensive and relatively easy use of ORS as major treatment [14], cholera still poses a serious public health problem in low-income countries. Thus, a concerted action is needed to control cholera and to mitigate its health-related and economic consequences not only by maintaining and improving existing measures like water supply, sanitation and hygiene behaviour but also by assessing new prevention options like OCV mass vaccinations of vulnerable populations [5].
Importance of sociocultural and behavioural research on vaccine introduction
Public health interventions to reduce disease burden must take into account the local realities to achieve a sustainable benefit for the affected populations. Prevention or treatment measures that have proved suitable in one context are not necessarily appropriate in other situations.
Vaccination programmes have suffered reduced coverage (e.g. rumours about tetanus toxoid causing infertility in Tanzania [16]) or were even brought to a halt because of an ignorance of local realities (e.g. Northern Nigerian resistance to polio vaccination [17]). Other interventions, especially when implemented in a top-down approach, experienced the same difficulties [19].

To ensure the success of vaccination campaigns, a vaccine should not only be efficacious, relatively trouble-free for patients in its administration and preferably also cost-effective, but it is equally important that implementers consider community-held ideas, fears and individual help-seeking behaviour regarding the infectious disease and the vaccine of interest [22].

Infrastructure, logistics, politics, and social and cultural features were identified as significant factors which determine vaccine acceptance and thus the success or failure of immunisation programmes in low-income countries [24]. The importance of the social and cultural context on vaccine acceptance was assessed in various recent studies of typhoid fever and shigellosis (e.g. in Asian countries [27]). It was reasserted that the importance of the social and cultural context on vaccine introduction has to be studied carefully in order to improve vaccination coverage [32].

Research to improve the health of people needs to include gender issues since they play a crucial role in health and health planning [33]. Gender differences are context-specific and thus require that sociocultural and behavioural research be done to complement clinical or epidemiological research. A recent review on the control of tropical diseases concluded that more detailed data about illness experience, meaning and help-seeking behaviour is needed at the gender level to inform the planning and execution of health interventions [34].
Socioeconomic features, and cultural beliefs and practices, may vary across and within different sites or populations. And since differences in income, education, neighbourhood, infrastructure, etc. can affect people's health and behaviour regarding risk and relief, it is prudent to include site-specific analyses when doing research on the acceptance of community vaccine interventions.
Protocol review and ethical clearance
This paper summarises the protocol that had been reviewed by two independent scientists with expertise in the field of cholera and social science research before it was submitted to and accepted by the WHO Research Ethics Review Committee and the Ethics Committee of Zanzibar.
All participants will be informed about the study and individual written consent obtained before conducting discussions or interviews. All data will be handled with strict confidentiality and made anonymous before analysis.
Rationale for research: socioeconomic and behavioural (SEB) study
In late 2006, WHO received a grant from the Bill and Melinda Gates Foundation to work on the pre-emptive use of OCV in vulnerable populations at risk. The main focus of this grant is to examine how OCV can sustainably be used in countries with endemic cholera in addition to usually recommended control measures such as provision of safe water, adequate sanitation and health education. An important feature of the project is to collect evidence to assess the usefulness and financial stability of establishing an OCV stockpile.
To achieve these goals, WHO launched a joint venture with the Ministry of Health and Social Welfare of Zanzibar (MoHSW) to vaccinate 50,000 community residents older than two years living in communities at high risk of cholera with Dukoral®. The two islands of Zanzibar (Figure ) were chosen as study area since they have been regularly affected by cholera over the past three decades and since the local government wishes to enhance its strategy to control the disease and to examine the possibility of introducing OCV as a public health measure.
Map of Zanzibar with the two main islands. Courtesy of the University of Texas Libraries. The University of Texas at Austin.
Complementary to the vaccination campaign, since no sociocultural and behavioural studies related to cholera and OCV introduction have been conducted yet in African settings, the SEB study was conceived as a pilot project to address the research questions stated below. Besides the focus on cholera, it was also decided to include, to a lesser extent, shigellosis (bloody dysentery caused by Shigella spp.) in this research to investigate similarities and differences between the community perceptions of these two serious and potentially fatal diarrhoeal diseases.
Aims and research questions
The main aim of the SEB study, its stakeholders and research questions are as follows:
To generate evidence on the role socioeconomic and sociocultural factors can play to inform government policies regarding the introduction of OCV as part of a sustainable and financially viable cholera control strategy.
To inform the Government of Zanzibar, in particular the Ministry of Health and Social Welfare, regarding the national policy on cholera control and the use of OCV on the archipelago.
Research will be done on the following four stakeholder levels in Zanzibar:
- Level I: policy makers on national and regional level
- Level II: allopathic and traditional health care providers working in the target areas and district hospitals
- Level III: formal and informal local government and community leaders and teachers from the target areas
- Level IV: adult community residents (household level)
In populations where cholera is endemic:
- What are the perceptions of cholera in the context of diarrhoeal diseases, in particular shigellosis?
- What are the essential features of cholera and shigellosis?
- What is the acceptance of OCV?
For each question, the following comparisons will be made between:
- Site: periurban (Unguja) vs. rural (Pemba)
- Vaccination (intervention) status
- Stakeholders: all four levels
How to Select Solder
Image credit: RS Components | Digi-Key | All-Spec Industries
Solder is a metal alloy used to join metals together.
The term "solder" represents a group of filler metals used as consumables when joining two pieces of metal together - a process known as "soldering." Soldering has been a very common metalworking technique throughout human history and remains a permanent process in applications as diverse as jewelry making, plumbing, and electronics manufacturing. The process involves melting a filler metal (solder) and flowing it into a metal joint. For this reason, it is important for the filler metal to have a lower melting point than the metals being joined. Soldering creates a "reasonably permanent" seal, meaning that the joint should hold unless the seal is intentionally reversed by desoldering.
The image below shows the manual soldering of a stripped wire. The iron (right) heats the solder wire (top) to join the wire to a surface or another wire; in this case the stripped wire is being tinned. The smoke seen in the image is a typical byproduct of most soldering processes.
Image credit: Purdue University
At its most basic, soldering consists of a solder alloy and a heat source, commonly a soldering iron or soldering gun, to melt and flow the solder into place. Soldering machines are more advanced pieces of equipment that provide additional features beyond manual soldering techniques.
Solder is typically composed of alloys with melting points between 180°C and 190°C. It is important to note that, while solder is used to create a strong metal joint, it does not actually fuse with the solid metals to be joined. Because solder alloys need to wet the surface of metal parts before joining them, the parts must be heated above the melting point of the solder.
A solder's composition may include flux, an additive to improve flow. Because heating a metal causes rapid oxidation, flux is also used to clean the oxide layer from the metal surface to provide a clean surface for soldering; this process is shown in the image below. Common fluxes include ammonium chloride, zinc chloride, rosin, and hydrochloric acid.
Action of flux. Image credit: Integrated Publishing
A solder's melting point, toxicity, and uses are almost solely determined by its alloy metals. Historically, most solders contained lead, but recent concerns about toxicity and lead poisoning have encouraged more widespread use of lead-free solders.
Alloys are specified as a chemical "formula" of sorts, with the percentage of each element represented as a subscript. For example, a tin/lead solder containing 63% tin and 37% lead is referred to as Sn63Pb37.
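To make the naming convention concrete, the sketch below (Python; `parse_alloy` is a hypothetical helper, not part of any solder standard) splits a designation such as Sn63Pb37 into element symbols and weight percentages. It assumes each percentage is written directly after its symbol and that the percentages sum to roughly 100.

```python
import re

def parse_alloy(designation: str) -> dict[str, float]:
    """Parse a designation such as 'Sn63Pb37' into {'Sn': 63.0, 'Pb': 37.0}.

    Assumes each element symbol is followed by its weight percentage; a lone
    symbol with no number (e.g. pure 'In') is treated as 100%.
    """
    composition = {}
    for symbol, pct in re.findall(r"([A-Z][a-z]?)(\d+(?:\.\d+)?)?", designation):
        composition[symbol] = float(pct) if pct else 100.0
    total = sum(composition.values())
    if not 99.0 <= total <= 101.0:
        raise ValueError(f"percentages sum to {total}, expected roughly 100")
    return composition

print(parse_alloy("Sn63Pb37"))          # {'Sn': 63.0, 'Pb': 37.0}
print(parse_alloy("Sn95.8Ag3.5Cu0.7"))  # {'Sn': 95.8, 'Ag': 3.5, 'Cu': 0.7}
```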
Tin/lead (or Sn/Pb) alloys are very common, versatile solders with a wide range of uses. Like most solders, Sn/Pb is manufactured with different elemental concentrations dependent on the intended application. A few common concentrations, melting points, and uses are listed in the table below.
[Table: common Sn/Pb concentrations, their melting points (°C/°F), and typical uses - electrical/electronic components and pipes/plumbing.]
Due to increasing restrictions on products containing lead, the use of tin/lead solders and lead solders in general is steadily decreasing. Sn/Pb solders have generally disappeared from plumbing applications in favor of silver alloys, but remain in use in electrical and electronics manufacturing, gas lines, and brass soldering.
Lead/zinc (Pb/Zn) solders are less expensive than traditional Sn/Pb solders due to the relatively higher cost of tin. Some lead/zinc alloys, such as Sn30Pb50Zn20, are widely used for economical joining of metals, including aluminum and cast iron. This composition has also been used for repairing galvanized surfaces. In general, zinc is added to solder alloys to lower the melting point and reduce costs.
Lead-free solders have become much more common due to new legislation and tax benefits regarding lead-free products. The Waste Electrical and Electronic Equipment (WEEE) and Restriction of Hazardous Substances (RoHS) directives, both adopted by the European Union (EU) in 2003 and enforced from 2006, have effectively prohibited intentional use of lead solders in European-made consumer electronics. Lead-free solders typically use some combination of indium (In), tin (Sn), or aluminum (Al). Interestingly, cadmium-zinc (Cd-Zn) solder, while considered a lead-free alloy, is not RoHS compliant due to the directive's ban on cadmium as well as lead. Other than Cd-Zn, most lead-free solders are not considered toxic.
The graph below provides a helpful visual comparison of the melting points of various tin-based lead-free solders, many of which are discussed in detail below.
Image credit: The Minerals, Metals and Materials Society
Pure indium solder is commonly used in electronics manufacturing. Indium alloys are very useful for soldering surface mount (SMT) components and parts with gold, ceramic, quartz, or glass base materials. It features a low melting point of around 157° C (314.6° F). Indium solders are most suitable for low temperature applications and can maintain seals in cryogenic environments.
Tin/antimony (Sn/Sb) is a high-strength alloy extensively used in the plumbing industry. It is also used in electronics applications for pin soldering and die attachment. Tin/antimony solders create strong bonds with good thermal fatigue strength even in high temperature environments. Sn/Sb alloys melt at around 235° C (455° F) and are also used in air conditioning, refrigeration, stained glass, and radiator applications.
Sn/Ag (tin/silver) solders represent a common group of alloys often used for wave and reflow soldering. Generally speaking, silver is added to alloys to improve mechanical strength, although it is usually restricted to less than 3% of the total alloy composition to reduce the risk of poor ductility and cracking. Common compositions include Sn95.8Ag3.5Cu0.7 and Sn96.5Ag3.5, which have relatively high melting points of 217° C and 221° C, respectively.
Zn/Al solder has a very high melting point of 382° C (719.6° F) and is particularly useful for soldering aluminum. Zinc/aluminum has a composition favorable for good wetting.
Cd/Zn alloys are medium-temperature solders used to join most metals, especially aluminum and copper. Cadmium/zinc solders form strong, corrosion-resistant joints and are suitable for high-vibration and high stress applications. While Cd/Zn alloys are available in several different compositions, most share a melting point of around 265° C (509° F).
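As a rough illustration of how these properties feed into selection, the sketch below (Python; the `Solder` record and `shortlist` helper are invented for this example) filters a small candidate list by maximum process temperature and RoHS compliance. The melting points are the figures quoted above where available; the 183°C value for eutectic Sn63Pb37 is a commonly cited figure rather than one taken from this article, and a real selection would also weigh wetting, joint strength, cost, and process constraints.

```python
from dataclasses import dataclass

@dataclass
class Solder:
    name: str
    melting_point_c: float
    lead_free: bool
    rohs_compliant: bool

# Hand-transcribed, illustrative subset of the alloys discussed above.
CANDIDATES = [
    Solder("Sn63Pb37 (eutectic tin/lead)", 183.0, False, False),
    Solder("Pure indium (In)",             157.0, True,  True),
    Solder("Sn96.5Ag3.5 (tin/silver)",     221.0, True,  True),
    Solder("Sn95.8Ag3.5Cu0.7 (SAC)",       217.0, True,  True),
    Solder("Tin/antimony (Sn/Sb)",         235.0, True,  True),
    Solder("Cd/Zn (cadmium/zinc)",         265.0, True,  False),  # lead-free but not RoHS
    Solder("Zn/Al (zinc/aluminum)",        382.0, True,  True),
]

def shortlist(max_melting_c: float, require_rohs: bool = True) -> list[Solder]:
    """Return candidates whose melting point fits the process temperature limit."""
    return [s for s in CANDIDATES
            if s.melting_point_c <= max_melting_c
            and (s.rohs_compliant or not require_rohs)]

# Example: a process limited to about 230 °C that must be RoHS compliant.
for s in shortlist(230.0):
    print(f"{s.name}: melts at {s.melting_point_c} °C")
```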
Solder is available in a number of form factors, including paste, powder, wire, and preformed shapes. Selecting between these solder types requires an analysis of the application and general needs. Preform solder is the most specific (and limiting) type and consists of a pre-made shape designed for a specialized application. Preform solders are often stamped and may include integral flux.
Solder paste consists of solder powder mixed with a thick flux material and is "printed" onto a PCB using a stencil. The flux serves as a temporary adhesive to hold components onto the board until the paste is heated; after heating, a stronger physical bond is formed. Pastes are typically made of tin/lead alloys.
(left to right) A selection of preform solder shapes; a magnified view of solder paste.
Image credit: MBO | Curious Inventor
Solder wire is available in a range of thicknesses and configurations. Wire may or may not contain flux.
Wire solder in a dispenser (left) and on a reel.
Image credit: Apogee Kits | NYP
Standards and Standards Bodies
Solder alloys and their uses are governed by a wide range of standards. The standards bodies below are linked with their relevant standards.
The Aerospace Material Standards (AMS) are published by SAE International, formerly known as the Society of Automotive Engineers. This set of over 6,400 technical documents includes technical recommendations applying to missiles, airframes, ground-control equipment, propellers, and propulsion systems.
ASTM International — formerly known as the American Society for Testing and Materials — is one of the oldest continuously operating international standards organizations. ASTM maintains over 12,000 standards, including a wide variety pertaining to solder:
ASTM B579 (Electrodeposited coatings of tin-lead alloys)
ASTM B907 (Zinc, tin, and cadmium base alloys used as solders)
ASTM B828 (Making capillary joints by soldering of copper and copper alloy tube and fittings)
The International Organization for Standardization, or ISO, is a well-known international standards body. Its standard ISO 9453 is a major standard covering a wide variety of soft solder alloy compositions.
MIL-SPEC standards are United States defense standards ensuring interoperability, quality, commonality, and general compatibility of military products. MIL-S-12204 is a well-known tin/lead solder standard.
Other standards pertaining to solder alloys are published by these and other standards organizations.
Direct democracy, classically termed pure democracy, comprises a form of democracy and theory of civics wherein sovereignty is lodged in the assembly of all citizens who choose to participate. Depending on the particular system, this assembly might pass executive motions, make laws, elect and dismiss officials and conduct trials. Where the assembly elects officials, these are executive agents or direct representatives, bound to the will of the people.
Direct democracy stands in contrast to representative democracy, where sovereignty is exercised by a subset of the people, usually on the basis of election. However, it is possible to combine the two into representative direct democracy.
Modern direct democracy is characterized by three pillars: the initiative, the referendum, and the recall.
Referendums can include the ability to hold a binding vote on whether a given law should be scrapped. This effectively grants the populace a veto on government legislation. The recall gives the people the right to remove elected officials from office before the end of their term.
The first recorded democracy, which was also direct, was the Athenian democracy in the 5th century BC. The main bodies in the Athenian democracy were the assembly, composed of male citizens; the boule, composed of 500 citizens chosen annually by lot; and the law courts, composed of a massive number of jurors chosen by lot, with no judges. Out of the male population of 30,000, several thousand citizens were politically active every year, and many of them quite regularly for years on end. The Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people, through the assembly, boule and law courts, controlled the entire political process, and a large proportion of citizens were involved constantly in the public business. Modern democracies do not use institutions that resemble the Athenian system of rule, due to the problems arising when implementing such a system on the scale of modern societies.
Also relevant is the history of the Roman Republic, beginning circa 449 BC (Cary, 1967). The ancient Roman Republic's "citizen lawmaking" (citizen formulation and passage of law, as well as citizen veto of legislature-made law) began about 449 BC and lasted approximately four hundred years, to the death of Julius Caesar in 44 BC. Many historians mark the end of the Republic with the passage of a law named the Lex Titia on 27 November 43 BC (Cary, 1967).
Modern-era citizen lawmaking began in the towns of Switzerland in the 13th century. In 1847, the Swiss added the "statute referendum" to their national constitution. They soon discovered that merely having the power to veto Parliament's laws was not enough. In 1891, they added the "constitutional amendment initiative". The Swiss political battles since 1891 have given the world a valuable experience base with the national-level constitutional amendment initiative (Kobach, 1993). Today, Switzerland is still an example of modern direct democracy, as it exhibits the first two pillars at both the local and federal levels. In the past 120 years more than 240 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by government. (See Direct democracy in Switzerland below.) Another example is the United States: although it is a federal republic with no direct democracy at the federal level, over half the states (and many localities) provide for citizen-sponsored ballot initiatives (also called "ballot measures" or "ballot questions"), and the vast majority of the states have initiatives, referendums, or both. (See Direct democracy in the United States below.)
Some of the issues surrounding the related notion of a direct democracy using the Internet and other communications technologies are dealt with in e-democracy. More specifically, the concept of open source governance applies principles of the free software movement to the governance of people, allowing the entire populace to participate in government directly, as much or as little as they please. This development strains the traditional concept of democracy, because it does not give equal representation to each person. Some implementations may even be considered democratically-inspired meritocracies, where contributors to the code of laws are given preference based on their ranking by other contributors.
Many political movements within representative democracies seek to restore some measure of direct democracy or a more deliberative democracy, including consensus decision-making rather than simple majority rule. Such movements advocate more frequent public votes and referendums on issues, and less of the so-called "rule by politicians". Collectively, these movements are referred to as advocating grassroots democracy or consensus democracy, to differentiate them from a simple direct democracy model. Another related movement is community politics, which seeks to engage representatives with communities directly.
Anarchists (usually social anarchists) have advocated forms of direct democracy as an alternative to the centralized state and capitalism; however, some anarchists, such as individualist anarchists, have criticized direct democracy and democracy in general for ignoring the rights of the minority, and have instead advocated a form of consensus decision-making. Libertarian Marxists, however, fully support direct democracy in the form of the proletarian republic and see majority rule and citizen participation as virtues. Within Marxist circles, "proletarian democracy" is synonymous with direct democracy, just as "bourgeois democracy" is synonymous with representative democracy.
Arguments for direct democracy
Arguments in favor of direct democracy tend to focus on perceived flaws in the alternative, representative democracy, which is sometimes seen as a form of oligarchy (Hans Köchler, 1995), and on its properties, such as nepotism and a lack of transparency and accountability to the people:
- Non-representation. Individuals elected to office in a representative democracy tend not to be demographically representative of their constituency. They tend to be wealthier and more educated, and are also more predominantly male as well as members of the majority race, ethnic group, and religion than a random sample would produce. They also tend to be concentrated in certain professions, such as law. Elections by district may reduce, but not eliminate, those tendencies in a segregated society. Direct democracy would be inherently representative, assuming universal suffrage (where everyone can vote). Critics counter that direct democracy can be unrepresentative if not all eligible voters participate in every vote, and that this lack of turnout is not equally distributed among various groups. Greater levels of education, especially regarding law, seem to bring both advantages and disadvantages in lawmaking.
- Conflict of interest. The interests of elected representatives do not necessarily correspond with those of their constituents. An example is that representatives often get to vote to determine their own salaries. It is in their interest that the salaries be high, while it is in the interest of the electorate that they be as low as possible, since they are funded with tax revenue. In practice, however, representatives' salaries tend to be much higher than the average salary of the electorate. Critics counter that salaries for representatives are necessary, otherwise only the wealthy could afford to participate.
- Corruption. The concentration of power intrinsic to representative government is seen by some as tending to create corruption. In direct democracy, the possibility for corruption is reduced.
- Political parties. The formation of political parties is considered by some to be a "necessary evil" of representative democracy, where combined resources are often needed to get candidates elected. However, such parties mean that individual representatives must compromise their own values and those of the electorate, in order to fall in line with the party platform. At times, only a minor compromise is needed. At other times such a large compromise is demanded that a representative will resign or switch parties. In structural terms, the party system may be seen as a form of oligarchy. (Hans Köchler, 1995) Meanwhile, in direct democracy, political parties have virtually no effect, as people do not need to conform with popular opinions. In addition to party cohesion, representatives may also compromise in order to achieve other objectives, by passing combined legislation, where for example minimum wage measures are combined with tax relief. In order to satisfy one desire of the electorate, the representative may have to abandon a second principle. In direct democracy, each issue would be decided on its own merits, and so "special interests" would not be able to include unpopular measures in this way.
- Government transition. The change from one ruling party to another, or to a lesser extent from one representative to another, may cause a substantial governmental disruption and change of laws. For example, US Secretary of State (then National Security Advisor) Condoleezza Rice cited the transition from the previous Clinton Administration as a principal reason why the United States was unable to prevent the September 11, 2001 attacks. The Bush Administration had taken office just under 8 months prior to the attacks.
- Cost of elections. Many resources are spent on elections which could be applied elsewhere. Furthermore, the need to raise campaign contributions is felt to seriously damage the neutrality of representatives, who are beholden to major contributors, and reward them, at the very least, by granting access to government officials. However, direct democracy would require many more votes, which would be costly, and would also probably prompt campaigns by those who stand to lose or gain from the results.
- Patronage and nepotism. Elected individuals frequently appoint people to high positions based on their mutual loyalty, as opposed to their competence. For example, Michael D. Brown was appointed to head the US Federal Emergency Management Agency, despite a lack of experience. His subsequent poor performance following Hurricane Katrina may have greatly increased the number of deaths. In a direct democracy where everybody voted for agency heads, it wouldn't be likely for them to be elected solely based on their relationship with the voters. On the other hand, most people may have no knowledge of the candidates and get tired of voting for every agency head. As a result, mostly friends and relatives may vote.
- Lack of transparency. Supporters argue that direct democracy, where people vote directly for issues concerning them, would result in greater political transparency than representative democracy. Critics argue that representative democracy can be equally transparent. In both systems people cannot vote on everything, leaving many decisions to some forms of managers, requiring strong Freedom of Information legislation for transparency.
- Insufficient sample size. It is often noted that prediction markets most of the time produce remarkably efficient predictions regarding the future. Many, maybe even most, individuals make bad predictions, but the resulting average prediction is often surprisingly good. If the same applies to making political decisions, then direct democracy may produce very efficient decisions.
- Lack of accountability. Once elected, representatives are free to act as they please. Promises made before the election are often broken, and they frequently act contrary to the wishes of their electorate. Although theoretically it is possible to have a representative democracy in which the representatives can be recalled at any time; in practice this is usually not the case. An instant recall process would, in fact, be a form of direct democracy.
- Voter apathy. If voters have more influence on decisions, it is argued that they will take more interest in and participate more in deciding those issues.
Arguments against direct democracy
- Scale. Direct democracy works on a small system. For example, the Athenian Democracy governed a city of, at its height, about 30,000 eligible voters (free adult male citizens). Town meetings, a form of local government once common in New England, have also worked well, often emphasizing consensus over majority rule. The use of direct democracy on a larger scale has historically been more difficult, however. Nevertheless, developments in technology such as the internet, user-friendly and secure software, and inexpensive, powerful personal computers have all inspired new hope in the practicality of large scale applications of direct democracy. Furthermore ideas such as council democracy and the Marxist concept of the dictatorship of the proletariat are if nothing else proposals to enact direct democracy in nation-states and beyond.
- Practicality and efficiency. Another objection to direct democracy is that of practicality and efficiency. Deciding all or most matters of public importance by direct referendum is slow and expensive (especially in a large community), and can result in public apathy and voter fatigue, especially when repeatedly faced with the same questions or with questions which are unimportant to the voter. Modern advocates of direct democracy often suggest e-democracy (sometimes including wikis, television and Internet forums) to address these problems.
- Demagoguery. A fundamental objection to direct democracy is that the public generally gives only superficial attention to political issues and is thus susceptible to charismatic argument or demagoguery. The counter argument is that representative democracy causes voters not to pay attention, since each voter's opinion doesn't matter much and their legislative power is limited. However, if the electorate is large, direct democracy also diminishes the significance of each individual vote, absent a majority vote policy.
- One possible solution is demanding that a proposal requires the support of at least 50% of all citizens in order to pass, effectively meaning that absent voters count as "No" votes. This would prevent minorities from gaining power. However, this still means that the majority could be swayed by demagoguery. Also, this solution could be used by representative democracy.
- Complexity. A further objection is that policy matters are often so complicated that not all voters understand them. The average voter may have little knowledge regarding the issues that should be decided. The arduous electoral process in representative democracies may mean that the elected leaders have above average ability and knowledge. Advocates of direct democracy argue, however, that laws need not be so complex and that having a permanent ruling class (especially when populated in large proportion by lawyers) leads to overly complex tax laws, etc. Critics doubt that laws can be extremely simplified and argue that many issues require expert knowledge. Supporters argue that such expert knowledge could be made available to the voting public. Supporters further argue that policy matters are often so complicated that politicians in traditional representative democracy do not all understand them. In both cases, the solution for politicians and demos of the public is to have experts explain the complexities.
- Voter apathy. The average voter may not be interested in politics and therefore may not participate. This immediately reveals the lack of interest either in the issues themselves or in the options; sometimes people need to redefine the issues before they can vote either in favor or in opposition. A small amount of voter apathy is always to be expected, and this is not seen as a problem so long as the levels remain constant among (do not target) specific groups of people. That is, if 10% of the population voted, with representative samples from all groups in the population, then in theory the outcome would be correct. Nevertheless, a high level of voter apathy would reveal a substantial escalation in voter fatigue and political disconnect. The risk is, however, that voter apathy would not apply to special interest groups. For example, most farmers may vote for a proposal to increase agricultural subsidies to themselves while the general population ignores this issue. If many special interest groups do the same thing, then the resources of the state may be exhausted. One possible solution is compulsory voting, although this has problems of its own such as restriction of freedom, costs of enforcement, and random voting.
- Self-interest. It is very difficult under a system of direct democracy to make a law which benefits a smaller group if it hurts a larger group, even if the benefit to the small group outweighs that of the larger group. This point is also an argument in favour of Direct Democracy, as current representative party systems often make decisions that are not in line with or in favour of the mass of the population, but of a small group. It should be noted that this is a criticism of democracy in general. "Fiscal responsibility", for instance, is difficult under true direct democracy, as people generally do not wish to pay taxes, despite the fact that governments need a source of revenue. One possible solution to the issue regarding minority rights and public welfare is to have a constitution that requires that minority interests and public welfare (such as healthcare, etc) be protected and ensures equality, as is the case with representative democracy. The demos would be able to work out the "how" of providing services, but some of the "what" that is to be provided could be enshrined in a constitution.
- Suboptimality. Results may be quite different depending on whether people vote on single issues separately in referendums, or on a number of options bundled together by political parties. As explained in the article on majority rule, the results from voting separately on the issues may be suboptimal, which is a strong argument against the indiscriminate use of referendums. With direct democracy, however, the one-vote one-human concept and individualism with respect to voting would tend to discourage the formation of parties. Further optimality might be achieved, argue proponents, by having recallable delegates to specialized councils and higher levels of governance, so that the primary focus of the everyday citizen would be on their local community.
- Manipulation by timing and framing. If voters are to decide on an issue in a referendum, a day (or other period of time) must be set for the vote and the question must be framed, but since the date on which the question is set and different formulations of the same question evoke different responses, whoever sets the date of the vote and frames the question has the possibility of influencing the result of the vote. Manipulation is also present in pure democracy with a growing population. Original members of the society are able to instigate measures and systems that enable them to manipulate the thoughts of new members to the society. Proponents counter that a portion of time could be dedicated and mandatory as opposed to a per-issue referendum. In other words, each member of civil society could be required to participate in governing their society each week, day, or other period of time.
Direct democracy in Switzerland
In Switzerland, single majorities are sufficient at the town, city, and state (canton and half-canton) level, but at the national level, "double majorities" are required on constitutional matters. The intent of the double majorities is simply to ensure any citizen-made law's legitimacy.
Double majorities are, first, the approval by a majority of those voting, and, second, a majority of states in which a majority of those voting approve the ballot measure. A citizen-proposed law (i.e. initiative) cannot be passed in Switzerland at the national level if a majority of the people approve, but a majority of the states disapprove (Kobach, 1993). For referendums or proposition in general terms (like the principle of a general revision of the Constitution), the majority of those voting is enough (Swiss constitution, 2005).
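The double-majority rule can be made concrete with a short worked example. The sketch below (Python; canton names and vote counts are invented) checks both hurdles for a single ballot measure, and it simplifies by counting every canton equally rather than giving half-cantons half a cantonal vote as Swiss practice does.

```python
def double_majority(cantonal_results: dict[str, tuple[int, int]]) -> bool:
    """cantonal_results maps canton name -> (yes_votes, no_votes)."""
    total_yes = sum(yes for yes, no in cantonal_results.values())
    total_no = sum(no for yes, no in cantonal_results.values())
    popular_majority = total_yes > total_no
    cantons_in_favour = sum(1 for yes, no in cantonal_results.values() if yes > no)
    cantonal_majority = cantons_in_favour > len(cantonal_results) / 2
    # Both hurdles must be cleared for a constitutional measure to pass.
    return popular_majority and cantonal_majority

# Toy example with made-up numbers: the popular vote passes, but only one of
# three cantons approves, so the measure fails the double-majority test.
example = {"A": (600_000, 400_000), "B": (90_000, 110_000), "C": (80_000, 100_000)}
print(double_majority(example))  # False
```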
In 1890, when the provisions for Swiss national citizen lawmaking were being debated by civil society and government, the Swiss copied the idea of double majorities from the United States Congress, in which House votes were to represent the people and Senate votes were to represent the states (Kobach, 1993). According to its supporters, this "legitimacy-rich" approach to national citizen lawmaking has been very successful. Kobach claims that Switzerland has had tandem successes both socially and economically which are matched by only a few other nations, and that the United States is not one of them. Kobach states at the end of his book, "Too often, observers deem Switzerland an oddity among political systems. It is more appropriate to regard it as a pioneer." Finally, the Swiss political system, including its direct democratic devices in a multi-level governance context, becomes increasingly interesting for scholars of EU integration (see Trechsel, 2005).
Direct democracy in the United States
Direct democracy was very much opposed by the framers of the United States Constitution and some signers of the Declaration of Independence. They saw a danger in majorities forcing their will on minorities. As a result, they advocated a representative democracy in the form of a constitutional republic over a direct democracy. For example, James Madison, in Federalist No. 10, advocates a constitutional republic over direct democracy precisely to protect the individual from the will of the majority. He says, "A pure democracy can admit no cure for the mischiefs of faction. A common passion or interest will be felt by a majority, and there is nothing to check the inducements to sacrifice the weaker party. Hence it is, that democracies have ever been found incompatible with personal security or the rights of property; and have, in general, been as short in their lives as they have been violent in their deaths." John Witherspoon, one of the signers of the Declaration of Independence, said "Pure democracy cannot subsist long nor be carried far into the departments of state — it is very subject to caprice and the madness of popular rage." Alexander Hamilton said, "That a pure democracy if it were practicable would be the most perfect government. Experience has proved that no position is more false than this. The ancient democracies in which the people themselves deliberated never possessed one good feature of government. Their very character was tyranny; their figure deformity...".
Despite the framers' intentions in the beginning of the republic, ballot measures and their corresponding referendums have been widely used at the state and sub-state level. There is much state and federal case law, from the early 1900s to the 1990s, that protects the people's right to each of these direct democracy governance components (Magleby, 1984, and Zimmerman, 1999). The first United States Supreme Court ruling in favor of the citizen lawmaking was in Pacific States Telephone and Telegraph Company v. Oregon, 223 U.S. 118—in 1912 (Zimmerman, December 1999). President Theodore Roosevelt, in his "Charter of Democracy" speech to the 1912 Ohio constitutional convention, stated "I believe in the Initiative and Referendum, which should be used not to destroy representative government, but to correct it whenever it becomes misrepresentative."
In various states, referendums through which the people rule include:
- Referrals by the legislature to the people of "proposed constitutional amendments" (constitutionally used in 49 states, excepting only Delaware — Initiative & Referendum Institute, 2004).
- Referrals by the legislature to the people of "proposed statute laws" (constitutionally used in all 50 states — Initiative & Referendum Institute, 2004).
- Constitutional amendment initiative is the most powerful citizen-initiated, direct democracy governance component. It is a constitutionally-defined petition process of "proposed constitutional law," which, if successful, results in its provisions being written directly into the state's constitution. Since constitutional law cannot be altered by state legislatures, this direct democracy component gives the people an automatic superiority and sovereignty, over representative government (Magelby, 1984). It is utilized at the state level in eighteen states: Arizona, Arkansas, California, Colorado, Florida, Illinois, Massachusetts, Michigan, Mississippi, Missouri, Montana, Nebraska, Nevada, North Dakota, Ohio, Oklahoma, Oregon and South Dakota (Cronin, 1989). Among the eighteen states, there are three main types of the constitutional amendment initiative, with different degrees of involvement of the state legislature distinguishing between the types (Zimmerman, December 1999).
- Statute law initiative is a constitutionally-defined, citizen-initiated, petition process of "proposed statute law," which, if successful, results in law being written directly into the state's statutes. The statute initiative is used at the state level in twenty-one states: Alaska, Arizona, Arkansas, California, Colorado, Idaho, Maine, Massachusetts, Michigan, Missouri, Montana, Nebraska, Nevada, North Dakota, Ohio, Oklahoma, Oregon, South Dakota, Utah, Washington and Wyoming (Cronin, 1989). Note that, in Utah, there is no constitutional provision for citizen lawmaking. All of Utah's I&R law is in the state statutes (Zimmerman, December 1999). In most states, there is no special protection for citizen-made statutes; the legislature can begin to amend them immediately.
- Statute law referendum is a constitutionally-defined, citizen-initiated, petition process of the "proposed veto of all or part of a legislature-made law," which, if successful, repeals the standing law. It is used at the state level in twenty-four states: Alaska, Arizona, Arkansas, California, Colorado, Idaho, Kentucky, Maine, Maryland, Massachusetts, Michigan, Missouri, Montana, Nebraska, Nevada, New Mexico, North Dakota, Ohio, Oklahoma, Oregon, South Dakota, Utah, Washington and Wyoming (Cronin, 1989).
- The recall is a constitutionally-defined, citizen-initiated, petition process, which, if successful, removes an elected official from office by "recalling" the official's election. In most state and sub-state jurisdictions having this governance component, voting for the ballot that determines the recall includes voting for one of a slate of candidates to be the next office holder, if the recall is successful. It is utilized at the state level in eighteen states: Alaska, Arizona, California, Colorado, Georgia, Idaho, Kansas, Louisiana, Michigan, Minnesota, Montana, Nevada, New Jersey, North Dakota, Oregon, Rhode Island, Washington and Wisconsin (National Conference of State Legislatures, 2004, Recall Of State Officials).
There are now a total of 24 U.S. states with constitutionally-defined, citizen-initiated, direct democracy governance components (Zimmerman, December 1999). In the United States, for the most part only one-time majorities are required (simple majority of those voting) to approve any of these components.
In addition, many localities around the U.S. also provide for some or all of these direct democracy governance components, and in specific classes of initiatives (like those for raising taxes), there is a supermajority voting threshold requirement. Even in states where direct democracy components are scant or nonexistent at the state level, there often exists local options for deciding specific issues, such as whether a county should be "wet" or "dry" in terms of whether alcohol sales are allowed.
In the U.S. region of New England, nearly all towns practice a very limited form of home rule, and decide local affairs through the direct democratic process of the town meeting.
Venezuela has been dabbling with direct democracy since its new constitution was approved in 1999. However, this experiment has run for less than ten years, and results are controversial. Still, its constitution does enshrine the right of popular initiative, sets minimal requirements for referenda (a few of which have already been held) and does include the institution of recall for any elected authority (which has been unsuccessfully used against its incumbent president).
Contemporary movements for direct democracy via direct democratic praxis
Some contemporary movements working for direct democracy via direct democratic praxis include:
- Arnon, Harel (January 2008). "A Theory of Direct Legislation" (LFB Scholarly)
- Cary, M. (1967) A History Of Rome: Down To The Reign Of Constantine. St. Martin's Press, 2nd edition.
- Cronin, Thomas E. (1989). Direct Democracy: The Politics Of Initiative, Referendum, And Recall. Harvard University Press.
- Finley, M.I. (1973). Democracy Ancient And Modern. Rutgers University Press.
- Fotopoulos, Takis, Towards an Inclusive Democracy: The Crisis of the Growth Economy and the Need for a New Liberatory Project (London & NY: Cassell, 1997).
- Fotopoulos, Takis, The Multidimensional Crisis and Inclusive Democracy. (Athens: Gordios, 2005). (English translation of the book with the same title published in Greek).
- Fotopoulos, Takis, "Liberal and Socialist “Democracies” versus Inclusive Democracy", The International Journal of INCLUSIVE DEMOCRACY, vol.2, no.2, (January 2006).
- Gerber, Elisabeth R. (1999). The Populist Paradox: Interest Group Influence And The Promise Of Direct Legislation. Princeton University Press.
- Hansen, Mogens Herman (1999). The Athenian Democracy in the Age of Demosthenes: Structure, Principles and Ideology. University of Oklahoma, Norman (orig. 1991).
- Kobach, Kris W. (1993). The Referendum: Direct Democracy In Switzerland. Dartmouth Publishing Company.
- Köchler, Hans (1995). A Theoretical Examination of the Dichotomy between Democratic Constitutions and Political Reality. University Center Luxemburg.
- Magleby, David B. (1984). Direct Legislation: Voting On Ballot Propositions In The United States. Johns Hopkins University Press.
- National Conference of State Legislatures, (2004). Recall Of State Officials
- Polybius (c.150 BC). The Histories. Oxford University, The Great Histories Series, Ed., Hugh R. Trevor-Roper and E. Badian. Translated by Mortimer Chanbers. Washington Square Press, Inc (1966).
- Reich, Johannes (2008). An Interactional Model of Direct Democracy: Lessons from the Swiss Experience. SSRN Working Paper.
- Raaflaub K. A., Ober J., Wallace R. W., Origins of Democracy in Ancient Greece, University of California Press, 2007
- Zimmerman, Joseph F. (March 1999). The New England Town Meeting: Democracy In Action. Praeger Publishers.
- Zimmerman, Joseph F. (December 1999). The Initiative: Citizen Law-Making. Praeger Publishers.
- German and international direct democracy portal.
- Kol1 ─ Movement for Direct Democracy In Israel.
- MyVote (in Hebrew) ─ Movement for Direct Democracy In Israel.
- democraticidiretti.org - Association of Direct Democrats
- listapartecipata.it ─ Roman Chapter of the Association of Direct Democrats' campaign to present a list of candidates for Rome Province Election to be controlled through an ad-hoc temporary organization of citizens.
The South has long been a region apart, even though it is not isolated by any formidable natural barriers and is itself subdivided into many distinctive areas: the coastal plains along the Atlantic Ocean and the Gulf of Mexico; the Piedmont; the ridges, valleys, and high mountains bordering the Piedmont, especially the Great Smoky Mts. in North Carolina and Tennessee; areas of bluegrass, black-soil prairies, and clay hills west of the mountains; bluffs, floodplains, bayous, and delta lands along the Mississippi River; and W of the Mississippi, the interior plains and the Ozark Plateau.
The humid subtropical climate, however, is one unifying factor. Winters are neither long nor very cold, and no month averages below freezing. The long, hot growing season (nine months at its peak along the Gulf) and the fertile soil (much of it overworked or ruined by erosion) have traditionally made the South an agricultural region where such staples as tobacco, rice, and sugarcane have long flourished; citrus fruits, livestock, soybeans, and timber have gained in importance. Cotton, once the region's dominant crop, is now mostly grown in Texas, the Southwest, and California.
Since World War II, the South has become increasingly industrialized. High-technology (such as aerospace and petrochemical) industries have boomed, and there has been impressive growth in the service, trade, and finance sectors. The chief cities of the South are Atlanta, New Orleans, Charlotte, Miami, Memphis, and Jacksonville.
From William Byrd (1674-1744) to William Faulkner and Toni Morrison, the South has always had a strong regional literature. Its principal subject has been the Civil War, reflected in song and poetry from Paul Hamilton Hayne to Allen Tate and in novels from Thomas Nelson Page to Margaret Mitchell.
The basic agricultural economy of the Old South, which was abetted by the climate and the soil, led to the introduction (1617) of Africans as a source of cheap labor under the twin institutions of the plantation and slavery. Slavery might well have expired had not the invention of the cotton gin (1793) given it a firmer hold, but even so there would have remained the problem of racial tension. Issues of race have been central to the history of the South. Slavery was known as the "peculiar institution" of the South and was protected by the Constitution of the United States.
The Missouri Compromise (1820-21) marked the rise of Southern sectionalism, rooted in the political doctrine of states' rights, with John C. Calhoun as its greatest advocate. When differences with the North, especially over the issue of the extension of slavery into the federal territories, ultimately appeared insoluble, the South turned (1860-61) the doctrine of states' rights into secession (or independence), which in turn led inevitably to the Civil War. Most of the major battles and campaigns of the war were fought in the South, and by the end of the war, with slavery abolished and most of the area in ruins, the Old South had died.
Reconstruction to World War II
The period of Reconstruction following the war set the South's political and social attitude for years to come. During this difficult time radical Republicans, African Americans, and so-called carpetbaggers and scalawags ruled the South with the support of federal troops. White Southerners, objecting to this rule, resorted to terrorism and violence and, with the aid of such organizations as the Ku Klux Klan, drove the Reconstruction governments from power. The breakdown of the plantation system during the Civil War gave rise to sharecropping, the tenant-farming system of agriculture that still exists in areas of the South. The last half of the 19th cent. saw the beginning of industrialization in the South, with the introduction of textile mills and various industries.
The troubled economic and political life of the region in the years between 1880 and World War II was marked by the rise of the Farmers' Alliance, Populism, and Jim Crow laws and by the careers of such Southerners as Tom Watson, Theodore Bilbo, Benjamin Tillman, and Huey Long. During the 1930s and 40s, thousands of blacks migrated from the South to Northern industrial cities.
The Contemporary South
Since World War II the South has experienced profound political, economic, and social change. Southern reaction to the policies of the New Deal, the Fair Deal, the New Frontier, and the Great Society caused the emergence of a genuine two-party system in the South. Many conservative Southern Democrats (such as Strom Thurmond) became Republicans because of disagreements over civil rights, the Vietnam War, and other issues. During the 1990s, Republican strength in the South increased substantially. After the 1994 elections, Republicans held a majority of the U.S. Senate and House seats from Southern states; Newt Gingrich, a Georgia Republican, became Speaker of the House.
During the 1950s and 60s the civil-rights movement, several key Supreme Court decisions, and federal legislation ended the legal segregation of public schools, universities, transportation, businesses, and other establishments in the South, and helped blacks achieve more adequate political representation. The process of integration was often met with bitter protest and violence. Patterns of residential segregation still exist in much of the South, as they do throughout the United States. The influx of new industries into the region after World War II made the economic life of the South more diversified and more similar to that of other regions of the United States.
The portions of the South included in the Sun Belt have experienced dramatic growth since the 1970s. Florida's population almost doubled between 1970 and 1990 and Georgia, North Carolina, and South Carolina have also grown considerably. Economically, the leading metropolitan areas of the South have become popular destinations for corporations seeking favorable tax rates, and the region's relatively low union membership has attracted both foreign and U.S. manufacturing companies. In the rural South, however, poverty, illiteracy, and poor health conditions often still predominate.
See works by C. Eaton, H. W. Odum, and U. B. Phillips; W. H. Stephenson and E. M. Coulter, ed., A History of the South (10 vol., 1947-73); F. B. Simkins and C. P. Roland, A History of the South (4th ed. 1972); C. V. Woodward, Origins of the New South (1971) and The Strange Career of Jim Crow (3d rev. ed. 1974); D. R. Goldfield, Cotton Fields and Skyscrapers (1982); E. and M. Black, Politics and Society in the South (1987); C. R. Wilson and W. Ferris, ed., The Encyclopedia of Southern Culture (1989); D. R. Goldfield, The South for New Southerners (1991).
The group burst on the international rock music scene in 1961. Their initial appeal derived as much from their wit, Edwardian clothes, and moplike haircuts as from their music. By 1963 they were the objects of wild adoration and were constantly followed by crowds of shrieking adolescent girls. By the late 1960s, "Beatlemania" had abated somewhat, and The Beatles were highly regarded by a broad spectrum of music lovers.
From 1963 to 1970 the group released 18 record albums that clearly document its musical development. The early recordings, such as Meet The Beatles (1964), are remarkable for their solid rhythms and excitingly rich, tight harmony. The middle albums, like Rubber Soul (1965) and Revolver (1966), evolved toward social commentary in their lyrics ("Eleanor Rigby," "Taxman") and introduced such instruments as the cello, trumpet, and sitar. In 1967, Sgt. Pepper's Lonely Hearts Club Band marked the beginning of The Beatles' final period, which is characterized by electronic techniques and allusive, drug-inspired lyrics. The group acted and sang in four films: A Hard Day's Night (1964), Help! (1965), Magical Mystery Tour (1968), and Let It Be (1970); all of these are outstanding for their exuberance, slapstick, and satire. They also were animated characters in the full-length cartoon, Yellow Submarine (1968). After they disbanded, all The Beatles continued to compose and record songs. In 1980, Lennon was shot to death by a fan, Mark Chapman. McCartney was knighted in 1997.
See John Lennon, In His Own Write (1964, repr. 2000); H. Davies, The Beatles (1968, repr. 1996); W. Mellers, Twilight of the Gods (1974); P. Norman, Shout! (1981); R. DiLello, The Longest Cocktail Party (1972, repr. 1983); T. Riley, Tell Me Why (1988); M. Lewisohn, The Beatles Recording Sessions (1988), The Beatles Day by Day (1990), and The Complete Beatles Chronicles (1992); I. MacDonald, Revolution in the Head (1994); M. Hertsgaard, A Day in the Life (1995); The Beatles Anthology (video, 1995; book, 2000); J. S. Wenner, ed., Lennon Remembers: The Rolling Stone Interviews (2000); B. Spitz, The Beatles: The Biography (2005).
In the center of the Arctic is a large basin occupied by the Arctic Ocean. The basin is nearly surrounded by the ancient continental shields of North America, Europe, and Asia, with the geologically more recent lowland plains, low plateaus, and mountain chains between them. Surface features vary from low coastal plains (swampy in summer, especially at the mouths of such rivers as the Mackenzie, Lena, Yenisei, and Ob) to high ice plateaus and glaciated mountains. Tundras, extensive flat and poorly drained lowlands, dominate the regions. The most notable highlands are the Brooks Range of Alaska, the Innuitians of the Canadian Arctic Archipelago, the Urals, and mountains of E Russia. Greenland, the world's largest island, is a high plateau covered by a vast ice sheet except in the coastal regions; smaller ice caps are found on other Arctic islands.
The climate of the Arctic, classified as polar, is characterized by long, cold winters and short, cool summers. Polar climate may be further subdivided into tundra climate (the warmest month of which has an average temperature below 50°F;/10°C; but above 32°F;/0°C;) and ice cap climate (all months average below 32°F;/0°C;, and there is a permanent snow cover). Precipitation, almost entirely in the form of snow, is very low, with the annual average precipitation for the regions less than 20 in. (51 cm). Persistent winds whip up fallen snow to create the illusion of constant snowfall. The climate is moderated by oceanic influences, with regions abutting the Atlantic and Pacific oceans having generally warmer temperatures and heavier snowfalls than the colder and drier interior areas. However, except along its fringe, the Arctic Ocean remains frozen throughout the year, although the extent of the summer ice has shrunk significantly since the early 1980s.
Great seasonal changes in the length of days and nights are experienced N of the Arctic Circle, with variations that range from 24 hours of constant daylight ("midnight sun") or darkness at the Arctic Circle to six months of daylight or darkness at the North Pole. However, because of the low angle of the sun above the horizon, insolation is minimal throughout the regions, even during the prolonged daylight period. A famous occurrence in the arctic night sky is the aurora borealis, or northern lights.
Vegetation in the Arctic, limited to regions having a tundra climate, flourishes during the short spring and summer seasons. The tundra's restrictive environment for plant life increases northward, with dwarf trees giving way to grasses (mainly mosses, lichen, sedges, and some flowering plants), the ground coverage of which becomes widely scattered toward the permanent snow line. There are about 20 species of land animals in the Arctic, including the squirrel, wolf, fox, moose, caribou, reindeer, polar bear, musk ox, and about six species of aquatic mammals such as the walrus, seal, and whale. Most of the species are year-round inhabitants of the Arctic, migrating to the southern margins as winter approaches. Although generally of large numbers, some of the species, especially the fur-bearing ones, are in danger of extinction. A variety of fish is found in arctic seas, rivers, and lakes. The Arctic's bird population increases tremendously each spring with the arrival of migratory birds (see migration of animals). During the short warm season, a large number of insects breed in the marshlands of the tundra.
In parts of the Arctic are found a variety of natural resources, but many known reserves are not exploited because of their inaccessibility. The arctic region of Russia, the most developed of all the arctic regions, is a vast storehouse of mineral wealth, including deposits of nickel, copper, coal, gold, uranium, tungsten, and diamonds. The North American Arctic yields uranium, copper, nickel, iron, natural gas, and oil. The arctic region of Europe (including W Russia) benefits from good overland links with southern areas and ship routes that are open throughout the year. The arctic regions of Asian Russia and North America depend on isolated overland routes, summertime ship routes, and air transportation. Transportation of oil by pipeline from arctic Alaska was highly controversial in the early 1970s, with strong opposition from environmentalists. Because of the extreme conditions of the Arctic, the delicate balance of nature, and the slowness of natural repairs, the protection and preservation of the Arctic have been major goals of conservationists, who fear irreparable damage to the natural environment from local temperature increases, the widespread use of machinery, the interference with wildlife migration, and oil spills. Global warming and the increasing reduction in the permanent ice cover on the Arctic Ocean have increased interest in the region's ocean resources.
The Arctic is one of the world's most sparsely populated areas. Its inhabitants, basically of Mongolic stock, are thought to be descendants of a people who migrated northward from central Asia after the ice age and subsequently spread W into Europe and E into North America. The chief groups are now the Lapps of Europe; the Samoyedes (Nentsy) of W Russia; the Yakuts, Tungus, Yukaghirs, and Chukchis of E Russia; and the Eskimo of North America. There is a sizable Caucasian population in Siberia, and the people of Iceland are nearly all Caucasian. In Greenland, the Greenlanders, a mixture of Eskimos and northern Europeans, predominate.
Because of their common background and the general lack of contact with other peoples, arctic peoples have strikingly similar physical characteristics and cultures, especially in such things as clothing, tools, techniques, and social organization. The arctic peoples, once totally nomadic, are now largely sedentary or seminomadic. Hunting, fishing, reindeer herding, and indigenous arts and crafts are the chief activities. The arctic peoples are slowly being incorporated into the society of the country in which they are located. With the Arctic's increased economic and political role in world affairs, the regions have experienced an influx of personnel charged with building and maintaining such things as roads, mineral extraction sites, weather stations, and military installations.
Many parts of the Arctic were already settled by the Eskimos and other peoples of Mongolic stock when the first European explorers, the Norsemen or Vikings, appeared in the region. Much later the search for the Northwest Passage and the Northeast Passage to reach Asia from Europe spurred exploration to the north. This activity began in the 16th cent. and continued in the 17th, but the hardships suffered and the negative results obtained by early explorers—among them Martin Frobisher, John Davis, Henry Hudson, William Baffin, and Willem Barentz—caused interest to wane. The fur traders in Canada did not begin serious explorations across the tundras until the latter part of the 18th cent. Alexander Mackenzie undertook extensive exploration after the beginnings made by Samuel Hearne, Philip Turnor, and others. Already in the region of NE Asia and W Alaska, the Russian explorations under Vitus Bering and others and the activities of the promyshlennyki [fur traders] had begun to make the arctic coasts known.
After 1815, British naval officers—including John Franklin, F. W. Beechey, John Ross, James Ross, W. E. Parry, P. W. Dease, Thomas Simpson, George Back, and John Rae—inspired by the efforts of John Barrow, took up the challenge of the Arctic. The disappearance of Franklin on his expedition between 1845 and 1848 gave rise to more than 40 searching parties. Although Franklin was not found, a great deal of knowledge was gained about the Arctic as a result, including the general outline of Canada's arctic coast.
Otto Sverdrup, D. B. MacMillan, and Vilhjalmur Stefansson added significant knowledge of the regions. Meanwhile, in the Eurasian Arctic, Franz Josef Land was discovered and Novaya Zemlya explored. The Northeast Passage was finally navigated in 1879 by Nils A. E. Nordenskjöld. Roald Amundsen, who went through the Northwest Passage (1903-6), also went through the Northeast Passage (1918-20). Greenland was also explored. Robert E. Peary reportedly won the race to be the first at the North Pole in 1909, but this claim is disputed. Although Fridtjof Nansen, drifting with his vessel Fram in the ice (1893-96), failed to reach the North Pole, he added enormously to the knowledge of the Arctic Ocean.
Air exploration of the regions began with the tragic balloon attempt of S. A. Andrée in 1897. In 1926, Richard E. Byrd and Floyd Bennett flew over the North Pole, and Amundsen and Lincoln Ellsworth flew from Svalbard (Spitsbergen) to Alaska across the North Pole and unexplored regions N of Alaska. In 1928, George Hubert Wilkins flew from Alaska to Spitsbergen. The use of the "great circle" route for world air travel increased the importance of the Arctic, while new ideas of the agricultural and other possibilities of arctic and subarctic regions led to many projects for development, especially in the USSR.
In 1937 and 1938 many field expeditions were sent out by British, Danish, Norwegian, Soviet, Canadian, and American groups to learn more about the Arctic. The Soviet group under Ivan Papanin wintered on an ice floe near the North Pole and drifted with the current for 274 days. Valuable hydrological, meteorological, and magnetic observations were made; by the time they were taken off the floe, the group had drifted 19° of latitude and 58° of longitude. Arctic drift was further explored (1937-40) by the Soviet icebreaker Sedov. Before World War II the USSR had established many meteorological and radio stations in the Arctic. Soviet activity in practical exploitation of resources also pointed the way to the development of arctic regions. Between 1940 and 1942 the Canadian vessel St. Roch made the first west-east journey through the Northwest Passage. In World War II, interest in transporting supplies gave rise to considerable study of arctic conditions.
After the war interest in the Arctic was keen. The Canadian army in 1946 undertook a project that had as one of its objects the testing of new machines (notably the snowmobile) for use in developing the Arctic. There was also a strong impulse to develop Alaska and N Canada, but no consolidated effort, like that of the Soviets, to take the natives into partnership for a full-scale development of the regions. Since 1954 the United States and Russia have established a number of drifting observation stations on ice floes for the purpose of intensified scientific observations. In 1955, as part of joint U.S.-Canadian defense, construction was begun on a c.3,000-mi (4,830-km) radar network (the Distant Early Warning line, commonly called the DEW line) stretching from Alaska to Greenland. As older radar stations were replaced and new ones built, a more sophisticated surveillance system developed. In 1993 the system, now stretching from NW Alaska to the coast of Newfoundland, was renamed the North Warning System.
With the continuing development of northern regions (e.g., Alaska, N Canada, and Russia), the Arctic has assumed greater importance in the world. During the International Geophysical Year (1957-58) more than 300 arctic stations were established by the northern countries interested in the arctic regions. Atomic-powered submarines have been used for penetrating the Arctic. In 1958 the Nautilus, a U.S. navy atomic-powered submarine, became the first ship to cross the North Pole undersea. Two years later the Skate set out on a similar voyage and became the first to surface at the Pole. In 1977 the Soviet nuclear icebreaker Arktika reached the North Pole, the first surface ship to do so.
In the 1960s the Arctic became the scene of an intense search for mineral and power resources. The discovery of oil on the Alaska North Slope (1968) and on Canada's Ellesmere Island (1972) led to a great effort to find new oil fields along the edges of the continents. In the summer of 1969 the SS Manhattan, a specially designed oil tanker with ice breaker and oceanographic research vessel features, successfully sailed from Philadelphia to Alaska by way of the Northwest Passage in the first attempt to bring commercial shipping into the region.
In 1971 the Arctic Ice Dynamics Joint Experiment (AIDJEX) began an international effort to study over a period of years arctic pack ice and its effect on world climate. In 1986 a seasonal "hole" in the ozone layer above the Arctic was discovered, showing some similarities to a larger depletion of ozone over the southern polar region; depletion of the ozone layer results in harmful levels of ultraviolet radiation reaching the earth from the sun. In the 21st cent. increased interest in the resources of the Arctic Ocean, prompted by a decrease in permanent ice cover due to global warming, has led to disputes among the Arctic nations over territorial claims. Practically all parts of the Arctic have now been photographed and scanned (by remote sensing devices) from aircraft and satellites. From these sources accurate maps of the Arctic have been compiled.
Classic narratives of arctic exploration include F. Nansen, In Northern Mists (tr. 1911); R. E. Amundsen, The North West Passage (tr., 2 vol., 1908); R. E. Peary, The North Pole (1910, repr. 1969); V. Stefansson, My Life with the Eskimo (1913) and The Friendly Arctic (1921).
For history and geography, see L. P. Kirwan, A History of Polar Exploration (1960); R. Thorén, Picture Atlas of the Arctic (1969); L. H. Neatby, Conquest of the Last Frontier (1966) and Discovery in Russian and Siberian Waters (1973); L. Rey et al., ed., Unveiling the Arctic (1984); F. Bruemmer and W. E. Taylor, The Arctic World (1987); R. McCormick, Voyages of Discovery in the Antarctic and Arctic Seas (1990); F. Fleming, Barrow's Boys (1998) and Ninety Degrees North: The Quest for the North Pole (2002); C. Officer and J. Page, A Fabulous Kingdom: The Exploration of the Arctic (2001).
The original Celtic inhabitants, converted to Christianity by St. Columba (6th cent.), were conquered by the Norwegians (starting in the 8th cent.). They held the Southern Islands, as they called them, until 1266. From that time the islands were formally held by the Scottish crown but were in fact ruled by various Scottish chieftains, with the Macdonalds asserting absolute rule after 1346 as lords of the isles. In the mid-18th cent. the Hebrides were incorporated into Scotland. The tales of Sir Walter Scott did much to make the islands famous. Emigration from the overpopulated islands occurred in the 20th cent., especially to Canada.
Although it has some industries (the manufacture of clothing, metal goods, printed materials, and food products), The Hague's economy revolves around government administration, which is centered there rather than in Amsterdam, the constitutional capital of the Netherlands. The Hague is the seat of the Dutch legislature, the Dutch supreme court, the International Court of Justice, and foreign embassies. The city is the headquarters of numerous companies, including the Royal Dutch Shell petroleum company. Also of economic importance are banking, insurance, and trade.
Among the numerous landmarks of The Hague is the Binnenhof, which grew out of the 13th-century palace and houses both chambers of the legislature; the Binnenhof contains the 13th-century Hall of Knights (Dutch Ridderzaal), where many historic meetings have been held. Nearby is the Gevangenenpoort, the 14th-century prison where Jan de Witt and Cornelius de Witt were murdered in 1672. The Mauritshuis, a 17th-century structure built as a private residence for John Maurice of Nassau, is an art museum and contains several of the greatest works of Rembrandt and Vermeer.
The Peace Palace (Dutch Vredespaleis), which was financed by Andrew Carnegie and opened in 1913, houses the Permanent Court of Arbitration and, since 1945, the International Court of Justice. Among the other notable buildings are the former royal palace; the Groote Kerk, a Gothic church (15th-16th cent.); the Nieuwe Kerk, containing Spinoza's tomb; the 16th-century town hall; and the Netherlands Conference Center (1969). Educational institutions in The Hague include schools of music and international law. Northwest of the city is Scheveningen, a popular North Sea resort and a fishing port.
The Hague was (13th cent.) the site of a hunting lodge of the counts of Holland ('s Gravenhage means "the count's hedge"). William, count of Holland, began (c.1250) the construction of a palace, around which a town grew in the 14th and 15th cent. In 1586 the States-General of the United Provs. of the Netherlands convened in The Hague, which later (17th cent.) became the residence of the stadtholders and the capital of the Dutch republic. In the 17th cent., The Hague rose to be one of the chief diplomatic and intellectual centers of Europe. William III (William of Orange), stadtholder of Holland and other Dutch provinces as well as king of England (1689-1702), was born in The Hague.
In the early 19th cent., after Amsterdam had become the constitutional capital of the Netherlands, The Hague received its own charter from Louis Bonaparte. It was (1815-30) the alternative meeting place, with Brussels, of the legislature of the United Netherlands. The Dutch royal residence from 1815 to 1948, the city was greatly expanded and beautified in the mid-19th cent. by King William II. In 1899 the First Hague Conference met there on the initiative of Nicholas II of Russia; ever since, The Hague has been a center for the promotion of international justice and arbitration.
The smallest country on the continent of Africa, The Gambia comprises Saint Mary's Island (site of Banjul) and, on the adjacent mainland, a narrow strip never more than 30 mi (48 km) wide; this finger of land borders both banks of the Gambia River for c.200 mi (320 km) above its mouth. The river, which rises in Guinea and flows c.600 mi (970 km) to the Atlantic, is navigable throughout The Gambia and is the main transport artery. Along The Gambia's coast are fine sand beaches; inland is the swampy river valley, whose fertile alluvial soils support rice cultivation. Peanuts, the country's chief cash crop, and some grains are raised on higher land. The climate is tropical and fairly dry.
The Gambia's population consists primarily of Muslim ethnic groups; the Malinke (Mandinka) is the largest, followed by the Fulani (Fula), Wolof, Diola (Jola), and Soninke (Serahuli). Almost a tenth of the population is Christian. English is the official language, but a number of African dialects are widely spoken. During the sowing and reaping seasons migrants from Senegal and Guinea also come to work in the country.
Despite attempts at diversification, The Gambia's economy remains overwhelmingly dependent on the export of peanuts and their byproducts and the re-exporting of imported foreign goods to other African nations. About three quarters of the population is employed in agriculture. Rice, millet, sorghum, corn, and cassava are grown for subsistence, and cattle, sheep, and goats are raised. There is also a fishing industry. The main industrial activities center around the processing of agricultural products and some light manufacturing. Tourism, which suffered following the 1994 military takeover, rebounded in the late 1990s. Besides peanut products, dried and smoked fish, cotton lint, palm kernels, and hides and skins are exported; foodstuffs, manufactures, fuel, machinery, and transportation equipment are imported. India, Great Britain, China, and Senegal are the country's leading trading partners. The Gambia is one of the world's poorest nations and relies heavily on foreign aid.
The Gambia is governed under the constitution of 1997. The president, who is both head of state and head of government, is popularly elected for a five-year term; there are no term limits. The unicameral legislature consists of a 53-seat National Assembly whose members also serve five-year terms; 48 members are elected and 5 are appointed by the president. Administratively, The Gambia is made up of five divisions and the capital city.
Portuguese explorers reaching the Gambia region in the mid-15th cent. reported a group of small Malinke and Wolof states that were tributary to the empire of Mali. The English won trading rights from the Portuguese in 1588, but their hold was weak until the early 17th cent., when British merchant companies obtained trading charters and founded settlements along the Gambia River. In 1816 the British purchased Saint Mary's Island from a local chief and established Banjul (called Bathurst until 1973) as a base against the slave trade. The city remained a colonial backwater under the administration of Sierra Leone until 1843, when it became a separate crown colony. Between 1866 and 1888 it was again governed from Sierra Leone. As the French extended their rule over Senegal's interior, they sought control over Britain's Gambia River settlements but failed during negotiations to offer Britain acceptable territory in compensation. In 1889, The Gambia's boundaries were defined, and in 1894 the interior was declared a British protectorate. The whole of the country came under British rule in 1902 and that same year a system of government was initiated in which chiefs supervised by British colonial commissioners ruled a variety of localities. In 1906 slavery in the colony was ended.
The Gambia continued the system of local rule under British supervision until after World War II, when Britain began to encourage a greater measure of self-government and to train some Gambians for administrative positions. By the mid-1950s a legislative council had been formed, with members elected by the Gambian people, and a system had been initiated wherein appointed Gambian ministers worked along with British officials. The Gambia achieved full self-government in 1963 and independence in 1965 under Dawda Kairaba Jawara and the People's Progressive party (PPP), made up of the predominant Malinke ethnic group. Following a referendum in 1970, The Gambia became a republic in the Commonwealth of Nations. In contrast to many other new African states, The Gambia preserved democracy and remarkable political stability in its early years of independence.
Since the mid-1970s large numbers of Gambians have migrated from rural to urban areas, resulting in high urban unemployment and overburdened services. The PPP demonstrated an interest in expanding the agricultural sector, but droughts in the late 1970s and early 1980s prompted a serious decline in agricultural production and a rise in inflation. In 1978, The Gambia entered into an agreement with Senegal to develop the Gambia River and its basin. Improvements in infrastructure and a heightened popular interest by outsiders in the country (largely because of the popularity of Alex Haley's novel Roots, set partially in The Gambia) helped spur a threefold increase in tourism between 1978 and 1988.
The Gambia was shaken in 1981 by a coup attempt by junior-ranking soldiers; it was put down with the intervention of Senegalese troops. In 1982, The Gambia and Senegal formed a confederation, while maintaining individual sovereignty; by 1989, however, popular opposition and minor diplomatic problems led to the withdrawal of Senegalese troops and the dissolution of Senegambia. In July, 1994, Jawara was overthrown in a bloodless coup and Yahya Jammeh assumed power as chairman of the armed forces and head of state.
Jammeh survived an attempted countercoup in Nov., 1994, and won the presidential elections of Sept., 1996, from which the major opposition leaders effectively had been banned. Only in 2001, in advance of new presidential elections, was the ban on political activities by the opposition parties lifted, and in Oct., 2001, Jammeh was reelected. The 2002 parliamentary elections, in which Jammeh's party won nearly all the seats, were boycotted by the main opposition party.
There was a dispute with Senegal in Aug.-Oct., 2005, over increased ferry charges across the Gambia River, which led to a Senegalese ferry boycott and a blockade of overland transport through Gambia; the blockade hurt Senegal S of Gambia but also affected Gambian merchants. Gambia subsequently reduced the charges. A coup plot led by the chief of defense staff was foiled in Mar., 2006. Jammeh was again reelected in Sept., 2006, but the opposition denounced and rejected the election as marred by intimidation. In the subsequent parliamentary elections (Jan., 2007), Jammeh's party again won all but a handful of the seats. Jammeh's rule has been marked by the often brutal treatment of real and perceived opponents.
See B. Rice, Enter Gambia (1968); H. B. Bachmann et al., Gambia: Basic Needs in The Gambia (1981); H. A. Gailey, Historical Dictionary of The Gambia (1987); D. P. Gamble, The Gambia (1988); F. Wilkins, Gambia (1988); M. F. McPherson and S. C. Radelet, ed., Economic Recovery in The Gambia (1996); D. R. Wright, The World and a Very Small Place in Africa (1997).
|Subfamily|Group|Subgroup|Languages and Principal Dialects|
|Anatolian| | |Hieroglyphic Hittite*, Hittite (Kanesian)*, Luwian*, Lycian*, Lydian*, Palaic*|
|Baltic| | |Latvian (Lettish), Lithuanian, Old Prussian*|
|Celtic|Brythonic| |Breton, Cornish, Welsh|
|Celtic|Goidelic or Gaelic| |Irish (Irish Gaelic), Manx*, Scottish Gaelic|
|Germanic|East Germanic| |Burgundian*, Gothic*, Vandalic*|
|Germanic|North Germanic| |Old Norse* (see Norse), Danish, Faeroese, Icelandic, Norwegian, Swedish|
|Germanic|West Germanic (see Grimm's law)|High German|German, Yiddish|
|Germanic|West Germanic (see Grimm's law)|Low German|Afrikaans, Dutch, English, Flemish, Frisian, Plattdeutsch (see German language)|
|Greek| | |Aeolic*, Arcadian*, Attic*, Byzantine Greek*, Cyprian*, Doric*, Ionic*, Koiné*, Modern Greek|
|Indo-Iranian|Dardic or Pisacha| |Kafiri, Kashmiri, Khowar, Kohistani, Romany (Gypsy), Shina|
|Indo-Iranian|Indic or Indo-Aryan| |Pali*, Prakrit*, Sanskrit*, Vedic*|
|Indo-Iranian|Indic or Indo-Aryan|Central Indic|Hindi, Hindustani, Urdu|
|Indo-Iranian|Indic or Indo-Aryan|East Indic|Assamese, Bengali (Bangla), Bihari, Oriya|
|Indo-Iranian|Indic or Indo-Aryan|Northwest Indic|Punjabi, Sindhi|
|Indo-Iranian|Indic or Indo-Aryan|Pahari|Central Pahari, Eastern Pahari (Nepali), Western Pahari|
|Indo-Iranian|Indic or Indo-Aryan|South Indic|Marathi (including major dialect Konkani), Sinhalese (Singhalese)|
|Indo-Iranian|Indic or Indo-Aryan|West Indic|Bhili, Gujarati, Rajasthani (many dialects)|
|Indo-Iranian|Iranian| |Avestan*, Old Persian*|
|Indo-Iranian|Iranian|East Iranian|Baluchi, Khwarazmian*, Ossetic, Pamir dialects, Pashto (Afghan), Saka (Khotanese)*, Sogdian*, Yaghnobi|
|Indo-Iranian|Iranian|West Iranian|Kurdish, Pahlavi (Middle Persian)*, Parthian*, Persian (Farsi), Tajiki|
|Italic|(Non-Romance)| |Faliscan*, Latin, Oscan*, Umbrian*|
|Italic|Romance or Romanic|Eastern Romance|Italian, Rhaeto-Romanic, Romanian, Sardinian|
|Italic|Romance or Romanic|Western Romance|Catalan, French, Ladino, Portuguese, Provençal, Spanish|
|Slavic or Slavonic|East Slavic| |Belarusian (White Russian), Russian, Ukrainian|
|Slavic or Slavonic|South Slavic| |Bulgarian, Church Slavonic*, Macedonian, Serbo-Croatian, Slovenian|
|Slavic or Slavonic|West Slavic| |Czech, Kashubian, Lusatian (Sorbian or Wendish), Polabian*, Polish, Slovak|
|Thraco-Illyrian| | |Albanian, Illyrian*, Thracian*|
|Thraco-Phrygian| | |Armenian, Grabar (Classical Armenian)*, Phrygian*|
|Tokharian (W China)| | |Tokharian A (Agnean)*, Tokharian B (Kuchean)*|
The Comoros comprises three main islands, Njazidja (or Ngazidja; also Grande Comore or Grand Comoros)—on which Moroni is located—Nzwani (or Ndzouani; also Anjouan), and Mwali (also Mohéli), as well as numerous coral reefs and islets. The islands are volcanic in origin, with interiors that vary from high peaks to low hills and coastlines that feature many sandy beaches. Njazidja is the site of an active volcano, Karthala, which, at 7,746 ft (2,361 m), is the islands' highest peak. The Comoros have a tropical climate with the year almost evenly divided between dry and rainy seasons; cyclones (hurricanes) are quite frequent. The islands once supported extensive rain forests, but most have been severely depleted.
The inhabitants are a mix mostly of African, Arab, Indian, and Malay ethnic strains. Sunni Muslims make up 98% of the population; there is a small Roman Catholic minority. Arabic and French are the official languages, and Comorian (or Shikomoro, a blend of Swahili and Arabic) is also spoken.
With few natural resources, poor soil, and overpopulation, the islands are one of the world's poorest nations. Some 80% of the people are involved in agriculture. Vanilla, ylang-ylang (used in perfumes), cloves, and copra are the major exports; coconuts, bananas, and cassava are also grown. Fishing, tourism, and perfume distillation are the main industries, and remittances from Comorans working abroad are an important source of revenue. Rice and other foodstuffs, consumer goods, petroleum products, and transportation equipment are imported. The country is heavily dependent on France for trade and foreign aid.
The Comoros is governed under the constitution of 2001. The president, who is head of state, is chosen from among the elected heads of the three main islands; the presidency rotates every five years. The government is headed by the prime minister, who is appointed by the president. The unicameral legislature consists of the 33-seat Assembly of the Union. Fifteen members are selected by the individual islands' local assemblies, and 18 are popularly elected. All serve five-year terms. Administratively, the country is divided into the three main islands and four municipalities.
The islands were populated by successive waves of immigrants from Africa, Indonesia, Madagascar, and Arabia. They were long under Arab influence, especially Shiragi Arabs from Persia who first arrived in A.D. 933. Portugal, France, and England staked claims in the Comoros in the 16th cent., but the islands remained under Arab domination. All of the islands were ceded to the French between 1841 and 1909. Occupied by the British during World War II, the islands were granted administrative autonomy within the French Union in 1946 and internal self-government in 1968. In 1975 three of the islands voted to become independent, while Mayotte chose to remain a French dependency.
Ahmed Abdallah Abderrahman was Comoros's first president. He was ousted in a 1976 coup, returned to power in a second coup in 1978, survived a coup attempt in 1983, and was assassinated in 1989. The nation's first democratic elections were held in 1990, and Saïd Mohamed Djohar was elected president. In 1991, Djohar was impeached and replaced by an interim president, but he returned to power with French backing. Multiparty elections in 1992 resulted in a legislative majority for the president and the creation of the office of prime minister.
Comoros joined the Arab League in 1993. A coup attempt in 1995 was suppressed by French troops. In 1996, Mohamed Taki Abdulkarim was elected president. In 1997, following years of economic decline, rebels took control of the islands of Nzwani and Mwali, declaring their secession and desire to return to French rule. The islands were granted greater autonomy in 1999, but voters on Nzwani endorsed independence in Jan., 2000, and rebels continued to control the island. Taki died in 1998 and was succeeded by Tadjiddine Ben Said Massounde. As violence spread to the main island, the Comoran military staged a coup in Apr., 1999, and Col. Azali Assoumani became president of the Comoros. An attempted coup in Mar., 2000, was foiled by the army.
Forces favoring reuniting with the Comoros seized power in Nzwani in 2001, and in December Comoran voters approved giving the three islands additional autonomy (and their own presidents) within a Comoran federation. Under the new constitution, the presidency of the Comoros Union rotates among the islands. In Jan., 2002, Azali resigned, and Prime Minister Hamada Madi also became interim president in the transitional government preparing for new elections. After two disputed elections (March and April), a commission declared Azali national president in May, 2002.
An accord in Dec., 2003, concerning the division of powers between the federal and island governments paved the way for legislative elections in 2004, in which parties favoring autonomy for the individual islands won a majority of the seats. The 2006 presidential election was won by Ahmed Abdallah Mohamed Sambi, a Sunni cleric regarded as a moderate Islamist.
In Apr., 2007, the president of Nzwani, Mohamed Bacar, refused to resign as required by the constitutional courts and used his police forces to retain power, holding an illegal election in June, after which he was declared the winner. The moves were denounced by the central government and the African Union, but the central government lacked the forces to dislodge Bacar. In Nov., 2007, the African Union began a naval blockade of Nzwani and imposed a travel ban on its government's officials. With support from African Union forces, Comoran troops landed on Nzwani in Mar., 2008, and reestablished federal control over the island. Bacar fled to neighboring Mayotte, then was taken to Réunion; in July he was flown to Benin. A referendum in May, 2009, approved a constitutional amendment to extend the president's term to five years and replace the islands' presidents with governors.
See World Bank, Comoros (1983); M. and H. Ottenheimer, Historical Dictionary of the Comoro Islands (1994).
See R. Greenfield, Dark Star: An Oral Biography of Jerry Garcia (1996); C. Brightman, Sweet Chaos: The Grateful Dead's American Adventure (1999); B. Jackson, Garcia: An American Life (1999); S. Peters, What a Long, Strange Trip (1999); R. G. Adams, ed., Deadhead Social Science (2000); D. McNally, A Long Strange Trip: The Inside History of the Grateful Dead (2002); P. Lesh, Searching for the Sound: My Life with the Grateful Dead (2005).
See V. Weybright, The Star-spangled Banner (1935).
The public information stored in the multitude of computer networks connected to the Internet forms a huge electronic library, but the enormous quantity of data and number of linked computer networks also make it difficult to find where the desired information resides and then to retrieve it. A number of progressively easier-to-use interfaces and tools have been developed to facilitate searching. Among these are search engines, such as Archie, Gopher, and WAIS (Wide Area Information Server), and a number of commercial, Web-based indexes, such as Google or Yahoo, which are programs that use a proprietary algorithm or other means to search a large collection of documents for keywords and return a list of documents containing one or more of the keywords. Telnet is a program that allows users of one computer to connect with another, distant computer in a different network. The File Transfer Protocol (FTP) is used to transfer information between computers in different networks. The greatest impetus to the popularization of the Internet came with the introduction of the World Wide Web (WWW), a hypertext system that makes browsing the Internet both fast and intuitive. Most e-commerce occurs over the Web, and most of the information on the Internet now is formatted for the Web, which has led Web-based indexes to eclipse the other Internet-wide search engines.
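The keyword-matching idea described above can be made concrete with a small, purely illustrative sketch in Python. The build_index and search functions and the sample documents below are invented for illustration only; they say nothing about the proprietary ranking methods that engines such as Google or Yahoo actually use.

from collections import defaultdict

def build_index(documents):
    # Map each word to the set of document names that contain it.
    index = defaultdict(set)
    for name, text in documents.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

def search(index, query):
    # Return the names of documents containing one or more of the query's keywords.
    hits = set()
    for word in query.lower().split():
        hits |= index.get(word, set())
    return sorted(hits)

docs = {
    "doc1": "the internet is a network of networks",
    "doc2": "domain names are translated into IP addresses",
}
print(search(build_index(docs), "network addresses"))   # prints ['doc1', 'doc2']

A real search engine adds crawling, ranking, and phrase handling on top of this basic inverted-index idea, but the principle of returning documents that contain the query's keywords is the same.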
Each computer that is directly connected to the Internet is uniquely identified by a 32-bit binary number called its IP address. This address is usually seen as a four-part decimal number, each part equating to 8 bits of the 32-bit address in the decimal range 0-255. Because an address of the form 126.96.36.199 could be difficult to remember, a system of Internet addresses, or domain names, was developed in the 1980s. An Internet address is translated into an IP address by a domain-name server, a program running on an Internet-connected computer.
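As a rough illustration of how the dotted-decimal notation maps onto the underlying 32-bit number, the short Python sketch below converts between the two forms. The function names and the example address (drawn from the 192.0.2.0/24 documentation range) are chosen here for illustration only.

def ip_to_int(address):
    # Each of the four decimal parts supplies 8 of the 32 bits.
    parts = [int(p) for p in address.split(".")]
    if len(parts) != 4 or not all(0 <= p <= 255 for p in parts):
        raise ValueError("not a dotted-decimal IPv4 address")
    value = 0
    for part in parts:
        value = (value << 8) | part
    return value

def int_to_ip(value):
    # Reverse the conversion: read the 32-bit value 8 bits at a time.
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(ip_to_int("192.0.2.1"))    # 3221225985
print(int_to_ip(3221225985))     # 192.0.2.1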
Reading from left to right, the parts of a domain name go from specific to general. For example, www.cms.hhs.gov is a World Wide Web site for the Centers for Medicare and Medicaid Services, which is part of the U.S. Health and Human Services Dept., which is a government agency. The rightmost part, or top-level domain (or suffix or zone), can be a two-letter abbreviation of the country in which the computer is in operation; more than 250 abbreviations, such as "ca" for Canada and "uk" for United Kingdom, have been assigned. Although such an abbreviation exists for the United States (us), it is more common for a site in the United States to use a generic top-level domain such as edu (educational institution), gov (government), or mil (military) or one of the four domains originally designated for open registration worldwide, com (commercial), int (international), net (network), or org (organization). In 2000 seven additional top-level domains (aero, biz, coop, info, museum, name, and pro) were approved for worldwide use, and other domains, including the regional domains asia and eu, have since been added. In 2008 new rules were adopted that would allow a top-level domain to be any group of letters, and the following year further rules changes permitted the use of other writing systems in addition to the Latin alphabet in domain names beginning in 2010.
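To illustrate the left-to-right, specific-to-general structure described above, the brief Python sketch below splits a domain name into its labels and then, assuming a working network connection and DNS configuration, asks the system's resolver for the corresponding IP address using the standard socket module. The hostname is the example from the text; nothing here depends on how any particular domain-name server is implemented.

import socket

name = "www.cms.hhs.gov"
labels = name.split(".")                      # ['www', 'cms', 'hhs', 'gov']
print("labels, specific to general:", labels)
print("top-level domain:", labels[-1])

# Translate the domain name into an IP address via the local resolver.
try:
    print("resolves to:", socket.gethostbyname(name))
except socket.gaierror:
    print("could not resolve", name)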
The Internet evolved from a secret feasibility study conceived by the U.S. Dept. of Defense in 1969 to test methods of enabling computer networks to survive military attacks, by means of the dynamic rerouting of messages. As the ARPAnet (Advanced Research Projects Agency network), it began by connecting three networks in California with one in Utah—these communicated with one another by a set of rules called the Internet Protocol (IP). By 1972, when the ARPAnet was revealed to the public, it had grown to include about 50 universities and research organizations with defense contracts, and a year later the first international connections were established with networks in England and Norway.
A decade later, the Internet Protocol was enhanced with a set of communication protocols, the Transmission Control Program/Internet Protocol (TCP/IP), that supported both local and wide-area networks. Shortly thereafter, the National Science Foundation (NSF) created the NSFnet to link five supercomputer centers, and this, coupled with TCP/IP, soon supplanted the ARPAnet as the backbone of the Internet. In 1995 the NSF decommissioned the NSFnet, and responsibility for the Internet was assumed by the private sector. Progress toward the privatization of the Internet continued when Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit U.S. corporation, assumed oversight responsibility for the domain name system in 1998 under an agreement with the U.S. Dept. of Commerce.
Fueled by the increasing popularity of personal computers, e-mail, and the World Wide Web (which was introduced in 1991 and saw explosive growth beginning in 1993), the Internet became a significant factor in the stock market and commerce during the second half of the decade. By 2000 it was estimated that the number of adults using the Internet exceeded 100 million in the United States alone. The increasing globalization of the Internet has led a number of nations to call for oversight and governance of the Internet to pass from the U.S. government and ICANN to an international body, but a 2005 international technology summit agreed to preserve the status quo while establishing an international forum for the discussion of Internet policy issues.
See B. P. Kehoe, Zen and the Art of the Internet: A Beginner's Guide (4th ed. 1995); B. Pomeroy, ed., Beginnernet: A Beginner's Guide to the Internet and the World Wide Web (1997); L. E. Hughes, Internet E-Mail: Protocols, Standards, and Implementation (1998); J. S. Gonzalez, The 21st Century Internet (1998); D. P. Dern, Internet Business Handbook: The Insider's Internet Guide (1999).
See S. Vogel, The Pentagon: A History (2007).
The Philippines extend 1,152 mi (1,855 km) from north to south, between Taiwan and Borneo, and 688 mi (1,108 km) from east to west, and are bounded by the Philippine Sea on the east, the Celebes Sea on the south, and the South China Sea on the west. They comprise three natural divisions—the northern, which includes Luzon and attendant islands; the central, occupied by the Visayan Islands and Palawan and Mindoro; and the southern, encompassing Mindanao and the Sulu Archipelago. In addition to Manila, other important centers are Quezon City, also on Luzon; Cebu, on Cebu Island; Iloilo, on Panay; Davao and Zamboanga, on Mindanao; and Jolo, on Jolo Island in the Sulu Archipelago.
The Philippines are chiefly of volcanic origin. Most of the larger islands are traversed by mountain ranges, with Mt. Apo (9,690 ft/2,954 m), on Mindanao, the highest peak. Narrow coastal plains, wide valleys, volcanoes, dense forests, and mineral and hot springs further characterize the larger islands. Earthquakes are common. Of the navigable rivers, Cagayan, on Luzon, is the largest; there are also large lakes on Luzon and Mindanao.
The Philippines are entirely within the tropical zone. Manila, with a mean daily temperature of 79.5°F (26.4°C), is typical of the climate of the lowland areas—hot and humid. The highlands, however, have a bracing climate; e.g., Baguio, the summer capital, on Luzon, has a mean annual temperature of 64°F (17.8°C). The islands are subject to typhoons, whose torrential rains can cause devastating floods; 5,000 people died on Leyte in 1991 from such flooding, and several storms in 2004 and 2006 caused deadly flooding and great destruction.
The great majority of the people of the Philippines belong to the Malay group and are known as Filipinos. Other groups include the Negritos (negroid pygmies) and the Dumagats (similar to the Papuans of New Guinea), and there is a small Chinese minority. The Filipinos live mostly in the lowlands and constitute one of the largest Christian groups in Asia. Roman Catholicism is professed by over 80% of the population; 5% are Muslims (concentrated on Mindanao and the Sulu Archipelago; see Moros); about 2% are Aglipayans, members of the Philippine Independent Church, a nationalistic offshoot of Catholicism (see Aglipay, Gregorio); and there are Protestant and Evangelical groups. The official languages are Pilipino, based on Tagalog, and English; however, some 70 native languages are also spoken.
With their tropical climate, heavy rainfall, and naturally fertile volcanic soil, the Philippines have a strong agricultural sector, which employs over a third of the population. Sugarcane, coconuts, rice, corn, bananas, cassava, pineapples, and mangoes are the most important crops, and tobacco and coffee are also grown. Carabao (water buffalo), pigs, chickens, goats, and ducks are widely raised, and there is dairy farming. Fishing is a common occupation; the Sulu Archipelago is noted for its pearls and mother-of-pearl.
The islands have one of the world's greatest stands of commercial timber. There are also mineral resources such as petroleum, nickel, cobalt, silver, gold, copper, zinc, chromite, and iron ore. Nonmetallic minerals include rock asphalt, gypsum, asbestos, sulfur, and coal. Limestone, adobe, and marble are quarried.
Manufacturing is concentrated in metropolitan Manila, near the nation's prime port, but there has been considerable industrial growth on Cebu, Negros, and Mindanao. Garments, footwear, pharmaceuticals, chemicals, and wood products are manufactured, and the assembly of electronics and automobiles is important. Other industries include food processing and petroleum refining. The former U.S. military base at Subic Bay was redeveloped in the 1990s as a free-trade zone.
The economy has nonetheless remained weak, and many Filipinos have sought employment overseas; remittances from an estimated 8 million Filipinos abroad are economically important. Chief exports are semiconductors, electronics, transportation equipment, clothing, copper, petroleum products, coconut oil, fruits, lumber and plywood, machinery, and sugar. The main imports are electronic products, mineral fuels, machinery, transportation equipment, iron and steel, textiles, grains, chemicals, and plastic. The chief trading partners are the United States, Japan, China, Singapore, Hong Kong, and Taiwan.
The Philippines is governed under the constitution of 1987. The president, who is both head of state and head of the government, is elected by popular vote for a single six-year term. There is a bicameral legislature, the Congress. Members of the 24-seat Senate are popularly elected for six-year terms. The House of Representatives consists of not more than 250 members, who are popularly elected for three-year terms. There is an independent judiciary headed by a supreme court. Administratively, the republic is divided into 79 provinces and 117 chartered cities.
The Negritos are believed to have migrated to the Philippines some 30,000 years ago from Borneo, Sumatra, and Malaya. The Malayans followed in successive waves. These people belonged to a primitive epoch of Malayan culture, which has apparently survived to this day among certain groups such as the Igorots. The Malayan tribes that came later had more highly developed material cultures.
In the 14th cent. Arab traders from Malay and Borneo introduced Islam into the southern islands and extended their influence as far north as Luzon. The first Europeans to visit (1521) the Philippines were those in the Spanish expedition around the world led by the Portuguese explorer Ferdinand Magellan. Other Spanish expeditions followed, including one from New Spain (Mexico) under López de Villalobos, who in 1542 named the islands for the infante Philip, later Philip II.

Spanish Control
The conquest of the Filipinos by Spain did not begin in earnest until 1564, when another expedition from New Spain, commanded by Miguel López de Legaspi, arrived. Spanish leadership was soon established over many small independent communities that previously had known no central rule. By 1571, when López de Legaspi established the Spanish city of Manila on the site of a Moro town he had conquered the year before, the Spanish foothold in the Philippines was secure, despite the opposition of the Portuguese, who were eager to maintain their monopoly on the trade of East Asia.
Manila repulsed the attack of the Chinese pirate Limahong in 1574. For centuries before the Spanish arrived the Chinese had traded with the Filipinos, but evidently none had settled permanently in the islands until after the conquest. Chinese trade and labor were of great importance in the early development of the Spanish colony, but the Chinese came to be feared and hated because of their increasing numbers, and in 1603 the Spanish murdered thousands of them (later, there were lesser massacres of the Chinese).
The Spanish governor, made a viceroy in 1589, ruled with the advice of the powerful royal audiencia. There were frequent uprisings by the Filipinos, who resented the encomienda system. By the end of the 16th cent. Manila had become a leading commercial center of East Asia, carrying on a flourishing trade with China, India, and the East Indies. The Philippines supplied some wealth (including gold) to Spain, and the richly laden galleons plying between the islands and New Spain were often attacked by English freebooters. There was also trouble from other quarters, and the period from 1600 to 1663 was marked by continual wars with the Dutch, who were laying the foundations of their rich empire in the East Indies, and with Moro pirates. One of the most difficult problems the Spanish faced was the subjugation of the Moros. Intermittent campaigns were conducted against them but without conclusive results until the middle of the 19th cent. As the power of the Spanish Empire waned, the Jesuit orders became more influential in the Philippines and acquired great amounts of property.

Revolution, War, and U.S. Control
It was the opposition to the power of the clergy that in large measure brought about the rising sentiment for independence. Spanish injustices, bigotry, and economic oppressions fed the movement, which was greatly inspired by the brilliant writings of José Rizal. In 1896 revolution began in the province of Cavite, and after the execution of Rizal that December, it spread throughout the major islands. The Filipino leader, Emilio Aguinaldo, achieved considerable success before a peace was patched up with Spain. The peace was short-lived, however, for neither side honored its agreements, and a new revolution was brewing when the Spanish-American War broke out in 1898.
After the U.S. naval victory in Manila Bay on May 1, 1898, Commodore George Dewey supplied Aguinaldo with arms and urged him to rally the Filipinos against the Spanish. By the time U.S. land forces had arrived, the Filipinos had taken the entire island of Luzon, except for the old walled city of Manila, which they were besieging. The Filipinos had also declared their independence and established a republic under the first democratic constitution ever known in Asia. Their dreams of independence were crushed when the Philippines were transferred from Spain to the United States in the Treaty of Paris (1898), which closed the Spanish-American War.
In Feb., 1899, Aguinaldo led a new revolt, this time against U.S. rule. Defeated on the battlefield, the Filipinos turned to guerrilla warfare, and their subjugation became a mammoth project for the United States—one that cost far more money and took far more lives than the Spanish-American War. The insurrection was effectively ended with the capture (1901) of Aguinaldo by Gen. Frederick Funston, but the question of Philippine independence remained a burning issue in the politics of both the United States and the islands. The matter was complicated by the growing economic ties between the two countries. Although comparatively little American capital was invested in island industries, U.S. trade bulked larger and larger until the Philippines became almost entirely dependent upon the American market. Free trade, established by an act of 1909, was expanded in 1913.
When the Democrats came into power in 1913, measures were taken to effect a smooth transition to self-rule. The Philippine assembly already had a popularly elected lower house, and the Jones Act, passed by the U.S. Congress in 1916, provided for a popularly elected upper house as well, with power to approve all appointments made by the governor-general. It also gave the islands their first definite pledge of independence, although no specific date was set.
When the Republicans regained power in 1921, the trend toward bringing Filipinos into the government was reversed. Gen. Leonard Wood, who was appointed governor-general, largely supplanted Filipino activities with a semimilitary rule. However, the advent of the Great Depression in the United States in the 1930s and the first aggressive moves by Japan in Asia (1931) shifted U.S. sentiment sharply toward the granting of immediate independence to the Philippines.

The Commonwealth
The Hare-Hawes-Cutting Act, passed by Congress in 1932, provided for complete independence of the islands in 1945 after 10 years of self-government under U.S. supervision. The bill had been drawn up with the aid of a commission from the Philippines, but Manuel L. Quezon, the leader of the dominant Nationalist party, opposed it, partially because of its threat of American tariffs against Philippine products but principally because of the provisions leaving naval bases in U.S. hands. Under his influence, the Philippine legislature rejected the bill. The Tydings-McDuffie Independence Act (1934) closely resembled the Hare-Hawes-Cutting Act, but struck the provisions for American bases and carried a promise of further study to correct "imperfections or inequalities."
The Philippine legislature ratified the bill; a constitution, approved by President Roosevelt (Mar., 1935), was accepted by the Philippine people in a plebiscite (May); and Quezon was elected the first president (Sept.). When Quezon was inaugurated on Nov. 15, 1935, the Commonwealth of the Philippines was formally established. Quezon was reelected in Nov., 1941. To develop defensive forces against possible aggression, Gen. Douglas MacArthur was brought to the islands as military adviser in 1935, and the following year he became field marshal of the Commonwealth army.

World War II
War came suddenly to the Philippines on Dec. 8 (Dec. 7, U.S. time), 1941, when Japan attacked without warning. Japanese troops invaded the islands in many places and launched a pincer drive on Manila. MacArthur's scattered defending forces (about 80,000 troops, four fifths of them Filipinos) were forced to withdraw to Bataan Peninsula and Corregidor Island, where they entrenched and tried to hold until the arrival of reinforcements, meanwhile guarding the entrance to Manila Bay and denying that important harbor to the Japanese. But no reinforcements were forthcoming. The Japanese occupied Manila on Jan. 2, 1942. MacArthur was ordered out by President Roosevelt and left for Australia on Mar. 11; Lt. Gen. Jonathan Wainwright assumed command.
The besieged U.S.-Filipino army on Bataan finally crumbled on Apr. 9, 1942. Wainwright fought on from Corregidor with a garrison of about 11,000 men; he was overwhelmed on May 6, 1942. After his capitulation, the Japanese forced the surrender of all remaining defending units in the islands by threatening to use the captured Bataan and Corregidor troops as hostages. Many individual soldiers refused to surrender, however, and guerrilla resistance, organized and coordinated by U.S. and Philippine army officers, continued throughout the Japanese occupation.
Japan's efforts to win Filipino loyalty found expression in the establishment (Oct. 14, 1943) of a "Philippine Republic," with José P. Laurel, former supreme court justice, as president. But the people suffered greatly from Japanese brutality, and the puppet government gained little support. Meanwhile, President Quezon, who had escaped with other high officials before the country fell, set up a government-in-exile in Washington. When he died (Aug., 1944), Vice President Sergio Osmeña became president. Osmeña returned to the Philippines with the first liberation forces, which surprised the Japanese by landing (Oct. 20, 1944) at Leyte, in the heart of the islands, after months of U.S. air strikes against Mindanao. The Philippine government was established at Tacloban, Leyte, on Oct. 23.
The landing was followed (Oct. 23-26) by the greatest naval engagement in history, called variously the battle of Leyte Gulf and the second battle of the Philippine Sea. A great U.S. victory, it effectively destroyed the Japanese fleet and opened the way for the recovery of all the islands. Luzon was invaded (Jan., 1945), and Manila was taken in February. On July 5, 1945, MacArthur announced "All the Philippines are now liberated." The Japanese had suffered over 425,000 dead in the Philippines.
The Philippine congress met on June 9, 1945, for the first time since its election in 1941. It faced enormous problems. The land was devastated by war, the economy destroyed, the country torn by political warfare and guerrilla violence. Osmeña's leadership was challenged (Jan., 1946) when one wing (now the Liberal party) of the Nationalist party nominated for president Manuel Roxas, who defeated Osmeña in April.

The Republic of the Philippines
Manuel Roxas became the first president of the Republic of the Philippines when independence was granted, as scheduled, on July 4, 1946. In Mar., 1947, the Philippines and the United States signed a military assistance pact (since renewed) and the Philippines gave the United States a 99-year lease on designated military, naval, and air bases (a later agreement reduced the period to 25 years beginning 1967). The sudden death of President Roxas in Apr., 1948, elevated the vice president, Elpidio Quirino, to the presidency, and in a bitterly contested election in Nov., 1949, Quirino defeated José Laurel to win a four-year term of his own.
The enormous task of reconstructing the war-torn country was complicated by the activities in central Luzon of the Communist-dominated Hukbalahap guerrillas (Huks), who resorted to terror and violence in their efforts to achieve land reform and gain political power. They were largely brought under control (1954) after a vigorous attack launched by the minister of national defense, Ramón Magsaysay. The Huks continued to function, however, until 1970, and other Communist guerrilla groups have persisted in their opposition to the Philippine government. Magsaysay defeated Quirino in Nov., 1953, to win the presidency. He had promised sweeping economic changes, and he did make progress in land reform, opening new settlements outside crowded Luzon island. His death in an airplane crash in Mar., 1957, was a serious blow to national morale. Vice President Carlos P. García succeeded him and won a full term as president in the elections of Nov., 1957.
In foreign affairs, the Philippines maintained a firm anti-Communist policy and joined the Southeast Asia Treaty Organization in 1954. There were difficulties with the United States over American military installations in the islands, and, despite formal recognition (1956) of full Philippine sovereignty over these bases, tensions increased until some of the bases were dismantled (1959) and the 99-year lease period was reduced. The United States rejected Philippine financial claims and proposed trade revisions.
Philippine opposition to García on issues of government corruption and anti-Americanism led, in June, 1959, to the union of the Liberal and Progressive parties, led by Vice President Diosdado Macapagal, the Liberal party leader, who succeeded García as president in the 1961 elections. Macapagal's administration was marked by efforts to combat the mounting inflation that had plagued the republic since its birth; by attempted alliances with neighboring countries; and by a territorial dispute with Britain over North Borneo (later Sabah), which Macapagal claimed had been leased and not sold to the British North Borneo Company in 1878.

Marcos and After
Ferdinand E. Marcos, who succeeded to the presidency after defeating Macapagal in the 1965 elections, inherited the territorial dispute over Sabah; in 1968 he approved a congressional bill annexing Sabah to the Philippines. Malaysia suspended diplomatic relations (Sabah had joined the Federation of Malaysia in 1963), and the matter was referred to the United Nations. (The Philippines dropped its claim to Sabah in 1978.) The Philippines became one of the founding countries of the Association of Southeast Asian Nations (ASEAN) in 1967. The continuing need for land reform fostered a new Huk uprising in central Luzon, accompanied by mounting assassinations and acts of terror, and in 1969, Marcos began a major military campaign to subdue them. Civil war also threatened on Mindanao, where groups of Moros opposed Christian settlement. In Nov., 1969, Marcos won an unprecedented reelection, easily defeating Sergio Osmeña, Jr., but the election was accompanied by violence and charges of fraud, and Marcos's second term began with increasing civil disorder.
In Jan., 1970, some 2,000 demonstrators tried to storm Malacañang Palace, the presidential residence; riots erupted against the U.S. embassy. When Pope Paul VI visited Manila in Nov., 1970, an attempt was made on his life. In 1971, at a Liberal party rally, hand grenades were thrown at the speakers' platform, and several people were killed. President Marcos declared martial law in Sept., 1972, charging that a Communist rebellion threatened, and opposition to Marcos's government did swell the ranks of Communist guerrilla groups, which continued to grow into the mid-1980s and continued on a smaller scale into the 21st cent. The 1935 constitution was replaced (1973) by a new one that provided the president with direct powers. A plebiscite (July, 1973) gave Marcos the right to remain in office beyond the expiration (Dec., 1973) of his term. Meanwhile the fighting on Mindanao had spread to the Sulu Archipelago. By 1973 some 3,000 people had been killed and hundreds of villages burned. Throughout the 1970s poverty and governmental corruption increased, and Imelda Marcos, Ferdinand's wife, became more influential.
Martial law remained in force until 1981, when Marcos was reelected, amid accusations of electoral fraud. On Aug. 21, 1983, opposition leader Benigno Aquino was assassinated at Manila airport, which incited a new, more powerful wave of anti-Marcos dissent. After the Feb., 1986, presidential election, both Marcos and his opponent, Corazon Aquino (the widow of Benigno), declared themselves the winner, and charges of massive fraud and violence were leveled against the Marcos faction. Marcos's domestic and international support eroded, and he fled the country on Feb. 25, 1986, eventually obtaining asylum in the United States.
Aquino's government faced mounting problems, including coup attempts, significant economic difficulties, and pressure to rid the Philippines of the U.S. military presence (the last U.S. bases were evacuated in 1992). In 1990, in response to the demands of the Moros, a partially autonomous Muslim region was created in the far south. In 1992, Aquino declined to run for reelection and was succeeded by her former army chief of staff Fidel Ramos. He immediately launched an economic revitalization plan premised on three policies: government deregulation, increased private investment, and political solutions to the continuing insurgencies within the country. His political program was somewhat successful, opening dialogues with the Communist and Muslim guerrillas. Although Muslim unrest and violence continued into the 21st cent., the government signed a peace accord with the Moro National Liberation Front (MNLF) in 1996, which led to an expansion of the autonomous region in 2001.
Several natural disasters, including the 1991 eruption of Mt. Pinatubo on Luzon and a succession of severe typhoons, slowed the country's economic progress in the 1990s. The Philippines, however, escaped much of the economic turmoil seen in other East Asian nations in 1997 and 1998, in part by following a slower pace of development imposed by the International Monetary Fund. Joseph Marcelo Estrada, a former movie actor, was elected president in 1998, pledging to help the poor and develop the country's agricultural sector. In 1999 he announced plans to amend the constitution in order to remove protectionist provisions and attract more foreign investment.
Late in 2000, Estrada's presidency was buffeted by charges that he accepted millions of dollars in payoffs from illegal gambling operations. Although his support among the poor Filipino majority remained strong, many political, business, and church leaders called for him to resign. In Nov., 2000, Estrada was impeached by the house of representatives on charges of graft, but the senate, controlled by Estrada's allies, provoked a crisis (Jan., 2001) when it rejected examining the president's bank records. As demonstrations against Estrada mounted and members of his cabinet resigned, the supreme court stripped him of the presidency, and Vice President Gloria Macapagal-Arroyo was sworn in as his successor. Estrada was indicted on charges of corruption in April, and his supporters attempted to storm the presidential palace in May. In Sept., 2007, he was convicted of corruption and sentenced to life imprisonment, but Estrada, who had been under house arrest since 2001, was pardoned the following month by President Macapagal-Arroyo.
A second Muslim rebel group, the Moro Islamic Liberation Front (MILF), agreed to a cease-fire in June, 2001, but fighting with fundamentalist Islamic guerrillas continued, and there was a MNLF uprising on Jolo in November. Following the Sept. 11, 2001, terrorist attacks on the United States, the U.S. government provided (2002) training and assistance to Philippine troops fighting the guerrillas. In 2003 fighting with the MILF again escalated, despite pledges by both sides that they would negotiate and exercise restraint; however, a truce was declared in July. In the same month several hundred soldiers were involved in a mutiny in Manila that the government claimed was part of a coup attempt.
Macapagal-Arroyo was elected president in her own right in May, 2004, but the balloting was marred by violence and irregularities as well as a tedious vote-counting process that was completed six weeks after the election. A series of four devastating storms during November and December killed as many as 1,000 in the country's north and east, particularly on Luzon. In early 2005 heavy fighting broke out on Mindanao between government forces and a splinter group of MILF rebels, and there was also fighting with a MNLF splinter group in Jolo.
In June, 2005, the president was beset by a vote-rigging charge based on a tape of a conversation she had with an election official. She denied the allegation while acknowledging that she had been recorded and apologizing for what she called a lapse in judgment, but the controversy combined with other scandals (including allegations that her husband and other family members had engaged in influence peddling and received bribes) to create a national crisis. Promising government reform, she asked (July) her cabinet to resign, and several cabinet members subsequently called on Macapagal-Arroyo to resign (as did Corazon Aquino). At the same time the supreme court suspended sales tax increases that had been enacted in May as part of a tax reform package designed to reduce the government's debt. In August and September the president survived an opposition move to impeach her when her opponents failed to muster the votes needed to force a trial in the senate.
In Feb., 2006, the government engaged in talks, regarded as a prelude to formal peace negotiations, with the MILF, and discussions between the two sides continued in subsequent months. Late in the month, President Macapagal-Arroyo declared a weeklong state of emergency when a coup plot against her was discovered. Intended to coincide with the 20th anniversary celebrations of the 1986 demonstrations that brought down Ferdinand Marcos, the coup was said to have involved several army generals and left-wing legislators. The state of emergency was challenged in court and upheld after the fact, but the supreme court declared aspects of the emergency's enforcement unconstitutional.
In October the supreme court declared unconstitutional a move to revise the constitution through a "people's initiative" that would have replaced the presidential system of government with a parliamentary one, but the government abandoned its attempt to revise the constitution only in December, after an effort by the house of representatives to call a constituent assembly was attacked by the Roman Catholic church and the opposition-dominated senate. In 2006 there was fierce fighting on Jolo between government forces and Islamic militants; it continued into 2007, and there were also clashes in Basilan and Mindanao.
In Jan., 2007, a government commission blamed many of the more than 800 deaths of activists during Macapagal-Arroyo's presidency on the military. The president promised action in response to the report, but the chief of the armed forces denounced the report as unfair and strained. Congressional elections in May, 2007, were marred by fraud allegations and by violence during the campaign; the voting left the opposition in control of the senate and Macapagal-Arroyo's allies in control of the house. In November there was a brief occupation of a Manila hotel by soldiers, many of whom had been involved in the 2003 mutiny. In Oct., 2007, the president's husband was implicated in a kickback scandal involving a Chinese company; the investigation continued into 2008, and prompted demonstrations by her opponents and calls for her to resign.
A peace agreement that would have expanded the area of Mindanao that was part of the Muslim autonomous region was reached in principle with the MILF in Nov., 2007. Attempts to finalize the agreement, however, collapsed in July, 2008, when Muslims accused the government of reopening settled issues; the agreement was also challenged in court by Filipinos opposed to it. In August significant fighting broke out between government forces and rebels that the MILF said were renegades; two months later the supreme court declared the agreement unconstitutional. Fighting in the region continued into 2009.
Luzon was battered by several typhoons in Sept.-Oct., 2009; the Manila area and the mountainous north were most severely affected, and more than 900 persons died. In Nov., 2009, the country was stunned by the massacre in Maguindanao prov. of the wife of an opposition candidate for the governorship and 56 others in the convoy that accompanied her as she went to register his candidacy; the governor, Andal Ampatuan, and his son were charged with rebellion and murder, respectively, in relation to the slaughter and events after it.
See E. H. Blair and J. A. Robertson, ed., The Philippine Islands, 1493-1888 (55 vol., 1903-9; Vol. LIII, Bibliography); L. Morton, The Fall of the Philippines (1953); T. Friend, Between Two Empires: The Ordeal of the Philippines, 1929-1946 (1965); E. G. Maring and J. M. Maring, Historical and Cultural Dictionary of the Philippines (1973); B. D. Romulo, Inside the Palace: The Rise and Fall of Ferdinand and Imelda Marcos (1987); S. Burton, Impossible Dream: The Marcoses, the Aquinos, and the Unfinished Revolution (1988); D. J. Steinberg, The Philippines (1988); D. Wurfel, Filipino Politics (1988); S. Karnow, In Our Image: America's Empire in the Philippines (1989); B. M. Linn, The U.S. Army and Counterinsurgency in the Philippine War, 1899-1902 (1989).
The islands, composed mainly of limestone and coral, rise from a vast submarine plateau. Most are generally low and flat, riverless, with many mangrove swamps, brackish lakes (connected with the ocean by underground passages), and coral reefs and shoals. Fresh water is obtained from rainfall and from desalinization. Navigation is hazardous, and many of the outer islands are uninhabited and undeveloped, although steps have been taken to improve transportation facilities. Hurricanes occasionally cause severe damage, but the climate is generally excellent. In addition to New Providence, other main islands are Grand Bahama, Great and Little Abaco (see Abaco and Cays), the Biminis, Andros, Eleuthera, Cat Island, San Salvador, Great and Little Exuma (Exuma and Cays), Long Island, Crooked Island, Acklins Island, Mayaguana, and Great and Little Inagua (see Inagua).
The population is primarily of African and mixed African and European descent; some 12% is of European heritage, with small minorities of Asian and Hispanic descent. More than three quarters of the people belong to one of several Protestant denominations and nearly 15% are Roman Catholic. English is the official language. The Bahamas have a relatively low illiteracy rate. The government provides free education through the secondary level; the College of the Bahamas was established in 1974, although most Bahamians who seek a higher education study in Jamaica or elsewhere.
The islands' vivid subtropical atmosphere—brilliant sky and sea, lush vegetation, flocks of bright-feathered birds, and submarine gardens where multicolored fish swim among white, rose, yellow, and purple coral—as well as rich local color and folklore, has made the Bahamas one of the most popular resorts in the hemisphere. The islands' many casinos are an additional attraction, and tourism is by far the country's most important industry, providing 60% of the gross domestic product and employing about half of the workforce. Financial services are the nation's other economic mainstay, although many international businesses left after new government regulations on the financial sector were imposed in late 2000. Salt, rum, aragonite, and pharmaceuticals are produced, and these, along with animal products and chemicals, are the chief exports. The Bahamas also possess facilities for the transshipment of petroleum. The country's main trading partners are the United States and Spain. Since the 1960s, the transport of illegal narcotic drugs has been a problem, as has the flow of illegal refugees from other islands.
The Bahamas are governed under the constitution of 1973 and have a parliamentary form of government. There is a bicameral legislature consisting of a 16-seat Senate and a 40-seat House of Assembly. The prime minister is the head of government, and the monarch of Great Britain and Northern Ireland, represented by an appointed governor-general, is the titular head of state. The nation is divided into 21 administrative districts.
Before the arrival of Europeans, the Bahamas were inhabited by the Lucayos, a group of Arawaks. Christopher Columbus first set foot in the New World in the Bahamas (1492), presumably at San Salvador, and claimed the islands for Spain. Although the Lucayos were not hostile, they were soon exterminated by the Spanish, who did not in fact colonize the islands.
The first settlements were made in the mid-17th cent. by the English. In 1670 the islands were granted to the lords proprietors of Carolina, who did not relinquish their claim until 1787, although Woodes Rogers, the first royal governor, was appointed in 1717. Under Rogers the pirates and buccaneers, notably Blackbeard, who frequented the Bahama waters, were driven off. The Spanish attacked the islands several times, and an American force held Nassau for a short time in 1776. In 1781 the Spanish captured Nassau and took possession of the whole colony, but under the terms of the Treaty of Paris (1783) the islands were ceded to Great Britain.
After the American Revolution many Loyalists settled in the Bahamas, bringing with them black slaves to labor on cotton plantations. Plantation life gradually died out after the emancipation of slaves in 1834. Blockade-running into Southern ports in the U.S. Civil War enriched some of the islanders, and during the prohibition era in the United States the Bahamas became a base for rum-running.
The United States leased areas for bases in the Bahamas in World War II and in 1950 signed an agreement with Great Britain for the establishment of a proving ground and a tracking station for guided missiles. In 1955 a free trade area was established at the town of Freeport. It proved enormously successful in stimulating tourism and has attracted offshore banking.
In the 1950s black Bahamians, through the Progressive Liberal party (PLP), began to oppose successfully the ruling white-controlled United Bahamian party; but it was not until the 1967 elections that they were able to win control of the government. The Bahamas were granted limited self-government as a British crown colony in 1964, broadened (1969) through the efforts of Prime Minister Lynden O. Pindling. The PLP, campaigning on a platform of immediate independence, won an overwhelming victory in the 1972 elections and negotiations with Britain were begun.
On July 10, 1973, the Bahamas became a sovereign state within the Commonwealth of Nations. In 1992, after 25 years as prime minister and facing recurrent charges of corruption and ties to drug traffickers, Pindling was defeated by Hubert Ingraham of the Free National Movement (FNM). A feeble economy, mostly due to a decrease in tourism and the poor management of state-owned industries, was Ingraham's main policy concern. Ingraham was returned to office in 1997 with a commanding majority, but lost power in 2002 when the PLP triumphed at the polls and PLP leader Perry Christie replaced Ingraham as prime minister. Concern over the government's readiness to accommodate the tourist industry contributed to the PLP's losses in the 2007 elections, and Ingraham and the FNM regained power.
See H. P. Mitchell, Caribbean Patterns (2d ed. 1970); J. E. Moore, Pelican Guide to the Bahamas (1988).
The legend echoed the epic poem Nibelungenlied in which the dragon-slaying hero Siegfried is stabbed in the back by Hagen von Tronje. Der Dolchstoß is cited as an important factor in Adolf Hitler's later rise to power, as the Nazi Party grew its original political base largely from embittered World War I veterans, and those who were sympathetic to the Dolchstoß interpretation of Germany's then-recent history.
Many were under the impression that the Triple Entente had ushered in the war, and as such saw the war as one in which Germany's cause was justified. Imperial Russia was seen to have expansionist ambitions and France's dissatisfaction due to the outcome of the Franco-Prussian War was widely known. Later, the Germans were shocked to learn that Great Britain had entered the war, and many felt their country was being "ganged up on"; the Germans felt that Britain was using the Belgian neutrality issue to enter the war and neutralize a Germany that was threatening its own commercial interests.
As the war dragged on, illusions of an easy victory were smashed, and Germans began to suffer tremendously from what would become a long and enormously costly war. With the initial euphoria gone, old divisions resurfaced and nationalist loyalties came into question as the early enthusiasm subsided. Suspicion of Roman Catholics, Social Democrats, and Jews grew. There had been considerable political tension before the war, especially over the growing presence of Social Democrats in the Reichstag, which greatly concerned the aristocrats in power and the military; this contingent was particularly successful in denying Erich Ludendorff the funds for the German Army that he claimed were necessary and had lobbied for.
On November 1, 1916, the German Military High Command administered the Judenzählung (German for "Jewish Census"). It was designed to confirm allegations of the lack of patriotism among German Jews, but the results of the census disproved the accusations and were not made public. A number of German Jews viewed "the Great War" as an opportunity to prove their commitment to the German homeland.
Civil disorder grew as a result of an inability to make ends meet, with or without the alleged "shortage of patriotism." While it is true that production slumped during the crucial years of 1917 and 1918, the nation had maximized its war effort and could take no more. Raw production figures confirm that Germany could not have possibly won a war of attrition against Britain, France and the United States combined. Despite its overwhelming individual power, Germany's industrial might and population were matched and outclassed by the Entente as a whole. Russia's exit in late 1917 did little to change the overall picture, as the United States had already joined the war on April 6 of that same year. American industrial capacity alone outweighed that of Germany.
Although the Germans were frequently depicted as "primordial aggressors responsible for the war", German peace proposals were all but rejected. Ludendorff was convinced that the Entente wanted little other than a Carthaginian peace. This was not the message most Germans heard coming from the other side. Woodrow Wilson's Fourteen Points were particularly popular among the German people. Socialists and liberals, especially the Social Democrats who formed the majority of the parliamentary body, were already known as "agitators" for social change prior to 1914. When the Allies promised peace and full restoration, patriotic enthusiasm waned further. Likewise, Germany's allies began to question the cause for the war as the conflict dragged on, and found their questions answered in the Allied propaganda.
When the armistice finally came in 1918, Ludendorff's prophecy appeared accurate almost immediately; although the fighting had ended, the British maintained their blockade of the European continent for a full year, leading to starvation and severe malnutrition. The non-negotiable peace agreed to by Weimar politicians in the Treaty of Versailles was certainly not what the German peace-seeking populace had expected.
Conservatives, nationalists, and ex-military leaders began to speak critically about the peace, and Weimar politicians, socialists, communists, and Jews were viewed with suspicion because of presumed extra-national loyalties. It was claimed that they had not supported the war and had played a role in selling out Germany to its enemies. These November Criminals, or those who seemed to benefit from the newly formed Weimar Republic, were seen to have "stabbed them in the back" on the home front, by criticizing German nationalism, instigating unrest and strikes in the critical military industries, or profiteering. In essence, the accusation was that the accused had committed treason against the "benevolent and righteous" common cause.
These theories were given credence by the fact that when Germany surrendered in November 1918, its armies were still in French and Belgian territory, Berlin remained 450 miles from the nearest front, and the German armies retired from the field of battle in good order. The Allies had been amply resupplied by the United States, which also had fresh armies ready for combat, but Britain and France were too war-weary to contemplate an invasion of Germany, with its unknown consequences. No Allied army had penetrated the western German frontier on the Western Front, and on the Eastern Front Germany had already won the war against Russia, concluded with the Treaty of Brest-Litovsk. In the West, Germany had come close to winning the war with the Spring Offensive. Contributing to the Dolchstoßlegende, the offensive's failure was blamed on strikes in the arms industry at a critical moment, leaving soldiers without an adequate supply of materiel. The strikes were seen to be instigated by treasonous elements, with the Jews taking most of the blame. This overlooked Germany's strategic position and ignored how the efforts of individuals were marginalized at the front, since the belligerents were engaged in a new kind of war. The industrialization of war had dehumanized the process and made possible a new kind of defeat, which the Germans suffered as total war emerged.
The weakness of Germany's strategic position was exacerbated by the rapid collapse of its allies in late 1918, following Allied victories on the Eastern and Italian fronts. Bulgaria was the first to sign an armistice, on September 29, 1918, at Saloniki. On October 30 the Ottoman Empire capitulated at Mudros. On November 3 Austria-Hungary sent a flag of truce to ask for an armistice. The terms, arranged by telegraph with the Allied authorities in Paris, were communicated to the Austrian commander and accepted. The armistice with Austria-Hungary was signed in the Villa Giusti, near Padua, on November 3. Austria and Hungary signed separate armistices following the overthrow of the Habsburg monarchy.
Nevertheless, this social mythos of domestic betrayal resonated among its audience, and its claims would codify the basis for public support for the emerging Nazi Party, under a racialist-based form of nationalism. The anti-Semitism was intensified by the Bavarian Soviet Republic, a Communist government which ruled the city of Munich for two weeks before being crushed by the Freikorps militia. Many of the Bavarian Soviet Republic's leaders were Jewish, a fact that allowed anti-Semitic propagandists to make the connection with "Communist treason".
Ludendorff is reported to have told his staff in October 1918: "I have asked His Excellency to now bring those circles to power which we have to thank for coming so far. We will therefore now bring those gentlemen into the ministries. They can now make the peace which has to be made. They can eat the broth which they have prepared for us!"
On November 11, 1918, the representatives of the newly formed Weimar Republic signed an armistice with the Allies that ended the fighting of World War I. The subsequent Treaty of Versailles led to further territorial and financial losses. As the Kaiser had been forced to abdicate and the military had relinquished executive power, it was the temporary "civilian government" that sued for peace; the signature on the armistice document was that of the Catholic Centrist Matthias Erzberger, a civilian, who was later killed for his alleged treason. Even though they publicly despised the treaty, it was most convenient for the generals: there were no war-crime tribunals, they were celebrated as undefeated heroes, and they could covertly prepare for removing the republic that they had helped to create.
The official birth of the term itself possibly can be dated to mid-1919, when Ludendorff was having lunch with British general Sir Neil Malcolm. Malcolm asked Ludendorff why it was that he thought Germany lost the war. Ludendorff replied with his list of excuses: the home front failed us, etc. Then Sir Neil Malcolm said that "it sounds like you were stabbed in the back, then?" The phrase was to Ludendorff's liking, and he let it be known among the general staff that this was the 'official' version, then disseminated throughout German society. This was picked up by right-wing political factions and used as a form of attack against the SPD-led early Weimar government, which had come to power in the German Revolution of November 1918.
In November 1919, the newly elected Weimar National Assembly initiated an Untersuchungsausschuß für Schuldfragen to investigate the causes of the World War and of Germany's defeat. On November 18, von Hindenburg testified before this parliamentary commission and cited a December 17, 1918 Neue Zürcher Zeitung article that summarized two earlier articles in the Daily Mail by British General Frederick Barton Maurice with the phrase that the German army had been "dagger-stabbed from behind by the civilian populace" ("von der Zivilbevölkerung von hinten erdolcht"). (Maurice later disavowed having used the term himself.) It was this testimony of Hindenburg's in particular that led to the wide dissemination of the Dolchstoßlegende in post-WWI Germany.
Richard Steigmann-Gall says that the stab-in-the-back legend traces back to a sermon preached on February 3, 1918, by Protestant Court Chaplain Bruno Doehring, six months before the war had even ended. German scholar Boris Barth, in contrast to Steigmann-Gall, implies that Doehring did not actually use the term, but spoke only of 'betrayal.' Barth traces the first documented use to a centrist political meeting in the Munich Löwenbräu-Keller on November 2, 1918, in which Ernst Müller-Meiningen, a member of the Progressive coalition in the Reichstag, used the term to exhort his listeners to keep fighting:
As long as the front holds, we damned well have the duty to hold out in the homeland. We would have to be ashamed of ourselves in front of our children and grandchildren if we attacked the battle front from the rear and gave it a dagger-stab. (wenn wir der Front in den Rücken fielen und ihr den Dolchstoss versetzten.)
Barth also shows that the term was primarily popularized by the patriotic German newspaper Deutsche Tageszeitung, which repeatedly quoted the Neue Zürcher Zeitung article after Hindenburg had referred to it before the parliamentary inquiry commission.
Charges of a Jewish conspiratorial element in Germany's defeat drew heavily upon figures like Kurt Eisner, a Berlin-born German Jew who lived in Munich. He had written about the illegal nature of the war from 1916 onward, and he also had a large hand in the Munich revolution until he was assassinated in February 1919. The Weimar Republic under Friedrich Ebert violently suppressed workers' uprisings with the help of Gustav Noske and Reichswehr General Groener, and tolerated the paramilitary Freikorps forming all across Germany. In spite of such tolerance, the Republic's legitimacy was constantly attacked with claims such as the stab-in-the-back. Many of its representatives, such as Matthias Erzberger and Walther Rathenau, were assassinated, and its leaders were branded as "criminals" and Jews by the right-wing press dominated by Alfred Hugenberg.
German historian Friedrich Meinecke attempted to trace the roots of the term as early as a June 11, 1922, article in the Viennese newspaper Neue Freie Presse. In the 1924 national election campaign, the Munich cultural journal Süddeutsche Monatshefte published a series of articles blaming the SPD and trade unions for Germany's defeat in World War I; the April 1924 issue of that journal appeared during the trial of Adolf Hitler and Ludendorff for high treason following the Beer Hall Putsch of 1923. The editor of an SPD newspaper sued the journal for defamation, giving rise to what is known as the Munich Dolchstossprozess, held from October 19 to November 20, 1924. Many prominent figures testified in that trial, including members of the parliamentary committee investigating the reasons for the defeat, so some of its results were made public long before the publication of the committee report in 1928.
The Dolchstoß was a central image in propaganda produced by the many right-wing and traditionally conservative political parties that sprang up in the early days of the Weimar Republic, including Hitler's NSDAP. For Hitler himself, this explanatory model for World War I was of crucial personal importance. He had learned of Germany's defeat while being treated for temporary blindness following a gas attack on the front. In Mein Kampf, he described a vision at this time which drove him to enter politics. Throughout his career, he railed against the "November criminals" of 1918, who had stabbed the German Army in the back.
Even provisional President Friedrich Ebert contributed to the myth when he saluted returning veterans on December 10, 1918, with the oration that "no enemy has vanquished you" (kein Feind hat euch überwunden!) and "they returned undefeated from the battlefield" (sie sind vom Schlachtfeld unbesiegt zurückgekehrt). The latter quote was shortened to im Felde unbesiegt as a semi-official slogan of the Reichswehr. Ebert had meant these sayings as a tribute to the German soldier, but they only contributed to the prevailing feeling.
Molotov–Ribbentrop Pact negotiations
The Molotov–Ribbentrop Pact was an August 23, 1939 agreement between the Soviet Union and Nazi Germany, colloquially named after Soviet foreign minister Vyacheslav Molotov and German foreign minister Joachim von Ribbentrop. The treaty renounced warfare between the two countries. In addition to stipulations of non-aggression, the treaty included a secret protocol dividing several eastern European countries between the parties.
Before the treaty's signing, the Soviet Union conducted negotiations with the United Kingdom and France regarding a potential "Tripartite" alliance. Long-running talks between the Soviet Union and Germany over a potential economic pact expanded to include military and political discussions, culminating in the pact, along with a commercial agreement signed four days earlier.
After World War I
After the Russian Revolution of 1917, Bolshevist Russia ended its fight against the Central Powers, including Germany, in World War I by signing the Treaty of Brest-Litovsk. Therein, Russia agreed to cede sovereignty and influence over parts of several eastern European countries. Most of those countries became ostensibly democratic republics following Germany's defeat and signing of an armistice in the autumn of 1918. With the exception of Belarus and Ukraine, those countries also became independent. However, the Treaty of Brest-Litovsk remained in force for only eight and a half months before Germany renounced it and broke off diplomatic relations with Russia.
Before World War I, Germany and Russia had long shared a trading relationship. Germany, a relatively small country with few natural resources, lacked natural supplies of several key raw materials needed for economic and military operations and had, since the late 19th century, relied heavily upon Russian imports of raw materials. Germany imported 1.5 billion Reichsmarks of raw materials and other goods annually from Russia before the war.
In 1922, the countries signed the Treaty of Rapallo, renouncing territorial and financial claims against each other. The countries pledged neutrality in the event of an attack against one another with the 1926 Treaty of Berlin. While imports of Soviet goods to Germany fell after World War I, after trade agreements signed between the two countries in the mid-1920s, trade had increased to 433 million Reichsmarks per year by 1927.
In the early 1930s, this relationship deteriorated as the more isolationist Stalinist regime asserted power and the abandonment of post-World War I military controls decreased Germany's reliance on Soviet imports, which fell to 223 million Reichsmarks in 1934.
In the mid-1930s, the Soviet Union made repeated efforts to reestablish closer contacts with Germany. The Soviet Union chiefly sought to repay debts from earlier trade with raw materials, while Germany sought to rearm, and the countries signed a credit agreement in 1935. The rise to power of the Nazi Party increased tensions between Germany, the Soviet Union and other countries with ethnic Slavs, who were considered "Untermenschen" according to Nazi racial ideology. The Nazis were convinced that ethnic Slavs were incapable of forming their own state and, accordingly, must be ruled by others. Moreover, the anti-Semitic Nazis associated ethnic Jews with both communism and international capitalism, both of which they opposed. Consequently, Nazis believed that Soviet Untermenschen Slavs were being ruled by "Jewish Bolshevik" masters. Two primary goals of Nazism were to eliminate Jews and seek Lebensraum ("living space") for ethnic Aryans to the east. In 1934, Hitler spoke of an inescapable battle against "pan-Slav ideals", victory in which would lead to "permanent mastery of the world", though he stated that they would "walk part of the road with the Russians, if that will help us."
Despite the political rhetoric, in 1936, the Soviets attempted to seek closer political ties to Germany along with an additional credit agreement, while Hitler rebuffed the advances, not wanting to seek closer political ties, even though a 1936 raw material crisis prompted Hitler to decree a Four Year Plan for rearmament "without regard to costs."
Tensions grew further after Germany and Fascist Italy supported the Fascist Spanish Nationalists in the Spanish Civil War in 1936, while the Soviets supported the partially socialist-led Spanish Republic opposition. In November 1936, Soviet-German relations sank further when Germany and Japan entered the Anti-Comintern Pact, which was purportedly directed against the Communist International, though it contained a secret agreement that either side would remain neutral if the other became involved with the Soviet Union. In November 1937, Italy also joined the Anti-Comintern Pact.
Late 1930s
The Moscow Trials of the mid-1930s seriously undermined Soviet prestige in the West. Soviet purges in 1937 and 1938 made a deal less likely by disrupting the already confused Soviet administrative structure necessary for negotiations and by giving Hitler the belief that the Soviets were militarily weak.
The Soviets were not invited to the Munich Conference regarding Czechoslovakia. The Munich Agreement that followed marked the partial dissolution of Czechoslovakia in 1938 through German annexation, part of the appeasement of Germany.
As German needs for military supplies grew after the Munich Agreement and Soviet demand for military machinery increased, talks between the two countries occurred from late 1938 to March 1939. The Soviet Third Five Year Plan would require massive new infusions of technology and industrial equipment. Neither an autarkic economic approach nor an alliance with England was possible for Germany, so closer relations with the Soviet Union were necessary, if only for economic reasons. At that time, Germany could supply only 25 percent of its petroleum needs, and, without its primary United States petroleum source in a war, would have to look to Russia and Romania. Germany suffered the same natural shortfall and supply problems for rubber and for the metal ores needed for hardened steel in war equipment, for which it relied on Soviet supplies or on transit using Soviet rail lines. Finally, Germany imported 40 per cent of its fat and oil food requirements, a figure that would grow if Germany conquered nations that were also net food importers; it thus needed Soviet imports of Ukrainian grains or Soviet transshipments of Manchurian soybeans. Moreover, an anticipated British blockade in the event of war and a cutoff of petroleum from the United States would create massive shortages for Germany in a number of key raw materials.
Following Hitler's March 1939 denunciation of the 1934 German–Polish Non-Aggression Pact, Britain and France had made statements guaranteeing the sovereignty of Poland, and on April 25, signed a Common Defense Pact with Poland, when that country refused to be associated with a four-power guarantee involving the USSR.
Initial talks
Potential for Soviet-German talk expansion
Germany and the Soviet Union discussed entering into an economic deal throughout early 1939. For months, Germany had secretly hinted to Soviet diplomats that it could offer better terms for a political agreement than could Britain and France. On March 10, Hitler proclaimed this directly in an official speech. That same day, Stalin, in a speech to the Eighteenth Congress of the All-Union Communist Party, characterized Western actions regarding Hitler as moving away from "collective security" and toward "nonintervention," with the goal of directing Fascist aggression anywhere but against themselves. After the Congress concluded, the Soviet press mounted an attack on both France and Great Britain.
On April 7, a Soviet diplomat visited the German Foreign Ministry, stating that there was no point in continuing the German-Soviet ideological struggle and that the countries could conduct a concerted policy. Ten days later, the Soviet ambassador met the German Deputy Foreign Minister and presented him with a note requesting speedy removal of any obstacles to fulfillment of the military contracts signed between Czechoslovakia and the USSR before the former was occupied by Germany. According to German accounts, at the end of the discussion the ambassador stated that "there exists for Russia no reason why she should not live with us on a normal footing. And from normal the relations might become better and better," though other sources suggest that this could be an exaggeration or an inaccurate recounting of the ambassador's words. Immediately after that, the Soviet ambassador was withdrawn to Moscow and never returned to Germany. According to Ulam, future conversations on the topic in Berlin were believed to continue with lower-level officials working under the cover of a Soviet trade mission.
Tripartite talks begin
Starting in mid-March 1939, the Soviet Union, Britain and France traded a flurry of suggestions and counterplans regarding a potential political and military agreement. The Soviet Union feared the Western powers and the possibility of "capitalist encirclement", had little faith either that war could be avoided or in the Polish army, and wanted guaranteed support for a two-pronged attack on Germany. Britain and France believed that war could still be avoided and that the Soviet Union, weakened by purges, could not serve as a main military participant. France, as a continental power, was more anxious for an agreement with the USSR than Britain, which was more willing to make concessions and more aware of the dangers of an agreement between the USSR and Germany. On April 17, Soviet foreign minister Maxim Litvinov outlined a French–British–Soviet mutual assistance pact between the three powers for five to 10 years, including military support, if any of the powers were the subject of aggression.
May changes
Litvinov Dismissal
On May 3, Stalin replaced Foreign Minister Litvinov with Vyacheslav Molotov, which significantly increased Stalin's freedom to maneuver in foreign policy. The dismissal of Litvinov, whose Jewish ethnicity was viewed disfavorably by Nazi Germany, removed an obstacle to negotiations with Germany. Stalin immediately directed Molotov to "purge the ministry of Jews." Given Litvinov's prior attempts to create an anti-fascist coalition, his association with the doctrine of collective security with France and Britain, and his pro-Western orientation by the standards of the Kremlin, his dismissal indicated the existence of a Soviet option of rapprochement with Germany. Likewise, Molotov's appointment served as a signal to Germany that the USSR was open to offers. The dismissal also signaled to France and Britain the existence of a potential negotiation option with Germany. One British official wrote that Litvinov's disappearance also meant the loss of an admirable technician or shock-absorber, while Molotov's "modus operandi" was "more truly Bolshevik than diplomatic or cosmopolitan." But Stalin sent a double message: Molotov appointed Solomon Lozovsky, a Jew, as one of his deputies.
May tripartite negotiations
Although informal consultations started in late April, the main negotiations between the Soviet Union, Britain and France began in May. At a meeting in May 1939, the French Foreign Minister told the Soviet Ambassador to France that he was willing to support turning over all of eastern Poland to the Soviet Union, regardless of Polish opposition, if that was the price of an alliance with Moscow.
German supply concerns and potential political discussions
In May, German war planners also became increasingly concerned that, without Russian supplies, Germany would need to find massive substitute quantities of 165,000 tons of manganese and almost 2 million tons of oil per year. In the context of further economic discussions, on May 17, the Soviet ambassador told a German official that he wanted to restate "in detail that there were no conflicts in foreign policy between Germany and Soviet Russia and that therefore there was no reason for any enmity between the two countries." Three days later, on May 20, Molotov told the German ambassador in Moscow that he no longer wanted to discuss only economic matters and that it was necessary to establish a "political basis", which German officials saw as an "implicit invitation."
On May 26, German officials feared that the Soviet talks with Britain and France might produce a positive result. On May 30, fearing such an outcome, Germany directed its diplomats in Moscow that "we have now decided to undertake definite negotiations with the Soviet Union." The ensuing discussions were channeled through the economic negotiations, because the economic needs of the two sides were substantial and because close military and diplomatic connections had been severed in the mid-1930s, leaving these talks as the only means of communication.
Baltic sticking point and German rapprochement
Mixed signals
The Soviets sent mixed signals thereafter. In his first major speech as Soviet Foreign Minister on May 31, Molotov criticized an Anglo-French proposal, stated that the Soviets did not "consider it necessary to renounce business relations with countries like Germany," and proposed to enter a wide-ranging mutual assistance pact against aggression. However, Soviet Commissar for Foreign Trade Mikoyan argued on June 2 to a German official that Moscow "had lost all interest in these [economic] negotiations" as a result of earlier German procrastination.
Tripartite talks progress and Baltic moves
On June 2, the Soviet Union insisted that any mutual assistance pact should be accompanied by a military agreement describing in detail the military assistance that the Soviets, French and British would provide. That day, the Soviet Union also submitted a modification to a French and British proposal that specified the states that would be given aid in the event of "direct aggression", which included Belgium, Greece, Turkey, Romania, Poland, Estonia, Latvia and Finland. Five days later, Estonia and Latvia signed non-aggression pacts with Germany, creating suspicions that Germany had ambitions in a region through which it could attack the Soviet Union.
British attempt to stop German armament
On June 8, the Soviets agreed that a high-ranking German official could come to Moscow to continue the economic negotiations, which occurred in Moscow on July 3. Official talks then began in Berlin on July 22.
Meanwhile, hoping to stop the German war machine, in July, Britain conducted talks with Germany regarding a potential plan to bail out the debt-ridden German economy, at the cost of one billion pounds, in exchange for Germany ending its armaments program. The British press broke a story on the talks, and Germany eventually rejected the offer.
Tripartite talks regarding "indirect aggression"
After weeks of political talks that began after the arrival of Central Department Foreign Office head William Strang, on July 8, the British and French submitted a proposed agreement, to which Molotov added a supplementary letter. Talks in late July stalled over a provision in Molotov's supplementary letter stating that a political turn to Germany by the Baltic states constituted "indirect aggression", which Britain feared might justify Soviet intervention in Finland and the Baltic states or push those countries to seek closer relations with Germany (while France was less resistant to the supplement). On July 23, France and Britain agreed with the Soviet proposal to draw up a military convention specifying a reaction to a German attack.
Soviet-German political negotiation beginnings
On July 18, Soviet trade representative Yevgeniy Barbarin visited Julius Schnurre, saying that the Soviets would like to extend and intensify German-Soviet relations. On July 25, the Soviet Union and Germany were very close to finalizing the terms of a proposed economic deal. On July 26, over dinner, the Soviets accepted a proposed three-stage agenda which included the economic agenda first and "a new arrangement which took account of the vital political interests of both parties." On July 28, Molotov sent a first political instruction to the Soviet ambassador in Berlin that finally opened the door to a political detente with Germany.
Germany had learned about the military convention talks before the July 31 British announcement and was skeptical that the Soviets would reach a deal with Britain and France during those planned talks in August. On August 1, the Soviet ambassador stated that two conditions must be met before political negotiations could begin: a new economic treaty and the cessation of anti-Soviet attacks by German media, to which German officials immediately agreed. On August 2, Soviet political discussions with France and Britain were suspended when Molotov stated they could not be restarted until progress was made in the scheduled military talks.
Addressing past hostilities
On August 3, German Foreign Minister Joachim Ribbentrop told Soviet diplomats that "there was no problem between the Baltic and the Black Sea that could not be solved between the two of us." The Germans discussed the prior hostility between the nations in the 1930s. They addressed the common ground of anti-capitalism, stating "there is one common element in the ideology of Germany, Italy and the Soviet Union: opposition to the capitalist democracies," "neither we nor Italy have anything in common with the capitalist west" and "it seems to us rather unnatural that a socialist state would stand on the side of the western democracies." They explained that their prior hostility toward Soviet Bolshevism had subsided with the changes in the Comintern and the Soviet renunciation of a world revolution. The Soviet chargé d'affaires in Berlin, Georgii Astakhov, characterized the conversation as "extremely important."
Final negotiations
Finalizing the economic agreement
In August, as Germany scheduled its invasion of Poland on August 25 and prepared for war with France, German war planners estimated that, with an expected British naval blockade, if the Soviet Union became hostile, Germany would fall short of their war mobilization requirements of oil, manganese, rubber and foodstuffs by huge margins. Every internal German military and economic study had argued that Germany was doomed to defeat without at least Soviet neutrality. On August 5, Soviet officials stated that the completion of the trading credit agreement was the most important stage that could be taken in the direction of further such talks.
By August 10, the countries had worked out the last minor technical details to make their economic arrangement all but final, but the Soviets delayed signing that agreement for almost ten days until they were sure that they had reached a political agreement with Germany. The Soviet ambassador explained to German officials that the Soviets had begun their British negotiations "without much enthusiasm" at a time when they felt Germany would not "come to an understanding", and that the parallel talks with the British could not simply be broken off when they had been initiated after "mature consideration." On August 12, Germany received word that Molotov wished to further discuss these issues, including Poland, in Moscow.
Tripartite military talks begin
The Soviets, British and French began military negotiations in August. They were delayed until August 12 because the British military delegation, which did not include Strang, took six days to make the trip traveling in a slow merchant ship, undermining the Soviets' confidence in British resolve. On August 14, the question of Poland was raised by Voroshilov for the first time, requesting that the British and French pressure the Poles to enter into an agreement allowing the Soviet army to be stationed in Poland. The Polish government feared that the Soviet government sought to annex disputed territories, the Eastern Borderlands, received by Poland in 1920 after the Treaty of Riga ending the Polish–Soviet War. The British and French contingent communicated the Soviet concern over Poland to their home offices and told the Soviet delegation that they could not answer this political matter without their governments' approval.
Meanwhile, Molotov spoke with Germany's Moscow ambassador on August 15 regarding the possibility of "settling by negotiation all outstanding problems of Soviet–German relations." The discussion included the possibility of a Soviet-German non-aggression pact, the fates of the Baltic states and potential improvements in Soviet-Japanese relations. Molotov stated that "should the German foreign minister come here" these issues "must be discussed in concrete terms." Within hours of receiving word of the meeting, Germany sent a reply stating that it was prepared to conclude a 25 year non-aggression pact, ready to "guarantee the Baltic States jointly with the Soviet Union", and ready to exert influence to improve Soviet-Japanese relations. The Soviets responded positively, but stated that a "special protocol" was required "defining the interests" of the parties. Germany replied that, in contrast to the British delegation in Moscow at that time without Strang, Ribbentrop personally would travel to Moscow to conclude a deal.
In the Soviet-British-French talks, the Anglo-Franco military negotiators were sent to discuss "general principles" rather than details. On August 15, the British contingent was instructed to move more quickly to bring the military talks to a conclusion, and thus, were permitted to give Soviet negotiators confidential British information. The British contingent stated that Britain currently only possessed six army divisions but, in the event of a war, they could employ 16 divisions initially, followed by a second contingent of 16 divisions—a sum far less than the 120 Soviet divisions. French negotiators stated that they had 110 divisions available. In discussions on August 18–19, the Poles informed the French ambassador that they would not approve Red Army troops operating in Poland.
Delayed commercial agreement signing
After Soviet and German officials in Moscow first finalized the terms of a seven-year German-Soviet Commercial Agreement, German officials became nervous that the Soviets were delaying its signing on August 19 for political reasons. When Tass published a report that the Soviet-British-French talks had become snarled over the Far East and "entirely different matters", Germany took it as a signal that there was still time and hope to reach a Soviet-German deal. Hitler himself sent a coded telegram to Stalin stating that because "Poland has become intolerable," Stalin must receive Ribbentrop in Moscow by August 23 at the latest to sign a pact. Controversy surrounds an alleged speech by Stalin on August 19, 1939, asserting that a great war between the Western powers was necessary for the spread of World Revolution; historians debate whether the speech actually occurred.
At 2:00 a.m. on August 20, Germany and the Soviet Union signed a commercial agreement, dated August 19, providing for the trade of certain German military and civilian equipment in exchange for Soviet raw materials. The agreement covered "current" business, which entailed a Soviet obligation to deliver 180 million Reichsmarks in raw materials in response to German orders, while Germany would allow the Soviets to order 120 million Reichsmarks for German industrial goods. Under the agreement, Germany also granted the Soviet Union a merchandise credit of 200 million Reichsmarks over 7 years to buy German manufactured goods at an extremely favorable interest rate.
Soviets adjourn tripartite military talks and strike a deal with Germany
After the Poles' resistance to pressure, on August 21, Voroshilov proposed adjournment of the military talks with the British and French, using the excuse that the absence of the senior Soviet personnel at the talks interfered with the autumn manoeuvres of the Soviet forces, though the primary reason was the progress being made in the Soviet-German negotiations.
That same day, August 21, Stalin received assurances that Germany would approve secret protocols to the proposed non-aggression pact granting the Soviets land in Poland, the Baltic states, Finland and Romania. That night, with Germany nervously awaiting a response to Hitler's August 19 telegram, Stalin replied at 9:35 p.m. that the Soviets were willing to sign the pact and that he would receive Ribbentrop on August 23. The Pact was signed sometime during the night of August 23–24.
Pact signing
On August 24, a 10-year non-aggression pact was signed with provisions that included: consultation; arbitration if either party disagreed; neutrality if either went to war against a third power; no membership of a group "which is directly or indirectly aimed at the other." Most notably, there was also a secret protocol to the pact, according to which the states of Northern and Eastern Europe were divided into German and Soviet "spheres of influence".
Poland was to be partitioned in the event of its "political rearrangement". The USSR was promised an eastern part of Poland, primarily populated with Ukrainians and Belarusians, in case of its dissolution, and additionally Latvia, Estonia and Finland. Bessarabia, then part of Romania, was to be joined to the Moldovan ASSR, and become the Moldovan SSR under control of Moscow. The news was met with utter shock and surprise by government leaders and media worldwide, most of whom were aware only of the British-French-Soviet negotiations that had taken place for months.
Ribbentrop and Stalin enjoyed warm conversations at the signing, exchanging toasts and further discussing the prior hostilities between the countries in the 1930s. Ribbentrop stated that Britain had always attempted to disrupt Soviet-German relations, was "weak", and "wants to let others fight for her presumptuous claim to world dominion." Stalin concurred, adding "[i]f England dominated the world, that was due to the stupidity of the other countries that always let themselves be bluffed." Ribbentrop stated that the Anti-Comintern Pact was directed not against the Soviet Union but against Western democracies, that it "frightened principally the City of London [i.e., the British financiers] and the English shopkeepers", and that Berliners had joked that Stalin would yet join the Anti-Comintern Pact himself. Stalin proposed a toast to Hitler, and Stalin and Molotov repeatedly toasted the German nation, the Molotov-Ribbentrop Pact and Soviet-German relations. Ribbentrop countered with a toast to Stalin and a toast to the countries' relations. As Ribbentrop left, Stalin took him aside and stated that the Soviet Government took the new pact very seriously, and he would "guarantee his word of honor that the Soviet Union would not betray its partner."
Events during the Pact's operation
Immediate dealings with Britain
The day after the Pact was signed, the French and British military negotiation delegation urgently requested a meeting with Voroshilov. On August 25, Voroshilov told them "[i]n view of the changed political situation, no useful purpose can be served in continuing the conversation." That day, Hitler told the British ambassador to Berlin that the pact with the Soviets prevented Germany from facing a two front war, changing the strategic situation from that in World War I, and that Britain should accept his demands regarding Poland. Surprising Hitler, Britain signed a mutual-assistance treaty with Poland that day, causing Hitler to delay the planned August 26 invasion of western Poland.
Division of eastern Europe
On September 1, 1939, the German invasion of its agreed upon portion of western Poland started World War II. On September 17 the Red Army invaded eastern Poland and occupied the Polish territory assigned to it by the Molotov-Ribbentrop Pact, followed by co-ordination with German forces in Poland. Eleven days later, the secret protocol of the Molotov-Ribbentrop Pact was modified, allotting Germany a larger part of Poland, while ceding most of Lithuania to the Soviet Union.
After a Soviet attempt to invade Finland faced stiff resistance, the combatants signed an interim peace granting the Soviets approximately 10 per cent of Finnish territory. The Soviet Union also sent troops into Lithuania, Estonia and Latvia. Thereafter, governments requesting admission to the Soviet Union were installed in all three Baltic countries.
Further dealings
Germany and the Soviet Union entered an intricate trade pact on February 11, 1940 that was over four times larger than the one the two countries had signed in August 1939, providing for the shipment to Germany of millions of tons of oil, foodstuffs and other key raw materials in exchange for German war machines and other equipment. This was followed by a January 10, 1941 agreement settling several ongoing issues, including border specificity, ethnic migrations and further commercial deal expansion.
Discussions in the fall and winter of 1940-41 ensued regarding the potential entry of the Soviet Union as the fourth member of the Axis powers. The countries never came to an agreement on the issue.
German invasion of the Soviet Union
Nazi Germany terminated the Molotov–Ribbentrop Pact with its invasion of the Soviet Union in Operation Barbarossa on June 22, 1941. After the launch of the invasion, the territories gained by the Soviet Union as a result of the Pact were lost in a matter of weeks. In the three weeks following the Pact's breaking, attempting to defend against large German advances, the Soviet Union suffered 750,000 casualties and lost 10,000 tanks and 4,000 aircraft. Within six months, the Soviet military had suffered 4.3 million casualties and the Germans had captured three million Soviet prisoners, two million of whom would die in German captivity by February 1942. German forces had advanced 1,050 miles (1,690 kilometers) and maintained a linearly-measured front of 1,900 miles (3,058 kilometers).
Post-war commentary regarding Pact negotiations
The reasons behind signing the pact
There is no consensus among historians regarding the reasons that prompted the Soviet Union to sign the pact with Nazi Germany. According to Ericson, the opinions "have ranged from seeing the Soviets as far-sighted anti-Nazis, to seeing them as reluctant appeasers, as cautious expansionists, or as active aggressors and blackmailers". Edward Hallett Carr argued that it was necessary to enter into a non-aggression pact to buy time, since the Soviet Union was not in a position to fight a war in 1939, and needed at least three years to prepare. He stated: "In return for non-intervention Stalin secured a breathing space of immunity from German attack." According to Carr, the "bastion" created by means of the Pact, "was and could only be, a line of defense against potential German attack." An important advantage (projected by Carr) was that "if Soviet Russia had eventually to fight Hitler, the Western Powers would already be involved."
In recent decades, however, this view has been disputed. Historian Werner Maser stated that "the claim that the Soviet Union was at the time threatened by Hitler, as Stalin supposed,...is a legend, to whose creators Stalin himself belonged" (Maser 1994: 64). In Maser's view (1994: 42), "neither Germany nor Japan were in a situation [of] invading the USSR even with the least perspective [sic] of success," and this could not have been unknown to Stalin.
Some critics, such as Viktor Suvorov, claim that Stalin's primary motive for signing the Soviet–German non-aggression treaty was his calculation that such a pact could result in a conflict between the capitalist countries of Western Europe. This idea is supported by Albert L. Weeks. However, other claims by Suvorov, such as the claim that Stalin planned to invade Germany in 1941, remain under debate among historians, with some, like David Glantz, opposing it and others, like Mikhail Meltyukhov, supporting it.
The extent to which the Soviet Union's post-Pact territorial acquisitions may have contributed to preventing its fall (and thus a Nazi victory in the war) remains a factor in evaluating the Pact. Soviet sources point out that the German advance eventually stopped just a few kilometers away from Moscow, so the role of the extra territory might have been crucial in such a close call. Others postulate that Poland and the Baltic countries played the important role of buffer states between the Soviet Union and Nazi Germany, and that the Molotov–Ribbentrop Pact was a precondition not only for Germany's invasion of Western Europe, but also for the Third Reich's invasion of the Soviet Union. The military aspect of moving from established fortified positions on the Stalin Line into undefended Polish territory could also be seen as one of the causes of rapid disintegration of Soviet armed forces in the border area during the German 1941 campaign, as the newly constructed Molotov Line was unfinished and unable to provide Soviet troops with the necessary defense capabilities.
Documentary evidence of early Soviet-German rapprochement
In 1948, the U.S. State Department published a collection of documents recovered from the Foreign Office of Nazi Germany that formed a documentary base for studies of Nazi-Soviet relations. This collection contains the German State Secretary's account of a meeting with Soviet ambassador Merekalov. The memorandum reproduces the following statement by the ambassador: "there exists for Russia no reason why she should not live with us on a normal footing. And from normal the relations might become better and better." According to Carr, this document is the first recorded Soviet step in the rapprochement with Germany.
The next documentary evidence is the memorandum on the May 17 meeting between the Soviet ambassador and German Foreign Office official, where the ambassador "stated in detail that there were no conflicts in foreign policy between Germany and Soviet Russia and that therefore there was no reason for any enmity between the two countries."
The third document is the summary of the May 20 meeting between Molotov and German ambassador von der Schulenburg. According to the document, Molotov told the German ambassador that he no longer wanted to discuss only economic matters, and that it was necessary to establish a "political basis", which German officials saw as an "implicit invitation."
The last document is the German State Office memorandum on the telephone call made on June 17 by Bulgarian ambassador Draganov. In German accounts of Draganov's report, Astakhov explained that a Soviet deal with Germany better suited the Soviets than one with Britain and France, although from the Bulgarian ambassador it "could not be ascertained whether it had reflected the personal opinions of Herr Astakhov or the opinions of the Soviet Government".
This documentary evidence of an early Nazi-Soviet rapprochement was questioned by Geoffrey Roberts, who analyzed Soviet archival documents that had been declassified and released on the eve of the 1990s. Roberts found no evidence that the alleged statements quoted by the Germans had ever been made in reality, and concluded that the German archival documents cannot serve as evidence for the existence of a dual policy during the first half of 1939. According to him, no documentary evidence exists that the USSR responded to or made any overtures to the Germans "until the end of July 1939 at the earliest".
Litvinov's dismissal and Molotov's appointment
Many historians note that the dismissal of Foreign Minister Litvinov, whose Jewish ethnicity was viewed unfavorably by Nazi Germany, removed a major obstacle to negotiations between them and the USSR.
Carr, however, has argued that the Soviet Union's replacement of Litvinov with Molotov on May 3, 1939 indicated not an irrevocable shift towards alignment with Germany, but rather was Stalin’s way of engaging in hard bargaining with the British and the French by appointing a tough negotiator, namely Molotov, to the Foreign Commissariat. Albert Resis argued that the replacement of Litvinov by Molotov was both a warning to Britain and a signal to Germany. Derek Watson argued that Molotov could get the best deal with Britain and France because he was not encumbered with the baggage of collective security and could more easily negotiate with Germany. Geoffrey Roberts argued that Litvinov's dismissal helped the Soviets with British-French talks, because Litvinov doubted or maybe even opposed such discussions.
See also
- George F. Kennan, Soviet Foreign Policy 1917-1941, Krieger Publishing Company, 1960.
- Text of the 3 March, 1918 Peace Treaty of Brest-Litovsk
- Ericson 1999, pp. 11–12
- Ericson 1999, pp. 1–2
- Hehn 2005, p. 15
- Ericson 1999, pp. 14–5
- Hehn 2005, p. 212
- Ericson 1999, pp. 17–18
- Ericson 1999, pp. 23–24
- Bendersky, Joseph W., A History of Nazi Germany: 1919-1945, Rowman & Littlefield, 2000, ISBN 0-8304-1567-X, page 177
- Wette, Wolfram, Deborah Lucas Schneider, The Wehrmacht: History, Myth, Reality, Harvard University Press, 2006, ISBN 0-674-02213-0, page 15
- Lee, Stephen J. and Paul Shuter, Weimar and Nazi Germany, Heinemann, 1996, ISBN 0-435-30920-X, page 33
- Bendersky, Joseph W., A History of Nazi Germany: 1919-1945, Rowman & Littlefield, 2000, ISBN 0-8304-1567-X, page 159
- Müller, Rolf-Dieter, Gerd R. Ueberschär, Hitler's War in the East, 1941-1945: A Critical Assessment, Berghahn Books, 2002, ISBN 157181293, page 244
- Rauschning, Hermann, Hitler Speaks: A Series of Political Conversations With Adolf Hitler on His Real Aims, Kessinger Publishing, 2006, ISBN 142860034, pages 136-7
- Hehn 2005, p. 37
- Jurado, Carlos Caballero and Ramiro Bujeiro, The Condor Legion: German Troops in the Spanish Civil War, Osprey Publishing, 2006, ISBN 1-84176-899-5, page 5-6
- Gerhard Weinberg: The Foreign Policy of Hitler's Germany Diplomatic Revolution in Europe 1933-36, Chicago: University of Chicago Press, 1970, pages 346.
- Robert Melvin Spector. World Without Civilization: Mass Murder and the Holocaust, History, and Analysis, pg. 257
- Piers Brendon, The Dark Valley, Alfred A. Knopf, 2000, ISBN 0-375-40881-9
- Ericson 1999, pp. 27–28
- Text of the Agreement concluded at Munich, September 29, 1938, between Germany, Great Britain, France and Italy
- Kershaw, Ian, Hitler, 1936-1945: Nemesis, W. W. Norton & Company, 2001, ISBN 0-393-32252-1, page 157-8
- Ericson 1999, pp. 29–35
- Hehn 2005, pp. 42–3
- Ericson 1999, pp. 3–4
- Manipulating the Ether: The Power of Broadcast Radio in Thirties America Robert J. Brown ISBN 0-7864-2066-9
- Watson 2000, p. 698
- Ericson 1999, pp. 23–35
- Roberts 2006, p. 30
- Tentative Efforts To Improve German–Soviet Relations, April 17 – August 14, 1939
- "Natural Enemies: The United States and the Soviet Union in the Cold War 1917–1991" by Robert C. Grogin 2001, Lexington Books page 28
- Zachary Shore. What Hitler Knew: The Battle for Information in Nazi Foreign Policy. Published by Oxford University Press US, 2005 ISBN 0-19-518261-8, ISBN 978-0-19-518261-3, p. 109
- Nekrich, Ulam & Freeze 1997, p. 107
- Karski, J. The Great Powers and Poland, University Press, 1985, p.342
- Nekrich, Ulam & Freeze 1997, pp. 108–9
- Roberts (1992; Historical Journal) p. 921-926
- Ericson 1999, p. 43
- Biskupski & Wandycz 2003, pp. 171–72
- Ulam 1989, p. 508
- Watson 2000, p. 695
- In Jonathan Haslam's view it shouldn't be overlooked that Stalin's adherence to the collective security line was purely conditional. [Review of] Stalin's Drive to the West, 1938–1945: The Origins of the Cold War. by R. Raack; The Soviet Union and the Origins of the Second World War: Russo-German Relations and the Road to War, 1933–1941. by G. Roberts. The Journal of Modern History > Vol. 69, No. 4 (Dec., 1997), p.787
- D.C. Watt, How War Came: the Immediate Origins of the Second World War 1938-1939 (London, 1989), p. 118. ISBN 0-394-57916-X, 9780394579160
- Watson 2000, p. 696
- Resis 2000, p. 47
- Israelyan, Viktor Levonovich, On the Battlefields of the Cold War: A Soviet Ambassador's Confession, Penn State Press, 2003, ISBN 0-271-02297-3, page 10
- Nekrich, Ulam & Freeze 1997, pp. 109–110
- Shirer 1990, pp. 480–1
- Herf 2006, pp. 97–98
- Osborn, Patrick R., Operation Pike: Britain Versus the Soviet Union, 1939-1941, Greenwood Publishing Group, 2000, ISBN 0-313-31368-7, page xix
- Levin, Nora, The Jews in the Soviet Union Since 1917: Paradox of Survival, NYU Press, 1988, ISBN 0-8147-5051-6, page 330. Litvinov "was referred to by the German radio as 'Litvinov-Finkelstein'-- was dropped in favor of Vyascheslav Molotov. 'The eminent Jew', as Churchill put it, 'the target of German antagonism was flung aside . . . like a broken tool . . . The Jew Litvinov was gone and Hitler's dominant prejudice placated.'"
- In an introduction to a 1992 paper, Geoffrey Roberts writes: "Perhaps the only thing that can be salvaged from the wreckage of the orthodox interpretation of Litvinov's dismissal is some notion that, by appointing Molotov foreign minister, Stalin was preparing for the contingency of a possible deal with Hitler. In view of Litvinov's Jewish heritage and his militant anti-nazism, that is not an unreasonable supposition. But it is a hypothesis for which there is as yet no evidence. Moreover, we shall see that what evidence there is suggests that Stalin's decision was determined by a quite different set of circumstances and calculations", Geoffrey Roberts. The Fall of Litvinov: A Revisionist View Journal of Contemporary History, Vol. 27, No. 4 (Oct., 1992), pp. 639-657 Stable URL: http://www.jstor.org/stable/260946
- Resis 2000, p. 35
- Moss, Walter, A History of Russia: Since 1855, Anthem Press, 2005, ISBN 1-84331-034-1, page 283
- Gorodetsky, Gabriel, Soviet Foreign Policy, 1917-1991: A Retrospective, Routledge, 1994, ISBN 0-7146-4506-0, page 55
- Resis 2000, p. 51
- According to Paul Flewers, Stalin’s address to the eighteenth congress of the Communist Party of the Soviet Union on March 10, 1939 discounted any idea of German designs on the Soviet Union. Stalin had intended: "To be cautious and not allow our country to be drawn into conflicts by warmongers who are accustomed to have others pull the chestnuts out of the fire for them." This was intended to warn the Western powers that they could not necessarily rely upon the support of the Soviet Union. As Flewers put it, “Stalin was publicly making the none-too-subtle implication that some form of deal between the Soviet Union and Germany could not be ruled out.” From the Red Flag to the Union Jack: The Rise of Domestic Patriotism in the Communist Party of Great Britain 1995
- Resis 2000, pp. 33–56
- Watson 2000, p. 699
- Montefiore 2005, p. 312
- Imlay, Talbot, "France and the Phony War, 1939-1940", pages 261-280 from French Foreign and Defence Policy, 1918-1940 edited by Robert Boyce, London, United Kingdom: Routledge, 1998 page 264
- Ericson 1999, p. 44
- Ericson 1999, p. 45
- Nekrich, Ulam & Freeze 1997, p. 111
- Ericson 1999, p. 46
- Biskupski & Wandycz 2003, p. 179
- Watson 2000, p. 703
- Shirer 1990, p. 502
- Watson 2000, p. 704
- Roberts 1995, p. 1995
- J. Haslam, The Soviet Union and the Struggle for Collective Security in Europe, 1933-39 (London, 1984), pp. 207, 210. ISBN 0-333-30050-5, ISBN 978-0-333-30050-3
- Ericson 1999, p. 47
- Nekrich, Ulam & Freeze 1997, p. 114
- Hehn 2005, p. 218
- Biskupski & Wandycz 2003, p. 186
- Watson 2000, p. 708
- Hiden, John, The Baltic and the Outbreak of the Second World War, Cambridge University Press, 2003, ISBN 0-521-53120-9, page 46
- Shirer 1990, p. 447
- Ericson 1999, pp. 54–55
- Fest 2002, p. 588
- Ulam 1989, pp. 509–10
- Roberts 1992, p. 64
- Shirer 1990, p. 503
- Shirer 1990, p. 504
- Fest 2002, pp. 589–90
- Vehviläinen, Olli, Finland in the Second World War: Between Germany and Russia, Macmillan, 2002, ISBN 0-333-80149-0, page 30
- Bertriko, Jean-Jacques Subrenat, A. and David Cousins, Estonia: Identity and Independence, Rodopi, 2004, ISBN 90-420-0890-3 page 131
- Nekrich, Ulam & Freeze 1997, p. 115
- Ericson 1999, p. 56
- Erickson 2001, p. 539-30
- Shirer 1990, p. 513
- Watson 2000, p. 713
- Shirer 1990, pp. 533–4
- Shirer 1990, p. 535
- Taylor and Shaw, Penguin Dictionary of the Third Reich, 1997, p.246.
- Shirer 1990, p. 521
- Shirer 1990, pp. 523–4
- Murphy 2006, p. 22
- Shirer 1990, p. 536
- Shirer 1990, p. 525
- Shirer 1990, pp. 526–7
- Murphy 2006, pp. 24–28
- Ericson 1999, p. 57
- Shirer 1990, p. 668
- Wegner 1997, p. 99
- Grenville & Wasserstein 2001, p. 227
- Ericson 1999, p. 61
- Watson 2000, p. 715
- Murphy 2006, p. 23
- Shirer 1990, p. 528
- Shirer 1990, p. 540
- Text of the Nazi-Soviet Non-Aggression Pact, executed August 23, 1939
- Shirer 1990, p. 539
- Shirer 1990, pp. 541–2
- Nekrich, Ulam & Freeze 1997, p. 123
- Sanford, George (2005). Katyn and the Soviet Massacre Of 1940: Truth, Justice And Memory. London, New York: Routledge. ISBN 0-415-33873-5.
- Wettig, Gerhard, Stalin and the Cold War in Europe, Rowman & Littlefield, Landham, Md, 2008, ISBN 0-7425-5542-9, page 20
- Kennedy-Pipe, Caroline, Stalin's Cold War, New York : Manchester University Press, 1995, ISBN 0-7190-4201-1
- Senn, Alfred Erich, Lithuania 1940 : revolution from above, Amsterdam, New York, Rodopi, 2007 ISBN 978-90-420-2225-6
- Wettig, Gerhard, Stalin and the Cold War in Europe, Rowman & Littlefield, Landham, Md, 2008, ISBN 0-7425-5542-9, page 21
- Ericson 1999, pp. 150–3
- Johari, J.C., Soviet Diplomacy 1925-41: 1925-27, Anmol Publications PVT. LTD., 2000, ISBN 81-7488-491-2 pages 134-137
- Roberts 2006, p. 58
- Brackman, Roman, The Secret File of Joseph Stalin: A Hidden Life, London and Portland, Frank Cass Publishers, 2001, ISBN 0-7146-5050-1, page 341
- Roberts 2006, p. 59
- Roberts 2006, p. 82
- Roberts 2006, p. 85
- Roberts 2006, pp. 116–7
- Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001, page 7
- Edward E. Ericson, III. Karl Schnurre and the Evolution of Nazi-Soviet Relations, 1936-1941. German Studies Review, Vol. 21, No. 2 (May, 1998), pp. 263-283
- Carr, Edward H., German–Soviet Relations between the Two World Wars, 1919–1939, Oxford 1952, p. 136.
- E. H. Carr., From Munich to Moscow. I., Soviet Studies, Vol. 1, No. 1, (Jun., 1949), pp. 3–17. Published by: Taylor & Francis, Ltd.
- Taylor, A.J.P., The Origins of the Second World War, London 1961, p. 262–3
- Max Beloff. Soviet Foreign Policy, 1929-41: Some Notes Soviet Studies, Vol. 2, No. 2 (Oct., 1950), pp. 123-137
- Stalin's Other War: Soviet Grand Strategy, 1939–1941 ISBN 0-7425-2191-5
- Nazi-Soviet relations 1939-1941. : Documents from the Archives of The German Foreign Office. Raymond James Sontag and James Stuart Beddie, ed. 1948. Department of State. Publication 3023
- Geoffrey Roberts.The Soviet Decision for a Pact with Nazi Germany. Soviet Studies, Vol. 44, No. 1 (1992), pp. 57-78
- Memorandum by the State Secretary in the German Foreign Office - Weizsacker
- E. H. Carr. From Munich to Moscow. II Soviet Studies, Vol. 1, No. 2 (Oct., 1949), pp. 93-105
- Foreign Office Memorandum : May 17, 1939
- Memorandum by the German Ambassador in the Soviet Union (Schulenburg) May 20, 1939
- Nekrich, Ulam & Freeze 1997, pp. 112–3
- God krizisa: 1938-1939 : dokumenty i materialy v dvukh tomakh.By A. P. Bondarenko, Soviet Union Ministerstvo inostrannykh del. Contributor A. P. Bondarenko. Published by Izd-vo polit. lit-ry, 1990. Item notes: t. 2. Item notes: v.2. Original from the University of Michigan. Digitized Nov 10, 2006. ISBN 5-250-01092-X, 9785250010924
- Roberts 1992, pp. 57–78
- Geoffrey Roberts. On Soviet-German Relations: The Debate Continues. A Review Article Europe-Asia Studies, Vol. 50, No. 8 (Dec., 1998), pp.1471-1475
- Carr, E.H. German-Soviet Relations Between the Two World Wars, Harper & Row: New York, 1951, 1996 pages 129-130
- Albert Resis. The Fall of Litvinov: Harbinger of the German-Soviet Non-Aggression Pact. Europe-Asia Studies, Vol. 52, No. 1 (Jan., 2000), pp. 33-56 Published by: Taylor & Francis, Ltd. Stable URL: http://www.jstor.org/stable/153750 "By replacing Litvinov with Molotov, Stalin significantly increased his freedom of maneuver in foreign policy. Litvinov's dismissal served as a warning to London and Paris that Moscow had another option: rapprochement with Germany. After Litvinov's dismissal, the pace of Soviet-German contacts quickened. But that did not mean that Moscow had abandoned the search for collective security, now exemplified by the Soviet draft triple alliance. Meanwhile, Molotov's appointment served as an additional signal to Berlin that Moscow was open to offers. The signal worked, the warning did not."
- Derek Watson. Molotov's Apprenticeship in Foreign Policy: The Triple Alliance Negotiations in 1939, Europe-Asia Studies, Vol. 52, No. 4 (Jun., 2000), pp. 695-722. Stable URL: http://www.jstor.org/stable/153322 "The choice of Molotov reflected not only the appointment of a nationalist and one of Stalin's leading lieutenants, a Russian who was not a Jew and who could negotiate with Nazi Germany, but also someone unencumbered with the baggage of collective security who could obtain the best deal with Britain and France, if they could be forced into an agreement."
- Geoffrey Roberts. The Fall of Litvinov: A Revisionist View. Journal of Contemporary History Vol. 27, No. 4 (Oct., 1992), pp. 639-657. Stable URL: http://www.jstor.org/stable/260946. "the foreign policy factor in Litvinov's downfall was the desire of Stalin and Molotov to take charge of foreign relations in order to pursue their policy of a triple alliance with Britain and France - a policy whose utility Litvinov doubted and may even have opposed or obstructed."
- Biskupski, Mieczyslaw B.; Wandycz, Piotr Stefan (2003), Ideology, Politics, and Diplomacy in East Central Europe, Boydell & Brewer, ISBN 1-58046-137-9
- Ericson, Edward E. (1999), Feeding the German Eagle: Soviet Economic Aid to Nazi Germany, 1933-1941, Greenwood Publishing Group, ISBN 0-275-96337-3
- Fest, Joachim C. (2002), Hitler, Houghton Mifflin Harcourt, ISBN 0-15-602754-2
- Herf, Jeffrey (2006), The Jewish Enemy: Nazi Propaganda During World War II and the Holocaust, Harvard University Press, ISBN 0-674-02175-4
- Montefiore, Simon Sebag (2005). Stalin: The Court of the Red Tsar (5th ed.). Great Britain: Phoenix. ISBN 0-7538-1766-7.
- Murphy, David E. (2006), What Stalin Knew: The Enigma of Barbarossa, Yale University Press, ISBN 0-300-11981-X
- Nekrich, Aleksandr Moiseevich; Ulam, Adam Bruno; Freeze, Gregory L. (1997), Pariahs, Partners, Predators: German-Soviet Relations, 1922-1941, Columbia University Press, ISBN 0-231-10676-9
- Philbin III, Tobias R. (1994), The Lure of Neptune: German-Soviet Naval Collaboration and Ambitions, 1919–1941, University of South Carolina Press, ISBN 0-87249-992-8
- Resis, Albert (2000), "The Fall of Litvinov: Harbinger of the German-Soviet Non-Aggression Pact", Europe-Asia Studies 52 (1), JSTOR 153750
- Roberts, Geoffrey (2006), Stalin's Wars: From World War to Cold War, 1939–1953, Yale University Press, ISBN 0-300-11204-1
- Roberts, Geoffrey (1995), "Soviet Policy and the Baltic States, 1939-1940: A Reappraisal", Diplomacy and Statecraft 6 (3), JSTOR 153322
- Roberts, Geoffrey (1992), "The Soviet Decision for a Pact with Nazi Germany", Soviet Studies 55 (2), JSTOR 152247
- Roberts, Geoffrey (1992), "Infamous Encounter? The Merekalov-Weizsacker Meeting of 17 April 1939", The Historical Journal 35 (4), JSTOR 2639445
- Shirer, William L. (1990), The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, ISBN 0-671-72868-7
- Watson, Derek (2000), "Molotov's Apprenticeship in Foreign Policy: The Triple Alliance Negotiations in 1939", Europe-Asia Studies 52 (4), JSTOR 153322
- Ulam, Adam Bruno (1989), Stalin: The Man and His Era, Beacon Press, ISBN 0-8070-7005-X
Introduction to Chemical Bonding
Chemical bonding is one of the most fundamental concepts in chemistry, underpinning other concepts such as molecules and reactions. Without it, scientists could not explain why atoms are attracted to each other or how products are formed after a chemical reaction has taken place. To understand the concept of bonding, one must first know the basics of atomic structure.
A typical atom contains a nucleus composed of protons and neutrons, with electrons in certain energy levels surrounding the nucleus. In this section, the main focus will be on these electrons. Elements are distinguishable from each other by their "electron cloud," the region where electrons move around the nucleus of an atom. Because each element has a distinct electron cloud, this determines its chemical properties as well as the extent of its reactivity (i.e. noble gases are inert/not reactive while alkali metals are highly reactive). In chemical bonding, only valence electrons, the electrons located in the orbitals of the outermost energy level (valence shell) of an element, are involved.
Lewis diagrams are graphical representations of elements and their valence electrons. Valence electrons are the electrons that occupy the outermost shell of an atom. In a Lewis diagram of an element, the symbol of the element is written in the center and the valence electrons are drawn around it as dots. The exact positions of the dots are unimportant; however, the general convention is to start at the 12 o'clock position and proceed clockwise to the 3 o'clock, 6 o'clock, and 9 o'clock positions, and back to 12 o'clock. Generally, the Roman numeral of the group corresponds to the number of valence electrons of the element.
Below is the periodic table representation of the number of valence electrons. The alkali metals of Group IA have one valence electron, the alkaline-earth metals of Group IIA have 2 valence electrons, Group IIIA has 3 valence electrons, and so on. The transition metals, lanthanoids, and actinoids are more difficult in terms of determining the number of valence electrons they have; however, this section only introduces bonding, so they will not be covered in this unit.
Lewis diagrams for Molecular Compounds/Ions
To draw the Lewis diagram for a molecular compound or ion, follow the steps below (we will use H2O as an example):
1) Count the number of valence electrons of the molecular compound or ion. Remember, if there are two or more atoms of the same element, multiply that element's number of valence electrons by the number of its atoms. The Roman numeral group number gives the corresponding number of valence electrons for each element.
Oxygen (O)--Group VIA: therefore, there are 6 valence electrons
Hydrogen (H)--Group IA: therefore, there is 1 valence electron
NOTE: There are TWO hydrogen atoms, so multiply 1 valence electron X 2 atoms
Total: 6 + 2 = 8 valence electrons
2) If the molecule in question is an ion, remember to add or subtract the respective number of electrons to or from the total from step 1.
For ions, if the ion has a negative charge (anion), add the corresponding number of electrons to the total number of electrons (i.e. since NO3- has a charge of 1-, you add 1 extra electron to the total: 5 + 3(6) = 23, and 23 + 1 = 24 total electrons). A - sign means the molecule has an overall negative charge, so it must have this extra electron. This is because anions have a higher electron affinity (tendency to gain electrons). Most anions are composed of nonmetals, which have high electronegativity.
If the ion has a positive charge (cation), subtract the corresponding number of electrons from the total number of electrons (i.e. H3O+ has a charge of 1+, so you subtract 1 electron from the total: 6 + 1(3) = 9, and 9 - 1 = 8 total electrons). A + sign means the molecule has an overall positive charge, so it must be missing one electron. Cations are positive and have weaker electron affinity. They are mostly composed of metals; their atomic radii are larger than those of the nonmetals. This consequently means that shielding is increased, and electrons have less tendency to be attracted to the "shielded" nucleus.
From our example, water is a neutral molecule, therefore no electrons need to be added or subtracted from the total.
3) Write out the symbols of the elements, making sure all atoms are accounted for (i.e. for H2O, write out O with an H on either side of the oxygen). Start by adding single bonds (1 pair of electrons) to all possible atoms while making sure they follow the octet rule (with the exceptions of the duet rule and the other exceptional elements mentioned below).
4) If there are any leftover electrons, then add them to the central atom of the molecule (i.e. XeF4 has 4 extra electrons after being distributed, so the 4 extra electrons are given to Xe). Finally, rearrange the electron pairs into double or triple bonds if possible. (A short code sketch of the electron counting from steps 1 and 2 follows below.)
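To make steps 1 and 2 concrete, here is a minimal sketch in Python that tallies valence electrons from main-group (Roman-numeral group) counts and then adjusts for the ion's charge. The GROUP_VALENCE lookup table, the function name, and the dictionary representation of a formula are assumptions added for this illustration, not part of the original text.

```python
# Minimal sketch: count valence electrons for a main-group molecule or ion.
# GROUP_VALENCE maps a few main-group elements to their Roman-numeral group's
# electron count (Group IA -> 1, ..., Group VIIIA -> 8).
GROUP_VALENCE = {
    "H": 1, "Na": 1,      # Group IA
    "Mg": 2, "Ca": 2,     # Group IIA
    "B": 3, "Al": 3,      # Group IIIA
    "C": 4, "Si": 4,      # Group IVA
    "N": 5, "P": 5,       # Group VA
    "O": 6, "S": 6,       # Group VIA
    "F": 7, "Cl": 7,      # Group VIIA
    "Ne": 8, "Xe": 8,     # Group VIIIA
}

def count_valence_electrons(atoms, charge=0):
    """atoms: dict of element symbol -> atom count; charge: overall ion charge.

    Step 1: sum valence electrons over all atoms.
    Step 2: add electrons for a negative charge, subtract for a positive one.
    """
    total = sum(GROUP_VALENCE[symbol] * count for symbol, count in atoms.items())
    return total - charge  # a charge of -1 adds one electron, +1 removes one

# Worked examples from the text:
print(count_valence_electrons({"O": 1, "H": 2}))              # H2O  -> 8
print(count_valence_electrons({"N": 1, "O": 3}, charge=-1))   # NO3- -> 24
print(count_valence_electrons({"O": 1, "H": 3}, charge=+1))   # H3O+ -> 8
```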
Most elements follow the octet rule in chemical bonding, which means that an element should have access to eight valence electrons in a bond, exactly filling its valence shell. Having eight electrons ensures that the atom is stable. This is the reason why noble gases, with a valence shell of 8 electrons, are chemically inert; they are already stable and do not need a transfer of electrons when bonding with another atom in order to be stable. On the other hand, alkali metals have a valence shell of one electron. Since they want to satisfy the octet rule, they often simply lose that one electron. This makes them quite reactive, because they can easily donate this electron to other elements. This explains the highly reactive properties of the Group IA elements.
Some elements that are exceptions to the octet rule include Aluminum(Al), Phosphorus(P), Sulfur(S), and Xenon(Xe).
Hydrogen(H) and Helium(He) follow the duet rule since their valence shell only allows two electrons. There are no exceptions to the duet rule; hydrogen and helium will always hold a maximum of two electrons.
Ionic bonding is the process in which electrons are transferred rather than shared between two atoms. It occurs between a nonmetal and a metal. Ionic bonding is described as a "transfer" of electrons because the two atoms have different levels of electron affinity. In the picture below, a sodium (Na) atom and a chlorine (Cl) atom are combined through ionic bonding. Na has low electronegativity due to its large atomic radius and readily gives up its outermost electron. This allows the more electronegative chlorine atom to gain the electron and complete its 3rd energy level. The transfer of the electron releases energy to the surroundings.
Another example of ionic bonding is the crystal lattice structure shown above. The ions are arranged in a way that shows uniformity and stability, a physical characteristic of crystals and solids. Moreover, in a model called "the sea of electrons," the structure of metals is described as stabilized positive ions (cations) surrounded by "free-flowing" electrons that weave between the cations. This accounts for the metallic property of conductivity: the flowing electrons allow an electric current to pass through them. In addition, this explains why strong electrolytes are good conductors. Ionic bonds are easily broken by water because the polarity of the water molecules shields the anions from attracting the cations. Therefore, ionic compounds dissociate easily in water, and the resulting ions allow the solution to conduct electricity.
Covalent bonding is the process of sharing electrons between two atoms. The bonds typically form between a nonmetal and a nonmetal. Since their electronegativities are both in the high range, the electrons are attracted and pulled by both atoms' nuclei. In the case of two identical atoms bonded to each other (a nonpolar bond, explained below), both exert the same pull on the electrons, so there is equal attraction between the two atoms (e.g. in oxygen gas, O2, the two atoms attract the shared electrons equally). This makes covalent bonds hard to break.
There are three types of covalent bonds: single, double, and triple bonds. A single bond is composed of 2 bonded electrons. Naturally, a double bond has 4 electrons, and a triple bond has 6 bonded electrons. Because a triple bond holds the shared electrons more strongly than a single bond, the attraction to the positively charged nuclei is increased, meaning that the distance from the nuclei to the electrons is less. Simply put, the more bonds or the greater the bond strength, the shorter the bond length will be. In other words:
Bond length: triple bond < double bond < single bond
Polar Covalent Bonding
Polar covalent bonding is the process of unequal sharing of electrons. It is considered the middle ground between ionic bonding and covalent bonding. It happens due to the differing electronegativity values of the two atoms. Because of this, the more electronegative atom will attract and have a stronger pulling force on the electrons. Thus, the electrons will spend more time around this atom.
The symbols above indicate that the fluorine side is slightly negative and the hydrogen side is slightly positive.
Polar and Non-polar molecules
Polarity is the competition between two atoms for the shared electrons; such a bond is also known as a polar covalent bond. A molecule is polar when the electrons are attracted more strongly to the more electronegative atom, due to its greater electron affinity. A nonpolar bond is a bond between two identical atoms; these are the ideal example of a covalent bond. Some examples are nitrogen gas (N2), oxygen gas (O2), and hydrogen gas (H2).
One way to figure out what type of bond a molecule has is to take the difference of the electronegativity values of its two atoms and apply the rules below (a short code sketch of this classification follows the rules).
If the difference is between 0.0-0.3, then the molecule has a non-polar bond.
If the difference is between 0.3-1.7, then the molecule has a polar bond.
If the difference is 1.7 or more, then the molecule has an ionic bond.
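As a minimal illustration of these cutoffs, here is a short Python sketch. The electronegativity table uses approximate rounded Pauling values, which are an assumption of this example (the practice problem below uses a slightly rougher estimate for Cl but reaches the same classification), and the boundary cases are resolved as noted in the comments.

```python
# Approximate rounded Pauling electronegativities (assumed illustrative values).
ELECTRONEGATIVITY = {"H": 2.1, "C": 2.5, "N": 3.0, "O": 3.5, "F": 4.0, "Cl": 3.0, "Na": 0.9}

def classify_bond(element_a, element_b):
    """Classify an A-B bond using the electronegativity-difference rules above."""
    diff = abs(ELECTRONEGATIVITY[element_a] - ELECTRONEGATIVITY[element_b])
    if diff <= 0.3:        # 0.0-0.3: non-polar (0.3 itself treated as non-polar here)
        return "nonpolar covalent"
    elif diff < 1.7:       # 0.3-1.7: polar
        return "polar covalent"
    else:                  # 1.7 or more: ionic
        return "ionic"

print(classify_bond("H", "H"))    # identical atoms -> nonpolar covalent
print(classify_bond("H", "O"))    # 3.5 - 2.1 = 1.4 -> polar covalent
print(classify_bond("Na", "Cl"))  # 3.0 - 0.9 = 2.1 -> ionic
```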
1) All elements must fulfill the _____ rule except for H/He, which instead fulfill the _____ rule.
Octet and duet
2) What is the Lewis structure of NH3?
3) What has a longer bond length? A double bond or a triple bond? Why?
Answer: A double bond, because the attractive pull of its electrons toward the nuclei is weaker than a triple bond's; therefore the electrons will be farther away, meaning a longer bond length.
4) What type of bond does an HCl molecule have? Find using the electronegativity values of the elements.
Using estimated values: 3.5 (~Cl) - 2.1 (~H) = 1.4 => HCl is a polar molecule and has a polar covalent bond.
5) Covalent bonding involves the ______ of electrons while ionic bonding is the ______ of electrons.
sharing and transfer
General Chemistry/Covalent bonds
Covalent bonds create molecules, which can be represented by a molecular formula. For chemicals such as a simple sugar (C6H12O6), the subscripts share a common factor, and thus the empirical formula is CH2O. Note that two molecules with the same empirical formula do not necessarily have the same molecular formula.
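As a small illustration of reducing a molecular formula to its empirical formula, here is a minimal Python sketch that divides each subscript by the greatest common divisor of all subscripts. The dictionary representation of a formula and the function name are assumptions made for this example.

```python
from math import gcd
from functools import reduce

def empirical_formula(molecular):
    """Reduce a molecular formula (element -> subscript) to its empirical formula."""
    common = reduce(gcd, molecular.values())   # e.g. gcd(6, 12, 6) = 6
    return {element: count // common for element, count in molecular.items()}

glucose = {"C": 6, "H": 12, "O": 6}
print(empirical_formula(glucose))   # {'C': 1, 'H': 2, 'O': 1}, i.e. CH2O
```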
Formation of Covalent Bonds
Covalent bonds form between two atoms which have incomplete octets — that is, their outermost shells have fewer than eight electrons. They can share their electrons in a covalent bond. The simplest example is water (H2O). Oxygen has six valence electrons (and needs eight) and the hydrogens have one electron each (and need two). The oxygen shares two of its electrons with the hydrogens, and the hydrogens share their electrons with the oxygen. The result is a covalent bond between the oxygen and each hydrogen. The oxygen has a complete octet and the hydrogens have the two electrons they each need.
When atoms move closer, their orbitals change shape, letting off energy. However, there is a limit to how close the atoms get to each other—too close, and the nuclei repel each other.
One way to think of this is a ball rolling down into a valley. It will settle at the lowest point. As a result of this potential energy "valley", there is a specific bond length for each type of bond. Also, there is a specific amount of energy, measured in kilojoules per mole (kJ/mol) that is required to break the bonds in one mole of the substance. Stronger bonds have a shorter bond length and a greater bond energy.
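To make the relationship between bond order, bond length and bond energy concrete, here is a small sketch using approximate textbook values for carbon-carbon bonds. The exact numbers vary slightly between sources and are an assumption of this example, not taken from the text above; the point is simply that as bond order increases, bond length falls and bond energy rises.

```python
# Approximate textbook values for carbon-carbon bonds:
# (bond length in pm, bond energy in kJ/mol).
CARBON_CARBON_BONDS = {
    "single (C-C)": (154, 347),
    "double (C=C)": (134, 614),
    "triple (C#C)": (120, 839),
}

for bond, (length_pm, energy_kj_per_mol) in CARBON_CARBON_BONDS.items():
    print(f"{bond}: length = {length_pm} pm, energy = {energy_kj_per_mol} kJ/mol")

# Stronger bonds are shorter: sorting by energy (descending) also orders the
# bonds from shortest to longest.
by_energy = sorted(CARBON_CARBON_BONDS.items(), key=lambda item: item[1][1], reverse=True)
print([bond for bond, _ in by_energy])  # ['triple (C#C)', 'double (C=C)', 'single (C-C)']
```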
The Valence Bond Model
One useful model of covalent bonding is called the Valence Bond model. It states that covalent bonds form when atoms share electrons with each other in order to complete their valence (outer) electron shells. They are mainly formed between non-metals.
An example of a covalently bonded substance is hydrogen gas (H2). A hydrogen atom on its own has one electron—it needs two to complete its valence shell. When two hydrogen atoms bond, each one shares its electron with the other so that the electrons move about both atoms instead of just one. Both atoms now have access to two electrons: they become a stable H2 molecule joined by a single covalent bond.
Double and Triple Bonds
Covalent bonds can also form between other non-metals, for example chlorine. A chlorine atom has 7 electrons in its valence shell—it needs 8 to complete it. Two chlorine atoms can share 1 electron each to form a single covalent bond. They become a Cl2 molecule.
Oxygen can also form covalent bonds, however, it needs a further 2 electrons to complete its valence shell (it has 6). Two oxygen atoms must share 2 electrons each to complete each other's shells, making a total of 4 shared electrons. Because twice as many electrons are shared, this is called a double covalent bond. Double bonds are much stronger than single bonds, so the bond length is shorter and the bond energy is higher.
Furthermore, nitrogen has 5 valence electrons (it needs a further 3). Two nitrogen atoms can share 3 electrons each to make a N2 molecule joined by a triple covalent bond. Triple bonds are stronger than double bonds. They have the shortest bond lengths and highest bond energies.
Electron Sharing and Orbitals
Carbon, contrary to the trend, does not share four electrons to make a quadruple bond. The reason for this is that the fourth pair of electrons in carbon cannot physically move close enough to be shared. The valence bond model explains this by considering the orbitals involved.
Recall that electrons orbit the nucleus within a cloud of electron density (orbitals). The valence bond model works on the principle that orbitals on different atoms must overlap to form a bond. There are several different ways that the orbitals can overlap, forming several distinct kinds of covalent bonds.
The Sigma Bond
The first and simplest kind of overlap is when two s orbitals come together. It is called a sigma bond (sigma, or σ, is the Greek equivalent of 's'). Sigma bonds can also form between two p orbitals that lie pointing towards each other. Whenever you see a single covalent bond, it exists as a sigma bond. When two atoms are joined by a sigma bond, they are held close to each other, but they are free to rotate like beads on a string.
The Pi Bond
The second, and equally important kind of overlap is between two parallel p orbitals. Instead of overlapping head-to-head (as in the sigma bond), they join side-to-side, forming two areas of electron density above and below the molecule. This type of overlap is referred to as a pi (π, from the Greek equivalent of p) bond. Whenever you see a double or triple covalent bond, it exists as one sigma bond and one or two pi bonds. Due to the side-by-side overlap of a pi bond, there is no way the atoms can twist around each other as in a sigma bond. Pi bonds give the molecule a rigid shape.
Pi bonds are weaker than sigma bonds since there is less overlap. Thus, two single bonds are stronger than a double bond, and more energy is needed to break two single bonds than a single double bond.
Consider a molecule of methane: a carbon atom attached to four hydrogen atoms. Each atom is satisfying the octet rule, and each bond is a single covalent bond.
Now look at the electron configuration of carbon: 1s22s22p2. In its valence shell, it has two s electrons and two p electrons. It would not be possible for the four electrons to make equal bonds with the four hydrogen atoms (each of which has one s electron). We know, by measuring bond length and bond energy, that the four bonds in methane are equal, yet carbon has electrons in two different orbitals, which should overlap with the hydrogen 1s orbital in different ways.
To solve the problem, hybridization occurs. Instead of one s orbital and three p orbitals, the orbitals mix to form four orbitals, each with 25% s character and 75% p character. These hybrid orbitals are called sp3 orbitals, and they are identical. Observe:
Now these orbitals can overlap with hydrogen 1s orbitals to form four equal bonds. Hybridization may involve d orbitals in the atoms that have them, allowing up to sp3d2 hybridization.
A gender role is a set of social and behavioral norms that are generally considered appropriate for either a man or a woman in a social or interpersonal relationship. Gender roles vary widely between cultures, and even within the same cultural tradition they have differed over time and context. There are differences of opinion as to which observed differences in behavior and personality between genders are due to the innate personality of the person and which are due to cultural or social factors, and are therefore the product of socialization, or to what extent gender differences are due to biological and physiological differences.
Views on gender-based differentiation in the workplace and in interpersonal relationships have often undergone profound changes as a result of feminist and/or economic influences, but there are still considerable differences in gender roles in almost all societies. It is also true that in times of necessity, such as during a war or other emergency, women are permitted to perform functions which in "normal" times would be considered a male role, or vice versa.
Gender has several definitions. It usually refers to a set of characteristics that are considered to distinguish between male and female, reflect one's biological sex, or reflect one's gender identity. Gender identity is the gender(s), or lack thereof, a person self-identifies as; it is not necessarily based on biological sex, either real or perceived, and it is distinct from sexual orientation. It is one's internal, personal sense of being a man or a woman (or a boy or girl). There are two main genders: masculine (male), or feminine (female), although some cultures acknowledge more genders. Androgyny, for example, has been proposed as a third gender. Some societies have more than five genders, and some non-Western societies have three genders – man, woman and third gender. Gender expression refers to the external manifestation of one's gender identity, through "masculine," "feminine," or gender-variant or gender neutral behavior, clothing, hairstyles, or body characteristics.
Gender role theory
Gender role theory posits that boys and girls learn the appropriate behavior and attitudes from the family and overall culture they grow up with, and so non-physical gender differences are a product of socialization. Social role theory proposes that the social structure is the underlying force behind gender differences, and that sex-differentiated behavior is driven by the division of labor between the two sexes within a society. Division of labor creates gender roles, which in turn lead to gendered social behavior.
The physical specialization of the sexes is considered to be the distal cause of gender roles. Men's unique physical advantages in terms of body size and upper body strength provided them an edge over women in those social activities that demanded such physical attributes, such as hunting, herding and warfare. On the other hand, women's biological capacity for reproduction and child-bearing is proposed to explain their limited involvement in other social activities. Such divided activity arrangements, adopted for the sake of efficiency, led to the division of labor between the sexes. Social role theorists have explicitly stressed that the labor division is not narrowly defined as that between paid employment and domestic activities; rather, it is conceptualized to include all activities performed within a society that are necessary for its existence and sustainability. The characteristics of the activities performed by men and women became people's perceptions and beliefs about the dispositional attributes of men or women themselves. Through the process of correspondent inference (Gilbert, 1998), the division of labor led to gender roles, or gender stereotypes. Ultimately, people expect men and women who occupy certain positions to behave according to these attributes.
These socially constructed gender roles are considered to be hierarchical and characterized as a male-advantaged gender hierarchy (Wood & Eagly, 2002). The activities men were involved in were often those that provided them with more access to or control of resources and decision making power, rendering men not only superior dispositional attributes via correspondence bias (Gilbert, 1998), but also higher status and authority as society progressed. The particular pattern of the labor division within a certain society is a dynamic process and determined by its specific economical and cultural characteristics. For instance, in an industrial economy, the emphasis on physical strength in social activities becomes less compared with that in a less advanced economy. In a low birth rate society, women will be less confined to reproductive activities and thus more likely to be involved in a wide range of social activities. The beliefs that people hold about the sexes are derived from observations of the role performances of men and women and thus reflect the sexual division of labor and gender hierarchy of the society (Eagly et al., 2000).
The consequences of gender roles and stereotypes are sex-typed social behavior (Eagly et al., 2004), because roles and stereotypes are both socially shared descriptive norms and prescriptive norms. Gender roles provide guides to normative behaviors that are typical, expected, and thus "likely effective" for each sex within a certain social context. Gender roles also depict ideal, prescribed, and thus desirable behaviors for men and women who occupy a particular position or engage in certain social activities. Put another way, men and women, as social beings, strive to belong and seek approval by complying and conforming to the social and cultural norms within their society. The conformity to social norms not only shapes the pattern, but also maintains the very existence, of sex-typed social behavior (Eagly et al., 2004).
In summary, social role theory “treats these differing distributions of women and men into roles as the primary origin of sex-differentiated social behavior, their impact on behavior is mediated by psychological and social processes” (Eagly, 1997), including “developmental and socialization processes, as well as by processes involved in social interaction (e.g., expectancy confirmation) and self-regulation” (Eagly et al., 2004).
The cognitive development theory of gender roles is mentioned in Human Sexuality by Janelle Carroll. This assumes that children go through a pattern of development that is universal to all. This theory follows Piaget's proposition that children can only process a certain amount of information at each stage of development. As children mature they become more aware that gender roles are situational. Therefore theorists predict that rigid gender role behavior may decrease around the ages of 7 or 8. Carroll also mentions a theory under the name of "Gender Schema Theory: Our Cultural Maps" which was first proposed by Sandra Bem. Bem believed that we all thought according to schemas, which is a cognitive way to organize our world. She further said that we all have a gender schema to organize the ways we view gender around us. Information is consistently being transferred to us about gender and what it is to be masculine and feminine. This is where Bem splits from cognitive theorists who believe gender is important "to children because of their physicalistic ways of thinking". Carroll also says that the gender schema can become so ingrained that we are not aware of its power.
Social construction of gender difference
This perspective proposes that gender difference is socially constructed (see Social construction of gender difference). Social constructionism of gender moves away from socialization as the origin of gender differences; people do not merely internalize gender roles as they grow up but respond to changing norms in society. Children learn to categorize themselves by gender very early on in life. A part of this is learning how to display and perform gendered identities as masculine or feminine. Boys learn to manipulate their physical and social environment through physical strength or other skills, while girls learn to present themselves as objects to be viewed. Children monitor their own and others' gendered behavior. Gender-segregated children's activities create the appearance that gender differences in behavior reflect an essential nature of male and female behavior.
Judith Butler, in works such as Gender Trouble and Undoing Gender, contends that being female is not "natural" and that it appears natural only through repeated performances of gender; these performances in turn, reproduce and define the traditional categories of sex and/or gender. A social constructionist view looks beyond categories and examines the intersections of multiple identities, the blurring of the boundaries of essentialist categories. This is especially true with regards to categories of male and female that are typically viewed by others as binary and opposites of each other. By deconstructing categories of gender, the value placed on masculine traits and behaviors disappears. However, the elimination of categories makes it difficult to make any comparisons between the genders or to argue and fight against male domination.
Talcott Parsons' view
Working in the United States, Talcott Parsons developed a model of the nuclear family in 1955, which at that place and time was the prevalent family structure. It compared a strictly traditional view of gender roles (from an industrial-age American perspective) to a more liberal view.
The Parsons model was used to contrast and illustrate extreme positions on gender roles. Model A describes total separation of male and female roles, while Model B describes the complete dissolution of gender roles. (The examples are based on the context of the culture and infrastructure of the United States.)
|   | Model A – Total role segregation | Model B – Total integration of roles |
| --- | --- | --- |
| Education | Gender-specific education; high professional qualification is important only for the man | Co-educative schools, same content of classes for girls and boys, same qualification for men and women. |
| Profession | The workplace is not the primary area of women; career and professional advancement is deemed unimportant for women | For women, career is just as important as for men; equal professional opportunities for men and women are necessary. |
| Housework | Housekeeping and child care are the primary functions of the woman; participation of the man in these functions is only partially wanted. | All housework is done by both parties to the marriage in equal shares. |
| Decision making | In case of conflict, man has the last say, for example in choosing the place to live, choice of school for children, buying decisions | Neither partner dominates; solutions do not always follow the principle of finding a concerted decision; status quo is maintained if disagreement occurs. |
| Child care and education | Woman takes care of the largest part of these functions; she educates children and cares for them in every way | Man and woman share these functions equally. |
However, these structured positions become less common in a liberal-individualist society; actual behavior of individuals is usually somewhere between these poles.
According to the interactionist approach, roles (including gender roles) are not fixed, but are constantly negotiated between individuals. In North America and southern South America, this is the most common approach among families whose business is agriculture.
Gender roles can influence all kinds of behaviors, such as choice of clothing, choice of work and personal relationships, e.g., parental status (See also Sociology of fatherhood).
The process through which the individual learns and accepts roles is called socialization. Socialization works by encouraging wanted and discouraging unwanted behavior. These sanctions by agents of socialization such as the family, schools, and the media make it clear to the child what is expected of the child by society. Mostly, accepted behavior is not produced by outright reforming coercion from an accepted social system. In some other cases, various forms of coercion have been used to acquire a desired response or function.
Homogenization vs. ethnoconvergence difference
It is claimed that even in monolingual, industrial societies like much of urban North America, some individuals cling to a "modernized" primordial identity, apart from others, and with this a more diverse gender role is recognized or developed. Some intellectuals, such as Michael Ignatieff, argue that convergence of a general culture does not directly entail a similar convergence in ethnic, social and self identities. This can become evident in social situations, where people divide into separate groups by gender roles and cultural alignments, despite being of an identical "super-ethnicity", such as nationality.
Within each smaller ethnicity, some individuals may see it as perfectly justified to assimilate with other cultures, including in matters of sexuality, while others view assimilation as wrong and incorrect for their culture or institution. This common theme, representing dualist opinions of ethnoconvergence itself within a single ethnic or common-values group, is often manifested in issues of sexual partners and matrimony, employment preferences, etc. These varied opinions of ethnoconvergence represent themselves along a spectrum; assimilation, homogenization, acculturation, gender identities and cultural compromise are commonly used terms for ethnoconvergence which flavor the issues to a bias.
Often it is in a secular, multi-ethnic environment that cultural concerns are both minimized and exacerbated; ethnic prides are boasted and hierarchy is created ("center" culture versus "periphery"), but on the other hand, people will still share a common "culture", and common language and behaviors. Often the elderly, more conservative members of a clan tend to reject cross-cultural associations and participate in ethnically similar community-oriented activities.
Anthropology and evolution
The idea that differences in gender roles originate in differences in biology has found support in parts of the scientific community. 19th-century anthropology sometimes used descriptions of the imagined life of paleolithic hunter-gatherer societies for evolutionary explanations for gender differences. For example, those accounts maintain that the need to take care of offspring may have limited the females' freedom to hunt and assume positions of power.
Due to the influence of (among others) Simone de Beauvoir's feminist works and Michel Foucault's reflections on sexuality, the idea that gender was unrelated to sex gained ground during the 1980s, especially in sociology and cultural anthropology. This view claims that a person could therefore be born with male genitals but still be of feminine gender. In 1987, R.W. Connell did extensive research on whether there are any connections between biology and gender role and concluded that there were none. However, there continues to be debate on the subject. Simon Baron-Cohen, a Cambridge Univ. professor of psychology and psychiatry, claims that "the female brain is predominantly hard-wired for empathy, while the male brain is predominantly hard-wired for understanding and building systems."
Dr. Sandra Lipsitz Bem is a psychologist who developed the gender schema theory to explain how individuals come to use gender as an organizing category in all aspects of their life. It is based on the combination of aspects of the social learning theory and the cognitive-development theory of sex role acquisition. In 1971, she created the Bem Sex Role Inventory to measure how well an individual fits into a traditional gender role, characterizing that person's personality as masculine, feminine, androgynous, or undifferentiated. She believed that through gender-schematic processing, a person spontaneously sorts attributes and behaviors into masculine and feminine categories. Therefore, individuals process information and regulate their behavior based on whatever definitions of femininity and masculinity their culture provides.
The current trend in Western societies toward men and women sharing similar occupations, responsibilities and jobs suggests that the sex one is born with does not directly determine one's abilities. While there are differences in average capabilities of various kinds between the sexes (e.g. physical strength), the capabilities of some members of one sex will fall within the range of capabilities needed for tasks conventionally assigned to the other sex.
In addition, research at the Yerkes National Primate Research Center has also shown that gender roles may be biological among primates. Yerkes researchers studied the interactions of 11 male and 23 female Rhesus monkeys with human toys, both wheeled and plush. The males played mostly with the wheeled toys while the females played with both types equally. Psychologist Kim Wallen has, however, warned against overinterpreting the results, as the color and size of the toys may also be factors in the monkeys' behavior.
Changing roles
A person's gender role is composed of several elements and can be expressed through clothing, behaviour, choice of work, personal relationships and other factors. These elements are not concrete and have evolved through time (for example women's trousers).
Traditionally, only feminine and masculine gender roles existed; however, over time many different acceptable male or female gender roles have emerged. Individuals may identify with a subculture or social group, which results in them having diverse gender roles. Historically, for example, eunuchs had a different gender role because their biology was changed.
Androgyny, a term denoting the display of both male and female behaviour, also exists. Many terms have been developed to portray sets of behaviors arising in this context. The masculine gender role in the West has become more malleable since the 1950s. One example is the "sensitive new age guy", which could be described as a traditional male gender role combined with a more typically "female" empathy and associated emotional responses. Another is the metrosexual, a male who adopts, or claims to be born with, similarly "female" grooming habits. Some have argued that such new roles are merely rebellions against tradition rather than distinct roles. However, traditions regarding male and female appearance have never been fixed, and men in other eras have been equally interested in their appearance. The popular conceptualization of homosexual men, which has become more accepted in recent decades, has traditionally been androgynous or effeminate, though in actuality homosexual men can also be masculine and even exhibit machismo characteristics. One could argue that since many homosexual men and women fall into one gender role or another, or are androgynous, gender roles are not strictly determined by a person's physical sex. Whether this phenomenon is due to social or biological reasons is debated. Many homosexual people find the traditional gender roles to be very restrictive, especially during childhood. Also, the phenomenon of intersex people, which has become more publicly acknowledged, has caused much debate on the subject of gender roles. Many intersex people identify with the opposite sex, while others are more androgynous. Some see this as a threat to traditional gender roles, while others see it as a sign that these roles are a social construct, and that a change in gender roles will be liberating.
According to sociology research, traditional feminine gender roles have become less relevant in Western society since industrialization. For example, the cliché that women do not pursue careers is obsolete in many Western societies. On the other hand, the media sometimes portrays women who adopt an extremely traditional role as a subculture. Women have taken on many roles, behaviors and fashions that were traditionally reserved for men, which may put pressure on many men to appear more masculine and thus confine them within an even smaller gender role, while other men react against this pressure. For example, men's fashions have become more restrictive than in other eras, while women's fashions have become broader. One consequence of social unrest during the Vietnam War era was that men began to let their hair grow to a length that had previously (within recent history) been considered appropriate only for women. Somewhat earlier, women had begun to cut their hair to lengths previously considered appropriate only to men.
Some famous people known for their androgynous appearances in the 20th century include Brett Anderson, Gladys Bentley, David Bowie, Pete Burns, Boy George, Norman Iceberg, k.d. lang, Annie Lennox, Jaye Davidson, Marilyn Manson, Freddie Mercury, Marlene Dietrich, Mylène Farmer, Gackt, Mana (musician), Michael Jackson, Grace Jones, Marc Bolan, Brian Molko, Julia Sweeney (as Pat), Genesis P-Orridge, Prince and Kristen McMenamy.
Ideas of appropriate behavior according to gender vary among cultures and eras, although some aspects receive more widespread attention than others. R.W. Connell in Men, Masculinities and Feminism claims:
- "There are cultures where it has been normal, not exceptional, for men to have homosexual relations. There have been periods in 'Western' history when the modern convention that men suppress displays of emotion did not apply at all, when men were demonstrative about their feeling for their friends. Mateship in the Australian outback last century is a case in point."
Other aspects, however, may differ markedly with time and place. In the Middle Ages, women were commonly associated with roles related to medicine and healing. Due to the rise of witch-hunts across Europe and the institutionalization of medicine, these roles eventually came to be monopolized by men. In the last few decades, however, these roles have become largely gender-neutral in Western society.
The elements of convention or tradition seem to play a dominant role in deciding which occupations fit which gender roles. In the United States, physicians have traditionally been men, and the few people who defied that expectation received a special job description: "woman doctor". Similarly, there were special terms like "male nurse", "woman lawyer", "lady barber", "male secretary," etc. But in the countries of the former Soviet Union, medical doctors are predominantly women. Also, throughout history, some jobs that were typically male or female have switched genders. For example, clerical work used to be considered a man's job, but when many women filled such positions during World War II, clerical work quickly became dominated by women. It became more feminized, and women workers became known as "typewriters" or "secretaries". Many other jobs have switched gender roles in this way, and many continue to evolve in terms of whether they are dominated by women or men.
The majority of Western society is often not tolerant of one gender fulfilling another's role; homosexual communities, by contrast, tend to be more tolerant of such behavior. For instance, someone with a masculine voice, a five o'clock shadow (or a fuller beard), an Adam's apple, etc., wearing a woman's dress and high heels, carrying a purse, etc., would most likely draw ridicule or other unfriendly attention in ordinary social contexts (the stage and screen excepted). Some in that society regard such a gender role for a man as unacceptable. The traditions of a particular culture often direct that certain career choices and lifestyles are appropriate to men, and other career choices and lifestyles are appropriate to women. In recent years, many people have strongly challenged the social forces that would prevent people from taking on non-traditional gender roles, such as women becoming fighter pilots or men becoming stay-at-home fathers. Men who defy or fail to fulfill their expected gender role are often called effeminate. In modern Western societies, women who fail to fulfill their expected gender roles frequently receive only minor criticism for doing so.
1 Corinthians 11:14–15 indicates that it is inappropriate for a man to wear his hair long, and good for a woman to wear her hair long.
Muhammad described the high status of mothers in both of the major hadith Collections (Bukhari and Muslim). One famous account is:
"A man asked the Prophet: 'Whom should I honor most?' The Prophet replied: 'Your mother'. 'And who comes next?' asked the man. The Prophet replied: 'Your mother'. 'And who comes next?' asked the man. The Prophet replied: 'Your mother!'. 'And who comes next?' asked the man. The Prophet replied: 'Your father'"
In Islam, the primary role played by women is to be mothers, and mothers are considered the most important part of the family. A well known Hadith of the prophet says: "I asked the Prophet who has the greatest right over a man, and he said, 'His mother'". While a woman is considered the most important member of the family, she is not the head of the family; her importance, in other words, does not depend on being the head of the family.
Hindu deities are more ambiguously gendered than deities of other world religions, such as Christianity and Islam. For example, Shiva, deity of creation, destruction, and fertility, can appear entirely or mostly male, predominately female, or ambiguously gendered. Despite this, females are more restricted than males in their access to sacred objects and spaces, such as the inner sanctums in Hindu temples. This can be explained in part because women's bodily functions, such as menstruation, are often seen by both men and women as polluting and/or debilitating. Males are therefore more symbolically associated with divinity and higher morals and ethics than females are. This informs relations between females and males, and how the differences between them are understood.
However, in a religious cosmology like Hinduism, which prominently features female and androgynous deities, some gender transgression is allowed. For instance, in India, a group of people adorn themselves as women and are typically considered to be neither man nor woman, or man plus woman. This group is known as the hijras, and has a long tradition of performing in important rituals, such as the birth of sons and weddings. Despite this allowance for transgression, Hindu cultural traditions portray women in contradictory ways. On one hand, women’s fertility is given great value, and on the other, female sexuality is depicted as potentially dangerous and destructive.
In the USA, single men are greatly outnumbered by single women at a ratio of 100 single women to every 86 single men, though never-married men over age 15 outnumber women by a 5:4 ratio (33.9% to 27.3%) according to the 2006 US Census American Community Survey. This very much depends on age group, with 118 single men per 100 single women in their 20s, versus 33 single men to 100 single women over 65.
The numbers are different in other countries. For example, China has many more young men than young women, and this disparity is expected to increase. In regions with recent conflict such as Chechnya, women may greatly outnumber men.
In a cross-cultural study by David Buss, men and women were asked to rank certain traits in order of importance in a long-term partner. Both men and women ranked "kindness" and "intelligence" as the two most important factors. Men valued beauty and youth more highly than women, while women valued financial and social status more highly than men.
Masculine and feminine cultures and individuals generally differ in how they communicate with others. For example, feminine people tend to self-disclose more often than masculine people, and in more intimate details. Likewise, feminine people tend to communicate more affection, and with greater intimacy and confidence than masculine people. Generally speaking, feminine people communicate more, and prioritize communication more, than masculine people.
Traditionally, masculine people and feminine people communicate with people of their own gender in different ways. Masculine people form friendships with other masculine people based on common interests, while feminine people build friendships with other feminine people based on mutual support. However, both genders initiate opposite-gender friendships based on the same factors. These factors include proximity, acceptance, effort, communication, common interests, affection and novelty.
Context is very important when determining how we communicate with others. It is important to understand what script it is appropriate to use in each respective relationship. Specifically, understanding how affection is communicated in a given context is extremely important. For example, masculine people expect competition in their friendships. They avoid communicating weakness and vulnerability. They avoid communicating personal and emotional concerns. Masculine people tend to communicate affection by including their friends in activities and exchanging favors. Masculine people tend to communicate with each other shoulder-to-shoulder (e.g. watching sports on a television).
In contrast, feminine people do not mind communicating weakness and vulnerability. In fact, they seek out friendships more in these times. For this reason, feminine people often feel closer to their friends than masculine people do. Feminine people tend to value their friends for listening and communicating non-critically, communicating support, communicating feelings that enhance self-esteem, communicating validation, offering comfort and contributing to personal growth. Feminine people tend to communicate with each other face-to-face (e.g. meeting together to talk over lunch).
Communicating with a friend of the opposite gender is often difficult because of the fundamentally different scripts that masculine people and feminine people use in their friendships. Another challenge in these relationships is that masculine people associate physical contact with communicating sexual desire more than feminine people. Masculine people also desire sex in their opposite-gender relationships more than feminine people. This presents serious challenges in cross-gender friendship communication. In order to overcome these challenges, the two parties must communicate openly about the boundaries of the relationship.
Communication and gender cultures
A communication culture is a group of people with an existing set of norms regarding how they communicate with each other. These cultures can be categorized as masculine or feminine. Other communication cultures include African Americans, older people, Native Americans, gay men, lesbians, and people with disabilities. Gender cultures are primarily created and sustained by interaction with others. Through communication we learn what qualities and activities our culture prescribes to our sex.
While it is commonly believed that our sex is the root source of differences in how we relate and communicate to others, it is actually gender that plays the larger role. Whole cultures can be broken down into masculine and feminine, each differing in how they get along with others through different styles of communication. Julia T. Wood's studies explain that "communication produces and reproduces cultural definitions of masculinity and femininity." Masculine and feminine cultures differ dramatically in when, how and why they use communication.
Communication styles
- Men tend to talk more than women in public situations, but women tend to talk more than men at home.
- Women are more inclined to face each other and make eye contact when talking, while men are more likely to look away from each other.
- Men tend to jump from topic to topic, but women tend to talk at length about one topic.
- When listening, women make more noises such as “mm-hmm” and “uh-huh”, while men are more likely to listen silently.
- Women are inclined to express agreement and support, while men are more inclined to debate.
The studies also reported that in general both genders communicated in similar ways. Critics, including Suzette Haden Elgin, have suggested that Tannen's findings may apply more to women of certain specific cultural and economic groups than to women in general. Although it is widely believed that women speak far more words than men, this is actually not the case.
Julia T. Wood describes how "differences between gender cultures infuse communication." These differences begin in childhood. Maltz and Borker's research showed that the games children play contribute to socializing children into masculine and feminine cultures. For example, girls playing house promotes personal relationships, and playing house does not necessarily have fixed rules or objectives. Boys, however, tended to play more competitive team sports with different goals and strategies. These childhood differences lead women to operate from assumptions about communication, and to use rules for communication, that differ significantly from those endorsed by most men. Wood produced the following theories regarding gender communication:
- Misunderstandings stem from differing interaction styles
- Men and women have different ways of showing support, interest and caring
- Men and women often perceive the same message in different ways
- Women tend to see communication more as a way to connect and enhance the sense of closeness in the relationship
- Men see communication more as a way to accomplish objectives
- Women give more response cues and nonverbal cues to indicate interest and build a relationship
- Men use feedback to signal actual agreement and disagreement
- For women, "ums" "uh-huhs" and "yeses" simply mean they are showing interest and being responsive
- For men, these same responses indicate agreement or disagreement with what is being communicated
- For women, talking is the primary way to become closer to another person
- For men, shared goals and accomplishing tasks are the primary way to become close to another person
- Men are more likely to express caring by doing something concrete for or doing something together with another person
- Women can avoid being hurt by men by realizing how men communicate caring
- Men can avoid being hurt by women by realizing how women communicate caring
- Women who want to express caring to men can do so more effectively by doing something for them or doing something with them
- Men who want to express caring to women can do so more effectively by verbally communicating that they care
- Men emphasize independence and are therefore less likely to ask for help in accomplishing an objective
- Men are much less likely to ask for directions when they are lost than women
- Men desire to maintain autonomy and to not appear weak or incompetent
- Women develop identity within relationships more than men
- Women seek out and welcome relationships with others more than men
- Men tend to think that relationships jeopardize their independence
- For women, relationships are a constant source of interest, attention and communication
- For men, relationships are not as central
- The term "Talking about us" means very different things to men and women
- Men feel that there is no need to talk about a relationship that is going well
- Women feel that a relationship is going well as long as they are talking about it
- Women can avoid being hurt by realizing that men don't necessarily feel the need to talk about a relationship that is going well
- Men can help improve communication in a relationship by applying the rules of feminine communication
- Women can help improve communication in a relationship by applying the rules of masculine communication
- Just as Western communication rules wouldn't necessarily apply in an Asian culture, masculine rules wouldn't necessarily apply in a feminine culture, and vice versa.
Finally, Wood describes how different genders can communicate with one another and provides six suggestions for doing so.
- Individuals should suspend judgment. When people find themselves confused in a cross-gender conversation, they should resist the tendency to judge and instead explore what is happening and how they and their partner might better understand each other.
- Recognize the validity of different communication styles. The feminine tendency to emphasize relationships, feelings and responsiveness does not reflect an inability to adhere to masculine rules for competing, any more than the masculine stress on instrumental outcomes is a failure to follow feminine rules for sensitivity to others. Wood says that it is inappropriate to apply a single criterion - either masculine or feminine - to both genders' communication. Instead, people must realize that different goals, priorities and standards pertain to each.
- Provide translation cues. Following the previous suggestions helps individuals realize that men and women tend to learn different rules for interaction and that it makes sense to think about helping the other gender translate your communication. This is especially important because there is no reason why one gender should automatically understand the rules that are not part of his or her gender culture.
- Seek translation cues. Interactions can also be improved by seeking translation cues from others. Taking constructive approaches to interactions can help improve the opposite gender culture's reaction.
- Enlarge your own communication style. By studying other cultures' communication we learn not only about other cultures, but also about ourselves. Being open to learning and growing can enlarge one's own communication skills by incorporating aspects of communication emphasized in other cultures. According to Wood, individuals socialized into masculinity could learn a great deal from feminine culture about how to support friends. Likewise, feminine cultures could expand the ways they experience intimacy by appreciating the "closeness in doing" that is a masculine specialty.
- Wood reiterates again, as her sixth suggestion, that individuals should suspend judgment. This concept is incredibly important because judgment is such a part of Western culture that it is difficult not to evaluate and critique others and defend our own positions. While gender cultures are busy judging other gender cultures and defending themselves, they are making no headway in communicating effectively. So, suspending judgment is the first and last principle for effective cross-gender communication.
Gender stereotypes
Stereotypes create expectations regarding emotional expression and emotional reaction. Many studies find that emotional stereotypes and the display of emotions "correspond to actual gender differences in experiencing emotion and expression."
Stereotypes generally dictate by whom, how, and when it is socially acceptable to display an emotion. Reacting in a stereotype-consistent manner may result in social approval, while reacting in a stereotype-inconsistent manner may result in disapproval. What is socially acceptable also varies substantially over time and between local cultures and subcultures.
According to Niedenthal et al.:
Virginia Woolf, in the 1920s, made the point: "It is obvious that the values of women differ very often from the values which have been made by the other sex. Yet it is the masculine values that prevail" (A Room of One's Own, N.Y. 1929, p. 76). Sixty years later, psychologist Carol Gilligan was to take up the point, and use it to show that psychological tests of maturity have generally been based on masculine parameters, and so tended to show that women were less 'mature'. She countered this in her ground-breaking work, In a Different Voice, (Harvard University Press, 1982), holding that maturity in women is shown in terms of different, but equally important, human values.
Communication and sexual desire
Metts et al. explain that sexual desire is linked to emotions and communicative expression. Communication is central in expressing sexual desire and "complicated emotional states," and is also the "mechanism for negotiating the relationship implications of sexual activity and emotional meanings." Gender differences appear to exist in communicating sexual desire.
For example, masculine people are generally perceived to be more interested in sex than feminine people, and research suggests that masculine people are more likely than feminine people to express their sexual interest. This can be attributed to masculine people being less inhibited by social norms for expressing their desire, being more aware of their sexual desire, or succumbing to the expectations of their gender culture. When feminine people employ tactics to show their sexual desire, those tactics are typically more indirect.
Various studies show different communication strategies when a feminine person refuses a masculine person's sexual interest. Some research, like that of Murnen, shows that when feminine people offer refusals, the refusals are verbal and typically direct. When masculine people do not comply with this refusal, feminine people offer stronger and more direct refusals. However, research from Perper and Weis showed that rejection includes acts of avoidance, creating distractions, making excuses, departure, hinting, arguments to delay, etc. These differences in refusal communication techniques are just one example of the importance of communicative competence for both masculine and feminine gender cultures.
As long as a person's perceived physiological sex is consistent with that person's gender identity, the gender role of a person is so much a matter of course in a stable society that people rarely even think of it. Only in cases where an individual has a gender role that is inconsistent with his or her sex will the matter draw attention. Some people mix gender roles to form a personally comfortable androgynous combination or violate the scheme of gender roles completely, regardless of their physiological sex. People who are transgender have a gender identity or expression that differs from the sex which they were assigned at birth. In support of the rights of transgender people, the Preamble of The Yogyakarta Principles cites the idea, drawn from the Convention on the Elimination of All Forms of Discrimination Against Women, that "States must take measures to seek to eliminate prejudices and customs based on the idea of the inferiority or the superiority of one sex or on stereotyped roles for men and women."
For approximately the last 100 years, women have been fighting for the same rights as men (especially around the turn of the 19th to the 20th century with the struggle for women's suffrage, and in the 1960s with second-wave feminism and radical feminism) and were able to make changes to the traditionally accepted feminine gender role. However, most feminists today say there is still work to be done.
Numerous studies and statistics show that even though the situation for women has improved during the last century, discrimination is still widespread: women earn an average of 77 cents to every one dollar men earn ("The Shriver Report", 2009), occupy lower-ranking job positions than men, and do most of the housekeeping work. There are several reasons for the wage disparity. A recent (October 2009) report from the Center for American Progress, "The Shriver Report: A Woman's Nation Changes Everything" tells us that women now make up 48% of the US workforce and "mothers are breadwinners or co-breadwinners in a majority of families" (63.3%, see figure 2, page 19 of the Executive Summary of The Shriver Report).
A recent article in The New York Times indicated that gender roles are still prevalent in many upscale restaurants. A restaurant's decor and menu typically play into which gender frequents which restaurant. Whereas Cru, a restaurant in New York's Greenwich Village "decorated in clubby brown tones and distinguished by a wine list that lets high rollers rack up breathtaking bills," attracts more men than women, places like Mario Batali's Otto serve more women than men, because the restaurant has been "designed to be more approachable, with less swagger." Servers still often assume that the man at a table is the go-to person for receiving the check and making the wine decisions, but this assumption is applied with more caution than it once was, especially with groups of younger people. Restaurants that used to cater mainly to men or to women are now also trying to change their decor in the hope of attracting a more gender-balanced clientele.
Note that many people consider some or all of the following terms to have negative connotations.
- A male adopting (or who is perceived as adopting) a female gender role might be described as effeminate, foppish, or sissy. Even more pejorative terms include mollycoddled, milksop, sop, mamma's boy, namby-pamby, pansy, fru-fru, girlie-boy, girlie-man, and nancy boy.
- A female adopting (or who is perceived as adopting) a male role might be described as butch, a dyke, a tomboy, or as an amazon (See amazon feminism). More pejorative terms include battleaxe.
Sexual orientation
The demographics of sexual orientation in any population are difficult to establish with reasonable accuracy. However, some surveys suggest that a greater proportion of men than women report that they are exclusively homosexual, whereas more women than men report being bisexual.
Studies have suggested that heterosexual men are only aroused by images of women, whereas some women who claim to be heterosexual are aroused by images of both men and women. However, different methods are required to measure arousal for the anatomy of a man versus that of a woman.
Traditional gender roles include male attraction to females, and vice versa. Homosexual and bisexual people, among others, usually don't conform to these expectations. An active conflict over the cultural acceptability of non-heterosexuality rages worldwide. The belief or assumption that heterosexual relationships and acts are "normal" is described – largely by the opponents of this viewpoint – as heterosexism or in queer theory, heteronormativity. Gender identity and sexual orientation are two separate aspects of individual identity, although they are often mistakenly conflated in the media.
Perhaps it is an attempt to reconcile this conflict that leads to a common assumption that one same-sex partner assumes a pseudo-male gender role and the other assumes a pseudo-female role. For a gay male relationship, this might lead to the assumption that the "wife" handled domestic chores, was the receptive sexual partner during sex, adopted effeminate mannerisms, and perhaps even dressed in women's clothing. This assumption is flawed, as many homosexual couples tend to have more equal roles, and the effeminate behavior of some gay men is usually not adopted consciously, and is often more subtle. Feminine or masculine behaviors in some homosexual people might be a product of the socialization process, adopted unconsciously due to stronger identification with the opposite sex during development. The role of both this process and the role of biology is debated.
Cohabitating couples with same-sex partners are typically egalitarian when they assign domestic chores. Though sometimes these couples assign traditional female responsibilities to one partner and traditional male responsibilities to the other, generally same-sex domestic partners challenge traditional gender roles in their division of household responsibilities, and gender roles within homosexual relationships are flexible. For instance, cleaning and cooking, traditionally both female responsibilities, might be assigned to different people. Carrington (1999) observed the daily home lives of 52 gay and lesbian couples and found that the length of the work week and level of earning power substantially affected the assignment of housework, regardless of gender or sexuality.
Cross-dressing is often restricted to festive occasions, though people of all sexual orientations routinely engage in various types of cross-dressing either as a fashion statement or for entertainment. Distinctive styles of dress, however, are commonly seen in gay and lesbian circles. These fashions sometimes emulate the traditional styles of the opposite gender (For example, lesbians who wear t-shirts and boots instead of skirts and dresses, or gay men who wear clothing with traditionally feminine elements, including displays of jewelry or coloration), but others do not. Fashion choices also do not necessarily align with other elements of gender identity. Some fashion and behavioral elements in gay and lesbian culture are novel, and do not really correspond to any traditional gender roles, such as rainbow jewelry or the gay techno/dance music subculture. In addition to the stereotypically effeminate one, another significant gay male subculture is homomasculinity, emphasizing certain traditionally masculine or hypermasculine traits.
The term dyke, commonly used to mean lesbian, sometimes carries associations of a butch or masculine identity, and the variant bulldyke certainly does. Other gender-role-charged lesbian terms include lipstick lesbian, chapstick lesbian, and stone femme. "Butch," "femme," and novel elements are also seen in various lesbian subcultures.
External social pressures may lead some people to adopt a persona which is perceived as more appropriate for a heterosexual (for instance, in an intolerant work environment) or homosexual (for instance, in a same-sex dating environment), while maintaining a somewhat different identity in other, more private circumstances. The acceptance of new gender roles in Western societies, however, is rising. However, during childhood and adolescence, gender identities which differ from the norm are often the cause of ridicule and ostracism, which often results in psychological problems. Some are able to disguise their differences, but others are not. Even though much of society has become more tolerant, gender roles are still very prevalent in the emotionally charged world of children and teenagers, which makes life very difficult for those who differ from the established norms.
The role of ideology in enculturation
High levels of agreement on the characteristics that different cultures ascribe to males and females reflect a consensus in gender role ideology. The Netherlands, Germany, Finland, England and Italy are among the most egalitarian modern societies concerning gender roles, whereas the most traditional roles are found in Nigeria, Pakistan, India, Japan, and Malaysia. Men and women cross-culturally rate the ideal self as more masculine than their actual self. Women in nomadic, high-food-accumulator cultures are more likely to have their work honored and respected than those in sedentary, agricultural societies. US females are more conforming to others than are males. Men use more rational-appearing aggression than women, while women use more social manipulation than men do. This is related to, but not solely determined by, age and hormones, though some researchers suggest that women are not necessarily less aggressive than men but tend to show their aggression in more subtle and less overt ways (Bjorkqvist et al. 1994, Hines and Saudino 2003). Male aggression may be a "gender marking" issue, a way of breaking away from the instruction of the mother during adolescence. Native American gender roles depend on the cultural history of the tribe.
Criminal justice
A number of studies conducted since the mid-90s have found a direct correlation between a female criminal's ability to conform to gender role stereotypes and the severity of her sentencing, particularly in cases of murder committed in self-defense.
In prison
The following tendencies have been observed in U.S. prisons, not internationally. Gender roles in male prisons go further than the "Don't drop the soap" joke. Some prisoners, either by choice or by force, take on strict 'female roles' according to prison guidelines. For instance, a 'female' in prison is seen as timid, submissive, passive, and a means of sexual pleasure. When entering the prison environment, some inmates "turn out" of their own free will, meaning they actively pursue the 'female role' in prison to gain some form of social power and/or prestige. Other, less fortunate inmates are forced to partake in 'female role' activities through coercion, the most common means being physical abuse. The inmates who are forced to "turn out" are commonly referred to as "punks". Other terms used to describe 'female' inmates are "girls", "kids", and "gumps". Some of the labels may be used as a means of describing one's ascribed status. For example, a "kid" is one who is usually dominated by their owner, or "daddy". The "daddy" is usually one with high social status and prestige within the prison (e.g. a gang leader). The "female" gender role is constructed as the mirror image of what the inmates perceive as male. For instance, inmates view men as having strength, power, prestige, and an unyielding personality. However, the inmates do not refer to the female guards, who have power and prestige over the inmates, as males; the female guards are commonly referred to as "dykes", "ditch lickers", and lesbians. These roles are also assumed in female prisons.
Women who enter prison society often voluntarily enter into lesbian relationships as a means of protection from gangs or stronger females. In doing so, they take on the submissive role to a dominant female in exchange for that dominant female keeping them safe. Those who do not enter voluntarily into such relationships might at one time or another be gang-raped as an initiation into that circle, and sometimes they will be referred to as "sheep", meaning anyone can have them. It is to avoid that status that most female inmates choose a mate, or allow themselves to be chosen as a mate, which limits them to a minimal number of partners during their incarceration rather than a large number. So, in a sense, an inmate takes on a "female role" in the prison system either by choice or by yielding to excessive coercion, and it is that yielding that marks once-male inmates as "females" and identifies the stronger females in a female prison system as "males".
See also
Content from Wikipedia was used in the development of this page. The Wikipedia version is Gender role. Special thanks to participants of Wikipedia's WikiProject LGBT studies.
- "What do we mean by "sex" and "gender"?". World Health Organization. Retrieved 2009-09-29.
- Gay and Lesbian Alliance Against Defamation. "Transgender Glossary of Terms", GLAAD Media Reference Guide, 8th Edition. GLAAD, USA, May 2010. Retrieved 2011-11-20.
- (Maccoby, E.E., Sex differences in intellectual functioning, 1966.; Bem S.L., 1975)
- Graham, Sharyn (2001), Sulawesi's fifth gender, Inside Indonesia, April–June 2001.
- Roscoe, Will (2000). Changing Ones: Third and Fourth Genders in Native North America. Palgrave Macmillan (June 17, 2000) ISBN 0-312-22479-6
See also: Trumbach, Randolph (1994). London’s Sapphists: From Three Sexes to Four Genders in the Making of Modern Culture. In Third Sex, Third Gender: Beyond Sexual Dimorphism in Culture and History, edited by Gilbert Herdt, 111-36. New York: Zone (MIT). ISBN 978-0-942299-82-3
- Eagly, A. H., Beall, A., & Sternberg, R. S. (Eds.). (2004). The psychology of gender (2nd ed.). New York: Guilford Press. ISBN 978-1593852443.
- Carroll, J. L. (2013). Sexuality now: Embracing diversity: Gender role theory. Belmont, CA: Wadsworth. p. 93
- Deutsch, F. M. (2007). Undoing gender. Gender and Society, 21, 106-127.
- Cahill, S. E. (1986). Childhood socialization as recruitment process: Some lessons from the study of gender development. In P. Adler and P. Adler (Eds.), Sociological Studies of Child Development. Greenwich, CT: JAI Press.
- Fenstermaker, S. & West, C. (2002). Doing Gender, Doing Difference: Inequality, Power, and Institutional Change. New York, NY: Routledge. p. 8
- Butler, J. (1990). Gender Trouble: Feminism and the Subversion of Identity. New York: Routledge.
- Franco-German TV Station ARTE, Karambolage, August 2004.
- Brockhaus: Enzyklopädie der Psychologie, 2001.
- Connell, Robert William: Gender and Power, Cambridge: University Press 1987.
- Bem, S. L. (1981). Gender schema theory: A cognitive account of sex typing. Psychological Review, 88, 354–364.
- Barash, David P.; Lipton, Judith Eve (2002). Gender Gap: The Biology of Male-Female Differences. Transaction Publishers.
- Yerkes Researchers Find Sex Differences in Monkey Toy Preferences Similar to Humans
- "Male monkeys prefer boys' toys". Newscientist.com. doi:10.1016/j.yhbeh.2008.03.008. Retrieved 2010-04-17.
- Ehrenreich, Barbara; Deirdre English (2010). Witches, Midwives and Nurses: A History of Women Healers (2nd ed.). The Feminist Press. pp. 44–87. ISBN 0-912670-13-4.
- Boulis, Ann K.; Jacobs, Jerry A. (2010). The changing face of medicine: women doctors and the evolution of health care in America. Ithaca, N.Y.: ILR. ISBN 0-8014-7662-3. "Encouraging one's daughter to pursue a career in medicine is no longer an unusual idea… Americans are now more likely to report that they feel comfortable recommending a career in medicine for a young woman than for a young man."
- Bullough, Vern L.; Bonnie Bullough (1993). Crossdressing, Sex, and Gender (1st ed.). University of Pennsylvania Press. p. 390. ISBN 978-0-8122-1431-4.
- Box Office Mojo, LLC (1998). "Cross Dressing / Gender Bending Movies". Box Office Mojo, LLC. Retrieved 2006-11-08.
- The Human Rights Campaign (2004). "Transgender Basics". The Human Rights Campaign. Archived from the original on November 9, 2006. Retrieved 2006-11-08.
- Peletz, Michael Gates. Gender, Sexuality, and Body Politics in Modern Asia. Ann Arbor, MI: Association for Asian Studies, 2011. Print.
- 'Men hold the edge on gender gap odds', Oakland Tribune, October 21, 2003
- Facts for features: Valentine’s Day U.S. Census Bureau Report February 7, 2006
- '40m Bachelors And No Women' The Guardian March 9, 2004
- 'Polygamy Proposal for Chechen Men' BBC January 13, 2006
- Wood, J. T. (1998). Gender, Communication, and Culture. In Samovar, L. A., & Porter, R. E., Intercultural communication: A reader. Stamford, CT: Wadsworth.
- Tannen, Deborah (1990) Sex, Lies and Conversation; Why Is It So Hard for Men and Women to Talk to Each Other? The Washington Post, June 24, 1990
- Maltz, D., & Borker, R. (1982). A cultural approach to male-female miscommunication. In J. Gumperz (Ed.), Language and social identity (pp. 196-216). Cambridge, UK: Cambridge University Press.
- Metts, S., Sprecher, S., & Regan, P. C. (1998). Communication and sexual desire. In P. A. Andersen & L. K. Guerrero (Eds.) Handbook of communication and emotion. (pp. 354-377). San Diego: Academic Press.
- Murnen, S. K., Perot, A., & Byrne, D. (1989). Coping with unwanted sexual activity: Normative responses, situational determinants, and individual differences. Journal of Sex Research, 26(1), 85–106.
- Perper, T., & Weis, D. L. (1987). Proceptive and rejective strategies of U.S. and Canadian college women. The Journal of Sex Research, 23, 455-480.
- Kiger; Riley, Pamela J. (July 1, 1996). "Gender differences in perceptions of household labor". The Journal of Psychology. Retrieved 2009-10-23.
- Maria Shriver and the Center for American Progress (October 19, 2009). "The Shriver Report: A Woman's Nation Changes Everything". Center for American Progress. Retrieved 2009-10-23.
- The New York Times."Old Gender Roles With Your Dinner?" Oct. 8, 2008.
- Statistics Canada, Canadian Community Health Survey, Cycle 2.1. off-site links: Main survey page.
- Pas de Deux of Sexuality Is Written in the Genes
- Dwyer, D. (2000). Interpersonal Relationships [e-book] (2nd ed.). Routledge. p. 104. ISBN 0-203-01971-7.
- Cherlin, Andrew (2010). Public and Private Families, an introduction. McGraw-Hill Companies, Inc. p. 234.
- Crook, Robert (2011). Our Sexuality. Wadsworth Cengage Learning. p. 271.
- Cherlin, Andrew (2010). Public and Private Families, an Introduction. McGraw-Hill Companies, Inc. p. 234.
- According to John Money, in the case of androgen-induced transsexual status, "The clitoris becomes hypertrophied so as to become a penile clitoris with incomplete fusion and a urogenital sinus, or, if fusion is complete, a penis with urethra and an empty scrotum" (Gay, Straight, and In-Between, p. 31). At ovarian puberty, "menstruation through the penis" begins (op. cit., p. 32). In the case of the adrenogenital syndrome, hormonal treatment could bring about "breast growth and menstruation through the penis" (op. cit., p. 34). In one case an individual was born with a fully formed penis and empty scrotum. At the age of puberty that person's own physician provided treatment with cortisol. "His breasts developed and heralded the approach of first menstruation, through the penis".
- Williams, J.E., & Best, D.L. (1990). Sex and psyche: Gender and self viewed cross-culturally. Newbury Park, CA: Sage
- Williams, J.E., & Best, D.L. (1990) Measuring sex stereotypes: A multination study. Newbury Park, CA: Sage
- van Leeuwen, M. (1978). A cross-cultural examination of psychological differentiation in males and females. International Journal Of Psychology, 13(2), 87.
- Björkqvist, K., Österman, K., & Lagerspetz, K. M. J. (1993). Sex differences in covert aggression among adults. Aggressive Behavior, 19.
- C. André Christie-Mizell. The Effects of Traditional Family and Gender Ideology on Earnings: Race and Gender Differences. Journal of Family and Economic Issues 2006. Volume 27, Number 1 / April, 2006.
- Chan, W. (2001). Women, Murder and Justice. Hampshire: Palgrave.
- Hart, L. (1994). Fatal Women: Lesbian Sexuality and the Mark of Aggression. Princeton: Princeton University Press.
- Ballinger, A. (1996.) The Guilt of the Innocent and the Innocence of the Guilty: The Cases of Marie Fahmy and Ruth Ellis. In Wight, S. & Myers, A. (Eds.) No Angels: Women Who Commit Violence. London: Pandora.
- Filetti, J. S. (2001). From Lizzie Borden to Lorena Bobbitt: Violent Women and Gendered Justice. Journal of American Culture, Vol.35, No. 3, pp.471–484.
- John M. Coggeshall: The Best of Anthropology Today: ‘Ladies’ Behind Bars: A Liminal Gender as Cultural Mirror
- International Foundation for Gender Education
- Gender PAC.
- Career advancement for professional women returners to the workplace
- Men and Masculinity Research Center (MMRC), seeks to give people (especially men) across the world a chance to contribute their perspective on topics relevant to men (e.g., masculinity, combat sports, fathering, health, and sexuality) by participating in Internet-based psychological research.
- The Society for the Psychological Study of Men and Masculinity (Division 51 of the American Psychological Association): SPSMM advances knowledge in the psychology of men through research, education, training, public policy, and improved clinical practice.
- Gender Stereotypes - Changes in People's Thoughts, A report based on a survey on roles of men and women. | http://www.wikiqueer.org/w/Gender_role | 13 |
Broadcast: Monday 02 October 2006, 08:00 PM
From the foundation and fall of Mandalay through to the present day.
A Brief History of Burma
Burma has been home to many different ethnic groups for over four thousand years. The country was unified on three occasions: under the Pagan dynasty, from the 11th to the 13th century; the Toungoo dynasty in the 16th; and the Konbaung dynasty, founded at the end of the 18th century.
The Konbaung dynasty founded Mandalay, the last capital of the Burmese kings, and extended Burmese control as far as Assam in the west and north into Thailand. But this was the age of European colonisation, and the Burmese soon became embroiled in conflict with British colonial forces in India.
The first war with the British, in 1824, concluded with the surrender of the provinces on the Indian frontier; after a second war in 1852, the whole of Lower Burma was lost. Mandalay fell in 1885. After this third defeat, the Burmese King Thibaw was carried off to captivity in India.
Following the fall of Mandalay, it was inevitable that Burma would soon be under British rule. In 1886 the country began to be administered as a province of British India.
Throughout their Empire the British used a policy called 'divide and rule' where they played upon ethnic differences to establish their authority. This policy was applied rigorously in Burma. More than a million Indian and Chinese migrants were brought in to run the country's affairs and thousands of Indian troops were used to crush Burmese resistance. In addition, hill tribes which had no strong Burmese affiliation, such as the Karen in the south-east, were recruited into ethnic regiments of the colonial army.
A two-tier administration was established. Ministerial Burma was the central area dominated by the Burman majority; the Frontier Areas were where the ethnic minorities lived. Economic development, largely in rice and timber production, was concentrated in the Ministerial area. The Frontier Areas were left largely undisturbed under their traditional rulers, but suffered economic neglect.
The British 'divide and rule' policy left a legacy of problems for Burma when it regained independence.
It was not surprising that the first serious manifestation of nationalism was a student strike at Rangoon University in 1920.
Burma owed its highly developed pre-colonial education system to the Buddhist tradition. Every Burmese boy was sent to the monastery to learn to read and write. Girls' education was also encouraged. Consequently, Burma had one of the highest literacy rates in the world.
Under British rule, young Burmans took advantage of colonial education and read widely, absorbing foreign influences – everyone from Marx to Nehru. As the Burmese monarchy had been abolished, the country's students were open to other forms of social organisation. The unique ideological mix of Marxism and Buddhism proved an inspiring and seductive combination whose popularity spread rapidly amongst the educated urban elite.
In 1930, poor farmers, inspired by peasant leader Saya San, mounted a violent rebellion in south Burma. Impoverished by high taxation and the collapse of the rice market, they resented the growing economic power of the Indian community. Despite the British employing Karen forces to brutally crush the revolt, the uprising demonstrated that nationalism was not just the preserve of the students. The uprising gave birth to a new militant nationalist group, Dohbama Asiayone (We Burmese Association), which was openly hostile towards Indian and Chinese migrants and the British.
By the mid-1930s, the Rangoon University Students' Union had become solidly nationalist. In 1936, the attempt to expel two student leaders sparked a university strike with such widespread public support that the authorities were forced to back down. The leaders in question were U Nu, who was to become the first prime minister of independent Burma, and Aung San, a young man of 21 who went on to negotiate Burmese independence.
The success of the 1936 strike instilled confidence and within two years there was insurrection throughout the country. There were workers' strikes, student protests, peasant marches, as well as inter-communal violence.
Aung San became president of Dohbama. In 1939, he helped found and became general secretary of the Burmese Communist Party. As it did across the world, Hitler's invasion of Poland was to have enormous repercussions for Burma.
The war provided a unique opportunity for nationalists in European colonies throughout the world. Aung San remarked that 'colonialism's difficulty was freedom's opportunity'. He united the factions in Burma's independence struggle into a Freedom Bloc. He even attempted to contact the Chinese communists to obtain military training.
He never found the Chinese communists. Instead, the Japanese found him. They welcomed the chance to ally themselves with Burmese nationalists who could lend legitimacy to their own intended occupation. This marked a crucial increase in Burmese nationalism's political capital which the movement was only too happy to use. During 1941, the leaders of various nationalist groups (the 'Thirty Comrades' as they became known) travelled to Tokyo for military training. They returned with the invading Japanese army in December. Recruits flocked to join the newly-established Burma Independence Army (BIA) and, in March 1942, the BIA entered Rangoon, close behind the Japanese.
Although the Burmese nationalists allied themselves with the Japanese, most ethnic minorities remained loyal to the British. The British, and many ethnic Indians, retreated to the hills, where they fought alongside the Karen, Kachin and Chin. The nationalists particularly disliked the Karen, whom they regarded as collaborators. The BIA took revenge for what they believed to be traitorous behaviour with reprisal killings and massacres of Karen civilians. These wartime events created lasting suspicion and ethnic hostility, which was one of the principal reasons the Karen took up arms against the state after Burmese independence.
In August 1942, the Japanese established a puppet administration in Burma. A year later, they granted Burma 'independence'. But, in reality, Burma had merely exchanged one occupying power for another which, it can be argued, had less respect for the Burmese than its predecessor.
The Japanese appointed Aung San as War Minister in the new government. At the same time, they reorganised the BIA and renamed it the Burma National Army (BNA) in an attempt to render it ineffective.
Many nationalists understood the Japanese aims and went underground to fight for the communist resistance.
In 1944, Aung San and various nationalist groups secretly formed the Anti-Fascist Organisation (AFO), a united front to co-ordinate action against the Japanese. Burmese army officers made contact with the British to plan resistance and, in March 1945, as British forces under General Slim swept back into central Burma, the BNA attacked the Japanese army.
Soon the British and Burmese forces were fighting together and, in June 1945, the two armies marched side by side in a victory parade through Rangoon. The marriage of convenience was always unlikely to have a long honeymoon.
With the war over, the British civil service rather optimistically wanted to reimpose the old colonial system. Burma's political landscape was unrecognisable from ten years previously and this was never going to be possible. Aung San was now an established figure and the momentum of his nationalist movement was irresistible at the end of the war. It had become a well-organised political and military force.
Mountbatten saw that the Burmese National Army had to be recognised, and he made it part of the regular army. However, Aung San kept back nearly half the old BNA force to form a paramilitary nationalist army. Under Aung San's leadership, most left-wing nationalist parties united, not for the first time, and formed the Anti-Fascist People's Freedom League (AFPFL), an expanded AFO. Despite internal squabbles, in particular with the Communist Party, the formation of the AFPFL gave additional impetus to the independence movement.
In September 1946, Aung San and several AFPFL leaders were allowed to form what was in essence a provisional government. Five months on in January 1947, after further talks in London, he signed an agreement with British Prime Minister Attlee conferring full independence within a year. In the meantime, a constitution was to be drafted and the ethnic minorities could decide for themselves whether they wished to join the Burmese state. Aung San met with the ethnic leaders, guaranteed them equality, and succeeded in winning most of them over. When the time came to elect a Constituent Assembly only the Karen (and the Communist Party, which believed in armed revolution) refused to participate.
The new federal constitution contained important anomalies. Some ethnic groups had no state; other states were poorly demarcated. In an ill-crafted deal, it was agreed that the Karenni and the Shan states would be permitted to secede from the Union after 10 years.
In July 1947, Aung San and six leading members of the AFPFL pre-independence cabinet, veterans of the independence struggle to a man, were gunned down at a cabinet meeting. The loss of these experienced politicians on the eve of independence was a great tragedy for Burma. Indeed, the consequences of that fatal day still reverberate about the country. Despite Aung San's death, Burma became independent as planned on January 4 1948. AFPFL vice-president and former student leader U Nu became the first Prime Minister. But Aung San was sorely missed: by the age of 32, he had come to be revered as a unifier, as a politician who could realise the aspirations of the people and as a leader who had both the support of the army and the trust of many of the ethnic minorities. The assassins had been engaged by U Saw, the leading right-wing contender for the presidency. U Saw was executed for the murders. This though brought scant consolation.
Within three months, the newly independent state of Burma was plunged into civil war. The Communist Party, which had been opposing the terms of independence throughout, acted on its belief in armed struggle and began to attack the government and its forces in March 1948.
Army regiments which had been infiltrated by the communists mutinied. In the summer, Karen forces, whose antipathy towards the nature of independence still remained despite Aung San's conciliatory additions to the constitution, defected and attacked government positions. Aung San's carefully constructed army collapsed with a speed that betrayed the delicacy of the ethnic relations within the force. Other ethnic groups allied themselves with the Karen rebels and took up arms against the government.
By 1949, government control was confined to the area around Rangoon. However, the army was re-organised into a revitalised political force, and its belated support rescued the nascent democracy.
The rebels had become beset by factional infighting which allowed the army to re-establish authority. The remaining Karen commanders in the army were sacked and replaced by Burmans, and Ne Win became the new Supreme Commander of the Armed Forces. Born Shu Maung, he adopted the name Ne Win, meaning 'brilliant as the sun', during the Second World War. The name and the man were to cast a shadow across 20th Century Burmese history.
In hindsight, the assistance Ne Win gave democracy is ironic. His motivation is unlikely to have been anything other than self-promotion, but, for whatever reasons, he led the army in delivering the country back into the control of the government. It was the rebels' disunity which brought their defeat.
One of the 'Thirty Comrades' and a member of a minority faction, the Burmese Socialist Party, Ne Win had often quarrelled with Aung San. Despite Aung San's opposition, the Japanese appointed Ne Win commander of the BNA in 1943. After the war, he had remained in the army, as commander of the 4th Burmese Rifles, and built up a personal power base while Aung San negotiated independence.
Now Ne Win used the civil war to consolidate his own political power and the Burmese Socialist Party emerged as the strongest force in the AFPFL coalition.
Throughout the 1950s, U Nu attempted to run the country as a socialist democracy, despite enormous odds. However, the country had never truly recovered from the economic devastation of the Second World War. With the civil war and continued low-level insurgency adding to the destabilising of the already fractured society, the ethnic minorities were alienated and increasingly regarded the Burmese forces as an occupying army. The AFPFL after all was not a party at all but a coalition, often riven by dissent even when it shared the common goal of independence.
All of these factors contributed to creating a country that was impossible to govern decisively. In a continuing state of war, there was no reason for a country newly acquainted with democracy to associate it with anything other than economic, social and political upheaval. As the army grew stronger and increasingly demanded a political role in society, few could summon the energy, enthusiasm or conviction to voice opposition.
In the late 1950s, the AFPFL split. The army unsurprisingly supported Ne Win's BSP faction and, in 1958, in an honourable but perhaps naive effort to prevent chaos, U Nu handed over power to a Caretaker Government led by Ne Win.
While democratic rule was restored when U Nu won a decisive election victory in 1960, this was short-lived. He had allowed Ne Win to taste power and its appeal proved far stronger than any faint commitment Ne Win had to democracy.
On March 2, 1962, Ne Win seized power in a coup which inaugurated more than three decades of military rule.
The army soon consolidated its control with military efficiency. Ne Win and a small clique of senior army officers formed the Revolutionary Council and ruled by decree with the liberal democratic constitution being swept away.
Government and minority group leaders, including both U Nu and state President Sao Shwe Thaike, were imprisoned along with hundreds of political activists.
Students protesting against the coup at Rangoon University were simply mown down by the army – which then dynamited the Students' Union building. No mercy was shown to those opposed to Ne Win. Within the new administration, Ne Win had total control. He instituted his programme, The Burmese Way to Socialism, which was published as the manifesto of a new political party, the Burmese Socialist Programme Party (BSPP).
The manifesto borrowed elements from Marxism, Buddhism and National Socialism to create an ideological hotchpotch which served as the justification for an arbitrary, left-wing military totalitarianism. The Burmese army, or Tatmadaw as it was known, was exalted as the only institution which could hold together such an ethnically-diverse country.
The regime closed Burma's borders to the outside world, rejecting foreign aid, trade and investment in favour of a narrow-minded isolationism.
Ne Win's 'socialist' economic reforms – which included wholesale nationalisation, state control of production, rice procurement quotas and the abolition of the private sector – damaged Burma's vulnerable economy; production and distribution declined, shortages increased and inevitably a black market emerged. This particular brand of socialism was designed to increase central control rather than to benefit the people.
As the masses became increasingly poverty-stricken, those in government and in the army enriched themselves.
The army took over the role of the trading class, becoming Burma's largest commercial institution with interests in many key businesses such as banks, trading companies, construction, shipping, even newspapers.
It also controlled the only political party: 80% of the army were members of the BSPP and party membership was essential for any advancement in Burmese society. The army infused every aspect of Burmese society.
Once the army had centralised control in the Rangoon area, it launched a ruthless counter-insurgency campaign in the border areas. The main targets were a Karen alliance which held territory on the Thai border and the Burmese Communist Party, then backed by the Chinese, which had invaded the north-east region from bases in China.
During these campaigns in the border areas the army engaged in the human rights abuses – conscription of labour, forced relocations – which were to become its hallmark. The rebels could never remain united against the government and the threat of insurgency gradually diminished. Ne Win's power, if not absolute, was now established and fiercely imposed.
In 1974, Ne Win constitutionalised his dominance of Burma. A new constitution systematised central control over every aspect of life. Billed as a return to civilian rule, in reality the old leadership simply resigned from the army to lead the civilian government. It was no great surprise when, after one-party elections, Ne Win became President as well as Prime Minister.
This new government was no better at managing the economy. Within months, food shortages provoked riots. There was unrest in every quarter, with workers, students at Rangoon University, ethnic minorities and, improbably, the army expressing discontent with the corrupted economy. In 1976, young army officers even attempted a coup. The ineptitude and corruption of the government had created disaffection throughout the country. However, the army responded with its customary ferocity, shooting, arresting and torturing demonstrators until the unrest was crushed.
By the end of the decade, there was no effective resistance outside the border areas. The government felt secure enough to release many political prisoners and to allow others, such as deposed Prime Minister U Nu, to return from exile.
In 1981, Ne Win resigned as President. He claimed to be making way for younger leaders, but, as Chairman of the Party, he remained in effective control.
Throughout the 1980s, the economy continued to decline. Problems were exacerbated because nearly half of all government revenue was devoted to the army and intelligence service.
In 1987, Burma, seeking relief on its massive foreign debt, applied for Least Developed Nation status. A quarter of a century of military rule had reduced this once-prosperous country to one of the ten poorest nations in the world. The economic crisis was so severe that, in the hope of stimulating agricultural production, the government finally permitted a free market in foods.
However, just as the country geared up for free trade, in tragi-comic fashion, Ne Win declared the country's high denomination banknotes invalid, without warning or compensation! Three-quarters of the currency in circulation became worthless overnight. Based upon Ne Win's lucky number nine, new 45 and 90 kyat notes were issued.
The motive was probably more than simple superstition: it dealt a crushing blow to private traders. But since ordinary Burmese used mattresses rather than bank accounts to hoard their savings, the effect was devastating. In Rangoon, students rioted for the first time since 1976. Discontent spread throughout the country. People had finally had enough of Ne Win's arbitrary and inefficient rule.
Continuing reports of human rights violations in Burma led the United States to intensify sanctions in 1997, and the European Union followed suit in 2000. Suu Kyi was placed under house arrest again in September 2000 and remained under arrest until May 2002, when her travel restrictions outside of Rangoon were also lifted.
Reconciliation talks were held with the government, but these stalemated and Suu Kyi was once again taken into custody in May 2003 after an ambush on her motorcade and remains under house arrest once again.
In August 2003, General Khin Nyunt announced a seven-step 'roadmap to democracy,' which the government claims it is in the process of implementing. There is no timetable associated with the government's plan, nor any conditionality or independent mechanism for verifying that it is moving forward.
In February 2005, the government reconvened the National Convention, for the first time since 1993, in an attempt to rewrite the Constitution. However, major pro-democracy organisations and parties, including the National League for Democracy, were barred from participating, and the government selected smaller parties to participate. It was adjourned once again in January 2006.
In November 2005, the military junta started moving the government away from Yangon to an unnamed location near Pyinmana and Kyetpyay, the newly designated capital city. This public action followed a long-term unofficial policy of moving critical military and government infrastructure away from Yangon to avoid a repetition of the events of 1988. In March 2006, the capital was officially named Naypyidaw Myodaw.
| http://www.channel4.com/news/articles/dispatches/a+brief+history+of+burma/158170.html | 13 |
18 | The Spanish colonial empire lasted three centuries, a period nearly as long as that of the sway of imperial Rome over western Europe. During these ten generations the language, the religion, the culture, and the political institutions of Castile were transplanted over an area twenty times as large as that of the parent state. What Rome did for Spain, Spain in turn did for Spanish America. In surveying, therefore, the work of Spain in the New World, we must realize from the start that we are studying one of the great historical examples of the transmission of culture by the establishment of imperial domain, and not, as in the case of English America, by the growth of little settlements of immigrants acting on their own impulse.
The colonial systems of Spain and of England have often been compared, to the great disparagement of the work of Spain; but the comparison of unlike and even contrasted social processes is more misleading than instructive. If we seek in English history a counterpart to the Spanish colonial empire, we shall find it rather in India than in Massachusetts or Virginia. Even here qualifications are necessary, for America never sustained such enormous masses of people as are found in India; and, small on the whole as was Spanish migration to the New World, it was relatively much larger than the English migration to India. Nor as yet have the people of Hindustan absorbed so much of the culture of the ruling nation in its various aspects as did the Indians in the American possessions of Spain.
It will be nearer the truth if we conceive of Spanish America as an intermediate and complex product, approximating on the political side to British India, on the social side in some respects to Roman Africa, and in the West Indies to the English plantation colonies in Virginia and South Carolina. British India is a more extreme example of imperial rule than is presented by New Spain and Peru; there was a far less ethnic divergence between the Roman and the Gaul or Briton than between the Spaniard and the red men, and the absorption of Roman culture was more complete in the ancient than in the modern instance.
In the West Indies and southern colonies of the English the same conditions confronted both England and Spain, and here a comparison of their respective systems is instructive; but for a fair counterpart to the English colonies of the north Atlantic seaboard we look in vain in the Spanish world, for Spain, in the commercial interest of Peru, steadily neglected the opportunity to develop the La Plata River country, where, alone of all her empire, there has sprung up since the era of independence and the rise of steam transportation a community rivalling the Mississippi Valley, in its wealth from agriculture and grazing, in its attractiveness to European emigration, and in the rapidity of its growth.
Of the three general divisions of their empire — the imperial dependencies of Peru and Mexico, the plantation colonies of the islands, and the unutilized areas of La Plata — the Spaniards always regarded the first as the most important; and it was only when these slipped from their grasp that the resources of the West Indies were adequately developed. Hence in a survey of Spanish colonial institutions our study will be mainly directed, after a brief examination of the beginnings of the West Indies, to Mexico, Central America, and Peru.
The earliest outline of a distinctive colonial policy for the new discoveries was drawn up by Columbus shortly before his second voyage. In this paper he proposed that emigration should be allowed at first up to the number of two thousand households to Española; that three or four towns should be founded, with municipal governments similar to those in Castile; that gold hunting should be restricted to actual settlers in the towns; that there should be churches with parish priests or friars to conduct divine worship and convert the Indians; that no colonist should go off prospecting without a licence or without having given his oath to return to his town and render a faithful account of his findings; that all gold brought in should be smelted at once and stamped with the town mark; that one per cent be set apart for the support of the church; that the privilege of gold hunting be limited to certain seasons so that planting and other business would not be neglected; that there should be free opportunity to all to go on voyages of discovery; that one or two ports in Española be made the exclusive ports of entry, and that all ships from the island should report at Cadiz.1
In the following January, Columbus, further instructed by experience as to the actual difficulties of establishing a colony in a distant tropical island, supplemented these proposals with the recommendations which were summarized above.2 The most notable addition is the suggestion to ship to Spain captives taken from the cannibals so as to pay for the importations of cattle and provisions. Of all the productions of this new world the only ones immediately marketable in Spain were the precious metals and the inhabitants. These two documents reveal Columbus's ideas as to a colonial policy for Spain. They forecast several features of the system as subsequently developed, and establish his right to be regarded as the pioneer law-giver of the New World, a distinction which has been eclipsed by his failure or misfortunes as viceroy.
In the narrative of the second voyage of Columbus the beginnings of the history of the colony of Española were touched upon.3 It was there noted that after the suppression of the revolt of the natives in 1495 a system of tribute was imposed upon them. In commutation of this tribute, perhaps in pursuance of the suggestion of the cacique Guarionex,4 the labor of the Indians on the farms of the Spaniards was accepted, this being the manner in which they rendered services to their own caciques.5
Two years later, one of the conditions exacted by the followers of the Spanish insurgent Roldan, when they came to terms with the admiral, was to be granted citizenship and lands. In fulfilling this last stipulation Columbus allotted to each of them the cultivated lands of the Indians, apportioning to one ten thousand cassava plants or hillocks and to another twenty thousand. These allotments, repartimientos, or encomiendas, as they were subsequently called, carried with them the enforced labor of the Indians,6 and were the beginning of a system almost universally applied in Spanish America to make the colonies self-supporting.
The next advance in the development of colonial institutions was made under the administration of Ovando, who came out in 1502 to take the place of Bobadilla and upon whom fell the burden of establishing ordered life there. Ovando was a man of scrupulous integrity and unbending firmness, just to the Spaniards, but relentless in striking unexpected and terrible blows if convinced or suspicious of intended Indian revolt. Las Casas' affecting pictures of some instances of these terrorizing strokes have blackened Ovando's name, almost completely eclipsing his many admirable qualities as a governor, upon which Oviedo dwells with enthusiasm.
An examination of Ovando's instructions clearly reveals the ideas entertained at this date by Ferdinand and Isabella. Their first injunction was to provide for the kindly treatment of the Indians and the maintenance of peaceful relations between them and the settlers. The Indians were to pay tribute and were to help in the collection of gold, receiving wages for their labor. Emigration must be restricted to natives of Spain; no one was to sell arms to the natives, nor were Jews, Moors, or recent converts from Mohammedanism to be allowed to go thither. Negro slaves born in Christian lands could be taken to Española, but not others. Great care should be exercised not to dispose the Indians against Christianity.7
Ovando set sail with thirty-two ships and two thousand five hundred colonists and adventurers, the largest number in any one expedition in early American history. Among them was Las Casas, the historian and advocate of the Indians. The experiences of these colonists bring out into strong light the perplexing problem of the situation. The number of Spaniards in the colony before the arrival of this force was about three hundred.8 Many of them were survivors of the criminals taken over by Columbus on his third voyage. Bobadilla, in pursuance of his weak policy of conciliation, had allowed them to extend the system of compulsory labor by all Indians; and the indignant Las Casas records that one might see rabble who had been scourged and clipped of their ears in Castile lording it over the native chiefs.9 Most of the Spaniards had Indian concubines, and other Indians as household servants or as draughted laborers.10 The Spaniards who had relied upon mining were in poverty; the farmers were fairly prosperous, and directed their efforts to breeding swine and cultivating cassava and yams and sweet-potatoes.11
Such was the community now overrun with gold seekers and new settlers. The prospectors rushed off to the mines, but found there unexpected labor, "as gold did not grow on the trees." In a new climate, the failing supply of food quickly exhausted them, and they straggled back to the town stricken with fever. Here, without shelter, they died faster than the clergy could conduct funerals.12 More than a thousand perished thus and five hundred were disabled by sickness. The fate that impended over the American soldiers in Cuba in 1898 fell upon these new settlers without mitigation.
Ovando had been ordered to treat the Indians as free men and subjects of the king and queen, but he soon had to report that if left to themselves they would not work even for wages and withdrew from all association with the Spaniards, so that it was impossible to teach or convert them. To meet the first of these difficulties, the sovereigns instructed him, March, 1503, to establish the Indians in villages, to give them lands which they could not alienate, to place them under a protector, to provide a school-house in each village that the children might be taught reading, writing, and Christian doctrine, to prevent oppression by their chiefs, to suppress their native ceremonies, to make efforts to have the Indians marry their wives in due religious form, and to encourage the intermarriage of some Christians with the Indians, both men and women.13
To meet the difficulty of getting the Indians to work, a royal order was issued in December, 1503, that the Indians should be compelled to work on buildings, in collecting gold, and farming for such wages as the governor should determine. For such purposes the chiefs must furnish specified numbers of men, "as free men, however, and not servants."14 These two edicts fairly represent the colonial policy of the crown and its intentions to civilize the Indians. As time went on these two lines of effort were more and more evenly carried out; but at first attention was principally directed to making use of the labor of the Indians, and only incidentally to their systematic civilization.15
In pursuance of the royal order, Ovando allotted to one Spaniard fifty and to another one hundred Indians under their chiefs; other allotments, or repartimientos, were assigned to cultivate lands for the king. These assignments were accompanied with a patent reading, "To you, so-and-so, are given in trust ("se os encomiendan") under chief so-and-so, fifty or one hundred Indians, with the chief, for you to make use of them in your farms and mines, and you are to teach them the things of our holy Catholic faith."16 At first the term of service in the mines lasted six months and later eight months. As the mines were from thirty to two hundred and fifty miles distant this involved prolonged separations of husbands and wives, and upon the wives fell the entire burden of supporting the families. According to Las Casas this separation, the consequent overwork of both husbands and wives, and the general despair led to high infant mortality and a very great diminution of births. If the same conditions existed throughout the world the human race, he writes, would soon die out.17
The rapid melting away of the population of the West Indies during the first quarter of a century of the Spanish rule was the first appearance in modern times of a phenomenon of familiar occurrence in the later history of the contact of nature peoples with a ruling race.18 Through the impassioned descriptions of Las Casas, which were translated into the principal languages of Europe, it is the most familiar instance of the kind; and, as a consequence, it is generally believed that the Spaniards were cruel and destructive above all other colonists, in spite of the fact that in their main-land settlements the native stock still constitutes numerically a very numerous element in the population. That the wars of subjugation were very destructive of life is only too clear; that famine followed war to prolong its ravages is equally certain; that the average Spaniard recklessly and cruelly overworked the Indians there is no doubt.
Nevertheless, there were other and more subtle causes in operation. Diseases were imported by the whites, which were mitigated for them by some degree of acquired immunity, but which raged irresistibly through a population without that defence. Of these new diseases small-pox was one of the most destructive.19 In the epidemic of small-pox in 1518 the natives, Peter Martyr reports,20 died like sheep with the distemper. Small-pox appeared in Mexico at the beginning of the conquest. When Pamfilo de Narvaez was despatched to recall Cortés, a negro on one of his ships was stricken with the disease, which was soon communicated to the Indians and raged irresistibly, sweeping off in some provinces half the population.21 Mortality was greatly increased because in their ignorance they plunged into cold water when attacked. The disease seems to have been particularly fatal to women. Eleven years later came an epidemic of a disease called "sarampion," which carried off great numbers.22 At more or less long intervals the Indian populations were swept by a pestilence from which the whites were exempt. It was known in Mexico as the "matlazahuatl," and in 1545 and 1576 it caused an enormous mortality.23 Humboldt conjectured that possibly this might be the same as the pestilence which visited Massachusetts in 1618, sweeping off the vast majority of the Indian population.24 Jourdanet finds evidence of endemic typhus and pleuropneumonia in Mexico at the time of the conquest, but that yellow fever did not appear until the next century. Besides the famines consequent upon the conquest, those incident to a failure of the crops were a wide-reaching cause of depopulation from which Mexico on occasion suffered comparably to India in the nineteenth century.25
Just what the population of Española was when Columbus discovered the island there is no means of knowing, but there can be no doubt that the estimate of Las Casas that there were over three million people in the island is a wild exaggeration.26 Oscar Peschel, an experienced ethnologist and a critical historian, after weighing all the evidence, places the population of Española in 1492 at less than three hundred thousand and at over two hundred thousand. In 1508 the number of the natives was sixty thousand; in 1510, forty-six thousand; in 1512, twenty thousand; and in 1514, fourteen thousand.27
In 1548 Oviedo doubted whether five hundred natives of pure stock remained, and in 1570 only two villages of Indians were left. A similar fate befell all the islands. Accelerated as this extermination was by the cruelty and greed of the early Spanish colonists, the history of the native stock in the Sandwich Islands, which has been exempt from conquest and forced labor, indicates that it was perhaps inevitable, without the adjunct of ruthless exploitation. The same phenomenon appeared among the less numerous aborigines of our eastern states, where there was little enslavement of the Indians. But here there was no Las Casas, and the disappearance of the natives was regarded as providential.
Daniel Denton in 1670, in recording the rapid decrease of the Indian population of Long Island, quaintly observes: "It hath been generally observed that where the English come to settle, a divine hand makes way for them by removing or cutting off the Indians either by wars one with the other or by some raging mortal disease."28
The melancholy fate of these nature folk and the romantic incidents of the Spanish conquest have naturally obscured the more humdrum phases of their earlier colonial history, and have given rise to such erroneous assertions as the following: "Not the slightest thought or recognition was given during the first half-century of the invasion to any such enterprise as is suggested by the terms colonization, the occupancy of the soil for husbandry and domestication."29 How far from true such a sweeping statement is, appears from the equipment of Columbus's second voyage, from the offer of supplies for a year to all settlers in 1498,30 and from the provisions made by the sovereigns to promote colonization in connection with his third voyage, which have been summarized in an earlier chapter.
In addition to the arrangements there quoted, in order to promote colonization the king and queen exempted from the payment of duties necessary articles taken to the Indies; and granted a similar exemption upon articles of every sort imported from the Indies.31 Further, they ordered that there should be prepared a sort of public farm open to cultivation by Spaniards in the island, who should receive as a loan to start with fifty bushels of wheat and corn and as many couple of cows and mares and other beasts of burden.32 This loan was to be paid back at harvest with a tenth part of the crop; the rest the cultivators could retain for themselves or sell. In July of the same year, 1497, in response to petitions from actual and proposed settlers in Española for lands for cultivating grain, fruits, and sugar-cane, and for erecting sugar and grist mills, the king and queen authorized Columbus to allot lands free of charge to actual settlers, subject to the condition that they live there four years and that all the precious metals be reserved for the crown.33
Five years later Luis de Arriaga, a gentleman of Seville, proposed to take out to the island two hundred Biscayans, or more, with their wives, to be settled in four villages; and the sovereigns on their part offered free passage for these colonists, free land for cultivation, and exemption from taxes excepting tithes and first-fruits for five years. Large reservations of the sources of monopoly profits, such as mines, salt-pits, Indian trade, harbors, etc., were made for the crown; but the terms for farming were certainly liberal. Arriaga was unable to get together more than forty married people, and they soon petitioned for a reduction of the royalties payable on gold mined and for other concessions. These were granted, but the colony did not preserve its identity and soon merged in the mass.34
In 1501 the crown, to promote trade with the Indies, and especially exports from Castile, relieved this commerce entirely from the payment of duties.35 Still further, as early as 1503, Ovando was instructed to promote the cultivation of mulberry-trees that the silk culture might be developed.36
One of the most remarkable efforts of the Spanish government to promote the colonization of the New World by actual workers was that made in 1518 in response to Las Casas' representations of the evils of the compulsory labor of the Indians. Those that would go to Terra-Firma were offered free passage and their living on board ship, promised the attendance of physicians, and upon arrival at their destination lands and live-stock; for twenty years they were to be relieved of the alcabala, or tax on exchanges, and all taxes on their produce except the church tithes. Further premiums were offered of $200 for the first one who produced twelve pounds of silk; of $150 for the one who first gathered ten pounds of cloves, ginger, cinnamon, or other spices; of $100 for the first fifteen hundredweight of woad, and $65 for the first hundredweight of rice.37
A formal expression of contemporary opinion in Española as to the needs of the colony towards the end of Ovando's administration affords us an interesting picture of its general condition and of the spirit of the inhabitants, and of the defects in the government trade policy. Two proctors or representatives of the people presented a petition to King Ferdinand in 1508 in which they ask for assistance in building stone churches and additional endowments for their hospitals; for permission to engage in the local coasting trade; that all the natives of Spain be allowed to engage in trade with Española; that their imports of wine be not limited to that grown near Seville; that they may bring in Indians from the neighboring islands, which are of little use and not likely to be settled; that by this means the Indians could be more easily converted; for the devotion of the product of salt-mines to the building of public works; for the establishment of a higher court of appeals; for more live-stock; that no descendants of Jews, Moors, or of heretics, burned or reconciled, down to the fourth generation, be allowed to come to the island; that hogs be considered common property as they have multiplied so greatly and run wild; that the towns be ennobled and granted arms, likewise the island; that the artisans who come to the island may be compelled to stick to their trades and not be allowed, as they desire, to desert them and to secure an allotment of Indians; for the choice of sheriffs and notaries by election of the regidores, etc.38
That the Spanish authorities were not indifferent to the establishment of agricultural colonies in the West Indies is abundantly evident. That their success was not more striking was quite as much the result of the superior attractiveness of Mexico and Peru as of any defects in their policy. The early history of Española compares not unfavorably with the early years of Virginia. Had a California of 1849 been as accessible to the Virginia of 1620 as Mexico was to Española in 1520, Virginia might have suffered a similar eclipse.
1 Thacher, Columbus, III, 94‑113, also translated in Amer. Hist. Assoc., Report, 1894, pp452 ff.
4 Las Casas, Historia, II, 103.
5 Herrera, Historia General, dec. I, lib. III, chap. XIII.
6 Las Casas, Historia, I, 373. Las Casas draws no other distinction between "repartimiento" and "encomienda" than that noted in the text, that "encomienda" was the later term.
7 Herrera, Historia General, dec. I, lib. IV, chap. XII; Helps, Spanish Conquest, I, 127‑130.
8 Las Casas, Historia, III, 33.
9 Ibid., 3.
10 Ibid., 5.
11 Ibid., 35.
12 Las Casas, Historia, III, 36.
13 Fabié, Ensayo Historico, 42; Herrera, Historia General, dec. I, lib. V, chap. XII.
14 Las Casas, Historia, III, 65; Fabié, Ensayo Historico, 57, text in Docs. Ined. de Indias, XXXI, 209.
15 Las Casas, Historia, III, 70. See Van Middeldyk, History of Puerto Rico, 29, 45, for tables illustrating Indian allotments in that island.
16 Las Casas, Historia, III, 71.
17 Las Casas, Historia, III, 72.
18 Waitz, Introduction to Anthropology (London, 1863), 144‑167, amasses a great variety of evidence illustrating this decay of population. Cf. also Peschel, Races of Man, 152‑155; and G. Stanley Hall, Adolescence, II, 648‑748, on "Treatment of Adolescent Races."
19 On the small-pox, see Waitz, Introduction to Anthropology, 145.
20 Peter Martyr, De Rebus Oceanicis, dec. III, lib. VIII; Hakluyt, Voyages, V, 296.
21 Motolinia, Historia de los Indios de la Nueva España, in Col. de Docs. para la Hist. de Mexico, I, 15; Herrera, Historia General, dec. II, lib. X, chap. XVIII.
22 Motolinia, Historia, 15.
23 See Jourdanet, "Considérations Médicales sur la Campagne de Fernand Cortés," in his ed. of Bernal Diaz, 895.
24 Cf. extract from Johnson, "Wonder-working Providence," in Hart, American History Told by Contemporaries, I, 368; H. H. Bancroft, Mexico, III, 756.
25 Cf. Humboldt, New Spain, I, 121.
26 Las Casas, Historia, III, 101. The prevalent Spanish estimate was one million one hundred thousand, ibid.; Oviedo, Historia General, I, 71; Peter Martyr, De Rebus Oceanicis, III, dec. III, lib. VIII; Hakluyt, Voyages, V, 296.
27 Peschel, Zeitalter der Entdeckungen, 430; Oviedo, Historia General, I, 71; Lopez de Velasco, Geografia y Descripcion, 97.
28 Denton, New York (ed. 1902), 45.
29 G. E. Ellis, in Winsor, Narr. and Crit. Hist., II, 301.
30 Memorials of Columbus, 91; Navarrete, Viages, II, 167.
31 Fabié, Ensayo Historico, 32.
32 Memorials of Columbus, 74; Navarrete, Viages, II, 183.
33 Memorials of Columbus, 127‑129; Navarrete, Viages, II, 215.
34 Docs. Ined. de Indias, XXX, 526; Las Casas, Historia, III, 36‑38; Southey, History of the West Indies, 77.
35 Docs. Ined. de Indias, XXXI, 62 ff.; Fabié, Ensayo Historico, 40.
36 Herrera, Historia General, dec. I, lib. V, chap. XII; Southey, History of the West Indies, 91.
37 Col. de Docs. Ined. de Ultramar, IX (Docs. Leg., II), 77‑83; Fabié, Ensayo Historico, 163‑164.
38 Col. de Docs. Ined. de Ultramar, V (Docs. Leg., I), 125‑142.
| http://penelope.uchicago.edu/Thayer/E/Gazetteer/Places/America/United_States/_Topics/history/_Texts/BOUSIA/14*.html | 13 |
169 | What is Deflation and What Causes it to Occur?
Defining Inflation and Deflation
Webster's says, "Inflation is an increase in the volume of money and credit relative to available goods," and "Deflation is a contraction in the volume of money and credit relative to available goods." To understand inflation and deflation, we have to understand the terms money and credit.
Defining Money and Credit
Money is a socially accepted medium of exchange, value storage and final payment. A specified amount of that medium also serves as a unit of account.
According to its two financial definitions, credit may be summarized as a right to access money. Credit can be held by the owner of the money, in the form of a warehouse receipt for a money deposit, which today is a checking account at a bank. Credit can also be transferred by the owner or by the owner's custodial institution to a borrower in exchange for a fee or fees - called interest - as specified in a repayment contract called a bond, note, bill or just plain IOU, which is debt. In today's economy, most credit is lent, so people often use the terms "credit" and "debt" interchangeably, as money lent by one entity is simultaneously money borrowed by another.
Price Effects of Inflation and Deflation
When the volume of money and credit rises relative to the volume of goods available, the relative value of each unit of money falls, making prices for goods generally rise. When the volume of money and credit falls relative to the volume of goods available, the relative value of each unit of money rises, making prices of goods generally fall. Though many people find it difficult to do, the proper way to conceive of these changes is that the value of each unit of money is rising or falling, not the value of the goods.
The most common misunderstanding about inflation and deflation - echoed even by some renowned economists - is the idea that inflation is rising prices and deflation is falling prices. General price changes, though, are simply effects.
The price effects of inflation can occur in goods, which most people recognize as relating to inflation, or in investment assets, which people do not generally recognize as relating to inflation. The inflation of the 1970s induced dramatic price rises in gold, silver and commodities. The inflation of the 1980s and 1990s induced dramatic price rises in stock certificates and real estate. This difference in effect is due to differences in the social psychology that accompanies inflation and disinflation, respectively.
The price effects of deflation are simpler. They tend to occur across the board, in goods and investment assets simultaneously.
The Primary Precondition of Deflation
Deflation requires a precondition: a major societal buildup in the extension of credit (and its flip side, the assumption of debt). Austrian economists Ludwig von Mises and Friedrich Hayek warned of the consequences of credit expansion, as have a handful of other economists, who today are mostly ignored. Bank credit analyst and Elliott wave expert Hamilton Bolton, in a 1957 letter, summarized his observations this way:
In reading a history of major depressions in the U.S. from 1830 on, I was impressed with the following:
(a) All were set off by a deflation of excess credit. This was the one factor in common.
(b) Sometimes the excess-of-credit situation seemed to last years before the bubble broke.
(c) Some outside event, such as a major failure, brought the thing to a head, but the signs were visible many months, and in some cases years, in advance.
(d) None was ever quite like the last, so that the public was always fooled thereby.
(e) Some panics occurred under great government surpluses of revenue (1837, for instance) and some under great government deficits.
(f) Credit is credit, whether non-self-liquidating or self-liquidating.
(g) Deflation of non-self-liquidating credit usually produces the greater slumps.
Self-liquidating credit is a loan that is paid back, with interest, in a moderately short time from production. Production facilitated by the loan - for business start-up or expansion, for example - generates the financial return that makes repayment possible. The full transaction adds value to the economy.
Non-self-liquidating credit is a loan that is not tied to production and tends to stay in the system. When financial institutions lend for consumer purchases such as cars, boats or homes, or for speculations such as the purchase of stock certificates, no production effort is tied to the loan. Interest payments on such loans stress some other source of income. Contrary to nearly ubiquitous belief, such lending is almost always counter-productive; it adds costs to the economy, not value. If someone needs a cheap car to get to work, then a loan to buy it adds value to the economy; if someone wants a new SUV to consume, then a loan to buy it does not add value to the economy. Advocates claim that such loans "stimulate production," but they ignore the cost of the required debt service, which burdens production. They also ignore the subtle deterioration in the quality of spending choices due to the shift of buying power from people who have demonstrated a superior ability to invest or produce (creditors) to those who have demonstrated primarily a superior ability to consume (debtors). Near the end of a major expansion, few creditors expect default, which is why they lend freely to weak borrowers. Few borrowers expect their fortunes to change, which is why they borrow freely. Deflation involves a substantial amount of involuntary debt liquidation because almost no one expects deflation before it starts.
What Triggers the Change to Deflation?
A trend of credit expansion has two components: the general willingness to lend and borrow and the general ability of borrowers to pay interest and principal. These components depend respectively upon (1) the trend of people’s confidence, i.e., whether both creditors and debtors think that debtors will be able to pay, and (2) the trend of production, which makes it either easier or harder in actuality for debtors to pay. So as long as confidence and production increase, the supply of credit tends to expand. The expansion of credit ends when the desire or ability to sustain the trend can no longer be maintained. As confidence and production decrease, the supply of credit contracts.
The psychological aspect of deflation and depression cannot be overstated. When the social mood trend changes from optimism to pessimism, creditors, debtors, producers and consumers change their primary orientation from expansion to conservation. As creditors become more conservative, they slow their lending. As debtors and potential debtors become more conservative, they borrow less or not at all. As producers become more conservative, they reduce expansion plans. As consumers become more conservative, they save more and spend less. These behaviors reduce the "velocity" of money, i.e., the speed with which it circulates to make purchases, thus putting downside pressure on prices. These forces reverse the former trend.
The structural aspect of deflation and depression is also crucial. The ability of the financial system to sustain increasing levels of credit rests upon a vibrant economy. At some point, a rising debt level requires so much energy to sustain - in terms of meeting interest payments, monitoring credit ratings, chasing delinquent borrowers and writing off bad loans - that it slows overall economic performance. A high-debt situation becomes unsustainable when the rate of economic growth falls beneath the prevailing rate of interest on money owed and creditors refuse to underwrite the interest payments with more credit.
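The growth-versus-interest condition above can be made concrete with a small numerical sketch. The function and the 6%/3% figures below are hypothetical and not taken from the source; they simply show that when unpaid interest is rolled into new credit and income grows more slowly than the interest rate, the debt-to-income ratio climbs without limit.

```python
# A minimal sketch (hypothetical numbers) of the sustainability condition:
# if income grows more slowly than the interest rate on outstanding debt,
# and interest is financed with new credit, debt-to-income rises every year.

def debt_to_income_path(debt, income, interest_rate, growth_rate, years):
    """Roll debt and income forward and record the debt-to-income ratio."""
    path = []
    for _ in range(years):
        debt *= 1 + interest_rate    # unpaid interest is added to the debt
        income *= 1 + growth_rate    # the economy grows (or shrinks)
        path.append(round(debt / income, 3))
    return path

# 6% interest versus 3% growth: the ratio ratchets upward each year.
print(debt_to_income_path(debt=100.0, income=100.0,
                          interest_rate=0.06, growth_rate=0.03, years=5))
```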
When the burden becomes too great for the economy to support and the trend reverses, reductions in lending, spending and production cause debtors to earn less money with which to pay off their debts, so defaults rise. Default and fear of default exacerbate the new trend in psychology, which in turn causes creditors to reduce lending further. A downward "spiral" begins, feeding on pessimism just as the previous boom fed on optimism. The resulting cascade of debt liquidation is a deflationary crash. Debts are retired by paying them off, "restructuring" or default. In the first case, no value is lost; in the second, some value; in the third, all value. In desperately trying to raise cash to pay off loans, borrowers bring all kinds of assets to market, including stocks, bonds, commodities and real estate, causing their prices to plummet. The process ends only after the supply of credit falls to a level at which it is collateralized acceptably to the surviving creditors.
Who benefits from Deflation?
Obviously creditors benefit. They loaned money and are getting paid back with dollars that have a greater purchasing power. This scenario is distasteful to those with a "Robin Hood" mentality, i.e. steal from the rich and give to the poor.
But Deflation (falling prices) also benefits low-debt consumers and those on fixed incomes, because they receive a fixed number of dollars but can buy more with each dollar.
The periods in our history with the lowest inflation have also been when our Gross Domestic Product has grown the fastest in terms of "Real Dollars". (Real Dollars are measured after prices are adjusted for inflation or deflation).
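As a rough illustration of the "Real Dollars" adjustment mentioned above, nominal figures can be deflated by a price index. The index values and growth figures below are invented purely for illustration and are not drawn from the source.

```python
# A minimal sketch of converting nominal dollars into "real" (inflation-
# adjusted) dollars by deflating with a price index. Numbers are invented.

def to_real_dollars(nominal, price_index, base_index=100.0):
    """Express a nominal amount in base-year dollars."""
    return nominal * (base_index / price_index)

# Nominal GDP rises 4% while the price level rises 3%:
real_now = to_real_dollars(nominal=104.0, price_index=103.0)
real_base = to_real_dollars(nominal=100.0, price_index=100.0)
print(round(real_now / real_base - 1, 4))   # ~0.0097, i.e. roughly 1% real growth
```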
In addition to encouraging fiscal responsibility on the part of consumers, low but stable inflation (or even deflation) is also good for the long term economy, because it allows producers to know their costs. This predictability allows producers to generate reliable profits which will eventually result in a strong healthy economy.
Inflation is bad for the economy because economies built upon debt, which encourage consumers to go further into debt, eventually crumble under their own weight. As more and more consumers become overburdened by debt, they declare bankruptcy, introducing uncertainty for creditors and robbing them of their rightful income.
Somehow it is difficult to feel compassion for the "rich creditors", but everyone with a bank account is a creditor. How would you like it if someone who owed you money failed to pay you back? Or you were never sure if you would be able to take your money out of the bank? What would this uncertainty do? You would probably be less likely to put money in. Banks feel the same way: if the chances of being repaid decrease, they are less likely to make loans, and that decreases the health of the overall economy.
Rapidly falling or rising inflation is usually a sign of a suffering economy with high unemployment and a lack of spending power (i.e. recession/depression). But it is the change that is the problem, not the altitude (or lack of it).
The Historical Inflation Rates show that even when we have had price deflation (falling prices) the country has been prosperous if the reason for the falling prices is that goods are being produced so economically that prices can fall and producers can still make a profit. This generally occurs after major productivity enhancements like the invention of the assembly line or the completion of the transcontinental railroad.
Disinflationary pressures in the late 1990s and early 2000s were most likely the result of cheap productive capacity in China and other former communist countries coupled with the deflationary forces of the 9/11 attack and the stock market crash.
Why Deflationary Crashes and Depressions Go Together
A deflationary crash is characterized in part by a persistent, sustained, deep, general decline in people's desire and ability to lend and borrow. A depression is characterized in part by a persistent, sustained, deep, general decline in production. Since a decline in production reduces debtors' means to repay and service debt, a depression supports deflation. Since a decline in credit reduces new investment in economic activity, deflation supports depression. Because both credit and production support prices for investment assets, their prices fall in a deflationary depression. As asset prices fall, people lose wealth, which reduces their ability to offer credit, service debt and support production. This mix of forces is self-reinforcing.
The U.S. has experienced two major deflationary depressions, which lasted from 1835 to 1842 and from 1929 to 1932 respectively. Each one followed a period of substantial credit expansion. Credit expansion schemes have always ended in bust. The credit expansion scheme fostered by worldwide central banking is the greatest ever. The bust, however long it takes, will be commensurate. If my outlook is correct, the deflationary crash that lies ahead will be even bigger than the two largest such episodes of the past 200 years.
Financial Values Can Disappear
People seem to take for granted that financial values can be created endlessly seemingly out of nowhere and pile up to the moon. Turn the direction around and mention that financial values can disappear into nowhere, and they insist that it is not possible. "The money has to go somewhere...It just moves from stocks to bonds to money funds...It never goes away...For every buyer, there is a seller, so the money just changes hands." That is true of the money, just as it was all the way up, but it's not true of the values, which changed all the way up. Asset prices rise not because of "buying" per se, because indeed for every buyer, there is a seller. They rise because those transacting agree that their prices should be higher. All that everyone else - including those who own some of that asset and those who do not - need do is nothing. Conversely, for prices of assets to fall, it takes only one seller and one buyer who agree that the former value of an asset was too high. If no other bids are competing with that buyer's, then the value of the asset falls, and it falls for everyone who owns it. If a million other people own it, then their net worth goes down even though they did nothing. Two investors made it happen by transacting, and the rest of the investors made it happen by choosing not to disagree with their price. Financial values can disappear through a decrease in prices for any type of investment asset, including bonds, stocks and land.
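A small sketch can illustrate the repricing dynamic described above: one transaction at a lower price marks down the paper wealth of every holder who did nothing. The holdings and prices below are hypothetical.

```python
# A minimal sketch (hypothetical holdings and prices) of how one trade
# at a lower price reduces the paper wealth of every owner of the asset.

holders = {"A": 1_000, "B": 5_000, "C": 10_000}   # shares held by each owner

def total_paper_wealth(holdings, last_trade_price):
    """Everyone's holdings are valued at the price of the last trade."""
    return sum(shares * last_trade_price for shares in holdings.values())

print(total_paper_wealth(holders, 50.0))   # 800,000.0 with the last trade at $50

# One buyer and one seller agree on $30; every other holder does nothing.
print(total_paper_wealth(holders, 30.0))   # 480,000.0 -- 320,000 of "value" is gone
```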
Anyone who watches the stock or commodity markets closely has seen this phenomenon on a small scale many times. Whenever a market "gaps" up or down on an opening, it simply registers a new value on the first trade, which can be conducted by as few as two people. It did not take everyone's action to make it happen, just most people's inaction on the other side. In financial market "explosions" and panics, there are prices at which assets do not trade at all as they cascade from one trade to the next in great leaps.
A similar dynamic holds in the creation and destruction of credit. Let's suppose that a lender starts with a million dollars and the borrower starts with zero. Upon extending the loan, the borrower possesses the million dollars, yet the lender feels that he still owns the million dollars that he lent out. If anyone asks the lender what he is worth, he says, "a million dollars," and shows the note to prove it. Because of this conviction, there is, in the minds of the debtor and the creditor combined, two million dollars worth of value where before there was only one. When the lender calls in the debt and the borrower pays it, he gets back his million dollars. If the borrower can't pay it, the value of the note goes to zero. Either way, the extra value disappears. If the original lender sold his note for cash, then someone else down the line loses. In an actively traded bond market, the result of a sudden default is like a game of "hot potato": whoever holds it last loses. When the volume of credit is large, investors can perceive vast sums of money and value where in fact there are only repayment contracts, which are financial assets dependent upon consensus valuation and the ability of debtors to pay. IOUs can be issued indefinitely, but they have value only as long as their debtors can live up to them and only to the extent that people believe that they will.
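The million-dollar loan example above can be restated as a tiny piece of arithmetic. The accounting below is a sketch of the perceptions described in the text, not a statement of accounting standards: the lender counts the note at face value while the borrower counts the cash, so perceived value doubles when the loan is made and shrinks back when it is repaid or defaulted on.

```python
# A minimal sketch of perceived value in the lender/borrower example.
# Base money never changes; only the valuation of the IOU does.

def perceived_value(lender_cash, borrower_cash, note_value):
    """Combined wealth as the lender and borrower each perceive it."""
    return lender_cash + borrower_cash + note_value

print(perceived_value(1_000_000, 0, 0))          # before the loan:   1,000,000
print(perceived_value(0, 1_000_000, 1_000_000))  # loan outstanding:  2,000,000
print(perceived_value(1_000_000, 0, 0))          # repaid in full:    1,000,000
print(perceived_value(0, 1_000_000, 0))          # default, note = 0: 1,000,000
```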
The dynamics of value expansion and contraction explain why a bear market can bankrupt millions of people. At the peak of a credit expansion or a bull market, assets have been valued upward, and all participants are wealthy - both the people who sold the assets and the people who hold the assets. The latter group is far larger than the former, because the total supply of money has been relatively stable while the total value of financial assets has ballooned. When the market turns down, the dynamic goes into reverse. Only a very few owners of a collapsing financial asset trade it for money at 90 percent of peak value. Some others may get out at 80 percent, 50 percent or 30 percent of peak value. In each case, sellers are simply transforming the remaining future value losses to someone else. In a bear market, the vast, vast majority does nothing and gets stuck holding assets with low or non-existent valuations. The "million dollars" that a wealthy investor might have thought he had in his bond portfolio or at a stock's peak value can quite rapidly become $50,000 or $5000 or $50. The rest of it just disappears. You see, he never really had a million dollars; all he had was IOUs or stock certificates. The idea that it had a certain financial value was in his head and the heads of others who agreed. When the point of agreement changed, so did the value. Poof! Gone in a flash of aggregated neurons. This is exactly what happens to most investment assets in a period of deflation.
It would seem obvious that low inflation is good for consumers, because costs are not rising faster than their paychecks. But recently commentators have been saying that "Low inflation introduces uncertainty". This is nonsense.
During the high-inflation "Eighties," I remember commentators saying "High inflation introduces uncertainty". This is not quite true either. The truth is that steady inflation, if it can be relied upon to remain steady, does not introduce uncertainty. Changing (fluctuating) inflation rates are what introduce uncertainty.
| http://www.snpnifty.com/Deflation.html | 13
30 | Hearing loss is the reduced ability to hear sound. Deafness is the complete inability to hear sound. Deafness and hearing loss have many causes and can occur at any age. People can go deaf suddenly as a complication of a virus, or lose their hearing over time because of disease, nerve damage, or injury caused by noise. About 1 in 800 babies is born deaf, often because of genetic factors. Approximately 1 out of every 10 Canadians has hearing loss, and more than half of Canadians over 65 years of age have hearing loss.
Hearing loss is a spectrum with minor hearing problems at one end and profound, complete deafness at the other. Conductive hearing loss occurs when something blocks sound waves from reaching the inner ear. Sensorineural hearing loss is caused by damage to the inner ear or to the nerves that send sound to the brain. Sensorineural hearing loss is more likely to be permanent and to cause deafness. Sometimes a mixture of conductive and sensorineural hearing loss can occur.
Many different conditions lead to partial and total deafness. Ear infections, fluid buildup behind the eardrum, holes in the eardrum, and problems with the middle ear bones can cause deafness from conductive hearing loss. In rare cases, tumours can also cause conductive hearing loss - they block sound from getting into the inner ear. Birth defects and diseases passed on by genes can do this, too. Genetics is one cause of sensorineural hearing loss. Half of all cases of profound deafness in children have a genetic source.
Presbycusis, or age-associated hearing loss, also has a genetic component. It's a condition that makes someone deaf over time as they age due to the slow decay of sensitive hair cells lining the inner ear. Aside from aging, other causes of the decay include circulatory problems, diseases such as diabetes, and long-term exposure to noise. Without the hair cells, recognizing sounds becomes difficult or impossible.
Exposure to loud noise in certain occupations from sources such as construction machinery, heavy equipment, or amplified music can cause sensorineural hearing loss in people of all ages and is the most common cause of hearing loss. Other sources of excess noise include attendance at concerts and nightclubs, and use of music headphones, household power tools, or firearms. The louder the noise, and the longer a person is exposed to it, the greater their risk of this type of hearing loss. To prevent this type of hearing loss it is important to wear proper hearing protection and avoid exposure to loud noise whenever possible.
Some kinds of sensorineural hearing loss or deafness may be caused by infectious diseases, such as shingles, meningitis, and cytomegalovirus. In childhood, the auditory nerve can be damaged by mumps, meningitis, German measles (rubella), or inner ear infections.
More rarely, deafness or hearing loss can occur suddenly. This condition can be permanent or temporary, and usually affects only one ear. The cause is unknown but may be due to viral infections, or disorders of the circulatory or immune system. The loss is potentially reversible with corticosteroid medications; however, the likelihood of recovery is lower if the loss was severe initially. Treatment is more likely to have greater effect if it is started early - ideally within a week of the loss of hearing.
If a woman contracts German measles during pregnancy, her child may have a permanent hearing disability. Lack of oxygen at birth can also badly damage the ears and hearing.
Other causes of sensorineural hearing loss include diabetes and various brain and nerve disorders (such as a stroke). Tumours of the auditory nerve or brain are rare causes of hearing loss. High doses of acetylsalicylic acid* (ASA), quinine, some antibiotics, and diuretics used to treat high blood pressure may all permanently damage the inner ear. Nerve pathways in the brain that transmit sound impulses can be damaged by multiple sclerosis and other diseases attacking the coverings of nerves. Violent injury and physical blows to the ear may cause permanent deafness.
Profound deafness is easy to recognize, since people will notice such a large change in hearing. Milder hearing loss may not be noticed right away, since it often comes on gradually and people "get used to it." If you notice that you need to turn the volume up on the radio or television, have difficulty understanding conversations, or need to ask people to repeat what they say, you may have hearing loss.
Age-related hearing loss often starts at the high frequencies, meaning that people may have trouble understanding women and children (whose voices are higher pitched) or telling the difference between similar sounds such as "th" and "sh." Many people are alerted by friends or relatives. The problem is initially most apparent in noisy environments.
All infants and children should be screened for hearing loss, as early diagnosis and intervention can have a dramatic impact on the child's future development and educational needs. Signs of deafness in young children include not responding to noises, responding slowly, or not learning to speak by the expected age. A deaf child may also lag behind in developing motor skills and coordination, or in learning how to balance, crawl, or walk. The biggest obstacles to early diagnosis are typically a delay in a referral to a specialist (usually when the signs of hearing loss are not recognized), or a lack of access to appropriate infant hearing screening. | http://health.mytelus.com/channel_condition_info_details.asp?disease_id=152&channel_id=165&relation_id=55291 | 13 |
38 | Role of Reserve Bank of India
- The Reserve Bank of India (RBI) is the central bank of the country.
- It was established on April 1, 1935 under the Reserve Bank of India Act, 1934, which provides the statutory basis for its functioning.
- SCARDB stands for state co-operative agricultural and rural development banks and PCARDB stands for primary co-operative agricultural and rural development banks.
- In addition, the rural areas are served by a very large number of primary agricultural credit societies (94,942 at end-March 2008).
- Financial Inclusion implies provision of financial services at affordable cost to those who are excluded from the formal financial system.
- Every country has its own central bank. The central bank of the USA is the Federal Reserve System, the central bank of the UK is the Bank of England, and the central bank of China is the People’s Bank of China, and so on.
- Most central banks were established around the early twentieth century.
Functions of RBI
When the RBI was established, it took over the functions of currency issue from the Government of India and the power of credit control from the then Imperial Bank of India. As the central bank of the country, the RBI performs a wide range of functions; particularly, it:
- Acts as the currency authority
- Controls money supply and credit
- Manages foreign exchange
- Serves as a banker to the government
- Builds up and strengthens the country’s financial infrastructure
- Acts as the banker of banks
- Supervises banks
RBI as Bankers’ Bank
- As the bankers’ bank, RBI holds a part of the cash reserves of banks, lends the banks funds for short periods, and provides them with centralized clearing and cheap and quick remittance facilities.
- Banks are supposed to meet their shortfalls of cash from sources other than RBI and approach RBI only as a matter of last resort, because RBI as the central bank is supposed to function as only the ‘lender of last resort’.
- To ensure liquidity and solvency of individual commercial banks and of the banking system as a whole, the RBI has stipulated that banks maintain a Cash Reserve Ratio (CRR).
- The CRR refers to the share of liquid cash that banks have to maintain with RBI of their net demand and time liabilities (NDTL).
- CRR is one of the key instruments of controlling money supply. By increasing CRR, the RBI can reduce the funds available with the banks for lending and thereby tighten liquidity in the system; conversely reducing the CRR increases the funds available with the banks and thereby raises liquidity in the financial system.
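A short numerical sketch of the CRR mechanism described in the points above may help. The NDTL figure and the two CRR values below are assumed numbers chosen for illustration, not actual RBI figures.

```python
# Hypothetical illustration of how a change in the CRR affects lendable funds.

def lendable_funds(ndtl, crr):
    """Funds left for lending after the cash reserve is parked with the RBI."""
    required_reserve = ndtl * crr
    return ndtl - required_reserve

ndtl = 1000.0  # say, Rs 1,000 crore of net demand and time liabilities (made-up figure)

for crr in (0.04, 0.06):  # an illustrative hike from 4% to 6%
    print(f"CRR {crr:.0%}: reserves {ndtl * crr:.0f}, lendable {lendable_funds(ndtl, crr):.0f}")

# Output:
# CRR 4%: reserves 40, lendable 960
# CRR 6%: reserves 60, lendable 940
# Raising the CRR locks more cash with the RBI and tightens liquidity; lowering it does the reverse.
```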
RBI as supervisor
- To ensure a sound banking system in the country, the RBI exercises powers of supervision, regulation and control over commercial banks.
- The bank’s regulatory functions relating to banks cover their establishment (i.e. licensing), branch expansion, liquidity of their assets, management and methods of working, amalgamation, reconstruction and liquidation.
- RBI controls the commercial banks through periodic inspection of banks and follow-up action and by calling for returns and other information from them, besides holding periodic meetings with the top management of the banks.
- While RBI is directly involved with commercial banks in carrying out these two roles, the commercial banks help RBI indirectly to carry out some of its other roles as well. For example, commercial banks are required by law to invest a prescribed minimum percentage of their respective net demand and time liabilities (NDTL) in prescribed securities, which are mostly government securities. This helps the RBI to perform its role as the banker to the Government, under which the RBI conducts the Government’s market borrowing program. | http://banks-india.com/banking/ibps-study-material/page/2/ | 13 |
15 | Science Fair Project Encyclopedia
An oil platform is a large structure used to house workers and machinery needed to drill and then produce oil and natural gas in the ocean. Depending on the circumstances, the platform may be attached to the ocean floor, consist of an artificial island, or be floating.
Generally, oil platforms are located on the continental shelf, though as technology improves, drilling and production in ever deeper waters become both feasible and economic. A typical platform may have around thirty wellheads located on the platform, and directional drilling allows reservoirs to be accessed both at different depths and at remote positions up to perhaps 5 miles (8 kilometres) from the platform. Many platforms also have remote wellheads attached by umbilical connections; these may be single wells or a manifold centre for multiple wells.
Larger lake and sea-based oil rigs are some of the largest moveable man-made structures in the world. There are at least five distinct types of rig platform:
- Immobile Platforms, a rig built on concrete and/or steel legs anchored directly onto the seabed. Such platforms are, by virtue of their immobility, designed for very long term use (for instance the Hibernia platform).
- Semi-submersible Platforms, which have legs of sufficient buoyancy to cause the structure to float, but of weight sufficient to keep the structure upright. Semi-submersible rigs can be moved from place to place and can be lowered into the water or raised by altering the amount of flooding in the buoyancy tanks; they are generally anchored by cable anchors during drilling operations, though they can also be kept in place by active steering.
- Jack-up Platforms, as the name suggests, are platforms that can be jacked up above the sea, by dint of legs that can be lowered like jacks. These platforms are designed to move from place to place, and then anchor themselves by deploying the jack-like legs.
- Ship-board Rigs. Active steering of ships, especially based on Global Positioning System measurements, enables certain drilling operations to be conducted from a ship which holds its position relative to the drilling point, within the parameters for movement acceptable in a given circumstance — i.e. within the point at which movement of the ship would cause the drill string to break.
- Tension-leg Platforms, also known as "spars", a rig tethered to the seabed in a manner that eliminates most vertical movement of the structure.
Various types of structure are used, steel jacket, concrete caisson, floating steel and even floating concrete. The concrete caisson structures often have in-built oil storage in tanks below the sea surface and these tanks were often used as a flotation capability, allowing them to be built close to shore (Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position where they are sunk to the seabed. Steel jackets are fabricated on land and towed by barge to their destination where a crane is used to upright the jacket and locate it on the seabed. Steel jackets are usually piled into the seabed.
A typical oil production platform is self-sufficient in energy and water needs, housing electrical generation, water desalinators and all of the equipment necessary to process oil and gas such that it can be either delivered directly onshore by pipeline or to a Floating Storage Unit and/or tanker loading facility. Elements in the oil/gas production process include wellhead, production manifold, production separator, glycol process to dry gas, gas compressors, water injection pumps, oil/gas export metering and main oil line pumps. All production facilities are designed to have minimal environmental impact.
The Petronius platform is an oil and gas platform in the Gulf of Mexico, which stands 610 metres (2,000 feet) above the ocean floor. This structure is partially supported by buoyancy. Depending on the criteria it may be the world's tallest structure.
In practice, these larger platforms are assisted by smaller ESVs (emergency support vessels) like the British Iolair that are summoned when something has gone wrong, e.g. when a search and rescue operation is required. During normal operations, PSVs (platform supply vessels) keep the platforms provisioned and supplied, and AHTS vessels can also supply them, as well as tow them to location and serve as standby rescue and firefighting vessels.
The nature of their operation — extraction of volatile substances, sometimes under extreme pressure, in a hostile environment — carries risk, and accidents and tragedies occur not infrequently. In July 1988, 167 people died when Occidental Petroleum's Piper Alpha offshore production platform, on the Piper field in the North Sea, exploded after a gas leak. The accident greatly accelerated the practice of housing living accommodation on self-contained separate rigs, away from those used for extraction.
Further risks are the leaching of heavy metals that accumulate in buoyancy tanks into the water, and risks associated with their disposal. There has been concern expressed at the practice of partially demolishing offshore rigs to the point that ships can traverse across their site; there have been instances of fishery vessels snagging nets on the remaining structures. Proposals for the disposal at sea of the Brent Spar, a 137-metre-tall storage buoy (another true function of that which is termed an oil rig), were for a time in 1996 an environmental cause célèbre in the UK after Greenpeace occupied the floating structure. The event led to a reconsideration of disposal policy in the UK and Europe, though Greenpeace, in hindsight, admitted some inaccuracies leading to hyperbole in their statements about Brent Spar.
In British waters, the cost of removing all platform rig structures entirely was estimated in 1995 at £1.5 billion, and the cost of removing all structures including pipelines — a so-called "clean sea" approach — at £3 billion.
- Oil Rig Disposal (pdf) — Post note issued by the UK Parliamentary Office of Science and Technology.
| http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Oil_platform | 13
15 | Together for Girls
Scope of the Problem – Sexual Violence against Girls
Sexual violence against girls is a global human rights injustice of vast proportions with severe health and social consequences. In 2002, the World Health Organization estimated that 150 million girls under the age of 18 had experienced sexual violence.1 Further, studies also indicate that 36% –62% of reported sexual assaults are committed against girls age 15 and younger.2 A survey conducted in Swaziland by UNICEF, the Centers for Disease Control and Prevention (CDC), and local partners in 2007 illustrated the scope of the problem, with approximately one in three girls a victim of sexual violence prior to the age of 18, and three-quarters of the perpetrators being men and boys (including boyfriends, husbands, and male relatives) from the neighborhoods the victims resided in.5
In addition, sexual violence has overarching consequences both over the short and long term. Girls who are victims of sexual violence are three times more likely to have an unwanted pregnancy, and girls under 15 who are pregnant are five times more likely to die in childbirth than pregnant women ages 20 to 24.6 Sexual violence is often hidden and underreported, due to the panic, shame, and disbelief associated with the act. Only an estimated 10%–20% of child sexual abuse cases are reported to authorities.7
Sexual violence is also connected to issues of social and economic injustice. The threat of sexual violence can affect a survivor’s chances of receiving an education. Girls who have experienced sexual violence can find themselves pulled from school by their families and caregivers, or they choose to leave because of their fear and depression. A lack of education hinders a girl’s prospects of earning a sustainable income, perpetuating and deepening the cycle of vulnerability. Ultimately, societies pay a deep price for these outcomes because educated women are vital to the health and prosperity of countries. One study has shown that a 1 percent increase in girls attending secondary school adds 0.3 percent in economic growth in developing countries.3
Sexual violence can also increase the risk of infectious diseases and chronic disease later in life. Girls who are victims of sexual violence are at increased biological risk of contracting HIV/AIDS and other sexually transmitted infections. Even if a girl is not infected with HIV immediately after an act of sexual violence, research indicates that she becomes more likely to contract infectious and chronic diseases later.4 This is because sexual violence alters the life path of many girls, leading them down a road of depression, substance abuse, and high-risk behaviors.4
Ending sexual violence will allow girls worldwide to live safer and healthier lives and fulfill their right to freedom from violence, exploitation, and abuse.
In 2007, intending to investigate the causes and scale of sexual violence, Swaziland launched a national survey. This survey, administered by UNICEF, CDC, and the Swaziland Action Group Against Abuse (SWAGAA) partners, showed that one-third of girls in the country had experienced sexual violence prior to age 18.5 To address this, Swaziland created numerous programs, including a national education campaign raising awareness and promoting prevention, a safe court system for survivors, safe school initiatives, and increased capacity for police officials to enforce the law, establishing units to investigate sexual violence against minors.
Building on the experience in Swaziland, a group of international organizations from the public, private, and nonprofit sectors joined forces to form the first global partnership focused on ending sexual violence against girls. Together for Girls was launched at the annual meeting of the Clinton Global Initiative on September 25, 2009. It brings together 10 public and private sector organizations focused on one common goal: halting sexual violence against girls. Many organizations are participating, including the United States Department of State-President’s Emergency Plan for AIDS Relief (PEPFAR), the Office of Global Women’s Health Issues, CDC, United Nations Children's Fund (UNICEF), United Nations Population Fund, Joint United Nations Programme on HIV/AIDS (UNAIDS), United Nations Development Fund For Women, Becton, Dickinson, and Company (BD), CDC Foundation, Grupo ABC, and the Nduna Foundation.
This initiative focuses on three core activities:
- Conducting national surveys and collecting data to document the magnitude and effect of sexual violence against girls to inform government leaders, communities, and donors.
- Supporting a plan of action at country level with interventions tailored to address sexual violence. These range from national policy-level dialogue and legal reform to improved services and community-based approaches.
- Launching communications and public awareness campaigns to draw attention to the problem and motivate changes in societal and gender norms and behaviors
Current actions under the Together for Girls initiative include surveys, programs, and partnership activities.
The Nduna Foundation has provided funding to the CDC Foundation to enable CDC’s Division of Violence Prevention to secure staff and resources to expand the national survey methodology to three additional countries: Tanzania, Kenya, and Zimbabwe.
Countries where surveys have been implemented or are in the planning stages
- In Tanzania, UNICEF Tanzania, CDC, and local partners completed data collection in December 2009. Government partners and UNICEF Tanzania released the final report [PDF-1.7MB] in August 2011.
- The UNICEF office in Kenya secured full funding for the survey from UNICEF headquarters, and data collection was completed in December 2010. Kenya measured violence against boys and girls, and assessed health consequences and access to and use of support services for victims of violence. Additionally, Kenya requested tailored questions to see whether violence occurred as a result of the highly contested national elections in 2008.
- UNICEF Zimbabwe has secured funding for survey implementation from the Nduna Foundation. CDC, UNICEF Zimbabwe, and other stakeholders plan to begin data collection in August 2011.
In Swaziland, interventions to prevent and respond to sexual violence against girls are ongoing, supported by initiative partners such as UNAIDS. Additionally, in Tanzania, government partners, UNICEF, PEPFAR and other stakeholders are coordinating the post-survey follow-up in order to ensure effective, collaborative approaches to preventing and responding to sexual violence against girls.
There is considerable momentum in developing a formal structure to deepen the partnership and guide future efforts. The initiative has also continued to actively engage with key stakeholders from a wide range of sectors, exploring avenues for collaboration and generating further interest in partnering to end sexual violence against girls.
Future projects for Together for Girls continue to focus on survey implementation, programmatic strategies, and partnership activities.
Discussions are ongoing to conduct the survey in other countries, including countries in South-East Asia and in Sub-Saharan Africa.
Partners will collaborate with government, communities, and other stakeholders to develop and sustain coordinated, evidence-based plans and take action to address violence against girls. With surveys complete in Swaziland and Tanzania, and soon in Kenya, partners will focus attention on linking the survey results to further programmatic follow-up.
Additionally, UNICEF, Grupo ABC, and other partners will develop communications campaigns within countries and plan a global advocacy campaign.
The initiative continues to actively engage key stakeholders from a wide range of sectors, exploring avenues for collaboration and generating further interest in partnering to end sexual violence against girls.
- Collaborating partners
- BD (Becton, Dickinson, and Company)
- CDC Foundation
- Grupo ABC
- Joint United Nations Programme on HIV/AIDS (UNAIDS)
- United Nations Population Fund (UNFPA)
- United Nations Entity for Gender Equality and the Empowerment of Women (UN WOMEN)
- United Nations Children’s Fund (UNICEF)
- U.S. President’s Emergency Plan for AIDS Relief (PEPFAR)
- Global Health E-Brief
- Together for Girls booklet [PDF 1MB]
- Together for Girls web site
- Swaziland publication and commentary
- Andrews G et al. Child sexual abuse. In Ezzati M, Lopez AD, Rodgers A, Murray C, eds. Comparative Quantification of Health Risks: Global and Regional Burden of Disease Attributable to Selected Major Risk Factors. Vol. 1. World Health Organization. Geneva, 2004; United Nations, United Nations Study on Violence Against Children. Geneva, 2006
- Heise LL, Pitanguy J, Germain A. Violence against women: The hidden health burden. World Bank Discussion Papers, No. 255. Washington, D.C., 1994
- Herz B and Sperling G. What Works in Girls' Education: Evidence and Policies from the Developing World. New York: Council on Foreign Relations, 2004
- Jewkes R, Sen P, Garcia-Moreno C. Sexual violence. In: Krug E, Dahlberg LL, Mercy JA, Zwi AB, Lozano R, eds. World Report on Violence and Health. World Health Organization. Geneva, 2002
- Reza et al, Sexual violence and its health consequences for female children in Swaziland: a cluster survey study. Lancet, 2009
- United Nations Population Fund. Factsheet: Young People and Times of Change. New York, 2009. Accessed at: http://www.unfpa.org/public/site/global/lang/en/young_people#fn09
- Violence Against Children: United Nations Secretary-General’s Study, 2006; Save the Children, 10 Essential Learning Points: Listen and Speak out against Sexual Abuse of Girls and Boys – Global Submission by the International Save the Children Alliance to the UN Study on Violence Against Children. Oslo, 2005
| http://www.cdc.gov/ViolencePrevention/sexualviolence/together/index.html | 13
48 | The Fiscal Deficit
What exactly is the Fiscal Deficit?
The fiscal deficit is the difference between the government's total expenditure and its total receipts (excluding borrowing). The elements of the fiscal deficit are (a) the revenue deficit, which is the difference between the government’s current (or revenue) expenditure and total current receipts (that is, excluding borrowing) and (b) capital expenditure. The fiscal deficit can be financed by borrowing from the Reserve Bank of India (which is also called deficit financing or money creation) and market borrowing (from the money market, that is mainly from banks).
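The definitions above can be expressed as simple arithmetic. The rupee figures in this sketch are invented purely for illustration and follow the simplified decomposition used here, namely fiscal deficit = revenue deficit + capital expenditure.

```python
# Hypothetical budget figures (say, in Rs crore) illustrating the definitions above.
revenue_expenditure = 900   # current (revenue) expenditure
revenue_receipts = 700      # total current receipts, excluding borrowing
capital_expenditure = 300

revenue_deficit = revenue_expenditure - revenue_receipts       # 200
fiscal_deficit = revenue_deficit + capital_expenditure         # 500

# Equivalently: total expenditure minus total receipts excluding borrowing.
total_expenditure = revenue_expenditure + capital_expenditure  # 1200
assert fiscal_deficit == total_expenditure - revenue_receipts

print(revenue_deficit, fiscal_deficit)  # 200 500
```

Under this definition, the choice between borrowing from the RBI and market borrowing only determines how the gap is financed; it does not change the size of the deficit itself.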
Does a Fiscal Deficit Necessarily Lead to Inflation?
No. Two arguments are generally given in order to link a high fiscal deficit to inflation. The first argument is based on the fact that the part of the fiscal deficit which is financed by borrowing from the RBI leads to an increase in the money stock. Some people hold the unsubstantiated belief that a higher money stock automatically leads to inflation since "more money chases the same goods". There are, however, two flaws in this argument. Firstly, it is not the "same goods" which the new money stock chases since output of goods may increase because of the increased fiscal deficit. In an economy with unutilized resources, output is held in check by the lack of demand and a high fiscal deficit may be accompanied by greater demand and greater output. Secondly, the speed with which money "chases" goods is not constant and varies as a result of changes in other economic variables. Hence even if a part of the fiscal deficit translates into a larger money stock, it need not lead to inflation.
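One way to see both objections is through the standard quantity identity M x V = P x Y (money stock times velocity equals the price level times real output). The identity is not named in the text, and the index numbers below are arbitrary; the point is only that a larger money stock M need not raise the price level P if velocity V falls or output Y rises.

```python
# Quantity identity: M * V = P * Y, so the price level is P = M * V / Y.
# Arbitrary index numbers, used only to illustrate the two objections in the text.

def price_level(M, V, Y):
    return M * V / Y

print(price_level(M=100, V=5.0, Y=500))  # 1.0   baseline
print(price_level(M=120, V=5.0, Y=600))  # 1.0   more money, but more output too: no inflation
print(price_level(M=120, V=4.0, Y=500))  # 0.96  more money, lower velocity: prices need not rise
```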
The second argument linking fiscal deficits and inflation is that in an economy in which the output of some essential commodities cannot be increased, the increase in demand caused by a larger fiscal deficit will raise prices. There are several problems with this argument as well. Firstly, this argument is evidently irrelevant for the Indian economy in 2002 which is in the midst of an industrial recession and which has abundant supplies of foodgrains and foreign exchange. Secondly, even if some particular commodities are in short supply, rationing and similar strategies can check a price increase. Finally, if the economy is in a state which the proponents of this argument believe it to be in, that is, with output constrained by supply rather than demand, then not just fiscal deficits but any way of increasing demand (such as private investment) is inflationary.
Doesn't a Greater Fiscal Deficit Lead to a Greater Drain on the Exchequer in terms of Interest Outlay?
Yes and no. Certainly, for a given interest rate a larger fiscal deficit by raising the accumulated debt of the government raises the interest burden. However, in the particular case of our economy since liberalization, a large part of the increasing interest burden is because of the rise in the interest rates in the post '91 period. This itself is related to the process of liberalization since the rate of interest has to be kept high in a liberalized economy to prevent capital outflow. Moreover, a growing domestic debt is not a problem for a government in the same way in which growing debt is a problem for a family since the government can always raise its receipts through taxation and by printing money. Some would say that printing money would lead to inflation, but as we have shown above, this is not necessarily the case.
Is it a Good Idea to Reduce Fiscal Deficits Through Disinvestment?
No. The PSUs that the government has been disinvesting in are the profit making ones. Thus, while the government earns a lump-sum amount in one year, it loses the profits that the PSU would have contributed to the exchequer in the future. Therefore, it is not a good idea even if the objective is to reduce the fiscal deficit.
Does increased government expenditure necessarily lead to a greater fiscal deficit?
Not necessarily. Suppose the government spends more on an electricity project for which the contract is given to a PSU like BHEL. Then the money that the government spends comes back to it in the form of BHEL's earnings. Similarly, suppose that the government spends on food-for-work programmes. Then a significant part of the expenditure allocation would consist of foodgrain from the Public Distribution System which would account for part of the wages of workers employed in such schemes. This in turn means that the losses of the Food Corporation of India (which also includes the cost of holding stocks) would go down and hence the money would find its way back to the government. In both cases, the increased expenditure has further multiplier effects because of the subsequent spending of those whose incomes go up because of the initial expenditure. The overall rise in economic activity in turn means that the government’s tax revenues also increase. Therefore there is no increase in the fiscal deficit in such cases.
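A stylized calculation along the lines of this argument is sketched below. The direct recovery share, the marginal propensity to consume, and the tax share are assumptions chosen only for illustration; they are not estimates from the article.

```python
# Stylized sketch of why extra spending need not raise the deficit one-for-one.
# Every parameter here is an assumption chosen for illustration.

G = 100.0              # extra government expenditure, e.g. on a food-for-work programme
direct_return = 0.40   # share flowing straight back (e.g. reduced FCI losses)   -- assumed
c = 0.80               # marginal propensity to consume                          -- assumed
t = 0.20               # tax revenue as a share of the extra income generated    -- assumed

multiplier = 1 / (1 - c * (1 - t))   # simple Keynesian multiplier with taxes
extra_income = G * multiplier
extra_taxes = t * extra_income

net_deficit_change = G - direct_return * G - extra_taxes
print(round(multiplier, 2), round(extra_income), round(extra_taxes), round(net_deficit_change))
# 2.78 278 56 4  -> far smaller than the initial 100 of spending, close to no change at all
```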
What is the Impact of the Government's Policy of Decreasing the Fiscal Deficit?
Logically, there are two ways in which the fiscal deficit can be reduced — by raising revenues or by reducing expenditure. However given the character of our State and the constraints of a liberalized economy, the government has not increased revenues. In fact, in budget after budget the government has actually given away tax cuts to the rich. Even when it has tried to raise revenues, it has been through counterproductive means like disinvestment.
The main impact of the policy of reduced fiscal deficits has therefore been on the government's expenditure. This has had a number of effects. First, government investment in sectors such as agriculture has been cut. Secondly, expenditure on social sectors like education, health and poverty alleviation has been reduced leading to greater hardship for the poor already bearing the brunt of liberalization. Perhaps most importantly, in an economy going through a recession the government is not allowed to play any role in boosting demand.
| http://www.macroscan.com/eco/aug02/print/prnt170802Fiscal_Deficit.htm | 13
21 | The Federal Reserve System is the central banking structure of the U.S. Its establishment dates back to 23 December 1913 with the enactment of the Federal Reserve Act.
Over time, the responsibilities and roles of the Federal Reserve System have expanded and its basic structure has changed; major economic events such as the Great Depression were crucial in shaping those changes. Today, its duties are to regulate and supervise banking institutions, conduct the national monetary policy, offer financial services to the U.S. government, foreign official institutions, and depository institutions, and maintain the stability of the national financial system. It is also charged with keeping prices stable, supporting employment, and keeping interest rates moderate. The components of the System also supervise banks, provide financial services, and carry out research on the American economy and the economies of nations in the region.
The Federal Reserve System consists of a Board of Governors, also called the Federal Reserve Board (whose members the president of the U.S. appoints), the Federal Open Market Committee (FOMC), private American member banks, several advisory councils, and 12 regional Federal Reserve Banks. Responsibility for setting monetary policy rests with the FOMC, which comprises the seven members of the Board of Governors along with the presidents of the twelve regional banks, although only five of the bank presidents may vote at any given time. The System thus has both public and private components and is meant to serve the interests of private banks as well as the general public.
According to the Board of Governors, the monetary-policy decisions that the Federal Reserve System makes do not require the approval of the U.S. president or anyone else in the legislative or executive branches. The president chooses the members of the Board of Governors, including the chairman and vice-chairman, and the Senate confirms them. The government appoints and sets the salaries of the System's highest-level employees, giving it both public and private characteristics. | http://www.federal-loan.net/what-is-the-federal-reserve-system-federal-reserve-system-definition/ | 13
14 | America's Volcanic Past
|"Though few people in the United States may actually experience an erupting volcano, the evidence for earlier volcanism is preserved in many rocks of North America. Features seen in volcanic rocks only hours old are also present in ancient volcanic rocks, both at the surface and buried beneath younger deposits." -- Excerpt from: Brantley, 1994|
Volcanic Highlights and Features:
[NOTE: This list is just a sample of various Ohio features or events and is by no means inclusive. All information presented here was gathered from other online websites and each excerpt is attributed back to the original source. Please use those sources in referencing any information on this webpage, and please visit those websites for more information on the Geology of Ohio.]
Between 1.4 and 990 million years ago, crustal rifting, and filling of basins formed by rifting, took place. Between 990 and 880 million years ago, a mountain range formed in eastern Ohio. Between 880 and 544 million years ago, these mountains were eroded, reducing the landscape to a gently rolling surface. Precambrian rocks are present only at great depths, 2,500 to 13,000 feet beneath the surface.
The Interior Plains:6
The Appalachians are old. A look at rocks exposed in today's Appalachian mountains reveals elongate belts of folded and thrust faulted marine sedimentary rocks, volcanic rocks and slivers of ancient ocean floor. Strong evidence that these rocks were deformed during plate collision. The birth of the Appalachian ranges, some 480 million years ago, marks the first of several mountain building plate collisions that culminated in the construction of the supercontinent Pangea with the Appalachians near the center.
Three Kinds of Rocks:8
Precambrian (4.6 billion - 544 million years ago):1
Cambrian (544-505 million years ago)1
Ordovician (505-440 million years ago):1
Silurian (440-410 million years ago):1
Devonian (410-360 million years ago):1
Mississippian (360-325 million years ago):1
Pennsylvanian (325-286 million years ago):1
Permian (286-248 million years ago):1
Mesozoic and Tertiary (248-1.6 million years ago):1
Quaternary (1.6 million years ago to present):1
Ohio's Volcanic Rocks
Ohio's Granite and Rhyolite:2
Known Precambrian history of Ohio began with the emplacement of a vast, horizontal, 7-mile-thick layered sheet of granite (coarse-grained igneous rock formed at depth) and rhyolite (fine-grained volcanic equivalent of granite formed near the surface) beneath western Ohio and neighboring states to the west. This emplacement has been attributed to an uprising in the Earth's mantle, known as a superswell. Radioisotopic dating suggests that this event took place between about 1.4 and 1.5 billion years ago, forming what geologists call the Granite-Rhyolite Province. Continued continental doming of the superswell caused the crust beneath western Ohio, Indiana, and Kentucky to extend and split (rifting), resulting in major faulting and consequent downdropping to form a complex rift basin, now known as the East Continent Rift Basin. Molten basalt flowed upward as erosion began to fill the basin with clastic sediment, perhaps as much as 20,000 feet. This extensive deposit is known as the Middle Run Formation. About 1 billion years ago, doming ceased and the rift became a failed or aborted rift. Rifting, volcanic activity, and basin filling also ceased.
Ohio's Volcanic Rocks are Beneath the Surface:2
Despite the immense span of time it represents, the Precambrian is the most poorly known of the geologic subdivisions in Ohio, in part because Precambrian rocks are nowhere exposed in the state. These primarily crystalline igneous and metamorphic rocks are deeply buried beneath younger Paleozoic sedimentary rocks at depths ranging from about 2,500 feet in western Ohio to more than 13,000 feet in southeastern Ohio. These rocks are collectively referred to by geologists as the "basement" because they form the foundation for the overlying Paleozoic rocks. Drillers commonly refer to the Precambrian rocks as the "granite," in reference to a common rock type found below the Paleozoic rocks. Ohio's Precambrian rocks appear to have formed in the late Precambrian, between about 1.5 billion and 800 million years ago. Older Precambrian rocks have not as yet been found in the state. Our knowledge of Precambrian rocks is derived from direct sampling of them through deep oil and gas wells or other boreholes or indirect geophysical means such as aeromagnetic and gravity maps, reflection seismic lines, or study of earthquake waves. Geophysical techniques are comparatively new, and it has only been since the early 1980's that geophysical data have become widely available.
Cleveland - Volcanic Building Stones
Cleveland Public Library:7
The Cleveland Public Library Main Building, constructed in 1923-25, is a treasure trove for those who enjoy fine building stone. It is clad with Cherokee marble, a coarse-grained white marble with light-gray veining. The steps of the main entrance are made of North Jay granite quarried in Maine.
The 57-story Key Tower (formerly known as the Society Tower) was constructed in 1990-91. Most of the facing is Stony Creek granite, quarried in Connecticut. It is more than 245 million years old. Napoleon Red granite from Vanga, Sweden, is used for the lower two floors of the building.
Soldiers' and Sailors' Monument:7
The Soldiers' and Sailors' Monument was dedicated in 1894. Much of the monument, including the large ramps and pedestals, a portion of the column, and the trim on the building, is composed of light-colored Berea Sandstone. The outer steps and esplanade are made of red Medina stone. This sandstone also was used for paving in Cleveland at the turn of the century. The formal name of this rock is the Grimsby Sandstone. Most of the outer walls of the building and the tall central column at the top of the monument are composed of dark-gray Quincy Granite, quarried in Quincy, Massachusetts. The building is made of roughly dressed blocks; the column is polished. Each of the 10 blocks of Quincy Granite composing the column weighs about 14 tons. White marble said to have come from Italy, red and green slate, and red and white Medina stone are used in the interior of the monument. The outside of the monument was cleaned in 1966 and 1979. Low stone walls and outer stairways installed around the monument in 1989 are made of Charcoal Black granite and Cavallo buff sandstone. Charcoal Black is a 1.8-billion-year-old Precambrian granite and was quarried in St. Cloud, Minnesota.
The Terminal Tower is Cleveland's best know landmark. It is 52 stories and 708 feet high, measured from the concourse level. At the time it was built, in 1927-28, the Terminal Tower was the second tallest building in the United States. Much of the exterior of the Terminal Tower is clad with Salem Limestone, quarried in southern Indiana. A small amount of granite is used along the base of the exterior of the Terminal Tower.
Gold in Ohio
Gold originates in primary vein deposits that were formed in association with silica-rich igneous rocks. These veins are rich in quartz and sulfide minerals such as pyrite and were deposited by hot, mineral-bearing (hydrothermal) solutions that ascended from deep within crystalline rocks. Upon weathering and erosion, the chemically inert gold is washed into streams and is mechanically concentrated by flowing waters to form secondary or placer deposits. All gold that has been found in Ohio is of the secondary or placer type. It is a long-accepted theory that the placer gold in the state originated in the igneous rocks of Canada (Canadian Shield) and was transported to Ohio during one or more episodes of Pleistocene glaciation. This theory has support because Ohio gold is always found in association with glacial deposits formed by meltwater from the glaciers. In addition, gold-bearing areas of Canada lie north of Ohio, more or less in line with the projected paths of the southward flow of various ice sheets. Gold can be found in small quantities throughout the glaciated two-thirds of Ohio. Most reported occurrences are in the zone of Illinoian and Wisconsinan end moraines--areas which commonly mark the farthest advance of these ice sheets. The highest concentrations of gold appear to be associated with Illinoian deposits. Almost all gold recovered is in the form of tiny, flattened flakes only a millimeter or so in diameter. Less common are pieces the size of a wheat grain, and rare are those the size of a pea. At most productive locations, several hours of panning will produce only a few flakes. No locality has been demonstrated to have concentrations sufficient for commercial exploitation, although many attempts were made in the 1800's and early 1900's to mine gold in the state. Most of these ventures were in Clermont County, near Batavia, in southwestern Ohio and in Richland County, near Bellville, in north-central Ohio. All of them were financial failures.
Volcanic Ash Deposits
Ohio's Volcanic Ash Beds - Ordovician:3
During the Ordovician, Ohio was in southern tropical latitudes and dominated by warm, shallow seas. The Iapetus, or proto-Atlantic, Ocean, which formed in Late Precambrian and Cambrian time, began to close during the Ordovician. Collision between the North American and European continents during the Middle Ordovician formed a series of island arcs and mountains to the east of Ohio. This event, the Taconic Orogeny, which culminated in the Late Ordovician, is recorded in rocks stretching from Newfoundland to Alabama. Although Ordovician rocks in Ohio were not directly involved in the collisional event, they record these activities. The widespread Knox unconformity, an episode of emergence and erosion, was formed when the land surface bulged upward (known as a peripheral bulge), accompanying development of a foreland basin to the east at the edge of the orogenic belt. As the Taconic Orogeny reached its zenith in the Late Ordovician, sediments eroded from the rising mountains were carried westward, forming a complex delta system that discharged mud into the shallow seas that covered Ohio and adjacent areas. The development of this delta, the Queenston Delta, is recorded by the many beds of shale in Upper Ordovician rocks exposed in southwestern Ohio. The island arcs associated with continental collision were the sites of active volcanoes, as documented by the widespread beds of volcanic ash preserved in Ohio's Ordovician rocks. The ash layers, which to geologists are wonderful time lines because they were deposited instantaneously over a wide geographic area, have been altered to a special type of clay known as a bentonite. There are a number of bentonite beds in Ohio's Ordovician rocks, but two beds in Middle Ordovician rocks, the Deicke bentonite and the Millbrig bentonite, may represent some of the largest explosive volcanic eruptions in the geologic record. These beds have been traced from the Mississippi River eastward across North America and Europe and into Russia. It has been estimated that these eruptions generated about 5,000 times the volume of volcanic ash produced by the eruption of Mount St. Helens in 1980.
Ohio's Volcanic Ash Beds - Devonian:4
Devonian rocks crop out in two areas in Ohio. They are best exposed in a 20-mile-wide, north-south-oriented belt in the central part of the state. At its northern terminus, the outcrop belt narrows and swings eastward along the southern shore of Lake Erie. These rocks dip and thicken southeastward into the Appalachian Basin and are present in the subsurface of eastern Ohio. An arcuate belt of Devonian rocks is present in northwestern Ohio, although there are few exposures of these rocks because of a thick mantle of glacial sediment. These rocks dip northwestward into the Michigan Basin. A small area of Devonian rock crops out on the Bellefontaine Outlier in Logan and Champaign Counties. With one exception, all of the outcropping Devonian rocks in the state are of Middle or Late Devonian age. The exception is the Holland Quarry Shale, a Lower Devonian unit known only from a single, small, lens-shaped outcrop in a now-reclaimed quarry in Lucas County, west of Toledo.
By Middle Devonian time the warm, shallow seas once again spread across the state and limy sediment began to accumulate. These limestones were part of the "Cliff limestone," which also included Silurian limestone units, in the classification of John Locke in 1838 during his reconnaissance work for the first Geological Survey of Ohio. New York State Geologist James Hall in 1843 referred to the Middle Devonian limestones of Ohio as the "Coniferous Limestone," correlating them with carbonate rocks of that name in New York State. In 1859, William W. Mather, Ohio's first State Geologist (1837-1838), used the name Columbus Limestone in reference to Middle Devonian limestones encountered during drilling of an artesian well at the Ohio State House in Columbus. In 1878, Edward Orton, Ohio's third State Geologist (1878-1899), formally divided this limestone sequence into the Columbus Limestone and the overlying Delaware Limestone, subdivisions that are still used. Clinton R. Stauffer, in his 1909 Ohio Survey bulletin (No. 10), The Middle Devonian of Ohio, divided the Columbus and Delaware Limestones into a series of alphabetical zones. Later researchers have proposed other subdivisions. The Columbus Limestone reaches a thickness of a little more than 100 feet, whereas the Delaware averages about 35 feet in thickness. These units pinch out to the south but continue northward to Lake Erie. The Columbus Limestone is present on the Bellefontaine Outlier in Logan County, 30 miles west of the contiguous outcrop belt in central Ohio, but the Delaware Limestone appears to be absent. The Columbus is a fairly pure limestone, dolomitic in the lower part and very fossiliferous in the upper part. The Delaware Limestone, by contrast, is less pure, having a higher silt content that gives it a darker gray or bluish color; this unit has been referred to informally as the "Blue Limestone."
The change in lithology between the Columbus and the Delaware reflects large-scale events, namely the beginning of the Acadian Orogeny, as North America and Europe met once again on their periodic collisional course. The rise of the Acadian Mountains to the east is reflected not only by clastic sediment beginning to be washed into the Middle Devonian seas, but also by the evidence for significant volcanic activity associated with this mountain building. A series of ash beds, collectively called the Tioga Bentonite, are present in Middle Devonian rocks throughout much of the Appalachian Basin and into the Illinois and Michigan Basins. The Tioga volcanism is thought to have originated from a source in eastern Virginia.
In the subsurface of eastern Ohio, the Columbus and Delaware Limestones are referred to as the Onondaga Limestone.
1) Ohio Department of Natural Resources, Division of Geological Survey, Ohio Geological Survey Website, 2002, A Brief Summary of the Geologic History of Ohio, GeoFacts No.23., Time assignments are based on Geological Society of America Decade of North American Geology 1983 Geologic Time Scale.
2) Michael C. Hansen, The Geology of Ohio -- The Precambrian, GeoFacts No. 13, Ohio Department of Natural Resources, Division of Geological Survey, Ohio Geological Survey Website, 2001
3) Michael C. Hansen, The Geology of Ohio -- The Ordovician, Ohio Geology, Fall 1997, Ohio Department of Natural Resources, Division of Geological Survey, Ohio Geological Survey Website, 2002
4) Michael C. Hansen, The Geology of Ohio -- The Devonian, Ohio Geology, 1999 No.1, Ohio Department of Natural Resources, Division of Geological Survey, Ohio Geological Survey Website, 2002
5) Ohio Department of Natural Resources, Division of Geological Survey, Ohio Geological Survey Website, 2002, Gold in Ohio, GeoFacts No.9.
6) USGS/NPS Geology in the Parks Website, 2001
7) Building Stone in the Vicinity of Public Square, Cleveland, Ohio, Ohio Department of Natural Resources, Ohio Geological Survey Website, 2002.
8) Ohio Historical Society Website, 2002
| http://vulcan.wr.usgs.gov/LivingWith/VolcanicPast/Places/volcanic_past_ohio.html | 13