Agriculture, California’s next major growth industry, was established on a wide scale throughout the state.
However, the Gold Rush also had negative effects: Native Americans were attacked and pushed off traditional lands, and gold mining caused environmental harm.
How did California change after the Gold Rush?
Miners extracted more than 750,000 pounds of gold during the California Gold Rush. Just days after Marshall’s discovery at Sutter’s Mill, the Treaty of Guadalupe Hidalgo was signed, ending the Mexican-American War and leaving California in the hands of the United States.
How did the gold rush affect California population?
The gold rush of California is said to have put the state on the map. Rapid growth of California’s cities began in 1849. San Francisco, one of the cities that grew immensely as a result of the gold rush, had a population of approximately 1,000 people in 1848, which grew to roughly 3,500 by 1850.
How did the California Gold Rush impact Native Americans?
The Gold Rush Impact on Native Tribes. The gold rush of 1848 brought still more devastation. Violence, disease and loss overwhelmed the tribes. By 1870, an estimated 30,000 native people remained in the state of California, most on reservations without access to their homelands.
Was the California Gold Rush positive or negative?
The California Gold Rush of 1849 had both positive and negative impacts on westward expansion, including an increase in population that led to the development of California as a state, the removal of Native Americans, and both the stimulation of the economy and monetary instability.
Why was the California Gold Rush important to westward expansion?
Although westward expansion had been going on for a while before the discovery of gold, the Gold Rush increased the pace of that expansion. The Gold Rush drew tens of thousands of people to California, and the need to move people and goods across the continent helped spur the construction of the transcontinental railroads, which in turn led to a major boom in westward expansion.
Why is the Gold Rush important to California?
The gold rush beginning in 1849 brought a flood of workers to California and played an important role in integrating California’s economy into that of the eastern United States. The California Gold Rush began with the discovery of significant gold deposits near Sacramento in 1848.
What happened at the end of the California Gold Rush?
The Mexican–American War ended on February 2, 1848, although California was a de facto American possession before that. The Treaty of Guadalupe Hidalgo provided for, among other things, the formal transfer of Upper California to the United States. The California Gold Rush began at Sutter’s Mill, near Coloma.
Photo in the article by “Wikimedia Commons”. Source: https://californiainform.com/advices/how-did-the-gold-rush-impact-california/
Liberty reinvented after WWII
In the text Give Me Liberty! An American History, Columbia historian Eric Foner presents the thesis that the struggle to expand liberty to all Americans is a recurring theme throughout American history. From the Reconstruction Period to the Progressive Era of the 1920s, African-Americans, women, immigrants, and working-class Americans are among the groups of Americans who fought to receive the promises of liberty that are guaranteed in the founding documents of America. While incremental gains have been made in increasing liberty for ordinary citizens, World War II represents a watershed moment in reversing the trends of oppression that characterized the Gilded Age and Progressive Era. Though the war period was not the panacea for groups traditionally struggling for the recognition of their rights, the period brought both immediate and gradual changes that led to improved economic and social conditions for workers, women, and minorities.
Conceptions of freedom
The struggle for freedom among many groups in America led to different conceptions of freedom that were based on the unique historical conditions of each group. As Foner notes, African-Americans derived their definition of freedom from their experience as slaves (Foner 587). For African-Americans, freedom meant finally receiving the same Constitutional protections and economic opportunities as white people in America (586). For immigrants, freedom included receiving equal treatment before the law, freedom of religion, and economic opportunities that were unavailable to them in their homelands (731).
Additionally, during the 1920s, the concept of the “working woman” represented the idea that the ability to receive the same wages and occupational opportunities as men was critical to the concept of freedom for women (735). Further, women sought to be liberated from unfulfilling lives as homemakers whose only role was to serve their families (735). Yet, as Progressives during the 1920s noted, low wages and demeaning work conditions in factories and offices threatened the concept of freedom in the United States. While the Progressive Era brought many gains to working-class Americans, it was not until World War II that the expansion of freedom was strongly enforced at the federal level.
Roosevelt’s efforts to level the playing field
Before World War II, the Roosevelt Administration took several concrete steps to expand freedom for all Americans. As Foner notes, the New Deal elevated economic freedom as the most critical element of liberty (861). To address the threats to personal security posed by the Great Depression, President Roosevelt implemented a series of programs intended to secure the banking system and provide public works opportunities (864-866). Yet many New Deal programs had mixed results in improving conditions for minorities. For example, while Social Security improved economic conditions for the elderly, African-Americans were initially excluded from receiving Social Security benefits because they were most likely to hold occupations that were exempt (886-87). Further, while the New Deal improved conditions for Native Americans by eliminating boarding schools, ending policies of forced assimilation, and recognizing the rights of Native Americans to self-governance, poverty remained rampant on reservations (887). Mexican-Americans, meanwhile, were adversely impacted by the decreased demand for labor, which forced families to return to Mexico, even though many children of the migrant laborers were American citizens by birth (888). Thus, while the Roosevelt Administration made significant efforts to alleviate conditions of the poor during the Great Depression, the impact of these efforts on many minorities and working-class Americans was limited.
Significant gains in economic freedom
Entering World War II resulted in significant gains in economic freedom for all members of society. As Foner notes, World War II reversed the economic insecurity that had plagued the United States before the war. Because of the increased demand for laborers in the war industry, the number of federal workers increased from 1 million in 1940 to 4 million, and the unemployment rate fell from 14 percent to 2 percent (915). Further, organized labor reached an arrangement with the government and business to prevent unrest among laborers in return for relaxed restrictions on union activity (917). Thus, union membership increased significantly during the period, and businesses agreed to accept modest profits and recognize the rights of employees (917).
Most significantly, World War II brought about unprecedented economic freedoms for women. Because the war mobilized over 15 million men to serve in the armed forces, women, out of necessity, rose to account for one-third of the civilian labor force while 350,000 served in support roles in the military (921). Further, women obtained industrial jobs that were formerly restricted to men and received similar pay (922). Thus, the war provided immediate expansions of economic freedom to American citizens of all backgrounds by offering consistent work at high wages.
Efforts to sustain prosperity after World War II resulted in the expansion of freedom for citizens after the war. Returning war veterans received benefits that enabled them to achieve economic mobility, including unemployment benefits, education scholarships, mortgage loans, and vocational training (925). Housing benefits led to the development of suburbs and the rise of a well-developed middle class in America (925). However, because Congress failed to enact the Full Employment Bill, veterans were the only group to directly benefit from targeted measures following the war (925). Thus, while the war period brought economic benefits to the whole of society, the postwar period was limited in its ability to create sustained prosperity.
Efforts to minimize racial tension
Further, the war had mixed results in taming the negative impact of racism on the rights of minorities. The patriotic atmosphere created by World War II benefited children of immigrants by providing them the opportunity to assimilate into American culture (926). Further, shock over the extreme racism exhibited by the Nazis caused the government to decry racism in official statements (927). For example, an OWI pamphlet asserted that racism was a foreign evil that threatened American security (928). Native Americans, for their part, played a crucial role as code talkers, and many moved to cities and took advantage of veterans’ benefits such as the GI Bill, joining the broader American society in greater numbers (929). Yet society, as well as the military, remained segregated, and Jews were still excluded from businesses and government (927).
Further, the 1943 “Zoot Suit Riot” incident, involving a clash between sailors and Mexican-American youths, resulted in a publicized trial that demonstrated the unfair treatment that Mexican-Americans and many other minorities received in the justice system (928-29). Additionally, the internment of Japanese-Americans during the war completely deprived Japanese-Americans of the legal rights they were entitled to as American citizens (930). African-Americans, too, were restricted in their ability to take advantage of their college, housing, and job training benefits because of the discriminatory practices of local administrators (933). Thus, while the war set the precedent for publicly condemning racism, it was ineffective in removing the restrictions that racism placed upon minorities.
Understanding how WWII improved freedom
Prior to World War II, deplorable economic and social conditions threatened the freedoms of workers, women, and minority groups. Yet, the sustained economic prosperity created by the boom of war industries during World War II created an unprecedented expansion of freedoms for Americans. Citizens from all class backgrounds were provided with stable employment opportunities at higher wages while women significantly expanded their participation in the civilian economy. Further, returning veterans received significant housing, education, and job training benefits following the war, which enabled them to maintain their economic security.
Yet, the war brought inconsistent results in guarding the freedoms of minorities. Discriminatory social customs and laws still hindered African-Americans and other minority groups in enjoying the benefits of citizenship following the war. Further, the legal rights of Japanese-Americans and Mexican-Americans were infringed upon in the cases of the Zoot Suit Riot incident and Japanese internment policies. Yet, World War II serves as a crucial turning point in expanding freedoms for the public because it marks the first time that federal policy played a significant role in defending the economic freedoms of citizens, which set the precedent for future movements to expand the benefits of liberty to all Americans.
Foner, Eric. Give Me Liberty! An American History, Volume Two. 3rd ed. New York: W.W. Norton & Company, 2011. Print.
Source: https://paperbacksbooks.com/the-expansion-of-liberty-during-world-war-ii/
What happens when the economy is in equilibrium?
Economic equilibrium is a condition or state in which economic forces are balanced. In effect, economic variables remain unchanged from their equilibrium values in the absence of external influences. Economic equilibrium is also referred to as market equilibrium.
What happens when a market is not in equilibrium?
If the market price is below the equilibrium price, quantity supplied is less than quantity demanded, creating a shortage. The market price will rise because of this shortage. For example, if you are the producer, your product is always out of stock.
What do you mean by equilibrium price How is it determined?
The equilibrium price is the price at which the quantity demanded equals the quantity supplied. It is determined by the intersection of the demand and supply curves. A decrease in demand will cause the equilibrium price to fall; quantity supplied will decrease.
How can you tell if the economy is in equilibrium?
As defined in microeconomics – which studies economies at the level of individuals and companies – economic equilibrium is the price at which supply equals demand for a product or service. The point where the supply curve and the demand curve intersect represents the economic equilibrium.
What happens when prices are above equilibrium?
If the price of a good is above equilibrium, this means that the quantity of the good supplied exceeds the quantity of the good demanded. There is a surplus of the good on the market.
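To make the shortage and surplus cases from the last two answers concrete, here is a minimal Python sketch. The linear demand and supply curves (and their coefficients) are illustrative assumptions, not figures from the article; the sketch simply classifies a few prices relative to the equilibrium they imply.

```python
def quantity_demanded(price):
    # Hypothetical linear demand curve: Qd = 100 - 2P (illustrative numbers)
    return 100 - 2 * price

def quantity_supplied(price):
    # Hypothetical linear supply curve: Qs = 20 + 2P (illustrative numbers)
    return 20 + 2 * price

def market_state(price):
    """Classify a price as producing a shortage, a surplus, or equilibrium."""
    qd, qs = quantity_demanded(price), quantity_supplied(price)
    if qd > qs:
        return f"shortage of {qd - qs} units (price below equilibrium)"
    if qs > qd:
        return f"surplus of {qs - qd} units (price above equilibrium)"
    return "equilibrium"

for p in (10, 20, 30):
    print(f"price {p}: {market_state(p)}")
# price 10: shortage of 40 units (price below equilibrium)
# price 20: equilibrium
# price 30: surplus of 40 units (price above equilibrium)
```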
Does a market reach equilibrium on its own?
Every market has its own equilibrium. Equilibrium lasts until either supply or demand changes, at which point the price will adjust.
Will consumers benefit from a market being in disequilibrium?
In a wheat market where the price has risen above equilibrium, for example, consumers may reduce the quantity of wheat that they purchase, given the higher price. When this imbalance occurs, quantity supplied will be greater than quantity demanded, and a surplus will exist, causing a disequilibrium market.
How do you solve market equilibrium?
The equilibrium in a market occurs where the quantity supplied in that market is equal to the quantity demanded in that market. Therefore, we can find the equilibrium by setting supply and demand equal and then solving for P.
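As a worked version of “setting supply and demand equal and then solving for P”, the short sketch below solves a pair of hypothetical linear curves algebraically. The coefficients are assumptions chosen only for illustration.

```python
# Hypothetical linear curves (illustrative coefficients):
#   demand: Qd = a - b*P,   supply: Qs = c + d*P
a, b, c, d = 100, 2, 20, 2

# Setting Qd = Qs:  a - b*P = c + d*P  =>  P* = (a - c) / (b + d)
p_star = (a - c) / (b + d)
q_star = a - b * p_star          # equals c + d*p_star at equilibrium

print(f"equilibrium price    P* = {p_star}")   # 20.0
print(f"equilibrium quantity Q* = {q_star}")   # 60.0
```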
What causes changes in market equilibrium?
Changes in either demand or supply cause changes in market equilibrium. An increase or decrease in demand, with the supply curve held constant, shifts the equilibrium price and quantity; similarly, an increase or decrease in supply, with the demand curve held constant, has an impact on equilibrium price and quantity. Supply and demand may also change simultaneously, again causing a change in market equilibrium.
What is an example of market equilibrium?
Company A sells mangoes. During summer there is great demand and an equal supply, so the market is at equilibrium. After the summer season, the supply will start falling while demand might remain the same.
What is the importance of market equilibrium?
If the market price is below the equilibrium value, then there is excess demand (a supply shortage). The lower price entices more people to buy, which reduces the available supply further. This process results in demand increasing and supply decreasing until the market price equals the equilibrium price.
What is equilibrium in demand and supply?
The equilibrium price and equilibrium quantity occur where the supply and demand curves cross. The equilibrium occurs where the quantity demanded is equal to the quantity supplied. If the price is below the equilibrium level, then the quantity demanded will exceed the quantity supplied.
How short equilibrium in the economy is achieved?
Short-run macroeconomic equilibrium is achieved when aggregate demand and aggregate supply are equal in the short term. In the short run, macroeconomic equilibrium exists at the point where aggregate demand is equal to aggregate supply.
How does the economy adjust back to long run equilibrium?
The idea behind this assumption is that an economy will self-correct; shocks matter in the short run, but not the long run. At its core, the self-correction mechanism is about price adjustment. When a shock occurs, prices will adjust and bring the economy back to long-run equilibrium.
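The price-adjustment idea can be illustrated with a toy simulation of a single market: raise the price when there is excess demand and lower it when there is excess supply, and the price converges toward equilibrium. This is a stylized single-market sketch, not a model of the macroeconomy, and the curves and adjustment speed are illustrative assumptions.

```python
def excess_demand(price):
    # Hypothetical curves: Qd = 100 - 2P, Qs = 20 + 2P (equilibrium at P = 20)
    return (100 - 2 * price) - (20 + 2 * price)

price, speed = 5.0, 0.1          # starting price and adjustment speed (assumed)
for _ in range(12):
    # raise the price under excess demand, cut it under excess supply
    price += speed * excess_demand(price)

print(round(price, 3))           # ~19.97, i.e. approaching the equilibrium price of 20
```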
What is the difference between long run and short run equilibrium?
We can compare the short-run equilibrium level of national income to the full-employment level of national income to determine the current phase of the business cycle. An economy is said to be in long-run equilibrium if the short-run equilibrium output is equal to the full-employment output.
Source: https://bestloveastrologer.com/when/quick-answer-when-a-market-is-in-equilibrium.html
In economics, money illusion, or price illusion, is the name for the human cognitive bias to think of money in nominal, rather than real, terms. In other words, the face value (nominal value) of money is mistaken for its purchasing power (real value) at a previous point in time. Viewing purchasing power as measured by the nominal value is false, as modern fiat currencies have no intrinsic value and their real value depends purely on the price level. The term was coined by Irving Fisher in Stabilizing the Dollar. It was popularized by John Maynard Keynes in the early twentieth century, and Irving Fisher wrote an important book on the subject, The Money Illusion, in 1928.
The existence of money illusion is disputed by monetary economists who contend that people act rationally (i.e. think in real prices) with regard to their wealth. Eldar Shafir, Peter A. Diamond, and Amos Tversky (1997) have provided empirical evidence for the existence of the effect and it has been shown to affect behaviour in a variety of experimental and real-world situations.
Shafir et al. also state that money illusion influences economic behaviour in three main ways:
- Price stickiness. Money illusion has been proposed as one reason why nominal prices are slow to change even where inflation has caused real prices to fall or costs to rise.
- Contracts and laws are not indexed to inflation as frequently as one would rationally expect.
- Social discourse, in formal media and more generally, reflects some confusion about real and nominal value.
Money illusion can also influence people's perceptions of outcomes. Experiments have shown that people generally perceive an approximate 2% cut in nominal income with no inflation as unfair, but see a 2% rise in nominal income where there is 4% inflation as fair, despite the two being almost equivalent in real terms. This result is consistent with the 'Myopic Loss Aversion theory'. Furthermore, the money illusion means nominal changes in price can influence demand even if real prices have remained constant.
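A quick arithmetic check shows why the two scenarios are nearly equivalent in real terms. The 2% and 4% figures come from the text above; the calculation is simply the standard adjustment of a nominal change for inflation.

```python
def real_income_change(nominal_change, inflation):
    """Real change in income after adjusting a nominal change for inflation."""
    return (1 + nominal_change) / (1 + inflation) - 1

# 2% nominal cut with no inflation vs. 2% nominal raise with 4% inflation
print(f"{real_income_change(-0.02, 0.00):+.3%}")   # -2.000%
print(f"{real_income_change(+0.02, 0.04):+.3%}")   # -1.923%
```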
Explanations and implications
Explanations of money illusion generally describe the phenomenon in terms of heuristics. Nominal prices provide a convenient rule of thumb for determining value and real prices are only calculated if they seem highly salient (e.g. in periods of hyperinflation or in long term contracts).
Some have suggested that money illusion implies that the negative relationship between inflation and unemployment described by the Phillips curve might hold, contrary to more recent macroeconomic theories such as the "expectations-augmented Phillips curve". If workers use their nominal wage as a reference point when evaluating wage offers, firms can keep real wages relatively lower in a period of high inflation as workers accept the seemingly high nominal wage increase. These lower real wages would allow firms to hire more workers in periods of high inflation.
Money illusion is believed to be instrumental in the Friedmanian version of the Phillips curve. However, money illusion alone is not enough to explain the mechanism underlying this Phillips curve; it requires two additional assumptions. First, prices respond differently to modified demand conditions: an increased aggregate demand exerts its influence on commodity prices sooner than it does on labour market prices. Therefore, the drop in unemployment is, after all, the result of decreasing real wages, and an accurate judgement of the situation by employees is the only reason for the return to the initial (natural) rate of unemployment (i.e. the end of the money illusion, when they finally recognize the actual dynamics of prices and wages). The other (arbitrary) assumption refers to a special informational asymmetry: whatever employees are unaware of in connection with the changes in (real and nominal) wages and prices can be clearly observed by employers. The new classical version of the Phillips curve was aimed at removing the puzzling additional presumptions, but its mechanism still requires money illusion.
- Behavioural economics
- Fiscal Illusion
- Framing (social science)
- Homo economicus
- Map-territory relation
- Fisher, Irving (1928), The Money Illusion, New York: Adelphi Company
- Marianne Bertrand; Sendhil Mullainathan & Eldar Shafir (May 2004). "A behavioral-economics view of poverty". The American Economic Review. 94 (2): 419–423. doi:10.1257/0002828041302019. JSTOR 3592921.
- Shafir, E.; Diamond, P. A.; Tversky, A. (1997), "On Money Illusion", Quarterly Journal of Economics, 112 (2): 341–374, doi:10.1162/003355397555208
- Benartzi, Shlomo; Thaler, Richard H. (1995). "Myopic Loss Aversion and the Equity Premium Puzzle". Quarterly Journal of Economics. 110 (1): 73–92. CiteSeerX 10.1.1.353.2566. doi:10.2307/2118511. JSTOR 2118511. S2CID 55030273.
- Patinkin, Don (1969), "The Chicago Tradition, The Quantity Theory, And Friedman", Journal of Money, Credit and Banking, 1 (1): 46–70, doi:10.2307/1991376, JSTOR 1991376
- Romer, David (2006), Advanced macroeconomics, McGraw-Hill, p. 252, ISBN 9780072877304
- Galbács, Peter (2015). The Theory of New Classical Macroeconomics. A Positive Critique. Contributions to Economics. Heidelberg/New York/Dordrecht/London: Springer. doi:10.1007/978-3-319-17578-2. ISBN 978-3-319-17578-2.
- Fehr, Ernst; Tyran, Jean-Robert (2001), "Does Money Illusion Matter?" (PDF), American Economic Review, 91 (5): 1239–1262, doi:10.1257/aer.91.5.1239, hdl:20.500.11850/146556, JSTOR 2677924, S2CID 15342301
- Howitt, P. (1987), "money illusion", The New Palgrave: A Dictionary of Economics, 3, London: Macmillan, pp. 518–519, ISBN 978-0-333-37235-7
- Weber, Bernd; Rangel, Antonio; Wibral, Matthias; Falk, Armin (2009), "The medial prefrontal cortex exhibits money illusion", PNAS, 106 (13): 5025–5028, Bibcode:2009PNAS..106.5025W, doi:10.1073/pnas.0901490106, PMC 2664018, PMID 19307555
- Akerlof, George A.; Shiller, Robert J. (2009), Animal Spirits, Princeton University Press, pp. 41–50, ISBN 9780691142333
- Thaler, Richard H. (1997). "Irving Fisher: Modern Behavioral Economist". The American Economic Review. 87 (2), Papers and Proceedings of the Hundred and Fourth Annual Meeting of the American Economic Association (May 1997).
- Huw Dixon (2008), "New Keynesian Economics", The New Palgrave Dictionary of Economics.
Source: https://en.wikipedia.org/wiki/Money_illusion
Permian–Triassic extinction event
The Permian–Triassic (P-T, P-Tr) extinction event, also known as the End-Permian Extinction and colloquially as the Great Dying, formed the boundary between the Permian and Triassic geologic periods, as well as between the Paleozoic and Mesozoic eras, approximately 251.9 million years ago. It is the Earth's most severe known extinction event, with the extinction of 57% of biological families, 83% of genera, 81% of marine species and 70% of terrestrial vertebrate species. It was the largest known mass extinction of insects.
There is evidence for one to three distinct pulses, or phases, of extinction. The scientific consensus is that the causes of extinction were elevated temperatures and widespread oceanic anoxia due to the large amounts of carbon dioxide that were emitted by the eruption of the Siberian Traps. It has also been proposed that the burning of hydrocarbon deposits, including oil and coal, by the Siberian Traps and emissions of methane by methanogenic microorganisms contributed to the extinction.
The speed of recovery from the extinction is disputed. Some scientists estimate that it took 10 million years (until the Middle Triassic), due both to the severity of the extinction and because grim conditions returned periodically for another 5 million years. However, studies in Bear Lake County, near Paris, Idaho, showed a relatively quick rebound in a localized Early Triassic marine ecosystem, taking around 2 million years to recover, suggesting that the impact of the extinction may have been felt less severely in some areas than others.
Previously, it was thought that rock sequences spanning the Permian–Triassic boundary were too few and contained too many gaps for scientists to reliably determine its details. However, it is now possible to date the extinction with millennial precision. U–Pb zircon dates from five volcanic ash beds from the Global Stratotype Section and Point for the Permian–Triassic boundary at Meishan, China, establish a high-resolution age model for the extinction – allowing exploration of the links between global environmental perturbation, carbon cycle disruption, mass extinction, and recovery at millennial timescales. The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 million years ago, a duration of 60 ± 48 thousand years. A large (approximately 0.9%), abrupt global decrease in the ratio of the stable isotope carbon-13 to that of carbon-12 coincides with this extinction, and is sometimes used to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating. Further evidence for environmental change around the P–Tr boundary suggests an 8 °C (14 °F) rise in temperature, and an increase in CO2 levels by 2000 ppm (for comparison, the concentration immediately before the industrial revolution was 280 ppm, and the amount today is about 415 ppm). There is also evidence of increased ultraviolet radiation reaching the earth, causing the mutation of plant spores.
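The quoted duration follows from the two boundary dates, and its uncertainty can be reproduced with simple error propagation. This is a back-of-the-envelope check that assumes the two age uncertainties are independent, which is a simplification of the original geochronological analysis.

```python
import math

start, start_err = 251.941, 0.037   # Ma, onset of the extinction interval
end, end_err     = 251.880, 0.031   # Ma, end of the extinction interval

duration_kyr = (start - end) * 1000                      # convert Myr to kyr
err_kyr = math.sqrt(start_err**2 + end_err**2) * 1000    # add uncertainties in quadrature

print(f"duration ≈ {duration_kyr:.0f} ± {err_kyr:.0f} kyr")   # ≈ 61 ± 48 kyr
```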
It has been suggested that the Permian–Triassic boundary is associated with a sharp increase in the abundance of marine and terrestrial fungi, caused by the sharp increase in the amount of dead plants and animals fed upon by the fungi. For a while this "fungal spike" was used by some paleontologists to identify the Permian–Triassic boundary in rocks that are unsuitable for radiometric dating or lack suitable index fossils, but even the proposers of the fungal spike hypothesis pointed out that "fungal spikes" may have been a repeating phenomenon created by the post-extinction ecosystem in the earliest Triassic. The very idea of a fungal spike has been criticized on several grounds, including: Reduviasporonites, the most common supposed fungal spore, may be a fossilized alga; the spike did not appear worldwide; and in many places it did not fall on the Permian–Triassic boundary. The reduviasporonites may even represent a transition to a lake-dominated Triassic world rather than an earliest Triassic zone of death and decay in some terrestrial fossil beds. Newer chemical evidence agrees better with a fungal origin for Reduviasporonites, diluting these critiques.
Uncertainty exists regarding the duration of the overall extinction and about the timing and duration of various groups' extinctions within the greater process. Some evidence suggests that there were multiple extinction pulses or that the extinction was spread out over a few million years, with a sharp peak in the last million years of the Permian. Statistical analyses of some highly fossiliferous strata in Meishan, Zhejiang Province in southeastern China, suggest that the main extinction was clustered around one peak. Recent research shows that different groups became extinct at different times; for example, while difficult to date absolutely, ostracod and brachiopod extinctions were separated by 670,000 to 1.17 million years. In a well-preserved sequence in east Greenland, the decline of animals is concentrated in a period 10,000 to 60,000 years long, with plants taking an additional several hundred thousand years to show the full impact of the event.
An older theory, still supported in some recent papers, is that there were two major extinction pulses 9.4 million years apart, separated by a period of extinctions well above the background level, and that the final extinction killed off only about 80% of marine species alive at that time while the other losses occurred during the first pulse or the interval between pulses. According to this theory one of these extinction pulses occurred at the end of the Guadalupian epoch of the Permian. For example, all but one of the surviving dinocephalian genera died out at the end of the Guadalupian, as did the Verbeekinidae, a family of large-size fusuline foraminifera. The impact of the end-Guadalupian extinction on marine organisms appears to have varied between locations and between taxonomic groups — brachiopods and corals had severe losses.
| Marine extinctions | Genera extinct | Notes |
|---|---|---|
| Eurypterids | 100% | May have become extinct shortly before the P–Tr boundary |
| Trilobites | 100% | In decline since the Devonian; only 2 genera living before the extinction |
| Brachiopods | 96% | Orthids and productids died out |
| Bryozoans | 79% | Fenestrates, trepostomes, and cryptostomes died out |
| Acanthodians | 100% | In decline since the Devonian, with only one living family |
| Anthozoans | 96% | Tabulate and rugose corals died out |
| Blastoids | 100% | May have become extinct shortly before the P–Tr boundary |
| Crinoids | 98% | Inadunates and camerates died out |
| Ammonites | 97% | Goniatites died out |
| Foraminiferans | 97% | Fusulinids died out, but were almost extinct before the catastrophe |
Marine invertebrates suffered the greatest losses during the P–Tr extinction. Evidence of this was found in samples from south China sections at the P–Tr boundary. Here, 286 out of 329 marine invertebrate genera disappear within the final two sedimentary zones containing conodonts from the Permian. The decrease in diversity was probably caused by a sharp increase in extinctions, rather than a decrease in speciation.
The extinction primarily affected organisms with calcium carbonate skeletons, especially those reliant on stable CO2 levels to produce their skeletons. These organisms were susceptible to the effects of the ocean acidification that resulted from increased atmospheric CO2.
Among benthic organisms the extinction event multiplied background extinction rates, and therefore caused maximum species loss to taxa that had a high background extinction rate (by implication, taxa with a high turnover). The extinction rate of marine organisms was catastrophic.
Surviving marine invertebrate groups included articulate brachiopods (those with a hinge), which had undergone a slow decline in numbers since the P–Tr extinction; the Ceratitida order of ammonites; and crinoids ("sea lilies"), which very nearly became extinct but later became abundant and diverse.
The groups with the highest survival rates generally had active control of circulation, elaborate gas exchange mechanisms, and light calcification; more heavily calcified organisms with simpler breathing apparatuses suffered the greatest loss of species diversity. In the case of the brachiopods, at least, surviving taxa were generally small, rare members of a formerly diverse community.
The ammonoids, which had been in a long-term decline for the 30 million years since the Roadian (middle Permian), suffered a selective extinction pulse 10 million years before the main event, at the end of the Capitanian stage. In this preliminary extinction, which greatly reduced disparity, or the range of different ecological guilds, environmental factors were apparently responsible. Diversity and disparity fell further until the P–Tr boundary; the extinction here (P–Tr) was non-selective, consistent with a catastrophic initiator. During the Triassic, diversity rose rapidly, but disparity remained low.
The range of morphospace occupied by the ammonoids, that is, their range of possible forms, shapes or structures, became more restricted as the Permian progressed. A few million years into the Triassic, the original range of ammonoid structures was once again reoccupied, but the parameters were now shared differently among clades.
The Permian had great diversity in insect and other invertebrate species, including the largest insects ever to have existed. The end-Permian is the largest known mass extinction of insects; according to some sources, it is the only insect mass extinction. Eight or nine insect orders became extinct and ten more were greatly reduced in diversity. Palaeodictyopteroids (insects with piercing and sucking mouthparts) began to decline during the mid-Permian; these extinctions have been linked to a change in flora. The greatest decline occurred in the Late Permian and was probably not directly caused by weather-related floral transitions.
Most fossil insect groups found after the Permian–Triassic boundary differ significantly from those before: Of Paleozoic insect groups, only the Glosselytrodea, Miomoptera, and Protorthoptera have been discovered in deposits from after the extinction. The caloneurodeans, monurans, paleodictyopteroids, protelytropterans, and protodonates became extinct by the end of the Permian. In well-documented Late Triassic deposits, fossils overwhelmingly consist of modern fossil insect groups.
Plant ecosystem response
The geological record of terrestrial plants is sparse and based mostly on pollen and spore studies. Plants are relatively immune to mass extinction, with the impact of all the major mass extinctions "insignificant" at a family level. Even the reduction observed in species diversity (of 50%) may be mostly due to taphonomic processes. However, a massive rearrangement of ecosystems does occur, with plant abundances and distributions changing profoundly and all the forests virtually disappearing; the Palaeozoic flora scarcely survived this extinction.
At the P–Tr boundary, the dominant floral groups changed, with many groups of land plants entering abrupt decline, such as Cordaites (gymnosperms) and Glossopteris (seed ferns). Dominant gymnosperm genera were replaced post-boundary by lycophytes—extant lycophytes are recolonizers of disturbed areas.
Palynological or pollen studies from East Greenland of sedimentary rock strata laid down during the extinction period indicate dense gymnosperm woodlands before the event. At the same time that marine invertebrate macrofauna declined, these large woodlands died out and were followed by a rise in diversity of smaller herbaceous plants including Lycopodiophyta, both Selaginellales and Isoetales. Later, other groups of gymnosperms again become dominant but again suffered major die-offs. These cyclical flora shifts occurred a few times over the course of the extinction period and afterward. These fluctuations of the dominant flora between woody and herbaceous taxa indicate chronic environmental stress resulting in a loss of most large woodland plant species. The successions and extinctions of plant communities do not coincide with the shift in δ13C values but occurred many years after. The recovery of gymnosperm forests took 4–5 million years.
No coal deposits are known from the Early Triassic, and those in the Middle Triassic are thin and low-grade. This "coal gap" has been explained in many ways. It has been suggested that new, more aggressive fungi, insects, and vertebrates evolved and killed vast numbers of trees. These decomposers themselves suffered heavy losses of species during the extinction and are not considered a likely cause of the coal gap. It could simply be that all coal-forming plants were rendered extinct by the P–Tr extinction and that it took 10 million years for a new suite of plants to adapt to the moist, acid conditions of peat bogs. Abiotic factors (factors not caused by organisms), such as decreased rainfall or increased input of clastic sediments, may also be to blame.
On the other hand, the lack of coal may simply reflect the scarcity of all known sediments from the Early Triassic. Coal-producing ecosystems, rather than disappearing, may have moved to areas where we have no sedimentary record for the Early Triassic. For example, in eastern Australia a cold climate had been the norm for a long period, with a peat mire ecosystem adapted to these conditions. Approximately 95% of these peat-producing plants went locally extinct at the P–Tr boundary; coal deposits in Australia and Antarctica disappear significantly before the P–Tr boundary.
There is enough evidence to indicate that over two thirds of terrestrial labyrinthodont amphibians, sauropsid ("reptile") and therapsid ("proto-mammal") families became extinct. Large herbivores suffered the heaviest losses.
All Permian anapsid reptiles died out except the procolophonids (although testudines have morphologically-anapsid skulls, they are now thought to have separately evolved from diapsid ancestors). Pelycosaurs died out before the end of the Permian. Too few Permian diapsid fossils have been found to support any conclusion about the effect of the Permian extinction on diapsids (the "reptile" group from which lizards, snakes, crocodilians, and dinosaurs (including birds) evolved).
The groups that survived suffered extremely heavy losses of species and some terrestrial vertebrate groups very nearly became extinct at the end of the Permian. Some of the surviving groups did not persist for long past this period, but others that barely survived went on to produce diverse and long-lasting lineages. However, it took 30 million years for the terrestrial vertebrate fauna to fully recover both numerically and ecologically.
An analysis of marine fossils from the Permian's final Changhsingian stage found that marine organisms with a low tolerance for hypercapnia (high concentration of carbon dioxide) had high extinction rates, and the most tolerant organisms had very slight losses.
The most vulnerable marine organisms were those that produced calcareous hard parts (from calcium carbonate) and had low metabolic rates and weak respiratory systems, notably calcareous sponges, rugose and tabulate corals, calcite-depositing brachiopods, bryozoans, and echinoderms; about 81% of such genera became extinct. Close relatives without calcareous hard parts suffered only minor losses, such as sea anemones, from which modern corals evolved. Animals with high metabolic rates, well-developed respiratory systems, and non-calcareous hard parts had negligible losses except for conodonts, in which 33% of genera died out.
This pattern is consistent with what is known about the effects of hypoxia, a shortage but not total absence of oxygen. However, hypoxia cannot have been the only killing mechanism for marine organisms. Nearly all of the continental shelf waters would have had to become severely hypoxic to account for the magnitude of the extinction, but such a catastrophe would make it difficult to explain the very selective pattern of the extinction. Mathematical models of the Late Permian and Early Triassic atmospheres show a significant but protracted decline in atmospheric oxygen levels, with no acceleration near the P–Tr boundary. Minimum atmospheric oxygen levels in the Early Triassic are never less than present-day levels and so the decline in oxygen levels does not match the temporal pattern of the extinction.
Marine organisms are more sensitive to changes in CO2 (carbon dioxide) levels than terrestrial organisms for a variety of reasons. CO2 is 28 times more soluble in water than is oxygen. Marine animals normally function with lower concentrations of CO2 in their bodies than land animals, as the removal of CO2 in air-breathing animals is impeded by the need for the gas to pass through the respiratory system's membranes (lungs' alveoli, tracheae, and the like), even when CO2 diffuses more easily than oxygen. In marine organisms, relatively modest but sustained increases in CO2 concentrations hamper the synthesis of proteins, reduce fertilization rates, and produce deformities in calcareous hard parts. In addition, an increase in CO2 concentration is inevitably linked to ocean acidification, consistent with the preferential extinction of heavily calcified taxa and other signals in the rock record that suggest a more acidic ocean. The decrease in ocean pH is calculated to be up to 0.7 units.
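Because pH is a logarithmic scale, a drop of up to 0.7 units corresponds to roughly a five-fold increase in hydrogen-ion concentration. The one-line check below uses only the 0.7 figure from the text and the definition of pH.

```python
delta_ph = 0.7                    # maximum estimated drop in ocean pH (from the text)
h_ion_factor = 10 ** delta_ph     # pH = -log10([H+]), so a 0.7 drop multiplies [H+] by 10**0.7
print(round(h_ion_factor, 1))     # 5.0
```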
It is difficult to analyze extinction and survival rates of land organisms in detail because few terrestrial fossil beds span the Permian–Triassic boundary. Triassic insects are very different from those of the Permian, but a gap in the insect fossil record spans approximately 15 million years from the late Permian to early Triassic. The best-known record of vertebrate changes across the Permian–Triassic boundary occurs in the Karoo Supergroup of South Africa, but statistical analyses have so far not produced clear conclusions. However, analysis of the fossil river deposits of the floodplains indicate a shift from meandering to braided river patterns, indicating an abrupt drying of the climate. The climate change may have taken as little as 100,000 years, prompting the extinction of the unique Glossopteris flora and its herbivores, followed by the carnivorous guild. End-Permian extinctions did not occur at an instantaneous time horizon; particularly, floral extinction was delayed in time.
In the wake of the extinction event, the ecological structure of present-day biosphere evolved from the stock of surviving taxa. In the sea, the "Modern Evolutionary Fauna" became dominant over elements of the "Palaeozoic Evolutionary Fauna". Typical taxa of shelly benthic faunas were now bivalves, snails, sea urchins and Malacostraca, whereas bony fishes and marine reptiles diversified in the pelagic zone. On land, dinosaurs and mammals arose in the course of the Triassic. The profound change in the taxonomic composition was partly a result of the selectivity of the extinction event, which affected some taxa (e.g., brachiopods) more severely than others (e.g., bivalves). However, recovery was also differential between taxa. Some survivors became extinct some million years after the extinction event without having rediversified (dead clade walking, e.g. the snail family Bellerophontidae), whereas others rose to dominance over geologic times (e.g., bivalves).
Changes in marine ecosystems
Marine post-extinction faunas were mostly species-poor and dominated by few disaster species such as the bivalves Claraia and Unionites. Seafloor communities maintained a comparatively low diversity until the end of the Early Triassic, approximately 4 million years after the extinction event. This slow recovery stands in remarkable contrast with the quick recovery seen in nektonic organisms such as ammonoids, which exceeded pre-extinction diversities already two million years after the crisis. The relative delay in the recovery of benthic organisms has been attributed to widespread anoxia, but high abundances of benthic species contradict this explanation. More recent work suggests that the pace of recovery was intrinsically driven by the intensity of competition among species, which drives rates of niche differentiation and speciation. Accordingly, low levels of interspecific competition in seafloor communities that are dominated by primary consumers correspond to slow rates of diversification and high levels of interspecific competition among nektonic secondary and tertiary consumers to high diversification rates. Whereas most marine communities were fully recovered by the Middle Triassic, global marine diversity reached pre-extinction values no earlier than the Middle Jurassic, approximately 75 million years after the extinction event.
Prior to the extinction, about two-thirds of marine animals were sessile and attached to the seafloor. During the Mesozoic, only about half of the marine animals were sessile while the rest were free-living. Analysis of marine fossils from the period indicated a decrease in the abundance of sessile epifaunal suspension feeders such as brachiopods and sea lilies and an increase in more complex mobile species such as snails, sea urchins and crabs.
Before the Permian mass extinction event, both complex and simple marine ecosystems were equally common. After the recovery from the mass extinction, the complex communities outnumbered the simple communities by nearly three to one, and the increase in predation pressure led to the Mesozoic Marine Revolution.
Bivalves were fairly rare before the P–Tr extinction but became numerous and diverse in the Triassic, and one group, the rudist clams, became the Mesozoic's main reef-builders. Some researchers think much of the change happened in the 5 million years between the two major extinction pulses.
Crinoids ("sea lilies") suffered a selective extinction, resulting in a decrease in the variety of their forms. Their ensuing adaptive radiation was brisk, and resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent.
Lystrosaurus, a pig-sized herbivorous dicynodont therapsid, constituted as much as 90% of some earliest Triassic land vertebrate fauna. Smaller carnivorous cynodont therapsids also survived, including the ancestors of mammals. In the Karoo region of southern Africa, the therocephalians Tetracynodon, Moschorhinus and Ictidosuchoides survived, but do not appear to have been abundant in the Triassic.
Archosaurs (which included the ancestors of dinosaurs and crocodilians) were initially rarer than therapsids, but they began to displace therapsids in the mid-Triassic. In the mid to late Triassic, the dinosaurs evolved from one group of archosaurs, and went on to dominate terrestrial ecosystems during the Jurassic and Cretaceous. This "Triassic Takeover" may have contributed to the evolution of mammals by forcing the surviving therapsids and their mammaliform successors to live as small, mainly nocturnal insectivores; nocturnal life probably forced at least the mammaliforms to develop fur and higher metabolic rates, while losing part of the differential color-sensitive retinal receptors that reptilians and birds preserved.
Some temnospondyl amphibians made a relatively quick recovery, in spite of nearly becoming extinct. Mastodonsaurus and trematosaurians were the main aquatic and semiaquatic predators during most of the Triassic, some preying on tetrapods and others on fish.
Land vertebrates took an unusually long time to recover from the P–Tr extinction; palaeontologist Michael Benton estimated the recovery was not complete until 30 million years after the extinction, i.e. not until the Late Triassic, in which dinosaurs, pterosaurs, crocodiles, archosaurs, amphibians, and mammal forms were abundant and diverse.
Theories about cause
Pinpointing the exact causes of the Permian–Triassic extinction event is difficult, mostly because it occurred over 250 million years ago, and since then much of the evidence that would have pointed to the cause has been destroyed or is concealed deep within the Earth under many layers of rock. The sea floor is also completely recycled every 200 million years by the ongoing process of plate tectonics and seafloor spreading, leaving no useful indications beneath the ocean.
Yet, scientists have gathered significant evidence for causes, and several mechanisms have been proposed. The proposals include both catastrophic and gradual processes (similar to those theorized for the Cretaceous–Paleogene extinction event).
- The catastrophic group includes one or more large bolide impact events, increased volcanism, and sudden release of methane from the seafloor, either due to dissociation of methane hydrate deposits or metabolism of organic carbon deposits by methanogenic microbes.
- The gradual group includes sea level change, increasing anoxia, and increasing aridity.
Any hypothesis about the cause must explain the selectivity of the event, which affected organisms with calcium carbonate skeletons most severely; the long period (4 to 6 million years) before recovery started, and the minimal extent of biological mineralization (despite inorganic carbonates being deposited) once the recovery began.
Evidence that an impact event may have caused the Cretaceous–Paleogene extinction has led to speculation that similar impacts may have been the cause of other extinction events, including the P–Tr extinction, and thus to a search for evidence of impacts at the times of other extinctions, such as large impact craters of the appropriate age.
Reported evidence for an impact event from the P–Tr boundary level includes rare grains of shocked quartz in Australia and Antarctica; fullerenes trapping extraterrestrial noble gases; meteorite fragments in Antarctica; and grains rich in iron, nickel, and silicon, which may have been created by an impact. However, the accuracy of most of these claims has been challenged. For example, quartz from Graphite Peak in Antarctica, once considered "shocked", has been re-examined by optical and transmission electron microscopy. The observed features were concluded to be due not to shock, but rather to plastic deformation, consistent with formation in a tectonic environment such as volcanism.
An impact crater on the seafloor would be evidence of a possible cause of the P–Tr extinction, but such a crater would by now have disappeared. As 70% of the Earth's surface is currently sea, an asteroid or comet fragment is now perhaps more than twice as likely to hit the ocean as it is to hit land. However, Earth's oldest ocean-floor crust is only 200 million years old, as it is continually being destroyed and renewed by spreading and subduction. Furthermore, craters produced by very large impacts may be masked by extensive flood basalting from below after the crust is punctured or weakened. Yet, subduction should not be entirely accepted as an explanation for the lack of evidence: as with the K-T event, an ejecta blanket stratum rich in siderophilic elements (such as iridium) would be expected in formations from the time.
A large impact might have triggered other mechanisms of extinction described below, such as the Siberian Traps eruptions at either an impact site or the antipode of an impact site. The abruptness of an impact also explains why more species did not rapidly evolve to survive, as would be expected if the Permian–Triassic event had been slower and less global than a meteorite impact.
Possible impact sites
Several possible impact craters have been proposed as the site of an impact causing the P–Tr extinction, including the 250 km (160 mi) Bedout structure off the northwest coast of Australia and the hypothesized 480 km (300 mi) Wilkes Land crater of East Antarctica. An impact has not been proved in either case, and the idea has been widely criticized. The Wilkes Land sub-ice geophysical feature is of very uncertain age, possibly later than the Permian–Triassic extinction.
The 40 km (25 mi) Araguainha crater in Brazil has been most recently dated to 254.7 ± 2.5 million years ago, overlapping with estimates for the Permo-Triassic boundary. Much of the local rock was oil shale. The estimated energy released by the Araguainha impact is insufficient to have directly caused the global mass extinction, but the colossal local earth tremors would have released huge amounts of oil and gas from the shattered rock. The resulting sudden global warming might have precipitated the Permian–Triassic extinction event.
A 2017 paper by Rampino, Rocca and Presser (after a 1992 abstract by Rampino) noted the discovery of a circular gravity anomaly near the Falkland Islands which might correspond to an impact crater with a diameter of 250 km (160 mi), as supported by seismic and magnetic evidence. Estimates for the age of the structure range up to 250 million years old. This would be substantially larger than the well-known 180 km (110 mi) Chicxulub impact crater associated with a later extinction. However, Dave McCarthy and colleagues from the British Geological Survey illustrated that the gravity anomaly is not circular and also that the seismic data presented by Rocca, Rampino and Baez Presser did not cross the proposed crater or provide any evidence for an impact crater.
The final stages of the Permian had two flood basalt events. A smaller one, the Emeishan Traps in China, occurred at the same time as the end-Guadalupian extinction pulse, in an area close to the equator at the time. The flood basalt eruptions that produced the Siberian Traps constituted one of the largest known volcanic events on Earth and covered over 2,000,000 square kilometres (770,000 sq mi) with lava. The date of the Siberian Traps eruptions and the extinction event are in good agreement.
The Emeishan and Siberian Traps eruptions may have caused dust clouds and acid aerosols, which would have blocked out sunlight and thus disrupted photosynthesis both on land and in the photic zone of the ocean, causing food chains to collapse. The eruptions may also have caused acid rain when the aerosols washed out of the atmosphere. That may have killed land plants and mollusks and planktonic organisms which had calcium carbonate shells. The eruptions would also have emitted carbon dioxide, causing global warming. When all of the dust clouds and aerosols washed out of the atmosphere, the excess carbon dioxide would have remained and the warming would have proceeded without any mitigating effects.
The Siberian Traps had unusual features that made them even more dangerous. Pure flood basalts produce fluid, low-viscosity lava, and do not hurl debris into the atmosphere. It appears, however, that 20% of the output of the Siberian Traps eruptions was pyroclastic (consisted of ash and other debris thrown high into the atmosphere), increasing the short-term cooling effect. The basalt lava erupted or intruded into carbonate rocks and into sediments that were in the process of forming large coal beds, both of which would have emitted large amounts of carbon dioxide, leading to stronger global warming after the dust and aerosols settled.
In January 2011, a team, led by Stephen Grasby of the Geological Survey of Canada—Calgary, reported evidence that volcanism caused massive coal beds to ignite, possibly releasing more than 3 trillion tons of carbon. The team found ash deposits in deep rock layers near what is now the Buchanan Lake Formation. According to their article, "coal ash dispersed by the explosive Siberian Trap eruption would be expected to have an associated release of toxic elements in impacted water bodies where fly ash slurries developed.... Mafic megascale eruptions are long-lived events that would allow significant build-up of global ash clouds." In a statement, Grasby said, "In addition to these volcanoes causing fires through coal, the ash it spewed was highly toxic and was released in the land and water, potentially contributing to the worst extinction event in earth history." In 2013, a team led by Q.Y. Yang reported that the total amounts of important volatiles emitted from the Siberian Traps were 8.5 × 10⁷ Tg CO2, 4.4 × 10⁶ Tg CO, 7.0 × 10⁶ Tg H2S and 6.8 × 10⁷ Tg SO2; the data support the popular notion that the end-Permian mass extinction on the Earth was caused by the emission of enormous amounts of volatiles from the Siberian Traps into the atmosphere.
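For a sense of scale, the volatile totals can be converted from teragrams to gigatonnes (1 Tg = 10⁶ t, so 1,000 Tg = 1 Gt). The sketch below uses only the figures quoted above; the final line applies the standard CO2-to-carbon molar mass ratio of 12/44.

```python
TG_PER_GT = 1_000                 # 1 Gt = 1,000 Tg

emissions_tg = {                  # Siberian Traps totals as quoted in the text
    "CO2": 8.5e7,
    "CO":  4.4e6,
    "H2S": 7.0e6,
    "SO2": 6.8e7,
}

for gas, tg in emissions_tg.items():
    print(f"{gas}: {tg / TG_PER_GT:,.0f} Gt")
# CO2: 85,000 Gt   CO: 4,400 Gt   H2S: 7,000 Gt   SO2: 68,000 Gt

carbon_gt = (emissions_tg["CO2"] / TG_PER_GT) * 12 / 44   # carbon content of the CO2 alone
print(f"carbon in the CO2: ~{carbon_gt:,.0f} Gt C")       # ~23,182 Gt C
```

By comparison, the "3 trillion tons of carbon" attributed above to ignited coal beds corresponds to 3,000 Gt C.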
In 2015, evidence and a timeline indicated the extinction was caused by events in the large igneous province of the Siberian Traps. Carbon dioxide levels prior to and after the eruptions are poorly constrained, but may have jumped from between 500 and 4000 ppm prior to the extinction event to around 8000 ppm after the extinction.
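For a rough sense of what such a jump in CO2 means, the quoted concentrations can be fed into the common logarithmic approximation for CO2 radiative forcing, ΔF ≈ 5.35 ln(C/C0) W/m². That formula is a general climate-science rule of thumb, not something taken from the studies above, and since the Permian values are poorly constrained this is only an order-of-magnitude illustration.

```python
import math

def co2_forcing_w_m2(c_new_ppm: float, c_old_ppm: float) -> float:
    """Common logarithmic approximation for CO2 radiative forcing (W/m^2)."""
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

# End-points taken from the ranges quoted above (illustrative only).
print(round(co2_forcing_w_m2(8000, 500), 1))   # ~14.8 W/m^2 if pre-event CO2 was ~500 ppm
print(round(co2_forcing_w_m2(8000, 4000), 1))  # ~3.7 W/m^2 if pre-event CO2 was ~4000 ppm
```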
In 2020 scientists reconstructed the mechanisms that led to the extinction event in a biogeochemical model, showed the consequences of the greenhouse effect on the marine environment and reported that the mass extinction can be traced back to volcanic CO2 emissions. Further evidence – based on paired coronene-mercury spikes – for a volcanic combustion cause of the mass extinction was published in 2020.
Methane hydrate gasification
Scientists have found worldwide evidence of a swift decrease of about 1% in the 13C/12C isotope ratio in carbonate rocks from the end-Permian. This is the first, largest, and most rapid of a series of negative and positive excursions (decreases and increases in 13C/12C ratio) that continues until the isotope ratio abruptly stabilised in the middle Triassic, followed soon afterwards by the recovery of calcifying life forms (organisms that use calcium carbonate to build hard parts such as shells).
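For readers unfamiliar with the notation used in this and the following paragraphs, δ13C expresses the 13C/12C ratio of a sample relative to a reference standard (conventionally VPDB) in parts per thousand; on this scale, a 1% fall in the 13C/12C ratio corresponds to a shift of about −10‰:

```latex
\delta^{13}\mathrm{C} \;=\; \left( \frac{\bigl(^{13}\mathrm{C}/^{12}\mathrm{C}\bigr)_{\mathrm{sample}}}{\bigl(^{13}\mathrm{C}/^{12}\mathrm{C}\bigr)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
```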
- Gases from volcanic eruptions have a 13C/12C ratio about 0.5 to 0.8% below standard (δ13C about −0.5 to −0.8%), but an assessment made in 1995 concluded that the amount required to produce a reduction of about 1.0% worldwide requires eruptions greater by orders of magnitude than any for which evidence has been found. (However, this analysis addressed only CO2 produced by the magma itself, not from interactions with carbon bearing sediments, as later proposed.)
- A reduction in organic activity would extract 12C more slowly from the environment and leave more of it to be incorporated into sediments, thus reducing the 13C/12C ratio. Biochemical processes preferentially use the lighter isotopes since chemical reactions are ultimately driven by electromagnetic forces between atoms and lighter isotopes respond more quickly to these forces, but a study of a smaller drop of 0.3 to 0.4% in 13C/12C (δ13C −3 to −4 ‰) at the Paleocene-Eocene Thermal Maximum (PETM) concluded that even transferring all the organic carbon (in organisms, soils, and dissolved in the ocean) into sediments would be insufficient: even such a large burial of material rich in 12C would not have produced the 'smaller' drop in the 13C/12C ratio of the rocks around the PETM.
- Buried sedimentary organic matter has a 13C/12C ratio 2.0 to 2.5% below normal (δ13C −2.0 to −2.5%). Theoretically, if the sea level fell sharply, shallow marine sediments would be exposed to oxidation. But 6500–8400 gigatons (1 gigaton = 10⁹ metric tons) of organic carbon would have to be oxidized and returned to the ocean-atmosphere system within less than a few hundred thousand years to reduce the 13C/12C ratio by 1.0%, which is not thought to be a realistic possibility. Moreover, sea levels were rising rather than falling at the time of the extinction.
- Rather than a sudden decline in sea level, intermittent periods of ocean-bottom hyperoxia and anoxia (high-oxygen and low- or zero-oxygen conditions) may have caused the 13C/12C ratio fluctuations in the Early Triassic; and global anoxia may have been responsible for the end-Permian blip. The continents of the end-Permian and early Triassic were more clustered in the tropics than they are now, and large tropical rivers would have dumped sediment into smaller, partially enclosed ocean basins at low latitudes. Such conditions favor oxic and anoxic episodes; oxic/anoxic conditions would result in a rapid release/burial, respectively, of large amounts of organic carbon, which has a low 13C/12C ratio because biochemical processes use the lighter isotopes more. That or another organic-based reason may have been responsible for both that and a late Proterozoic/Cambrian pattern of fluctuating 13C/12C ratios.
Other hypotheses include mass oceanic poisoning releasing vast amounts of CO2, and a long-term reorganisation of the global carbon cycle.
Prior to consideration of the inclusion of roasting carbonate sediments by volcanism, the only proposed mechanism sufficient to cause a global 1% reduction in the 13C/12C ratio was the release of methane from methane clathrates. Carbon-cycle models confirm that it would have had enough effect to produce the observed reduction. Methane clathrates, also known as methane hydrates, consist of methane molecules trapped in cages of water molecules. The methane, produced by methanogens (microscopic single-celled organisms), has a 13C/12C ratio about 6.0% below normal (δ13C −6.0%). At the right combination of pressure and temperature, it gets trapped in clathrates fairly close to the surface of permafrost and in much larger quantities at continental margins (continental shelves and the deeper seabed close to them). Oceanic methane hydrates are usually found buried in sediments where the seawater is at least 300 m (980 ft) deep. They can be found up to about 2,000 m (6,600 ft) below the sea floor, but usually only about 1,100 m (3,600 ft) below the sea floor.
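A back-of-envelope isotope mass balance shows why a source as 13C-depleted as clathrate methane is attractive here: the lighter the added carbon, the less of it is needed to produce a given excursion. The sketch below uses simple two-component mixing; the reservoir size and the −10‰ target are illustrative assumptions for a modern-style ocean-atmosphere carbon pool, not values taken from the studies cited above.

```python
def added_carbon_gt(reservoir_gt_c: float, delta_initial: float,
                    delta_target: float, delta_source: float) -> float:
    """Gt of carbon that must be added to shift a well-mixed reservoir from
    delta_initial to delta_target, by simple two-component isotope mixing."""
    return reservoir_gt_c * (delta_initial - delta_target) / (delta_target - delta_source)

# Illustrative assumptions (not from the article): ~40,000 Gt C of exchangeable
# ocean-atmosphere carbon near 0 permil, shifted to -10 permil (a ~1% drop in 13C/12C).
print(added_carbon_gt(40_000, 0.0, -10.0, -60.0))  # methane carbon (~ -60 permil): ~8,000 Gt C
print(added_carbon_gt(40_000, 0.0, -10.0, -25.0))  # typical organic carbon: ~26,700 Gt C
```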
The area covered by lava from the Siberian Traps eruptions is about twice as large as was originally thought, and most of the additional area was shallow sea at the time. The seabed probably contained methane hydrate deposits, and the lava caused the deposits to dissociate, releasing vast quantities of methane. A vast release of methane might cause significant global warming since methane is a very powerful greenhouse gas. Strong evidence suggests the global temperatures increased by about 6 °C (10.8 °F) near the equator and therefore by more at higher latitudes: a sharp decrease in oxygen isotope ratios (18O/16O); the extinction of Glossopteris flora (Glossopteris and plants that grew in the same areas), which needed a cold climate, with its replacement by floras typical of lower paleolatitudes.
However, the pattern of isotope shifts expected to result from a massive release of methane does not match the patterns seen throughout the Early Triassic. Not only would such a cause require the release of five times as much methane as postulated for the PETM, but it would also have to be reburied at an unrealistically high rate to account for the rapid increases in the 13C/12C ratio (episodes of high positive δ13C) throughout the early Triassic before it was released several times again.
Evidence for widespread ocean anoxia (severe deficiency of oxygen) and euxinia (presence of hydrogen sulfide) is found from the Late Permian to the Early Triassic. Throughout most of the Tethys and Panthalassic Oceans, evidence for anoxia, including fine laminations in sediments, small pyrite framboids, high uranium/thorium ratios, and biomarkers for green sulfur bacteria, appear at the extinction event. However, in some sites, including Meishan, China, and eastern Greenland, evidence for anoxia precedes the extinction. Biomarkers for green sulfur bacteria, such as isorenieratane, the diagenetic product of isorenieratene, are widely used as indicators of photic zone euxinia because green sulfur bacteria require both sunlight and hydrogen sulfide to survive. Their abundance in sediments from the P-T boundary indicates hydrogen sulfide was present even in shallow waters.
This spread of toxic, oxygen-depleted water would have devastated marine life, causing widespread die-offs. Models of ocean chemistry suggest that anoxia and euxinia were closely associated with hypercapnia (high levels of carbon dioxide). This suggests that poisoning from hydrogen sulfide, anoxia, and hypercapnia acted together as a killing mechanism. Hypercapnia best explains the selectivity of the extinction, but anoxia and euxinia probably contributed to the high mortality of the event. The persistence of anoxia through the Early Triassic may explain the slow recovery of marine life after the extinction. Models also show that anoxic events can cause catastrophic hydrogen sulfide emissions into the atmosphere (see below).
The sequence of events leading to anoxic oceans may have been triggered by carbon dioxide emissions from the eruption of the Siberian Traps. In that scenario, warming from the enhanced greenhouse effect would reduce the solubility of oxygen in seawater, causing the concentration of oxygen to decline. Increased weathering of the continents due to warming and the acceleration of the water cycle would increase the riverine flux of phosphate to the ocean. The phosphate would have supported greater primary productivity in the surface oceans. The increase in organic matter production would have caused more organic matter to sink into the deep ocean, where its respiration would further decrease oxygen concentrations. Once anoxia became established, it would have been sustained by a positive feedback loop because deep water anoxia tends to increase the recycling efficiency of phosphate, leading to even higher productivity.
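The feedback loop described in this paragraph can be caricatured with a toy iteration: warming and higher productivity draw down oxygen, and once bottom waters become anoxic, phosphorus burial becomes inefficient, so more phosphate is recycled, productivity rises further, and oxygen falls further. The parameter values below are invented purely to illustrate the runaway behaviour; this is not any published biogeochemical model.

```python
# Conceptual toy iteration of the anoxia-phosphorus feedback (illustrative only).
o2, phosphate = 1.0, 1.0   # normalised deep-water oxygen and phosphate inventory
riverine_p = 0.05          # assumed extra phosphate input from enhanced weathering

for step in range(20):
    anoxic = o2 < 0.3
    burial_efficiency = 0.05 if anoxic else 0.5   # anoxia suppresses phosphorus burial
    productivity = phosphate                       # productivity scales with phosphate supply
    phosphate += riverine_p - burial_efficiency * 0.1 * phosphate
    o2 = max(0.0, o2 - 0.1 * productivity + 0.05)  # respiration demand vs. resupply
    print(f"step {step:2d}  O2={o2:.2f}  P={phosphate:.2f}  anoxic={anoxic}")
```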
Hydrogen sulfide emissions
A severe anoxic event at the end of the Permian would have allowed sulfate-reducing bacteria to thrive, causing the production of large amounts of hydrogen sulfide in the anoxic ocean. Upwelling of this water may have released massive hydrogen sulfide emissions into the atmosphere, poisoning terrestrial plants and animals and severely weakening the ozone layer, exposing much of the life that remained to fatal levels of UV radiation. Indeed, biomarker evidence for anaerobic photosynthesis by Chlorobiaceae (green sulfur bacteria) from the Late Permian into the Early Triassic indicates that hydrogen sulfide did upwell into shallow waters, because these bacteria are restricted to the photic zone and use sulfide as an electron donor.
The hypothesis has the advantage of explaining the mass extinction of plants, which would have added to the methane levels and should otherwise have thrived in an atmosphere with a high level of carbon dioxide. Fossil spores from the end-Permian further support the theory: many show deformities that could have been caused by ultraviolet radiation, which would have been more intense after hydrogen sulfide emissions weakened the ozone layer.
In the mid-Permian (during the Kungurian age of the Permian's Cisuralian epoch), Earth's major continental plates joined, forming a supercontinent called Pangaea, which was surrounded by the superocean, Panthalassa.
Oceanic circulation and atmospheric weather patterns during the mid-Permian produced seasonal monsoons near the coasts and an arid climate in the vast continental interior.
As the supercontinent formed, the ecologically diverse and productive coastal areas shrank. Shallow aquatic environments were eliminated, exposing formerly protected organisms of the rich continental shelves to increased environmental volatility.
Pangaea's formation depleted marine life at near catastrophic rates. However, Pangaea's effect on land extinctions is thought to have been smaller. In fact, the advance of the therapsids and the increase in their diversity are attributed to the late Permian, when Pangaea's global effect was thought to have peaked.
While Pangaea's formation certainly initiated a long period of marine extinction, its impact on the "Great Dying" and the end of the Permian is uncertain.
A hypothesis published in 2014 posits that a genus of anaerobic methanogenic archaea known as Methanosarcina was responsible for the event. Three lines of evidence suggest that these microbes acquired a new metabolic pathway via gene transfer at about that time, enabling them to efficiently metabolize acetate into methane. That would have led to their exponential reproduction, allowing them to rapidly consume vast deposits of organic carbon that had accumulated in the marine sediment. The result would have been a sharp buildup of methane and carbon dioxide in the Earth's oceans and atmosphere, in a manner that may be consistent with the 13C/12C isotopic record. Massive volcanism facilitated this process by releasing large amounts of nickel, a scarce metal which is a cofactor for enzymes involved in producing methane. On the other hand, in the canonical Meishan sections, the nickel concentration increases somewhat after the δ13C values have begun to fall.
Combination of causes
Possible causes supported by strong evidence appear to describe a sequence of catastrophes, each worse than the last: the Siberian Traps eruptions were bad enough alone, but because they occurred near coal beds and the continental shelf, they also triggered very large releases of carbon dioxide and methane. The resultant global warming may have caused perhaps the most severe anoxic event in the oceans' history: according to this theory, the oceans became so anoxic that anaerobic sulfur-reducing organisms dominated the chemistry of the oceans and caused massive emissions of toxic hydrogen sulfide.
However, there may be some weak links in this chain of events: the changes in the 13C/12C ratio expected to result from a massive release of methane do not match the patterns seen throughout the early Triassic; and the types of oceanic thermohaline circulation that may have existed at the end of the Permian are not likely to have supported deep-sea anoxia.
- Extinction events
- List of possible impact structures on Earth
- Silurian hypothesis
- Rohde, R.A. & Muller, R.A. (2005). "Cycles in fossil diversity". Nature. 434 (7030): 209–210. Bibcode:2005Natur.434..208R. doi:10.1038/nature03339. PMID 15758998. S2CID 32520208.
- McLoughlin, Steven (8 January 2021). "Age and Paleoenvironmental Significance of the Frazer Beach Member—A New Lithostratigraphic Unit Overlying the End-Permian Extinction Horizon in the Sydney Basin, Australia". Frontiers in Earth Science. 8 (600976). Retrieved 26 March 2021.
- Algeo, Thomas J. (5 February 2012). "The P-T Extinction was a Slow Death". Astrobiology Magazine.
- Dirson Jian Li (18 December 2012). "The tectonic cause of mass extinctions and the genomic contribution to biodiversification". Quantitative Biology. arXiv:1212.4229. Bibcode:2012arXiv1212.4229L.
- ""Great Dying" lasted 200,000 years". National Geographic. 23 November 2011. Retrieved 1 April 2014.
- St. Fleur, Nicholas (16 February 2017). "After Earth's worst mass extinction, life rebounded rapidly, fossils suggest". The New York Times. Retrieved 17 February 2017.
- Jurikova, Hana; Gutjahr, Marcus; Wallmann, Klaus; Flögel, Sascha; Liebetrau, Volker; Posenato, Renato; Angiolini, Lucia; Garbelli, Claudio; Brand, Uwe; Wiedenbeck, Michael; Eisenhauer, Anton (October 19, 2020). "Permian–Triassic mass extinction pulses driven by major marine carbon cycle perturbations". Nature Geoscience. 13 (11): 745–750. doi:10.1038/s41561-020-00646-4. ISSN 1752-0908.
- Stanley, Steven M. (2016-10-18). "Estimates of the magnitudes of major marine mass extinctions in earth history". Proceedings of the National Academy of Sciences. 113 (42): E6325–E6334. doi:10.1073/pnas.1613094113. ISSN 0027-8424. PMC 5081622. PMID 27698119.
- Benton M J (2005). When Life Nearly Died: The greatest mass extinction of all time. London: Thames & Hudson. ISBN 978-0-500-28573-2.
- Carl T. Bergstrom; Lee Alan Dugatkin (2012). Evolution. Norton. p. 515. ISBN 978-0-393-92592-0.
- Sahney S, Benton MJ (2008). "Recovery from the most profound mass extinction of all time". Proceedings of the Royal Society B. 275 (1636): 759–765. doi:10.1098/rspb.2007.1370. PMC 2596898. PMID 18198148.
- Jin YG, Wang Y, Wang W, Shang QH, Cao CQ, Erwin DH (2000). "Pattern of marine mass extinction near the Permian–Triassic boundary in south China". Science. 289 (5478): 432–436. Bibcode:2000Sci...289..432J. doi:10.1126/science.289.5478.432. PMID 10903200.
- Yin H, Zhang K, Tong J, Yang Z, Wu S (2001). "The Global Stratotype Section and Point (GSSP) of the Permian-Triassic Boundary". Episodes. 24 (2): 102–114. doi:10.18814/epiiugs/2001/v24i2/004.
- Yin HF, Sweets WC, Yang ZY, Dickins JM (1992). "Permo-Triassic events in the eastern Tethys–an overview". In Sweet WC (ed.). Permo-Triassic events in the eastern Tethys: stratigraphy, classification, and relations with the western Tethys. Cambridge, UK: Cambridge University Press. pp. 1–7. ISBN 978-0-521-54573-0.
- Darcy E. Ogdena & Norman H. Sleep (2011). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 109 (1): 59–62. Bibcode:2012PNAS..109...59O. doi:10.1073/pnas.1118675109. PMC 3252959. PMID 22184229.
- Kaiho, Kunio; Aftabuzzaman, Md.; Jones, David S.; Tian, Li (2020-11-04). "Pulsed volcanic combustion events coincident with the end-Permian terrestrial disturbance and the following global crisis". Geology. 49 (3): 289–293. doi:10.1130/G48022.1. ISSN 0091-7613.
- Rothman, D.H.; Fournier, G.P.; French, K.L.; Alm, E.J.; Boyle, E.A.; Cao, C.; Summons, R.E. (2014-03-31). "Methanogenic burst in the end-Permian carbon cycle". Proceedings of the National Academy of Sciences. 111 (15): 5462–7. Bibcode:2014PNAS..111.5462R. doi:10.1073/pnas.1318106111. PMC 3992638. PMID 24706773. – Lay summary: Chandler, David L. (March 31, 2014). "Ancient whodunit may be solved: Methane-producing microbes did it!". Science Daily.
- "It took Earth ten million years to recover from greatest mass extinction". ScienceDaily. 27 May 2012. Retrieved 28 May 2012.
- Brayard, Arnaud; Krumenacker, L. J.; Botting, Joseph P.; Jenks, James F.; Bylund, Kevin G.; Fara, Emmanuel; Vennin, Emmanuelle; Olivier, Nicolas; Goudemand, Nicolas; Saucède, Thomas; Charbonnier, Sylvain; Romano, Carlo; Doguzhaeva, Larisa; Thuy, Ben; Hautmann, Michael; Stephen, Daniel A.; Thomazo, Christophe; Escarguel, Gilles (15 February 2017). "Unexpected Early Triassic marine ecosystem and the rise of the Modern evolutionary fauna". Science Advances. 3 (2): e1602159. Bibcode:2017SciA....3E2159B. doi:10.1126/sciadv.1602159. PMC 5310825. PMID 28246643.
- Payne, J.L.; Lehrmann, D.J.; Wei, J.; Orchard, M.J.; Schrag, D.P.; Knoll, A.H. (2004). "Large Perturbations of the Carbon Cycle During Recovery from the End-Permian Extinction" (PDF). Science. 305 (5683): 506–9. Bibcode:2004Sci...305..506P. CiteSeerX 10.1.1.582.9406. doi:10.1126/science.1097023. PMID 15273391. S2CID 35498132.
- McElwain, J. C.; Punyasena, S. W. (2007). "Mass extinction events and the plant fossil record". Trends in Ecology & Evolution. 22 (10): 548–557. doi:10.1016/j.tree.2007.09.003. PMID 17919771.
- Retallack, G. J.; Veevers, J. J.; Morante, R. (1996). "Global coal gap between Permian–Triassic extinctions and middle Triassic recovery of peat forming plants". GSA Bulletin. 108 (2): 195–207. Bibcode:1996GSAB..108..195R. doi:10.1130/0016-7606(1996)108<0195:GCGBPT>2.3.CO;2.
- Erwin, D.H (1993). The Great Paleozoic Crisis: Life and Death in the Permian. New York: Columbia University Press. ISBN 978-0-231-07467-4.
- Burgess, S.D. (2014). "High-precision timeline for Earth's most severe extinction". PNAS. 111 (9): 3316–3321. Bibcode:2014PNAS..111.3316B. doi:10.1073/pnas.1317692111. PMC 3948271. PMID 24516148.
- Magaritz M (1989). "13C minima follow extinction events: A clue to faunal radiation". Geology. 17 (4): 337–340. Bibcode:1989Geo....17..337M. doi:10.1130/0091-7613(1989)017<0337:CMFEEA>2.3.CO;2.
- Krull SJ, Retallack JR (2000). "13C depth profiles from paleosols across the Permian–Triassic boundary: Evidence for methane release". GSA Bulletin. 112 (9): 1459–1472. Bibcode:2000GSAB..112.1459K. doi:10.1130/0016-7606(2000)112<1459:CDPFPA>2.0.CO;2. ISSN 0016-7606.
- Dolenec T, Lojen S, Ramovs A (2001). "The Permian–Triassic boundary in Western Slovenia (Idrijca Valley section): Magnetostratigraphy, stable isotopes, and elemental variations". Chemical Geology. 175 (1): 175–190. Bibcode:2001ChGeo.175..175D. doi:10.1016/S0009-2541(00)00368-5.
- Musashi M, Isozaki Y, Koike T, Kreulen R (2001). "Stable carbon isotope signature in mid-Panthalassa shallow-water carbonates across the Permo–Triassic boundary: Evidence for 13C-depleted ocean". Earth and Planetary Science Letters. 193 (1–2): 9–20. Bibcode:2001E&PSL.191....9M. doi:10.1016/S0012-821X(01)00398-3.
- "Daily CO2". Mauna Loa Observatory.
- Visscher H, Brinkhuis H, Dilcher DL, Elsik WC, Eshet Y, Looy CW, Rampino MR, Traverse A (1996). "The terminal Paleozoic fungal event: Evidence of terrestrial ecosystem destabilization and collapse". Proceedings of the National Academy of Sciences. 93 (5): 2155–2158. Bibcode:1996PNAS...93.2155V. doi:10.1073/pnas.93.5.2155. PMC 39926. PMID 11607638.
- Foster, C.B.; Stephenson, M.H.; Marshall, C.; Logan, G.A.; Greenwood, P.F. (2002). "A revision of Reduviasporonites Wilson 1962: Description, illustration, comparison and biological affinities". Palynology. 26 (1): 35–58. doi:10.2113/0260035.
- López-Gómez, J. & Taylor, E.L. (2005). "Permian-Triassic transition in Spain: A multidisciplinary approach". Palaeogeography, Palaeoclimatology, Palaeoecology. 229 (1–2): 1–2. doi:10.1016/j.palaeo.2005.06.028.
- Looy CV, Twitchett RJ, Dilcher DL, van Konijnenburg-Van Cittert JH, Visscher H (2005). "Life in the end-Permian dead zone". Proceedings of the National Academy of Sciences. 98 (4): 7879–7883. Bibcode:2001PNAS...98.7879L. doi:10.1073/pnas.131218098. PMC 35436. PMID 11427710.
- Ward PD, Botha J, Buick R, De Kock MO, Erwin DH, Garrison GH, Kirschvink JL, Smith R (2005). "Abrupt and gradual extinction among late Permian land vertebrates in the Karoo Basin, South Africa" (PDF). Science. 307 (5710): 709–714. Bibcode:2005Sci...307..709W. CiteSeerX 10.1.1.503.2065. doi:10.1126/science.1107068. PMID 15661973. S2CID 46198018.
- Retallack, G.J.; Smith, R.M.H.; Ward, P.D. (2003). "Vertebrate extinction across Permian-Triassic boundary in Karoo Basin, South Africa". Bulletin of the Geological Society of America. 115 (9): 1133–1152. Bibcode:2003GSAB..115.1133R. doi:10.1130/B25215.1.
- Sephton, M.A.; Visscher, H.; Looy, C.V.; Verchovsky, A.B.; Watson, J.S. (2009). "Chemical constitution of a Permian-Triassic disaster species". Geology. 37 (10): 875–878. Bibcode:2009Geo....37..875S. doi:10.1130/G30096A.1.
- Rampino MR, Prokoph A, Adler A (2000). "Tempo of the end-Permian event: High-resolution cyclostratigraphy at the Permian–Triassic boundary". Geology. 28 (7): 643–646. Bibcode:2000Geo....28..643R. doi:10.1130/0091-7613(2000)28<643:TOTEEH>2.0.CO;2. ISSN 0091-7613.
- Wang, S.C.; Everson, P.J. (2007). "Confidence intervals for pulsed mass extinction events". Paleobiology. 33 (2): 324–336. doi:10.1666/06056.1. S2CID 2729020.
- Twitchett RJ, Looy CV, Morante R, Visscher H, Wignall PB (2001). "Rapid and synchronous collapse of marine and terrestrial ecosystems during the end-Permian biotic crisis". Geology. 29 (4): 351–354. Bibcode:2001Geo....29..351T. doi:10.1130/0091-7613(2001)029<0351:RASCOM>2.0.CO;2. ISSN 0091-7613.
- Retallack, G.J.; Metzger, C.A.; Greaver, T.; Jahren, A.H.; Smith, R.M.H.; Sheldon, N.D. (November–December 2006). "Middle-Late Permian mass extinction on land". Bulletin of the Geological Society of America. 118 (11–12): 1398–1411. Bibcode:2006GSAB..118.1398R. doi:10.1130/B26011.1.
- Stanley SM, Yang X (1994). "A double mass extinction at the end of the Paleozoic Era". Science. 266 (5189): 1340–1344. Bibcode:1994Sci...266.1340S. doi:10.1126/science.266.5189.1340. PMID 17772839. S2CID 39256134.
- Ota, A & Isozaki, Y. (March 2006). "Fusuline biotic turnover across the Guadalupian–Lopingian (Middle–Upper Permian) boundary in mid-oceanic carbonate buildups: Biostratigraphy of accreted limestone in Japan". Journal of Asian Earth Sciences. 26 (3–4): 353–368. Bibcode:2006JAESc..26..353O. doi:10.1016/j.jseaes.2005.04.001.
- Shen, S. & Shi, G.R. (2002). "Paleobiogeographical extinction patterns of Permian brachiopods in the Asian-western Pacific region". Paleobiology. 28 (4): 449–463. doi:10.1666/0094-8373(2002)028<0449:PEPOPB>2.0.CO;2. ISSN 0094-8373.
- Wang, X-D & Sugiyama, T. (December 2000). "Diversity and extinction patterns of Permian coral faunas of China". Lethaia. 33 (4): 285–294. doi:10.1080/002411600750053853.
- Racki G (1999). "Silica-secreting biota and mass extinctions: survival processes and patterns". Palaeogeography, Palaeoclimatology, Palaeoecology. 154 (1–2): 107–132. Bibcode:1999PPP...154..107R. doi:10.1016/S0031-0182(99)00089-9.
- Bambach, R.K.; Knoll, A.H.; Wang, S.C. (December 2004). "Origination, extinction, and mass depletions of marine diversity". Paleobiology. 30 (4): 522–542. doi:10.1666/0094-8373(2004)030<0522:OEAMDO>2.0.CO;2. ISSN 0094-8373.
- Knoll AH (2004). "Biomineralization and evolutionary history". In Dove PM, DeYoreo JJ, Weiner S (eds.). Reviews in Mineralogy and Geochemistry (PDF). Archived from the original (PDF) on 2010-06-20.
- Stanley, S.M. (2008). "Predation defeats competition on the seafloor". Paleobiology. 34 (1): 1–21. doi:10.1666/07026.1. S2CID 83713101. Retrieved 2008-05-13.
- Stanley, S.M. (2007). "An Analysis of the History of Marine Animal Diversity". Paleobiology. 33 (sp6): 1–55. doi:10.1666/06020.1. S2CID 86014119.
- Erwin, D.H. (1993). The great Paleozoic crisis; Life and death in the Permian. Columbia University Press. ISBN 978-0-231-07467-4.
- McKinney, M.L. (1987). "Taxonomic selectivity and continuous variation in mass and background extinctions of marine taxa". Nature. 325 (6100): 143–145. Bibcode:1987Natur.325..143M. doi:10.1038/325143a0. S2CID 13473769.
- "Permian : The Marine Realm and The End-Permian Extinction". paleobiology.si.edu. Retrieved 2016-01-26.
- "Permian extinction". Encyclopædia Britannica. Retrieved 2016-01-26.
- Knoll, A.H.; Bambach, R.K.; Canfield, D.E.; Grotzinger, J.P. (1996). "Comparative Earth history and Late Permian mass extinction". Science. 273 (5274): 452–457. Bibcode:1996Sci...273..452K. doi:10.1126/science.273.5274.452. PMID 8662528. S2CID 35958753.
- Leighton, L.R.; Schneider, C.L. (2008). "Taxon characteristics that promote survivorship through the Permian–Triassic interval: transition from the Paleozoic to the Mesozoic brachiopod fauna". Paleobiology. 34 (1): 65–79. doi:10.1666/06082.1. S2CID 86843206.
- Villier, L.; Korn, D. (October 2004). "Morphological Disparity of Ammonoids and the Mark of Permian Mass Extinctions". Science. 306 (5694): 264–266. Bibcode:2004Sci...306..264V. doi:10.1126/science.1102127. ISSN 0036-8075. PMID 15472073. S2CID 17304091.
- Saunders, W. B.; Greenfest-Allen, E.; Work, D. M.; Nikolaeva, S. V. (2008). "Morphologic and taxonomic history of Paleozoic ammonoids in time and morphospace". Paleobiology. 34 (1): 128–154. doi:10.1666/07053.1. S2CID 83650272.
- Labandeira, Conrad (1 January 2005), "The fossil record of insect extinction: New approaches and future directions", American Entomologist, 51: 14–29, doi:10.1093/ae/51.1.14
- Labandeira CC, Sepkoski JJ (1993). "Insect diversity in the fossil record". Science. 261 (5119): 310–315. Bibcode:1993Sci...261..310L. CiteSeerX 10.1.1.496.1576. doi:10.1126/science.11536548. PMID 11536548.
- Sole RV, Newman M (2003). "Extinctions and Biodiversity in the Fossil Record". In Canadell JG, Mooney HA (eds.). Encyclopedia of Global Environmental Change, The Earth System. Biological and Ecological Dimensions of Global Environmental Change. 2. New York: Wiley. pp. 297–391. ISBN 978-0-470-85361-0.
- "The Dino Directory – Natural History Museum".
- Cascales-Miñana, B.; Cleal, C. J. (2011). "Plant fossil record and survival analyses". Lethaia. 45: 71–82. doi:10.1111/j.1502-3931.2011.00262.x.
- Retallack GJ (1995). "Permian–Triassic life crisis on land". Science. 267 (5194): 77–80. Bibcode:1995Sci...267...77R. doi:10.1126/science.267.5194.77. PMID 17840061. S2CID 42308183.
- Looy CV, Brugman WA, Dilcher DL, Visscher H (1999). "The delayed resurgence of equatorial forests after the Permian–Triassic ecologic crisis". Proceedings of the National Academy of Sciences of the United States of America. 96 (24): 13857–13862. Bibcode:1999PNAS...9613857L. doi:10.1073/pnas.96.24.13857. PMC 24155. PMID 10570163.
- Michaelsen P (2002). "Mass extinction of peat-forming plants and the effect on fluvial styles across the Permian–Triassic boundary, northern Bowen Basin, Australia". Palaeogeography, Palaeoclimatology, Palaeoecology. 179 (3–4): 173–188. Bibcode:2002PPP...179..173M. doi:10.1016/S0031-0182(01)00413-8.
- Maxwell, W.D. (1992). "Permian and Early Triassic extinction of non-marine tetrapods". Palaeontology. 35: 571–583.
- Erwin, D.H. (1990). "The End-Permian Mass Extinction". Annual Review of Ecology and Systematics. 21: 69–91. doi:10.1146/annurev.es.21.110190.000441.
- "Bristol University – News – 2008: Mass extinction".
- Knoll AH, Bambach RK, Payne JL, Pruss S, Fischer WW (2007). "Paleophysiology and end-Permian mass extinction" (PDF). Earth and Planetary Science Letters. 256 (3–4): 295–313. Bibcode:2007E&PSL.256..295K. doi:10.1016/j.epsl.2007.02.018. Retrieved 2008-07-04.
- Payne, J.; Turchyn, A.; Paytan, A.; Depaolo, D.; Lehrmann, D.; Yu, M.; Wei, J. (2010). "Calcium isotope constraints on the end-Permian mass extinction". Proceedings of the National Academy of Sciences of the United States of America. 107 (19): 8543–8548. Bibcode:2010PNAS..107.8543P. doi:10.1073/pnas.0914065107. PMC 2889361. PMID 20421502.
- Clarkson, M.; Kasemann, S.; Wood, R.; Lenton, T.; Daines, S.; Richoz, S.; Ohnemueller, F.; Meixner, A.; Poulton, S.; Tipper, E. (2015-04-10). "Ocean acidification and the Permo-Triassic mass extinction" (PDF). Science. 348 (6231): 229–232. Bibcode:2015Sci...348..229C. doi:10.1126/science.aaa0193. hdl:10871/20741. PMID 25859043. S2CID 28891777.
- Smith, R.M.H. (16 November 1999). "Changing fluvial environments across the Permian-Triassic boundary in the Karoo Basin, South Africa and possible causes of tetrapod extinctions". Palaeogeography, Palaeoclimatology, Palaeoecology. 117 (1–2): 81–104. Bibcode:1995PPP...117...81S. doi:10.1016/0031-0182(94)00119-S.
- Chinsamy-Turan (2012). Anusuya (ed.). Forerunners of mammals : radiation, histology, biology. Bloomington: Indiana University Press. ISBN 978-0-253-35697-0.
- Visscher, Henk; Looy, Cindy V.; Collinson, Margaret E.; Brinkhuis, Henk; Cittert, Johanna H. A. van Konijnenburg-van; Kürschner, Wolfram M.; Sephton, Mark A. (2004-08-31). "Environmental mutagenesis during the end-Permian ecological crisis". Proceedings of the National Academy of Sciences of the United States of America. 101 (35): 12952–12956. Bibcode:2004PNAS..10112952V. doi:10.1073/pnas.0404472101. ISSN 0027-8424. PMC 516500. PMID 15282373.
- Sepkoski, J. John (8 February 2016). "A kinetic model of Phanerozoic taxonomic diversity. III. Post-Paleozoic families and mass extinctions". Paleobiology. 10 (2): 246–267. doi:10.1017/S0094837300008186.
- Romano, Carlo; Koot, Martha B.; Kogan, Ilja; Brayard, Arnaud; Minikh, Alla V.; Brinkmann, Winand; Bucher, Hugo; Kriwet, Jürgen (February 2016). "Permian-Triassic Osteichthyes (bony fishes): diversity dynamics and body size evolution". Biological Reviews. 91 (1): 106–147. doi:10.1111/brv.12161. PMID 25431138. S2CID 5332637.
- Scheyer, Torsten M.; Romano, Carlo; Jenks, Jim; Bucher, Hugo (19 March 2014). "Early Triassic Marine Biotic Recovery: The Predators' Perspective". PLOS ONE. 9 (3): e88987. Bibcode:2014PLoSO...988987S. doi:10.1371/journal.pone.0088987. PMC 3960099. PMID 24647136.
- Gould, S.J.; Calloway, C.B. (1980). "Clams and brachiopods—ships that pass in the night". Paleobiology. 6 (4): 383–396. doi:10.1017/S0094837300003572.
- Jablonski, D. (8 May 2001). "Lessons from the past: Evolutionary impacts of mass extinctions". Proceedings of the National Academy of Sciences. 98 (10): 5393–5398. Bibcode:2001PNAS...98.5393J. doi:10.1073/pnas.101092598. PMC 33224. PMID 11344284.
- Kaim, Andrzej; Nützel, Alexander (July 2011). "Dead bellerophontids walking — The short Mesozoic history of the Bellerophontoidea (Gastropoda)". Palaeogeography, Palaeoclimatology, Palaeoecology. 308 (1–2): 190–199. Bibcode:2011PPP...308..190K. doi:10.1016/j.palaeo.2010.04.008.
- Hautmann, Michael (29 September 2009). "The first scallop" (PDF). Paläontologische Zeitschrift. 84 (2): 317–322. doi:10.1007/s12542-009-0041-5. S2CID 84457522.
- Hautmann, Michael; Ware, David; Bucher, Hugo (August 2017). "Geologically oldest oysters were epizoans on Early Triassic ammonoids". Journal of Molluscan Studies. 83 (3): 253–260. doi:10.1093/mollus/eyx018.
- Hofmann, Richard; Hautmann, Michael; Brayard, Arnaud; Nützel, Alexander; Bylund, Kevin G.; Jenks, James F.; Vennin, Emmanuelle; Olivier, Nicolas; Bucher, Hugo; Sevastopulo, George (May 2014). "Recovery of benthic marine communities from the end-Permian mass extinction at the low latitudes of eastern Panthalassa" (PDF). Palaeontology. 57 (3): 547–589. doi:10.1111/pala.12076.
- Brayard, A.; Escarguel, G.; Bucher, H.; Monnet, C.; Bruhwiler, T.; Goudemand, N.; Galfetti, T.; Guex, J. (27 August 2009). "Good Genes and Good Luck: Ammonoid Diversity and the End-Permian Mass Extinction". Science. 325 (5944): 1118–1121. Bibcode:2009Sci...325.1118B. doi:10.1126/science.1174638. PMID 19713525. S2CID 1287762.
- Wignall, P. B.; Twitchett, R. J. (24 May 1996). "Oceanic Anoxia and the End Permian Mass Extinction". Science. 272 (5265): 1155–1158. Bibcode:1996Sci...272.1155W. doi:10.1126/science.272.5265.1155. PMID 8662450. S2CID 35032406.
- Hofmann, Richard; Hautmann, Michael; Bucher, Hugo (October 2015). "Recovery dynamics of benthic marine communities from the Lower Triassic Werfen Formation, northern Italy". Lethaia. 48 (4): 474–496. doi:10.1111/let.12121.
- Hautmann, Michael; Bagherpour, Borhan; Brosse, Morgane; Frisk, Åsa; Hofmann, Richard; Baud, Aymon; Nützel, Alexander; Goudemand, Nicolas; Bucher, Hugo; Brayard, Arnaud (September 2015). "Competition in slow motion: the unusual case of benthic marine communities in the wake of the end-Permian mass extinction". Palaeontology. 58 (5): 871–901. doi:10.1111/pala.12186.
- Friesenbichler, Evelyn; Hautmann, Michael; Nützel, Alexander; Urlichs, Max; Bucher, Hugo (24 July 2018). "Palaeoecology of Late Ladinian (Middle Triassic) benthic faunas from the Schlern/Sciliar and Seiser Alm/Alpe di Siusi area (South Tyrol, Italy)" (PDF). PalZ. 93 (1): 1–29. doi:10.1007/s12542-018-0423-7. S2CID 134192673.
- Friesenbichler, Evelyn; Hautmann, Michael; Grădinaru, Eugen; Bucher, Hugo; Brayard, Arnaud (12 October 2019). "A highly diverse bivalve fauna from a Bithynian (Anisian, Middle Triassic) ‐microbial buildup in North Dobrogea (Romania)" (PDF). Papers in Palaeontology. doi:10.1002/spp2.1286.
- Sepkoski, J. John (1997). "Biodiversity: Past, Present, and Future". Journal of Paleontology. 71 (4): 533–539. doi:10.1017/S0022336000040026. PMID 11540302.
- Wagner PJ, Kosnik MA, Lidgard S (2006). "Abundance Distributions Imply Elevated Complexity of Post-Paleozoic Marine Ecosystems". Science. 314 (5803): 1289–1292. Bibcode:2006Sci...314.1289W. doi:10.1126/science.1133795. PMID 17124319. S2CID 26957610.
- Clapham ME, Bottjer DJ, Shen S (2006). "Decoupled diversity and ecology during the end-Guadalupian extinction (late Permian)". Geological Society of America Abstracts with Programs. 38 (7): 117. Archived from the original on 2015-12-08. Retrieved 2008-03-28.
- Foote, M. (1999). "Morphological diversity in the evolutionary radiation of Paleozoic and post-Paleozoic crinoids". Paleobiology. 25 (sp1): 1–116. doi:10.1666/0094-8373(1999)25[1:MDITER]2.0.CO;2. ISSN 0094-8373. JSTOR 2666042.
- Baumiller, T. K. (2008). "Crinoid Ecological Morphology". Annual Review of Earth and Planetary Sciences. 36 (1): 221–249. Bibcode:2008AREPS..36..221B. doi:10.1146/annurev.earth.36.031207.124116.
- Botha, J. & Smith, R.M.H. (2007). "Lystrosaurus species composition across the Permo–Triassic boundary in the Karoo Basin of South Africa" (PDF). Lethaia. 40 (2): 125–137. doi:10.1111/j.1502-3931.2007.00011.x. Archived from the original (PDF) on 2008-09-10. Retrieved 2008-07-02.
- Benton, M.J. (2004). Vertebrate Paleontology. Blackwell Publishers. xii–452. ISBN 978-0-632-05614-9.
- Ruben, J.A. & Jones, T.D. (2000). "Selective Factors Associated with the Origin of Fur and Feathers". American Zoologist. 40 (4): 585–596. doi:10.1093/icb/40.4.585.
- Yates AM, Warren AA (2000). "The phylogeny of the 'higher' temnospondyls (Vertebrata: Choanata) and its implications for the monophyly and origins of the Stereospondyli". Zoological Journal of the Linnean Society. 128 (1): 77–121. doi:10.1111/j.1096-3642.2000.tb00650.x.
- Retallack GJ, Seyedolali A, Krull ES, Holser WT, Ambers CP, Kyte FT (1998). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia". Geology. 26 (11): 979–982. Bibcode:1998Geo....26..979R. doi:10.1130/0091-7613(1998)026<0979:SFEOIA>2.3.CO;2.
- Becker L, Poreda RJ, Basu AR, Pope KO, Harrison TM, Nicholson C, Iasky R (2004). "Bedout: a possible end-Permian impact crater offshore of northwestern Australia". Science. 304 (5676): 1469–1476. Bibcode:2004Sci...304.1469B. doi:10.1126/science.1093925. PMID 15143216. S2CID 17927307.
- Becker L, Poreda RJ, Hunt AG, Bunch TE, Rampino M (2001). "Impact event at the Permian–Triassic boundary: Evidence from extraterrestrial noble gases in fullerenes". Science. 291 (5508): 1530–1533. Bibcode:2001Sci...291.1530B. doi:10.1126/science.1057243. PMID 11222855. S2CID 45230096.
- Basu AR, Petaev MI, Poreda RJ, Jacobsen SB, Becker L (2003). "Chondritic meteorite fragments associated with the Permian–Triassic boundary in Antarctica". Science. 302 (5649): 1388–1392. Bibcode:2003Sci...302.1388B. doi:10.1126/science.1090852. PMID 14631038. S2CID 15912467.
- Kaiho K, Kajiwara Y, Nakano T, Miura Y, Kawahata H, Tazaki K, Ueshima M, Chen Z, Shi GR (2001). "End-Permian catastrophe by a bolide impact: Evidence of a gigantic release of sulfur from the mantle". Geology. 29 (9): 815–818. Bibcode:2001Geo....29..815K. doi:10.1130/0091-7613(2001)029<0815:EPCBAB>2.0.CO;2. ISSN 0091-7613.
- Farley KA, Mukhopadhyay S, Isozaki Y, Becker L, Poreda RJ (2001). "An extraterrestrial impact at the Permian–Triassic boundary?". Science. 293 (5539): 2343a–2343. doi:10.1126/science.293.5539.2343a. PMID 11577203.
- Koeberl C, Gilmour I, Reimold WU, Philippe Claeys P, Ivanov B (2002). "End-Permian catastrophe by bolide impact: Evidence of a gigantic release of sulfur from the mantle: Comment and Reply". Geology. 30 (9): 855–856. Bibcode:2002Geo....30..855K. doi:10.1130/0091-7613(2002)030<0855:EPCBBI>2.0.CO;2. ISSN 0091-7613.
- Isbell JL, Askin RA, Retallack GR (1999). "Search for evidence of impact at the Permian–Triassic boundary in Antarctica and Australia; discussion and reply". Geology. 27 (9): 859–860. Bibcode:1999Geo....27..859I. doi:10.1130/0091-7613(1999)027<0859:SFEOIA>2.3.CO;2.
- Koeberl K, Farley KA, Peucker-Ehrenbrink B, Sephton MA (2004). "Geochemistry of the end-Permian extinction event in Austria and Italy: No evidence for an extraterrestrial component". Geology. 32 (12): 1053–1056. Bibcode:2004Geo....32.1053K. doi:10.1130/G20907.1.
- Langenhorst F, Kyte FT, Retallack GJ (2005). "Reexamination of quartz grains from the Permian–Triassic boundary section at Graphite Peak, Antarctica" (PDF). Lunar and Planetary Science Conference XXXVI. Retrieved 2007-07-13.
- Jones AP, Price GD, Price NJ, DeCarli PS, Clegg RA (2002). "Impact induced melting and the development of large igneous provinces". Earth and Planetary Science Letters. 202 (3): 551–561. Bibcode:2002E&PSL.202..551J. CiteSeerX 10.1.1.469.3056. doi:10.1016/S0012-821X(02)00824-5.
- White RV (2002). "Earth's biggest 'whodunnit': unravelling the clues in the case of the end-Permian mass extinction" (PDF). Philosophical Transactions of the Royal Society of London. 360 (1801): 2963–2985. Bibcode:2002RSPTA.360.2963W. doi:10.1098/rsta.2002.1097. PMID 12626276. S2CID 18078072. Retrieved 2008-01-12.
- Hager, Bradford H. (2001). "Giant Impact Craters Lead To Flood Basalts: A Viable Model". CCNet 33/2001: Abstract 50470.
- Hagstrum, Jonathan T. (2001). "Large Oceanic Impacts As The Cause Of Antipodal Hotspots And Global Mass Extinctions". CCNet 33/2001: Abstract 50288.
- Frese, Ralph R. B. von; Potts, Laramie V.; Wells, Stuart B.; Gaya-Piqué, Luis-Ricardo; Golynsky, Alexander V.; Hernandez, Orlando; Kim, Jeong Woo; Kim, Hyung Rae; Hwang, Jong Sun (2006). "Permian–Triassic mascon in Antarctica" (PDF). American Geophysical Union, Fall Meeting 2007: Abstract T41A–08. Bibcode:2006AGUSM.T41A..08V.
- Frese, Ralph R. B. von; Potts, Laramie V.; Wells, Stuart B.; Leftwich, Timothy E.; Kim, Hyung Rae; Kim, Jeong Woo; Golynsky, Alexander V.; Hernandez, Orlando; Gaya-Piqué, Luis-Ricardo (25 February 2009). "GRACE gravity evidence for an impact basin in Wilkes Land, Antarctica" (PDF). Geochemistry, Geophysics, Geosystems. 10 (2). doi:10.1029/2008GC002149. ISSN 1525-2027.
- Tohver, E.; Lana, C.; Cawood, P.A.; Fletcher, I.R.; Jourdan, F.; Sherlock, S.; Rasmussen, B.; Trindade, R.I.F.; Yokoyama, E.; Filho, C.R. Souza; Marangoni, Y. (2012). "Geochronological constraints on the age of a Permo–Triassic impact event: U–Pb and 40Ar/39Ar results for the 40 km Araguainha structure of central Brazil". Geochimica et Cosmochimica Acta. 86: 214–227. Bibcode:2012GeCoA..86..214T. doi:10.1016/j.gca.2012.03.005.
- "Biggest extinction in history caused by climate-changing meteor". University of Western Australia University News, 31 July 2013. http://www.news.uwa.edu.au/201307315921/international/biggest-extinction-history-caused-climate-changing-meteor
- Rocca, M.; Rampino, M.; Baez Presser, J. (2017). "Geophysical evidence for a large impact structure on the Falkland (Malvinas) Plateau". Terra Nova. 29 (4): 233–237. Bibcode:2017TeNov..29..233R. doi:10.1111/ter.12269.
- McCarthy, Dave; Aldiss, Don; Arsenikos, Stavros; Stone, Phil; Richards, Phil (2017-08-24). "Comment on "Geophysical evidence for a large impact structure on the Falkland (Malvinas) Plateau"" (PDF). Terra Nova. 29 (6): 411–415. Bibcode:2017TeNov..29..411M. doi:10.1111/ter.12285. ISSN 0954-4879.
- Zhou MF, Malpas J, Song XY, Robinson PT, Sun M, Kennedy AK, Lesher CM, Keays RR (2002). "A temporal link between the Emeishan large igneous province (SW China) and the end-Guadalupian mass extinction". Earth and Planetary Science Letters. 196 (3–4): 113–122. Bibcode:2002E&PSL.196..113Z. doi:10.1016/S0012-821X(01)00608-2.
- Wignall, Paul B.; et al. (2009). "Volcanism, Mass Extinction, and Carbon Isotope Fluctuations in the Middle Permian of China". Science. 324 (5931): 1179–1182. Bibcode:2009Sci...324.1179W. doi:10.1126/science.1171956. PMID 19478179. S2CID 206519019.
- Andy Saunders; Marc Reichow (2009). "The Siberian Traps – Area and Volume". Retrieved 2009-10-18.
- Andy Saunders & Marc Reichow (January 2009). "The Siberian Traps and the End-Permian mass extinction: a critical review" (PDF). Chinese Science Bulletin. 54 (1): 20–37. Bibcode:2009ChSBu..54...20S. doi:10.1007/s11434-008-0543-7. hdl:2381/27540. S2CID 1736350.
- Reichow, MarcK.; Pringle, M.S.; Al'Mukhamedov, A.I.; Allen, M.B.; Andreichev, V.L.; Buslov, M.M.; Davies, C.E.; Fedoseev, G.S.; Fitton, J.G.; Inger, S.; Medvedev, A.Ya.; Mitchell, C.; Puchkov, V.N.; Safonova, I.Yu.; Scott, R.A.; Saunders, A.D. (2009). "The timing and extent of the eruption of the Siberian Traps large igneous province: Implications for the end-Permian environmental crisis" (PDF). Earth and Planetary Science Letters. 277 (1–2): 9–20. Bibcode:2009E&PSL.277....9R. doi:10.1016/j.epsl.2008.09.030. hdl:2381/4204.
- Kamo, SL (2003). "Rapid eruption of Siberian flood-volcanic rocks and evidence for coincidence with the Permian–Triassic boundary and mass extinction at 251 Ma". Earth and Planetary Science Letters. 214 (1–2): 75–91. Bibcode:2003E&PSL.214...75K. doi:10.1016/S0012-821X(03)00347-9.
- Dan Verango (January 24, 2011). "Ancient mass extinction tied to torched coal". USA Today.
- Stephen E. Grasby; Hamed Sanei & Benoit Beauchamp (January 23, 2011). "Catastrophic dispersion of coal fly ash into oceans during the latest Permian extinction". Nature Geoscience. 4 (2): 104–107. Bibcode:2011NatGe...4..104G. doi:10.1038/ngeo1069.
- "Researchers find smoking gun of world's biggest extinction; Massive volcanic eruption, burning coal and accelerated greenhouse gas choked out life". University of Calgary. January 23, 2011. Retrieved 2011-01-26.
- Yang, QY (2013). "The chemical compositions and abundances of volatiles in the Siberian large igneous province: Constraints on magmatic CO2 and SO2 emissions into the atmosphere". Chemical Geology. 339: 84–91. Bibcode:2013ChGeo.339...84T. doi:10.1016/j.chemgeo.2012.08.031.
- Burgess, Seth D.; Bowring, Samuel; Shen, Shu-zhong (2014-03-04). "High-precision timeline for Earth's most severe extinction". Proceedings of the National Academy of Sciences. 111 (9): 3316–3321. Bibcode:2014PNAS..111.3316B. doi:10.1073/pnas.1317692111. ISSN 0027-8424. PMC 3948271. PMID 24516148.
- Black, Benjamin A.; Weiss, Benjamin P.; Elkins-Tanton, Linda T.; Veselovskiy, Roman V.; Latyshev, Anton (2015-04-30). "Siberian Traps volcaniclastic rocks and the role of magma-water interactions". Geological Society of America Bulletin. 127 (9–10): B31108.1. Bibcode:2015GSAB..127.1437B. doi:10.1130/B31108.1. ISSN 0016-7606.
- Burgess, Seth D.; Bowring, Samuel A. (2015-08-01). "High-precision geochronology confirms voluminous magmatism before, during, and after Earth's most severe extinction". Science Advances. 1 (7): e1500470. Bibcode:2015SciA....1E0470B. doi:10.1126/sciadv.1500470. ISSN 2375-2548. PMC 4643808. PMID 26601239.
- Fischman, Josh. "Giant Eruptions and Giant Extinctions [Video]". Scientific American. Retrieved 2016-03-11.
- Cui, Ying; Kump, Lee R. (October 2015). "Global warming and the end-Permian extinction event: Proxy and modeling perspectives". Earth-Science Reviews. 149: 5–22. doi:10.1016/j.earscirev.2014.04.007.
- "Driver of the largest mass extinction in the history of the Earth identified". phys.org. Retrieved 8 November 2020.
- Jurikova, Hana; Gutjahr, Marcus; Wallmann, Klaus; Flögel, Sascha; Liebetrau, Volker; Posenato, Renato; Angiolini, Lucia; Garbelli, Claudio; Brand, Uwe; Wiedenbeck, Michael; Eisenhauer, Anton (November 2020). "Permian–Triassic mass extinction pulses driven by major marine carbon cycle perturbations". Nature Geoscience. 13 (11): 745–750. doi:10.1038/s41561-020-00646-4. ISSN 1752-0908. S2CID 224783993. Retrieved 8 November 2020.
- "Large volcanic eruption caused the largest mass extinction". phys.org. Retrieved 8 December 2020.
- Kaiho, Kunio; Aftabuzzaman, Md; Jones, David S.; Tian, Li (2020). "Pulsed volcanic combustion events coincident with the end-Permian terrestrial disturbance and the following global crisis". Geology. doi:10.1130/G48022.1. Retrieved 8 December 2020. Available under CC BY 4.0.
- Palfy J, Demeny A, Haas J, Htenyi M, Orchard MJ, Veto I (2001). "Carbon isotope anomaly at the Triassic– Jurassic boundary from a marine section in Hungary". Geology. 29 (11): 1047–1050. Bibcode:2001Geo....29.1047P. doi:10.1130/0091-7613(2001)029<1047:CIAAOG>2.0.CO;2. ISSN 0091-7613.
- Berner, R.A. (2002). "Examination of hypotheses for the Permo-Triassic boundary extinction by carbon cycle modeling". Proceedings of the National Academy of Sciences. 99 (7): 4172–4177. Bibcode:2002PNAS...99.4172B. doi:10.1073/pnas.032095199. PMC 123621. PMID 11917102.
- Dickens GR, O'Neil JR, Rea DK, Owen RM (1995). "Dissociation of oceanic methane hydrate as a cause of the carbon isotope excursion at the end of the Paleocene". Paleoceanography. 10 (6): 965–71. Bibcode:1995PalOc..10..965D. doi:10.1029/95PA02087.
- White, R. V. (2002). "Earth's biggest 'whodunnit': Unravelling the clues in the case of the end-Permian mass extinction". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 360 (1801): 2963–85. Bibcode:2002RSPTA.360.2963W. doi:10.1098/rsta.2002.1097. PMID 12626276. S2CID 18078072.
- Schrag DP, Berner RA, Hoffman PF, Halverson GP (2002). "On the initiation of a snowball Earth". Geochemistry, Geophysics, Geosystems. 3 (6): 1–21. Bibcode:2002GGG....3fQ...1S. doi:10.1029/2001GC000219. Preliminary abstract at Schrag, D.P. (June 2001). "On the initiation of a snowball Earth". Geological Society of America. Archived from the original on 2018-04-25. Retrieved 2008-04-20.
- Benton, M.J.; Twitchett, R.J. (2003). "How to kill (almost) all life: the end-Permian extinction event". Trends in Ecology & Evolution. 18 (7): 358–365. doi:10.1016/S0169-5347(03)00093-4.
- Dickens GR (2001). "The potential volume of oceanic methane hydrates with variable external conditions". Organic Geochemistry. 32 (10): 1179–1193. doi:10.1016/S0146-6380(01)00086-9.
- Reichow MK, Saunders AD, White RV, Pringle MS, Al'Muhkhamedov AI, Medvedev AI, Kirda NP (2002). "40Ar/39Ar Dates from the West Siberian Basin: Siberian Flood Basalt Province Doubled" (PDF). Science. 296 (5574): 1846–1849. Bibcode:2002Sci...296.1846R. doi:10.1126/science.1071671. PMID 12052954. S2CID 28964473.
- Holser WT, Schoenlaub HP, Attrep Jr M, Boeckelmann K, Klein P, Magaritz M, Orth CJ, Fenninger A, Jenny C, Kralik M, Mauritsch H, Pak E, Schramm JF, Stattegger K, Schmoeller R (1989). "A unique geochemical record at the Permian/Triassic boundary". Nature. 337 (6202): 39–44. Bibcode:1989Natur.337...39H. doi:10.1038/337039a0. S2CID 8035040.
- Dobruskina IA (1987). "Phytogeography of Eurasia during the early Triassic". Palaeogeography, Palaeoclimatology, Palaeoecology. 58 (1–2): 75–86. Bibcode:1987PPP....58...75D. doi:10.1016/0031-0182(87)90007-1.
- Wignall, P.B.; Twitchett, R.J. (2002). Extent, duration, and nature of the Permian-Triassic superanoxic event. Geological Society of America Special Papers. 356. pp. 395–413. Bibcode:2002GSASP.356..679O. doi:10.1130/0-8137-2356-6.395. ISBN 978-0-8137-2356-3.
- Cao, Changqun; Gordon D. Love; Lindsay E. Hays; Wei Wang; Shuzhong Shen; Roger E. Summons (2009). "Biogeochemical evidence for euxinic oceans and ecological disturbance presaging the end-Permian mass extinction event". Earth and Planetary Science Letters. 281 (3–4): 188–201. Bibcode:2009E&PSL.281..188C. doi:10.1016/j.epsl.2009.02.012.
- Hays, Lindsay; Kliti Grice; Clinton B. Foster; Roger E. Summons (2012). "Biomarker and isotopic trends in a Permian–Triassic sedimentary section at Kap Stosch, Greenland" (PDF). Organic Geochemistry. 43: 67–82. doi:10.1016/j.orggeochem.2011.10.010. hdl:20.500.11937/26597.
- Meyers, Katja; L.R. Kump; A. Ridgwell (September 2008). "Biogeochemical controls on photic-zone euxinia during the end-Permian mass extinction". Geology. 36 (9): 747–750. Bibcode:2008Geo....36..747M. doi:10.1130/g24618a.1.
- Kump, Lee; Alexander Pavlov; Michael A. Arthur (2005). "Massive release of hydrogen sulfide to the surface ocean and atmosphere during intervals of oceanic anoxia". Geology. 33 (5): 397–400. Bibcode:2005Geo....33..397K. doi:10.1130/G21295.1.
- Visscher, H.; Looy, C. V.; Collinson, M. E.; Brinkhuis, H.; van Konijnenburg-van Cittert, J. H. A.; Kurschner, W. M.; Sephton, M. A. (2004-07-28). "Environmental mutagenesis during the end-Permian ecological crisis". Proceedings of the National Academy of Sciences. 101 (35): 12952–12956. doi:10.1073/pnas.0404472101. ISSN 0027-8424. PMC 516500. PMID 15282373.
- Ziegler, A.; Eshel, G.; Rees, P.; Rothfus, T.; Rowley, D.; Sunderlin, D. (2003). "Tracing the tropics across land and sea: Permian to present". Lethaia. 36 (3): 227–254. CiteSeerX 10.1.1.398.9447. doi:10.1080/00241160310004657.
- Shen, Shu-Zhong; Bowring, Samuel A. (2014). "The end-Permian mass extinction: A still unexplained catastrophe". National Science Review. 1 (4): 492–495. doi:10.1093/nsr/nwu047.
- Zhang R, Follows MJ, Grotzinger JP, Marshall J (2001). "Could the Late Permian deep ocean have been anoxic?". Paleoceanography. 16 (3): 317–329. Bibcode:2001PalOc..16..317Z. doi:10.1029/2000PA000522.
- Over, Jess (editor), Understanding Late Devonian and Permian–Triassic Biotic and Climatic Events (Volume 20 in series Developments in Palaeontology and Stratigraphy (2006)). The state of the inquiry into the extinction events.
- Sweet, Walter C. (editor), Permo–Triassic Events in the Eastern Tethys : Stratigraphy Classification and Relations with the Western Tethys (in series World and Regional Geology)
- "Siberian Traps". Retrieved 2011-04-30.
- "Big Bang In Antarctica: Killer Crater Found Under Ice". Retrieved 2011-04-30.
- "Global Warming Led To Atmospheric Hydrogen Sulfide And Permian Extinction". Retrieved 2011-04-30.
- Morrison, D. "Did an Impact Trigger the Permian-Triassic Extinction?". NASA. Archived from the original on 2011-06-10. Retrieved 2011-04-30.
- "Permian Extinction Event". Retrieved 2011-04-30.
- Ogden, DE; Sleep, NH (2012). "Explosive eruption of coal and basalt and the end-Permian mass extinction". Proc. Natl. Acad. Sci. U.S.A. 109 (1): 59–62. Bibcode:2012PNAS..109...59O. doi:10.1073/pnas.1118675109. PMC 3252959. PMID 22184229.
- "BBC Radio 4 In Our Time discussion of the Permian-Triassic boundary". Retrieved 2012-02-01. Podcast available.
- Zimmer, Carl (2018-12-07). "The Planet Has Seen Sudden Warming Before. It Wiped Out Almost Everything". The New York Times. Retrieved 2018-12-10.
- "The Great Dying: Earth's largest-ever mass extinction is a warning for humanity". CBS News. Retrieved 2021-03-05. | https://library.kiwix.org/wikipedia_en_top_maxi/A/Permian%E2%80%93Triassic_extinction_event | 21 |
Coastal flooding normally occurs when dry, low-lying land is submerged by seawater. The extent of coastal flooding is determined by the elevation reached by floodwater as it penetrates inland, which is controlled by the topography of the coastal land exposed to flooding. Flood damage modelling has historically been limited to local, regional or national scales; however, with climate change and growing coastal populations, flood events have intensified, prompting global interest in modelling methods that capture both spatial and temporal dynamics.
The seawater can flood the land via several different paths: direct flooding, overtopping of a barrier, breaching of a barrier.
Coastal flooding is largely a natural event; however, human influence on the coastal environment can exacerbate it. Extraction of water from groundwater reservoirs in the coastal zone can instigate subsidence of the land, increasing the risk of flooding. Engineered protection structures along the coast, such as sea walls, alter the natural processes of the beach, often leading to erosion on adjacent stretches of the coast, which also increases the risk of flooding. Moreover, sea level rise and extreme weather caused by climate change will increase the intensity and extent of coastal flooding, affecting hundreds of millions of people.
The seawater can flood the land via several different paths:
- Direct flooding — where the sea height exceeds the elevation of the land, often where waves have not built up a natural barrier such as a dune
- Overtopping of a barrier — the barrier may be natural or human-engineered and overtopping occurs due to swelling conditions during storms or high tides often on open stretches of the coast. The height of the waves exceeds the height of the barrier and water flows over the top of the barrier to flood the land behind it. Overtopping can result in high velocity flows that can erode significant amounts of the land surface which can undermine defense structures.
- Breaching of a barrier — again, the barrier may be natural (a sand dune) or human-engineered (a sea wall), and breaching occurs on open coasts exposed to large waves. Breaching happens when the barrier is broken down or destroyed by waves, allowing the seawater to extend inland and flood the areas behind it (see the sketch below).
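As a rough illustration of how these three pathways differ, the sketch below classifies a flood event from a few idealised elevations; the variable names and threshold logic are invented for illustration and are not drawn from any coastal-engineering standard.

```python
def flooding_pathway(land_elevation_m: float, barrier_crest_m: float,
                     still_water_level_m: float, wave_runup_m: float,
                     barrier_intact: bool) -> str:
    """Very simplified classification of the coastal flooding pathways described above."""
    if not barrier_intact:
        return "breaching: the barrier has failed, so seawater extends inland"
    if still_water_level_m > max(land_elevation_m, barrier_crest_m):
        return "direct flooding: the sea level itself exceeds the land and the barrier"
    if still_water_level_m + wave_runup_m > barrier_crest_m:
        return "overtopping: waves exceed the barrier crest during the storm or high tide"
    return "no flooding under these conditions"

print(flooding_pathway(1.5, 3.0, 2.0, 1.5, True))   # overtopping
print(flooding_pathway(1.5, 3.0, 2.0, 0.5, False))  # breaching
```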
Storms and storm surges
Storms, including hurricanes and tropical cyclones, can cause flooding through storm surges which are waves significantly larger than normal. If a storm event coincides with the high astronomical tide, extensive flooding can occur. Storm surges involve three processes:
- wind setup
- barometric setup
- wave setup
Wind blowing in an onshore direction (from the sea towards the land) can cause the water to 'pile up' against the coast; this is known as wind setup. Low atmospheric pressure is associated with storm systems and tends to increase the surface sea level; this is known as barometric setup. Finally, increased wave breaking height results in a higher water level in the surf zone, which is wave setup. These three processes interact to create waves that can overtop natural and engineered coastal protection structures, thus penetrating seawater further inland than normal.
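The relative sizes of the three contributions can be illustrated with textbook-style first-order approximations: roughly 1 cm of sea-level rise per hPa of pressure drop for the barometric (inverse-barometer) effect, a wind-setup slope of about the wind stress divided by (water density × g × depth) integrated over the fetch, and wave setup of very roughly 15-20% of the breaking wave height. Every coefficient and input value below is an illustrative assumption, not a figure from this article.

```python
# Rough storm-surge decomposition using common first-order approximations
# (all inputs and coefficients are illustrative assumptions).
rho_air, rho_water, g = 1.2, 1025.0, 9.81
drag_coefficient = 2.0e-3
wind_speed = 30.0            # m/s, sustained onshore wind
fetch = 100_000.0            # m of shallow shelf the wind blows across
depth = 10.0                 # mean shelf depth, m
pressure_drop_hpa = 50.0     # central pressure deficit of the storm
breaking_wave_height = 4.0   # m

wind_stress = rho_air * drag_coefficient * wind_speed ** 2
wind_setup = wind_stress * fetch / (rho_water * g * depth)   # ~2.1 m
barometric_setup = 0.01 * pressure_drop_hpa                  # ~0.5 m (1 cm per hPa)
wave_setup = 0.17 * breaking_wave_height                     # ~0.7 m

print(f"wind setup ~{wind_setup:.1f} m, barometric ~{barometric_setup:.1f} m, "
      f"wave setup ~{wave_setup:.1f} m, total ~{wind_setup + barometric_setup + wave_setup:.1f} m")
```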
Sea level rise
The Intergovernmental Panel on Climate Change (IPCC) estimates global mean sea-level rise from 1990 to 2100 to be between nine and eighty-eight centimetres. It is also predicted that with climate change there will be an increase in the intensity and frequency of storm events such as hurricanes. This suggests that coastal flooding from storm surges will become more frequent with sea level rise.
A rise in sea level alone threatens increased levels of flooding and permanent inundation of low-lying land as sea-level simply may exceed the land elevation. This, therefore, indicates that coastal flooding associated with sea-level rise will become a significant issue in the next 100 years especially as human populations continue to grow and occupy the coastal zone.
Sunny day flooding
Tidal flooding, also known as sunny day flooding or nuisance flooding, is the temporary inundation of low-lying areas, especially streets, during exceptionally high tide events, such as at full and new moons. The highest tides of the year may be known as the king tide, with the month varying by location. These kinds of floods tend not to pose a high risk to property or human safety, but they place further stress on coastal infrastructure in low-lying areas.
This kind of flooding is becoming more common in cities and other human-occupied coastal areas as sea level rise associated with climate change and other human-related environmental impacts, such as coastal erosion and land subsidence, increase the vulnerability of infrastructure. Geographies faced with these issues can use coastal management practices to mitigate the effects in some areas, but increasingly these kinds of floods may develop into coastal flooding that requires managed retreat or other, more extensive climate change adaptation measures in vulnerable areas.
Coastal areas can be severely flooded by tsunami waves, which propagate through the ocean following the displacement of a significant body of water by earthquakes, landslides, volcanic eruptions, or glacier calving. There is also evidence to suggest that significant tsunamis have been caused in the past by meteor impacts into the ocean. Tsunami waves are destructive because of the velocity of the approaching waves and their height when they reach land, while the debris the water entrains as it flows over land can cause further damage.
Depending on the magnitude of the tsunami waves and floods, they can cause severe injuries, which calls for precautionary interventions to prevent overwhelming aftermaths. More than 200,000 people were reported killed in the earthquake and subsequent tsunami that struck the Indian Ocean on December 26, 2004. Floods are also associated with a range of health problems, from hypertension to chronic obstructive pulmonary disease.
Reducing global sea-level rise is said to be one way to prevent significant flooding of coastal areas at present times and in the future. This could be minimised by further reducing greenhouse gas emissions. However, even if significant emission decreases are achieved, there is already a substantial commitment to sea-level rise into the future. International climate change policies like the Kyoto Protocol are seeking to mitigate the future effects of climate change, including sea-level rise.
In addition, more immediate measures of engineered and natural defenses are put in place to prevent coastal flooding.
There are a variety of ways in which humans are trying to prevent the flooding of coastal environments, typically through so-called hard engineering structures such as flood barriers, seawalls and levees. Such armouring of the coast is typical where towns and cities have developed right up to the beachfront. Enhancing depositional processes along the coast can also help prevent coastal flooding. Structures such as groynes, breakwaters, and artificial headlands promote the deposition of sediment on the beach, thus helping to buffer against storm waves and surges, as the wave energy is spent moving sediment on the beach rather than moving water inland.
The coast does provide natural protective structures to guard against coastal flooding. These include physical features like gravel bars and sand dune systems, as well as ecosystems such as salt marshes and mangrove forests, which have a buffering function. Mangroves and wetlands are often considered to provide significant protection against storm waves, tsunamis, and shoreline erosion through their ability to attenuate wave energy. To protect the coastal zone from flooding, these natural defenses should therefore be protected and maintained.
As coastal flooding is typically a natural process, it is inherently difficult to prevent flood occurrence. If human systems are affected by flooding, an adaptation of how those systems operate on the coast is required, through behavioral and institutional changes; these changes are the so-called non-structural mechanisms of coastal flood response.
Building regulations, coastal hazard zoning, urban development planning, spreading the risk through insurance, and enhancing public awareness are some ways of achieving this. Adapting to the risk of flood occurrence can be the best option if the cost of building defense structures outweighs any benefits or if the natural processes in that stretch of coastline add to its natural character and attractiveness.
A more extreme, and often difficult to accept, response to coastal flooding is abandoning the flood-prone area (also known as managed retreat). This, however, raises the questions of where the affected people and infrastructure would go and what sort of compensation should or could be paid.
Social and economic impacts
The coastal zone (the area within 100 kilometres of the coast and 100 metres of sea level) is home to a large and growing proportion of the global population. Over 50 percent of the global population and 65 percent of cities with populations over five million people are in the coastal zone. In addition to the significant number of people at risk of coastal flooding, these coastal urban centres produce a considerable amount of the global Gross Domestic Product (GDP).
People's lives, homes, businesses, and city infrastructure like roads, railways, and industrial plants are all at risk of coastal flooding, with massive potential social and economic costs. The earthquakes and tsunamis in Indonesia in 2004 and in Japan in March 2011 clearly illustrate the devastation coastal flooding can produce. Indirect economic costs can be incurred if economically important sandy beaches are eroded, resulting in a loss of tourism in areas dependent on the attractiveness of those beaches.
Top disasters by deaths in 2004
|Rank|Disaster|Month|Country|Number of Deaths|
|1|December 26 Tsunami|December|12 countries|226,408|
|10|Meningitis epidemic|January/March|Burkina Faso|527|
Coastal flooding can result in a wide variety of environmental impacts on different spatial and temporal scales. Flooding can destroy coastal habitats such as coastal wetlands and estuaries and can erode dune systems. These places are characterized by their high biological diversity; therefore coastal flooding can cause significant biodiversity loss and potentially species extinctions. In addition, these coastal features are the coast's natural buffering system against storm waves; repeated coastal flooding and sea-level rise can reduce this natural protection, allowing waves to penetrate greater distances inland, exacerbating erosion and further coastal flooding.
Prolonged inundation of seawater after flooding can also cause salination of agriculturally productive soils thus resulting in a loss of productivity for long periods of time. Food crops and forests can be completely killed off by salination of soils or wiped out by the movement of floodwaters. Coastal freshwater bodies including lakes, lagoons, and coastal freshwater aquifers can also be affected by saltwater intrusion. This can destroy these water bodies as habitats for freshwater organisms and sources of drinking water for towns and cities.
Examples of existing coastal flooding issues include:
- Flood control in the Netherlands
- Floods in Bangladesh
- The Thames Barrier is one of the world's largest flood barriers and serves to protect London from flooding during exceptionally high tides and storm surges. The barrier can be raised at high tide to prevent sea water from flooding London and lowered to release stormwater runoff from the Thames catchment.
- Flooding of the low-lying South Canterbury Plains coastal zone in New Zealand can result in prolonged inundation, which can affect the productivity of the affected pastoral agriculture for several years.
Hurricane Katrina in New Orleans
Hurricane Katrina made landfall as a category 3 cyclone on the Saffir–Simpson hurricane wind scale, indicating that it had weakened to a storm of only moderate intensity. However, the catastrophic damage caused by the extensive flooding was the result of the highest recorded storm surges in North America. For several days prior to the landfall of Katrina, wave setup was generated by the persistent winds of the cyclonic rotation of the system. This prolonged wave setup, coupled with the very low central pressure, meant that massive storm surges were generated. Storm surges overtopped and breached the levees and floodwalls intended to protect the city from inundation. Unfortunately, New Orleans is inherently prone to coastal flooding for a number of reasons. Firstly, much of New Orleans is below sea level and is bordered by the Mississippi River, so protection against flooding from both the sea and the river has come to depend on engineered structures. Land-use change and modification of natural systems in the Mississippi River have rendered the natural defenses of the city less effective. Wetland loss has been calculated at around 1,900 square miles (4,920 square kilometres) since 1930. This is a significant amount, as four miles of wetland are estimated to reduce the height of a storm surge by one foot (30 centimeters).
2004 Indian Ocean earthquake and tsunami: An earthquake of approximately magnitude 9.0 struck off the coast of Sumatra, Indonesia, causing the propagation of a massive tsunami throughout the Indian Ocean. This tsunami caused significant loss of human life (estimates of 280,000 to 300,000 deaths have been reported) and extensive damage to villages, towns, and cities and to the physical environment. The natural structures and habitats destroyed or damaged include coral reefs, mangroves, beaches, and seagrass beds. The more recent earthquake and tsunami in Japan in March 2011 (the 2011 Tōhoku earthquake and tsunami) also clearly illustrates the destructive power of tsunamis and the devastation coastal flooding can cause.
There is a need for future research into:
- Management strategies for dealing with the forced abandonment of coastal settlements
- Quantifying the effectiveness of natural buffering systems, such as mangroves, against coastal flooding
- Better engineering design and practices or alternative mitigation strategies to engineering
- Coastal flood advisory, watch, warning (U.S.)
- Coastal management
- Flash flood
- Intergovernmental Panel on Climate Change
- Saltwater intrusion
- Ramsay & Bell 2008
- Doornkamp 1998
- Jongman, Brenden; Ward, Philip J.; Aerts, Jeroen C. J. H. (2012-10-01). "Global exposure to river and coastal flooding: Long term trends and changes". Global Environmental Change. 22 (4): 823–835. doi:10.1016/j.gloenvcha.2012.07.004. ISSN 0959-3780.
- Tanoue, Masahiro & Hirabayashi, Yukiko & Ikeuchi, Hiroaki. (2016). Global-scale river flood vulnerability in the last 50 years. Scientific Reports. 6. 10.1038/srep36021.
- Nicholls 2002
- Griffis 2007
- Dawson et al. 2009
- Pope 1997
- "Report: Flooded Future: Global vulnerability to sea level rise worse than previously understood". www.climatecentral.org. Retrieved 2020-11-09.
- Gallien, Schubert & Sanders 2011
- Kurian et al. 2009
- Nicholls et al. 2007
- Suarez et al. 2005
- Michael 2007
- Erik Bojnansky (March 9, 2017). "Sea levels are rising, so developers and governments need to band together: panel". The Real Deal. Retrieved March 10, 2017.
- "What is nuisance flooding?". National Oceanic and Atmospheric Administration. Retrieved December 13, 2016.
- "What is nuisance flooding? Defining and monitoring an emerging challenge | PreventionWeb.net". www.preventionweb.net. Retrieved 2021-01-07.
- Karegar, Makan A.; Dixon, Timothy H.; Malservisi, Rocco; Kusche, Jürgen; Engelhart, Simon E. (2017-09-11). "Nuisance Flooding and Relative Sea-Level Rise: the Importance of Present-Day Land Motion". Scientific Reports. 7 (1): 11197. doi:10.1038/s41598-017-11544-y. ISSN 2045-2322.
- Cochard et al. 2008
- Goff et al. 2010
- Alongi 2008
- Llewellyn, CAPT Mark (2006). "Floods and Tsunamis" (PDF). Runels.
- Short & Masselink 1999
- Dawson et al. 2011
- Snoussi, Ouchani & Niazi 2008
- Hunt & Watkiss 2011
- Tomita et al. 2006
- Nadal et al. 2010
- Horner 1986
- Ebersole et al. 2010
- Alongi, D. M. (2008). "Mangrove Forests: Resiliance, Protection from Tsunamis, and Responses to Global Climate Change". Estuarine, Coastal and Shelf Science. 76 (1): 1–13. Bibcode:2008ECSS...76....1A. doi:10.1016/j.ecss.2007.08.024.
- Benavente, J.; Del Río, L.; Gracia, F. J.; Martínez-del-Pozo, J. A. (2006). "Coastal flooding hazard related to storms and coastal evolution in Valdelagrana spit (Cadiz Bay Natural Park, SW Spain)". Continental Shelf Research. 26 (9): 1061–1076. Bibcode:2006CSR....26.1061B. doi:10.1016/j.csr.2005.12.015.
- Cochard, R.; Ranamukhaarachchi, S. L.; Shivakoti, G. P.; Shipin, O. V.; Edwards, P. J.; Seeland, K. T. (2008). "The 2004 tsunami in Aceh and Southern Thailand: A review on coastal ecosystems, wave hazards and vulnerability". Perspectives in Plant Ecology, Evolution and Systematics. 10 (1): 3–40. doi:10.1016/j.ppees.2007.11.001.
- Dawson, R. J.; Dickson, M. E.; Nicholls, R. J.; Hall, J. W.; Walkden, M. J. A.; Stansby, P. K.; Mokrech, M.; Richards, J.; Zhou, J.; Milligan, J.; Jordan, A.; Pearson, S.; Rees, J.; Bates, P. D.; Koukoulas, S.; Watkinson, S. R. (2009). "Integrated analysis of risks of coastal flooding and cliff erosion under scenarios of long term change" (PDF). Climatic Change. 95 (1–2): 249–288. doi:10.1007/s10584-008-9532-8.
- Dawson, J. R.; Ball, T.; Werritty, J.; Werritty, A.; Hall, J. W.; Roche, N. (2011). "Assessing the effectiveness of non-structural flood management measures in the Thames Estuary under conditions of socio-economic and environmental change". Global Environmental Change. 21 (2): 628–646. doi:10.1016/j.gloenvcha.2011.01.013.
- Doornkamp, J. C. (1998). "Coastal flooding, global warming and environmental management" (PDF). Journal of Environmental Management. 52 (4): 327–333. doi:10.1006/jema.1998.0188. Archived from the original (PDF) on 2015-04-14. Retrieved 2015-04-08.
- Ebersole, B. A.; Westerink, J. J.; Bunya, S.; Dietrich, J. C.; Cialone, M. A. (2010). "Development of storm surge which led to flooding in St. Bernard Polder during Hurricane Katrina". Ocean Engineering. 37 (1): 91–103. doi:10.1016/j.oceaneng.2009.08.013.
- Gallien, T. W.; Schubert, J. E.; Sanders, B. F. (2011). "Predicting tidal flooding of urbanized embayments: A modelling framework and data requirements". Coastal Engineering. 58 (6): 567–577. doi:10.1016/j.coastaleng.2011.01.011.
- Goff, J.; Dominey-Howes, D.; Chagué-Goff, C.; Courtney, C. (2010). "Analysis of the Mahuika comet impact tsunami hypothesis". Marine Geology. 271 (3): 292–296. Bibcode:2010MGeol.271..292G. doi:10.1016/j.margeo.2010.02.020.
- Griffis, F. H. (2007). "Engineering failures exposed by Hurricane Katrina". Technology in Society. 29 (2): 189–195. doi:10.1016/j.techsoc.2007.01.015.
- Horner, R. W. (1986). "The Thames Barrier". Project Management. 4 (4): 189–194. doi:10.1016/0263-7863(86)90002-5.
- Hunt, A.; Watkiss, P. (2011). "Climate change impacts and adaptations in cities: A review of the literature" (PDF). Climatic Change. 104 (1): 13–49. doi:10.1007/s10584-010-9975-6.
- Kurian, N. P.; Nirupama, N.; Baba, M.; Thomas, K. V. (2009). "Coastal flooding due to synoptic scale , meso-scale and remote forcings". Natural Hazards. 48 (2): 259–273. doi:10.1007/s11069-008-9260-4.
- Link, L. E. (2010). "The anatomy of a disaster, an overview of Hurricane Katrina and New Orleans". Ocean Engineering. 37 (1): 4–12. doi:10.1016/j.oceaneng.2009.09.002.
- Michael, J. A. (2007). "Episodic flooding and the cost of sea-level rise". Ecological Economics. 63: 149–159. doi:10.1016/j.ecolecon.2006.10.009.
- Nadal, N. C.; Zapata, R. E.; Pagán, I.; López, R.; Agudelo, J. (2010). "Building damage due to riverine and coastal floods". Journal of Water Resources Planning and Management. 136 (3): 327–336. doi:10.1061/(ASCE)WR.1943-5452.0000036.
- Nicholls, R. J. (2002). "Analysis of global impacts of sea-level rise: A case study of flooding". Physics and Chemistry of the Earth, Parts A/B/C. 27 (32–34): 1455–1466. Bibcode:2002PCE....27.1455N. doi:10.1016/S1474-7065(02)00090-6.
- Nicholls, R. J.; Wong, P. P.; Burkett, V. R.; Codignotto, J. O.; Hay, J. E.; McLean, R. F.; Ragoonaden, S.; Woodroffe, C. D. (2007). "Coastal systems and low-lying areas". In Parry, M. L.; Canziani, O. F.; Palutikof, J. P.; Linden, P. J.; Hanson, C. E. (eds.). Climate Change 2007: impacts, adaptation and vulnerability. Contribution of working group II to the fourth assessment report of the intergovernmental panel on climate change. Cambridge University Press. pp. 315–357.
- Pope, J. (1997). "Responding to coastal erosion and flooding damages". Journal of Coastal Research. 3 (3): 704–710. JSTOR 4298666.
- Ramsay, D.; Bell, R. (2008). Coastal Hazards and Climate Change. A Guidance Manual for Local Government in New Zealand (PDF) (2nd ed.). New Zealand: Ministry for the Environment. ISBN 978-0478331189. Archived from the original (PDF) on 2015-04-13. Retrieved 2015-04-08.
- Short, A. D.; Masselink, G. (1999). "Embayed and Structurally Controlled Beaches". Handbook of Beach and Shoreface Morphodynamics. John Wiley and Sons. pp. 231–250. ISBN 978-0471965701.
- Snoussi, M.; Ouchani, T.; Niazi, S. (2008). "Vulnerability assessment of the impact of sea-level rise and flooding on the Moroccan coast: The case of the Mediterranean Eastern Zone". Estuarine, Coastal and Shelf Science. 77 (2): 206–213. Bibcode:2008ECSS...77..206S. doi:10.1016/j.ecss.2007.09.024.
- Suarez, P.; Anderson, W.; Mahal, V.; Lakshmanan, T. R. (2005). "Impacts of flooding and climate change on urban transportation: A systemwide performance assessment of the Boston Metro Area". Transportation Research Part D: Transport and Environment. 10 (3): 231–244. doi:10.1016/j.trd.2005.04.007. | https://wiki-offline.jakearchibald.com/wiki/Coastal_flooding | 21 |
27 | Causes of the Great Depression

In 1929 the stock market crashed, triggering the worst depression in U.S. history, which lasted for about a decade. During the 1920s, the unequal distribution of wealth and stock market speculation combined to create an unstable economy by the end of the decade. The unequal distribution of wealth had several dimensions.
Money was distributed unevenly between industry and agriculture within the U.S.; between social classes, the rich and the middle class; and lastly in world markets, between America and Europe. Due to this imbalance of wealth, the economy became very unstable. The stock market crashed because of the excessive speculation of the 1920s, which had made stock prices artificially high (Galbraith 175).
The poor distribution of wealth, excessive speculation, and the stock market crash caused the U.S. economy to fail, signaling the start of the Great Depression. The 1920s were a time when the American people and the economy were thriving.
This period was called the Roaring Twenties. Unemployment dropped as low as 3 percent, prices held steady, and the gross national product climbed from $70 billion in 1922 to nearly $100 billion in 1929 (EV 525). However, the prosperity of the 1920s was not shared evenly among the social classes in America. A study conducted by the Brookings Institution stated that 78 percent of all American families had incomes of less than $3,000. Forty percent had family incomes of less than $1,500. Only 2.3 percent of the population enjoyed incomes of over $10,000. Sixty thousand American families held savings which amounted to the total held by the bottom 25 million families (Goldston 26). The 40 percent of Americans at the lowest end of the economic scale received only 12 percent of the national income by 1929 (EV 549).
This maldistribution of income between the rich and the middle class increased throughout the 1920s. A major reason for this large and growing gap between upper-class and working-class Americans was that manufacturing output increased throughout this period. As production costs fell, wages rose only slowly, and prices for goods remained constant. The majority of the benefits created by increased productivity fell into the hands of corporate owners. The federal government also helped to widen the gap between the upper and middle classes.
President Calvin Coolidge's administration favored business, and as a result, the wealthy invested in these businesses. An example of this type of legislation is the Revenue Act of 1926, which significantly reduced income and inheritance taxes (Goldston 23). The introduction of credit to the American public proved to choke the economy rather than to stimulate it. For an economy to run properly, total demand must equal total supply. The economy of the 1920s produced an oversupply of goods. It was not that the surplus products were not wanted, but that the people who needed them could not afford them.
The working class spent most of their money on things they needed: food, shelter, and clothes. They also purchased some luxury items, but their income limited them to only a few of these purchases. Meanwhile, the rich were enjoying their increased profits. While the vast majority did not have enough money to satisfy all of their material wants and needs, the manufacturers continued to produce surplus goods. Recognizing that the surpluses could be sold if consumers were financially able to buy them, the concept of buying on credit was established. Credit was immediately popular.
By the end of the decade, 75 percent of all automobiles were purchased on credit (EV 526). The credit system created artificial demand for products which people could not ordinarily buy. People could not spend their regular wages to purchase products, because much of their income went toward their credit payments. The poor distribution of wealth within the U.S. extended to entire industries, helping one at the expense of another. The prosperity of the decade was not shared among the industries equally.
While the automotive industry was thriving in the 1920s, some industries, such as agriculture, were declining steadily. Most of the industries that were prospering in the 1920s were in some | https://artscolumbia.org/causes-of-the-great-depression-essay-2-76989/ | 21 |
59 | Time: 60 hours
College Credit Recommended
Consider how microeconomists and macroeconomists analyze price fluctuations. In microeconomics, we focus on how supply and demand determine prices in a given market. In macroeconomics, we focus on changes in the price level across all markets. Microeconomics studies firm profit maximization, output optimization, consumer utility maximization, and consumption optimization. Macroeconomics studies economic growth, price stability, and full employment.
Macroeconomic performance relies on measures of economic activity, such as variables and data at the national level, within a specific period of time. Macroeconomics analyzes aggregate measures, such as national income, national output, unemployment and inflation rates, and business cycle fluctuations. In this course, we prompt you to think about the national and global issues we face, consider competing views, and draw conclusions from various perspectives, tools, and alternatives.
First, read the course syllabus. Then, enroll in the course by clicking "Enroll me in this course". Click Unit 1 to read its introduction and learning outcomes. You will then see the learning materials and instructions on how to use them.
The study of microeconomics focuses on exchanges among consumers and firms that are in the market to purchase goods and services. In contrast, macroeconomics focuses on exchanges that take place across all of the markets within a country. We take the interrelated actions of consumers, businesses, government agencies, financial intermediaries, and global trading partners into account, as they exchange resources, goods, and services, and facilitate currency and quantity flows. Microeconomics studies how to achieve profit maximization, while macroeconomics studies how to achieve economic stability and growth on a national level.
Completing this unit should take you approximately 12 hours.
In macroeconomics we study the total output an economy generates. Economists use gross domestic product (GDP), the monetary value of all final goods and services produced within a country's borders in one year, to measure a country's total output. Macroeconomists tend to use real GDP, rather than nominal GDP, for their comparisons since real GDP removes the effect of inflation. Measuring growth in current dollars (which does not account for inflation), rather than constant dollars, might give a false impression of economic growth or decline.
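As a minimal illustration of why the distinction matters (the figures below are assumed example values, not actual national statistics), nominal GDP can be converted to real GDP by deflating it with a price index relative to a base year:

```python
# Illustrative conversion of nominal GDP to real GDP using a price deflator.
# All figures are assumed example values, not actual national statistics.

def real_gdp(nominal_gdp: float, deflator: float, base_deflator: float = 100.0) -> float:
    """Deflate nominal GDP to base-year prices: real = nominal * (base / current)."""
    return nominal_gdp * base_deflator / deflator

# Hypothetical economy: nominal GDP grows from 100 to 110 (billions), but the
# price level also rises by 8%, so most of the apparent growth is inflation.
year1 = real_gdp(nominal_gdp=100.0, deflator=100.0)   # 100.0 in base-year prices
year2 = real_gdp(nominal_gdp=110.0, deflator=108.0)   # about 101.9 in base-year prices

growth = (year2 - year1) / year1 * 100
print(f"Real GDP growth: {growth:.1f}%")  # about 1.9%, not the 10% nominal figure
```

In this sketch the economy's nominal output rises 10 percent, but once the 8 percent rise in the price level is removed, real growth is under 2 percent.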
Governments focus on three key indicators of economic growth: an increase in real GDP over time, full employment, and price level stability. In unit 5, we explore how governments form, implement, and evaluate their fiscal and monetary policies to achieve these three goals. In this unit we uncover scenarios and philosophical debates about the government's role in a market-based economy. We examine whether GDP is an accurate measure of societal well-being, quality of life, and the standard of living.

Completing this unit should take you approximately 10 hours.
In this unit we explore the forces affecting growth, inflation, and unemployment at the aggregate level, such as output, income, or the set of components within GDP.
Aggregate demand is the total amount of goods and services people want to purchase. It measures what people want to buy, rather than what is actually produced. Aggregate demand is the sum of consumption, investment, government spending, and net exports. Aggregate supply is the total output an economy produces at a given price level. We consider aggregate supply in the short run and in the long run.
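As a minimal sketch of the aggregate demand identity AD = C + I + G + (X - M), using assumed illustrative figures rather than real data:

```python
# Illustrative aggregate demand calculation: AD = C + I + G + (X - M).
# All component values are assumed example figures (in billions).

def aggregate_demand(consumption: float, investment: float,
                     government: float, exports: float, imports: float) -> float:
    """Sum the expenditure components; net exports = exports - imports."""
    return consumption + investment + government + (exports - imports)

ad = aggregate_demand(consumption=700.0, investment=200.0,
                      government=150.0, exports=120.0, imports=170.0)
print(f"Aggregate demand: {ad} billion")  # 1000.0 billion
```

Note that net exports can be negative, as in this hypothetical economy, which reduces aggregate demand.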
Completing this unit should take you approximately 9 hours.
In this unit, we explore aggregate economic equilibrium in the short run and the long run. At a macro level, equilibrium is the point where aggregate supply equals aggregate demand. We examine shifts in aggregate supply and aggregate demand, and the short-term and long-term effects for the entire economy.
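As a minimal sketch of what equilibrium means here (using assumed linear aggregate demand and aggregate supply curves rather than an estimated model), the equilibrium price level is the point at which the two quantities are equal:

```python
# Illustrative AD-AS equilibrium with assumed linear curves.
# Aggregate demand falls as the price level P rises; short-run supply rises with P.

def aggregate_demand(p: float) -> float:
    return 2000.0 - 10.0 * p       # assumed demand curve

def aggregate_supply(p: float) -> float:
    return 500.0 + 5.0 * p         # assumed short-run supply curve

# Solve 2000 - 10P = 500 + 5P  ->  15P = 1500  ->  P = 100
p_star = (2000.0 - 500.0) / (10.0 + 5.0)
output = aggregate_demand(p_star)

print(f"Equilibrium price level: {p_star:.0f}")   # 100
print(f"Equilibrium real output: {output:.0f}")   # 1000
```

A shift in either curve (for example, higher government spending shifting aggregate demand outward) would move this equilibrium point, which is the kind of comparative exercise this unit examines.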
Also in this unit, we explore economic growth. Economic growth is the process of increasing the potential level of GDP (the level of production occurring at the natural rate of unemployment).
Completing this unit should take you approximately 4 hours.
Monetary policy includes the methods government agencies, such as the U.S. Federal Reserve, use to encourage banks, businesses, and individuals to change interest rates, the supply of money, and the demand for money. Money serves as a medium of exchange, a store of value, and a unit of account. These three functions enable individuals to avoid a bartering system (we pay a business money for providing a service, rather than with a goat or loaf of bread). The ways we define and measure money are important to managing an economy. Savings and investment are key elements within the circular flow model and are a function of interest rates.
Completing this unit should take you approximately 11 hours.
- Governments use various policies and tools to steer the macroeconomy toward three main goals: full employment, price stability, and economic growth. In this unit, we study fiscal policy, which involves taxing and spending policies, including the fiscal legislation Congress enacts in the United States.
Completing this unit should take you approximately 10 hours.
- This unit examines the macroeconomic effects of international flows of financial capital, and goods and services. The determinants of exchange rates are identified and the connection is made between financial capital flows and the trade balance. The unit explores the effects of exchange rates on a country's economy and the economies of its trading partners. The welfare effects of trade are also studied.
Completing this unit should take you approximately 4 hours.
This study guide will help you get ready for the final exam. It discusses the key topics in each unit, walks through the learning outcomes, and lists important vocabulary. It is not meant to replace the course materials!
Please take a few minutes to give us feedback about this course. We appreciate your feedback, whether you completed the whole course or even just a few resources. Your feedback will help us make our courses better, and we use your feedback each time we make updates to our courses.
If you come across any urgent problems, email email@example.com or post in our discussion forum.
Certificate Final Exam
Take this exam if you want to earn a free Course Completion Certificate.
To receive a free Course Completion Certificate, you will need to earn a grade of 70% or higher on this final exam. Your grade for the exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again as many times as you want, with a 7-day waiting period between each attempt.
Once you pass this final exam, you will be awarded a free Course Completion Certificate.
Saylor Direct Credit
Take this exam if you want to earn college credit for this course. This course is eligible for college credit through Saylor Academy's Saylor Direct Credit Program.
The Saylor Direct Credit Final Exam requires a proctor and a proctoring fee of $25. To pass this course and earn a Proctor-Verified Course Certificate and official transcript, you will need to earn a grade of 70% or higher on the Saylor Direct Credit Final Exam. Your grade for this exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again a maximum of 3 times, with a 14-day waiting period between each attempt.
Once you pass this final exam, you will be awarded a Credit-Recommended Course Completion Certificate and an official transcript. | https://learn.saylor.org/course/view.php?id=9 | 21 |
32 | History of Ireland
The History of Ireland began with the first known human settlement in Ireland around 8000 BC, when hunter-gatherers arrived from Great Britain and continental Europe, probably via a land bridge. Few archaeological traces remain of this group, but their descendants and later Neolithic arrivals, particularly from the Iberian Peninsula, were responsible for major Neolithic sites such as Newgrange. Following the arrival of Saint Patrick and other Christian missionaries in the early to mid-5th century A.D., Christianity subsumed the indigenous pagan religion by the year 600.
From around 800 A.D., more than a century of Viking invasions brought havoc upon the monastic culture and on the island's various regional dynasties, yet both of these institutions proved strong enough to survive and assimilate the invaders. The coming of Cambro-Norman mercenaries under Richard de Clare, 2nd Earl of Pembroke, nicknamed Strongbow, in 1169 marked the beginning of more than 700 years of direct Norman and, later, English involvement in Ireland. The English crown did not begin asserting full control of the island until after the English Reformation, when questions over the loyalty of Irish vassals provided the initial impetus for a series of military campaigns between 1534 and 1691. This period was also marked by an English policy of plantation which led to the arrival of thousands of English and Scottish Protestant settlers. As the military and political defeat of Gaelic Ireland became more clear in the early seventeenth century, the role of religion as a new division in Ireland became more pronounced. From this period on, sectarian conflict became a recurrent theme in Irish history.
The overthrow, in 1613, of the Catholic majority in the Irish parliament was realised principally through the creation of numerous new boroughs, all of which were Protestant-dominated. By the end of the seventeenth century all Catholics, representing some 85% of Ireland's population then, were banned from the Irish parliament. Political power rested entirely in the hands of a British settler-colonial, and more specifically Anglican, minority while the Catholic population suffered severe political and economic privations. In 1801, this colonial parliament was abolished and Ireland became an integral part of a new United Kingdom of Great Britain and Ireland under the Act of Union. Catholics were still banned from sitting in that new parliament until Catholic Emancipation was attained in 1829, the principal condition of which was the removal of the poorer, and thus more radical, Irish freeholders from the franchise.
The Irish Parliamentary Party strove from the 1880s to attain Home Rule self-government through the parliamentary constitutional movement, eventually winning the Home Rule Act 1914, though it was suspended at the outbreak of World War I. In 1922, after the Irish War of Independence, the southern twenty-six counties of Ireland seceded from the United Kingdom (UK) to become the independent Irish Free State and, after 1948, the Republic of Ireland. The remaining six north-eastern counties, known as Northern Ireland, remained part of the UK. The history of Northern Ireland has been dominated by sporadic sectarian conflict between (mainly Catholic) Nationalists and (mainly Protestant) Unionists. This conflict erupted into the Troubles in the late 1960s, which lasted until an uneasy peace thirty years later.
Early history: 8000 BC–AD 400
What little is known of pre-Christian Ireland comes from a few references in Roman writings, Irish poetry and myth, and archaeology. The earliest inhabitants of Ireland, people of a mid-Stone Age, or Mesolithic, culture, arrived sometime after 8000 BC, when the climate had become more hospitable following the retreat of the polar icecaps. About 4000 BC agriculture was introduced from the continent, leading to the establishment of a high Neolithic culture, characterized by the appearance of pottery, polished stone tools, rectangular wooden houses and communal megalithic tombs, some of which are huge stone monuments like the Passage Tombs of Newgrange, Knowth and Dowth, many of them astronomically aligned (most notably, Newgrange). Four main types of megalithic tomb have been identified: Portal Tombs, Court Tombs, Passage Tombs and Wedge Tombs. In Leinster and Munster individual adult males were buried in small stone structures, called cists, under earthen mounds and were accompanied by distinctive decorated pottery. This culture apparently prospered, and the island became more densely populated. Towards the end of the Neolithic new types of monuments developed, such as circular embanked enclosures and timber, stone and post and pit circles.
The Bronze Age properly began once copper was alloyed with tin to produce true bronze artifacts; this took place around 2000 BC, when some Ballybeg flat axes and associated metalwork were produced. The period preceding this, in which Lough Ravel and most Ballybeg axes were produced, is known as the Copper Age or Chalcolithic and commenced about 2500 BC. This period also saw the production of elaborate gold and bronze ornaments, weapons and tools. There was a movement away from the construction of communal megalithic tombs to the burial of the dead in small stone cists or simple pits, which could be situated in cemeteries or in circular earth or stone built burial mounds known respectively as barrows and cairns. As the period progressed, inhumation burial gave way to cremation, and by the Middle Bronze Age cremations were often placed beneath large burial urns.
The Iron Age in Ireland began about 600 BC. By the historic period (AD 431 onwards) the main over-kingdoms of In Tuisceart, Airgialla, Ulaid, Mide, Laigin, Mumhain, Cóiced Ol nEchmacht began to emerge (see Kingdoms of ancient Ireland). Within these kingdoms a rich culture flourished. The society of these kingdoms was dominated by an upper class, consisting of aristocratic warriors and learned people, possibly including druids.
Linguists realised from the 17th century onwards that the languages spoken by these people, the Goidelic languages, were a branch of the Celtic languages. This was originally explained as a result of invasions by Celts from the continent. However, research during the 20th century indicated otherwise, and in the later years of the century the conclusion drawn was that culture developed gradually and continuously, and that the introduction of Celtic language and elements of Celtic culture was a result of cultural exchange with Celtic groups in southwest continental Europe from the Neolithic to the Bronze Age. Little archaeological evidence was found for large intrusive groups of Celtic immigrants in Ireland. The hypothesis that the native Late Bronze Age inhabitants gradually absorbed Celtic influences has since been supported by some recent genetic research.
The Romans referred to Ireland as Hibernia. Ptolemy in AD 100 records Ireland's geography and tribes. Ireland was never formally a part of the Roman Empire but Roman influence was often projected well beyond formal borders. Tacitus writes that an exiled Irish prince was with Agricola in Britain and would return to seize power in Ireland. Juvenal tells us that Roman "arms had been taken beyond the shores of Ireland". In recent years, some experts have hypothesized that Roman-sponsored Gaelic forces (or perhaps even Roman regulars) mounted some kind of invasion around 100, but the exact relationship between Rome and the dynasties and peoples of Hibernia remains unclear.
Early Christian Ireland 400–800
The middle centuries of the first millennium AD marked great changes in Ireland.
Niall Noigiallach (died c.450/455) laid the basis for the Uí Néill dynasty's hegemony over much of western, northern and central Ireland. Politically, the former emphasis on tribal affiliation had been replaced by the 700s by that of patrilineal and dynastic background. Many formerly powerful kingdoms and peoples disappeared. Irish pirates struck all over the coast of western Britain in the same way that the Vikings would later attack Ireland. Some of these founded entirely new kingdoms in Pictland, Wales and Cornwall. The Attacotti of south Leinster may even have served in the Roman military in the mid-to-late 300s.
Perhaps it was some of the latter returning home as rich mercenaries, merchants, or slaves stolen from Britain or Gaul, that first brought the Christian faith to Ireland. Some early sources claim that there were missionaries active in southern Ireland long before St. Patrick. Whatever the route, and there were probably many, this new faith was to have the most profound effect on the Irish.
Tradition maintains that in AD 432, St. Patrick arrived on the island and, in the years that followed, worked to convert the Irish to Christianity. On the other hand, according to Prosper of Aquitaine, a contemporary chronicler, Palladius was sent to Ireland by the Pope in 431 as "first Bishop to the Irish believing in Christ", which demonstrates that there were already Christians living in Ireland. Palladius seems to have worked purely as bishop to Irish Christians in the Leinster and Meath kingdoms, while Patrick — who may have arrived as late as 461 — worked first and foremost as a missionary to the pagan Irish, converting people in the more remote kingdoms of Ulster and Connacht.
Patrick is traditionally credited with preserving the tribal and social patterns of the Irish, codifying their laws and changing only those that conflicted with Christian practices. He is also credited with introducing the Roman alphabet, which enabled Irish monks to preserve parts of the extensive Celtic oral literature. The historicity of these claims remains the subject of debate and there is no direct evidence linking Patrick with any of these accomplishments. The myth of Patrick, as scholars refer to it, was developed in the centuries after his death.
The druid tradition collapsed, first in the face of the spread of the new faith, and ultimately in the aftermath of famine and plagues due to the climate changes of 535–536. Irish scholars excelled in the study of Latin learning and Christian theology in the monasteries that flourished shortly thereafter. Missionaries from Ireland to England and Continental Europe spread news of the flowering of learning, and scholars from other nations came to Irish monasteries. The excellence and isolation of these monasteries helped preserve Latin learning during the Early Middle Ages. The arts of manuscript illumination, metalworking, and sculpture flourished and produced such treasures as the Book of Kells, ornate jewellery, and the many carved stone crosses that dot the island. Sites dating to this period include clochans, ringforts and promontory forts.
The first English involvement in Ireland took place in this period. In the summer of 684 AD an English expeditionary force sent by the Northumbrian king Ecgfrith invaded Ireland. The English forces managed to seize a number of captives and booty, but they apparently did not stay in Ireland for long. The next English involvement in Ireland would come nearly half a millennium later, in 1169 AD, when the Normans invaded the country.
Early medieval era 800–1166
Main article Early Medieval Ireland 800–1166
The first recorded Viking raid in Irish history occurred in 795, when Vikings from Norway looted the island. Early Viking raids were generally small in scale and quick. These early raids interrupted the golden age of Christian Irish culture and marked the beginning of two hundred years of intermittent warfare, with waves of Viking raiders plundering monasteries and towns throughout Ireland. Most of the early raiders came from the fjords of western Norway.
By the early 840s, the Vikings began to establish settlements along the Irish coasts and to spend the winter months there. Vikings founded settlements in several places and most famously, Dublin. Written accounts from this time (early to mid 840s) show that the Vikings were moving further inland to attack (often using rivers) and then retreating to their coastal headquarters.
In 852, the Vikings landed in Dublin Bay and established a fortress. After several generations a group of mixed Irish and Norse ethnic background arose (the so-called Gall-Gaels, Gall then being the Irish word for "foreigners").
However, the Vikings never achieved total domination of Ireland, often fighting for and against various Irish kings. In 1014, the Battle of Clontarf marked the beginning of the decline of Viking power in Ireland. However, the towns that the Vikings had founded continued to flourish, and trade became an important part of the Irish economy.
Later medieval Ireland
The arrival of the Normans 1167–1185
By the 12th century, Ireland was divided politically into a shifting hierarchy of petty kingdoms and over-kingdoms. Power was exercised by the heads of a few regional dynasties vying against each other for supremacy over the whole island. One of these men, King Diarmait Mac Murchada of Leinster was forcibly exiled by the new High King, Ruaidri mac Tairrdelbach Ua Conchobair. Fleeing to Aquitaine, Diarmait obtained permission from Henry II to use the Norman forces to regain his kingdom. The first Norman knight landed in Ireland in 1167, followed by the main forces of Normans, Welsh and Flemings. Several counties were restored to the control of Diarmait, who named his son-in-law, Richard de Clare, heir to his kingdom. This caused consternation to King Henry II of England, who feared the establishment of a rival Norman state in Ireland. Accordingly, he resolved to establish his authority.
With the authority of the papal bull Laudabiliter from Adrian IV, Henry landed with a large fleet at Waterford in 1171, becoming the first King of England to set foot on Irish soil. Henry awarded his Irish territories to his younger son John with the title Dominus Hiberniae ("Lord of Ireland"). When John unexpectedly succeeded his brother as King John, the "Lordship of Ireland" fell directly under the English Crown.
The Lordship of Ireland 1185–1254
Initially the Normans controlled the entire east coast, from Waterford up to eastern Ulster and penetrated far west in the country. The counties were ruled by many smaller kings. The first Lord of Ireland was King John, who visited Ireland in 1185 and 1210 and helped consolidate the Norman controlled areas, while at the same time ensuring that the many Irish kings swore fealty to him.
Throughout the thirteenth century the policy of the English kings was to weaken the power of the Norman lords in Ireland. For example, King John encouraged Hugh de Lacy to destabilise and then overthrow the Lord of Ulster before creating him Earl of Ulster. The Hiberno-Norman community suffered from a series of invasions that halted the spread of their settlement and power. Politics and events in Gaelic Ireland served to draw the settlers deeper into the orbit of the Irish.
Gaelic resurgence, Norman decline 1254–1360
By 1261 the weakening of the Normans had become manifest when Fineen Mac Carthy defeated a Norman army at the Battle of Callann. War continued between the different lords and earls for about 100 years, causing a great deal of destruction, especially around Dublin. In this chaotic situation, local Irish lords won back large amounts of land that their families had lost since the conquest and held it after the war was over.
The Black Death arrived in Ireland in 1348. Because most of the English and Norman inhabitants of Ireland lived in towns and villages, the plague hit them far harder than it did the native Irish, who lived in more dispersed rural settlements. After it had passed, Gaelic Irish language and customs came to dominate the country again. The English-controlled area shrank back to a fortified area around Dublin. Since the government in Dublin had little real authority, however, its statutes did not have much effect.
By the end of the 15th century, central English authority in Ireland had all but disappeared. England's attentions were diverted by its own civil war, the Wars of the Roses. The Lordship of Ireland lay in the hands of the powerful Fitzgerald Earl of Kildare, who dominated the country by means of military force and alliances with lords and clans around Ireland. Around the country, local Gaelic and Gaelicised lords expanded their powers at the expense of the English government in Dublin, but the power of the Dublin government was also seriously curtailed by the introduction of Poynings Law in 1494. According to this act, the Irish parliament was essentially put under the control of the Westminster parliament.
Reformation and Protestant ascendancy
The Reformation, which began in 1532 when Henry VIII of England broke with papal authority, fundamentally changed Ireland. His son Edward VI of England moved further, breaking with papal doctrine completely. While the English, the Welsh and, later, the Scots accepted Protestantism, the Irish remained Catholic. This influenced their relationship with England for the next four hundred years, as the Reformation coincided with a determined effort on behalf of the English to re-conquer and colonise Ireland. This sectarian difference meant that the native Irish and other Roman Catholics were excluded from political power.
Re-conquest and rebellion
From 1536, Henry VIII of England decided to re-conquer Ireland and bring it under crown control. The Fitzgerald dynasty of Kildare, who had become the effective rulers of Ireland in the 15th century, had become very unreliable allies of the Tudor monarchs, and the Fitzgeralds went into open rebellion against the crown. When Henry VIII of England had put down this rebellion, he resolved to bring Ireland under English government control so the island would not become a base for future rebellions or foreign invasions of England. In 1541, Henry upgraded Ireland from a lordship to a full Kingdom of Ireland. Henry was proclaimed King of Ireland at a meeting of the Irish Parliament that year. This was the first meeting of the Irish Parliament to be attended by the Gaelic Irish chieftains as well as the Hiberno-Norman aristocracy. With the institutions of government in place, the next step was to extend the control of the English Kingdom of Ireland over all of its claimed territory. This took nearly a century, with various English administrations in the process either negotiating or fighting with the independent Irish and Old English lords.
The re-conquest was completed during the reigns of Elizabeth I of England and James I of England, after several bloody conflicts. After this point, the English authorities in Dublin established real control over Ireland for the first time, bringing a centralised government to the entire island, and successfully disarmed the native lordships. However, the English were not successful in converting the Catholic Irish to the Protestant religion and the brutal methods used by crown authority to pacify the country heightened resentment of English rule.
From the mid-16th century into the early 17th century, crown governments carried out a policy of colonisation known as the Plantations of Ireland. Scottish and English Protestants were sent as colonists to the provinces of Ireland. These settlers, who had a British and Protestant identity, would form the ruling class of future British administrations in Ireland. A series of Penal Laws discriminated against all faiths other than the established Anglican Church. The principal victims of these laws were Catholics and, later, Presbyterians.
Civil wars and penal laws
After Irish Catholic rebellion and civil war, Oliver Cromwell, on behalf of the English Commonwealth, re-conquered Ireland between 1649 and 1653. Under Cromwell's government, landownership in Ireland was transferred overwhelmingly to Protestant colonists. The 17th century was perhaps the bloodiest in Ireland's history. Two periods of civil war caused huge loss of life and resulted in the final dispossession of the Irish Catholic landowning class and their subordination under the Penal Laws.
In the mid-17th century, when Irish Catholics rebelled against English and Protestant domination, thousands of Protestant settlers were massacred. The Catholic gentry briefly ruled the country for some years until Oliver Cromwell re-conquered Ireland in 1649-1653 on behalf of the English Commonwealth. Cromwell's conquest was the most brutal phase of a brutal war. By its close, up to a third of Ireland's pre-war population was dead or in exile. As punishment for the rebellion of 1641, almost all lands owned by Irish Catholics were confiscated and given to British settlers. Several hundred remaining native landowners were transplanted to Connacht.
Ireland became the main battleground after 1688, when Catholics tried to keep James II of England as ruler of Ireland, Scotland and England, but failed. James II was replaced by William III of England, William of Orange. The wealthier Irish Catholics backed James to try to reverse the remaining Penal Laws and land confiscations, whereas Protestants supported William to preserve their property in the country. James and William fought for the Kingdom of Ireland in the Battle of the Boyne, where James's outnumbered forces were defeated. Jacobite resistance was finally ended after the Battle of Aughrim in July 1691. The Penal Laws were re-enacted more thoroughly after this war, as the Protestant élite wanted to ensure that the Irish Catholic landed classes would not be in a position to repeat their rebellions of the 17th century.
Colonial Ireland
Main article Ireland 1691-1801
Subsequent Irish antagonism towards England was aggravated by the economic situation of Ireland in the 18th century. Some absentee landlords managed some of their estates inefficiently, and food tended to be produced for export rather than for domestic consumption. Two very cold winters led directly to the Great Irish Famine (1740-1741), which killed about 400,000 people; all of Europe was affected. In addition, Irish exports were reduced by the Navigation Acts from the 1660s, which placed tariffs on Irish products entering England, but exempted English goods from tariffs on entering Ireland. However most of the 18th century was relatively peaceful in comparison with the preceding two hundred years, and the population doubled to over four million.
By the late 18th century, many of the Irish Protestant élite had come to see Ireland as their native country. A Parliamentary faction led by Henry Grattan agitated for a more favourable trading relationship with England and for greater legislative independence for the Parliament of Ireland. However, reform in Ireland stalled over the more radical proposals to enfranchise Irish Catholics. This was enabled in 1793, but Catholics could not yet enter parliament or become government officials. Some were attracted to the more militant example of the French Revolution of 1789. They formed the Society of the United Irishmen to overthrow British rule and found a non-sectarian republic. Their activity culminated in the Irish Rebellion of 1798, which was bloodily suppressed. Largely in response to this rebellion, Irish self-government was abolished altogether by the Act of Union in 1801.
Union with Great Britain (1801-1922)
In 1800, after the Irish Rebellion of 1798, the British and the Irish parliaments enacted the Act of Union, which merged the Kingdom of Ireland and the Kingdom of Great Britain (itself a union of "England" (Wales had been incorporated into England by the Acts of Union of 1536), and Scotland, created almost 100 years earlier), to create the United Kingdom of Great Britain and Ireland. Part of the deal for the union was that Catholic Emancipation would be conceded to remove discrimination against Catholics, Presbyterians, and others. However, King George III controversially blocked any change.
In 1823, an enterprising Catholic lawyer, Daniel O'Connell, known as "the Great Liberator" began a successful campaign to achieve emancipation, which was finally conceded in 1829. He later led an unsuccessful campaign for "Repeal of the Act of Union".
The second of Ireland's "Great Famines", An Gorta Mór struck the country severely in the period 1845-1849, with potato blight leading to mass starvation and emigration. (See Great Irish Famine.) The impact of emigration in Ireland was severe; the population dropped from over 8 million before the Famine to 4.4 million in 1911.
The Irish language, once the spoken language of the entire island, declined in use sharply in the nineteenth century as a result of the Famine and the creation of the National School education system, as well as hostility to the language from leading Irish politicians of the time; it was largely replaced by English.
Outside mainstream nationalism, a series of violent rebellions by Irish republicans took place: in 1803, under Robert Emmet; in 1848, a rebellion by the Young Irelanders, most prominent among them Thomas Francis Meagher; and in 1867, another insurrection by the Irish Republican Brotherhood. All failed, but physical force nationalism remained an undercurrent in the nineteenth century.
The late 19th century also witnessed major land reform, spearheaded by the Land League under Michael Davitt, demanding what became known as the 3 Fs: fair rent, free sale, fixity of tenure. From 1870, as a result of the Land War agitations and the subsequent Plan of Campaign of the 1880s, various British governments introduced a series of Irish Land Acts. William O'Brien played a leading role by winning the greatest piece of social legislation Ireland had yet seen, the Wyndham Land Purchase Act (1903), which broke up large estates and gradually gave rural landholders and tenants ownership of the lands. It effectively ended absentee landlordism, solving the age-old Irish Land Question.
In the 1870s the issue of Irish self-government again became a major focus of debate under the Protestant landowner Charles Stewart Parnell and the Irish Parliamentary Party, of which he was founder. British prime minister William Ewart Gladstone made two unsuccessful attempts to introduce Home Rule, in 1886 and 1893. Parnell's controversial leadership eventually ended when he was implicated in a divorce scandal: it was revealed that he had been living in a family relationship with Katherine O'Shea, the long-separated wife of a fellow Irish MP, with whom he had fathered three children.
After the introduction of the Local Government (Ireland) Act 1898 which broke the power of the landlord dominated "Grand Juries", passing for the first time absolute democratic control of local affairs into the hands of the people through elected Local County Councils, the debate over full Home Rule led to tensions between Irish nationalists and Irish unionists (those who favoured maintenance of the union). Most of the island was predominantly nationalist, Catholic and agrarian. The northeast, however, was predominantly unionist, Protestant and industrialised. Unionists feared a loss of political power and economic wealth in a predominantly rural, nationalist, Catholic home-rule state. Nationalists believed that they would remain economically and politically second class citizens without self-government. Out of this division, two opposing sectarian movements evolved, the Protestant Orange Order and the Catholic Ancient Order of Hibernians.
Home Rule, Easter 1916 and the War of Independence
Home Rule became certain when in 1910 the Irish Parliamentary Party (IPP) under John Redmond held the balance of power in the Commons and the third Home Rule Bill was introduced in 1912. Unionist resistance was immediate with the formation of the Ulster Volunteers. In turn the Irish Volunteers were established to oppose them and enforce the introduction of self-government.
In September 1914, just as the First World War broke out, the UK Parliament finally passed the Third Home Rule Act to establish self-government for Ireland, but its implementation was suspended for the duration of the war. To ensure the implementation of Home Rule after the war, nationalist leaders and the IPP under Redmond supported the British and Allied war effort against the Central Powers. The core of the Irish Volunteers was against this decision; the majority split off into the National Volunteers, who enlisted in Irish regiments of the 10th and 16th (Irish) Divisions. Before the war ended, Britain made two concerted efforts to implement Home Rule, one in May 1916 and again with the Irish Convention during 1917-1918, but the Irish sides (Nationalist, Unionist) were unable to agree terms for the temporary or permanent exclusion of Ulster from its provisions.
The period from 1916 to 1921 was marked by political violence and upheaval, ending in the partition of Ireland and independence for 26 of its 32 counties. A failed attempt was made to gain separate independence for Ireland with the 1916 Easter Rising, an insurrection in Dublin. Though support for the insurgents was small, the violence used in its suppression led to a swing in support of the rebels. In addition, the unprecedented threat of Irishmen being conscripted to the British Army in 1918 (for service on the Western Front as a result of the German Spring Offensive) accelerated this change (see Conscription Crisis of 1918). In the December 1918 elections Sinn Féin, the party of the rebels, won three-quarters of all seats in Ireland. Its MPs assembled in Dublin on 21 January 1919 to form the first Dáil Éireann, a thirty-two county Irish Republic parliament that unilaterally declared sovereignty over the entire island.
Unwilling to negotiate any understanding with Britain short of complete independence, the Irish Republican Army (the army of the newly declared Irish Republic) waged a guerrilla war, the Irish War of Independence, from 1919 to 1921. In the course of the fighting and amid much acrimony, the fourth Home Rule Act, the Government of Ireland Act 1920, implemented Home Rule while separating the island into what the Act termed "Northern Ireland" and "Southern Ireland". In July 1921, the Irish and British governments agreed a truce that halted the war. In December 1921, representatives of both governments signed the Anglo-Irish Treaty. The Irish delegation was led by Arthur Griffith and Michael Collins. The Treaty abolished the Irish Republic and created the Irish Free State, a self-governing Dominion of the British Empire in the manner of Canada and Australia. Under the Treaty, Northern Ireland could opt out of the Free State and stay within the United Kingdom: it promptly did so. In 1922, both parliaments ratified the Treaty, formalising independence for the twenty-six county Irish Free State (which went on to rename itself Ireland in 1937 and declare itself a republic in 1949), while the six-county Northern Ireland, gaining Home Rule for itself, remained part of the United Kingdom. For most of the next 75 years, each territory was strongly aligned to either Catholic or Protestant ideologies, although this was more marked in the six counties of Northern Ireland.
Free State/Republic (1922-present)
Main articles: History of the Republic of Ireland; Irish Free State; Republic of Ireland; Names of the Irish state
The treaty to sever the Union divided the republican movement into anti-Treaty (who wanted to fight on until an Irish Republic was achieved) and pro-Treaty supporters (who accepted the Free State as a first step towards full independence and unity). Between 1922 and 1923 both sides fought the bloody Irish Civil War. The new Irish Free State government defeated the anti-Treaty remnant of the Irish Republican Army. This division among nationalists still colours Irish politics today, specifically between the two leading Irish political parties, Fianna Fáil and Fine Gael.
The new Irish Free State (1922–37) existed against the backdrop of the growth of dictatorships in mainland Europe and a major world economic downturn in 1929. In contrast with many contemporary European states it remained a democracy. Testament to this came when the losing faction in the Irish civil war, Eamon de Valera's Fianna Fáil, was able to take power peacefully by winning the 1932 general election. Nevertheless, up until the mid 1930s, considerable parts of Irish society saw the Free State through the prism of the civil war, as a repressive, British imposed state. It was only the peaceful change of government in 1932 that signalled the final acceptance of the Free State on their part. In contrast to many other states in the period, the Free State remained financially solvent as a result of low government expenditure. However, unemployment and emigration were high. The population declined to a low of 2.7 million recorded in the 1961 census.
The Roman Catholic Church had a powerful influence over the Irish state for much of its history. The clergy's influence meant that the Irish state had very conservative social policies, banning, for example, divorce, contraception, abortion, pornography as well as encouraging the censoring of many books and films. In addition the Church largely controlled the State's hospitals, schools and remained the largest provider of many other social services.
With the partition of Ireland in 1922, 92.6% of the Free State's population were Catholic while 7.4% were Protestant. By the 1960s, the Protestant population had fallen by half. Although emigration was high among all the population, due to a lack of economic opportunity, the rate of Protestant emigration was disproportionate in this period. Many Protestants left the country in the early 1920s, either because they felt unwelcome in a predominantly Catholic and nationalist state, because they were afraid due to the burning of Protestant homes (particularly of the old landed class) by republicans during the civil war, because they regarded themselves as British and did not wish to live in an independent Irish state, or because of the economic disruption caused by the recent violence. The Catholic Church had also issued a decree, known as Ne Temere, whereby the children of marriages between Catholics and Protestants had to be brought up as Catholics. From 1945, the emigration rate of Protestants fell and they became less likely to emigrate than Catholics - indicating their integration into the life of the Irish State.
In 1937, a new Constitution of Ireland re-established the state as Ireland (or Éire in Irish). The state remained neutral throughout World War II (see Irish neutrality), which spared it many of the horrors of the war, although tens of thousands volunteered to serve in the British forces. Ireland was also hit badly by rationing of food and especially coal (peat production became a priority during this time). Though the state was nominally neutral, recent studies have suggested a far greater level of involvement by the South with the Allies than was realised, with D-Day's date set on the basis of secret weather information on Atlantic storms supplied by Éire. For more detail on 1939–45, see the main article The Emergency.
In the 1960s, Ireland underwent a major economic change under reforming Taoiseach (prime minister) Seán Lemass and Secretary of the Department of Finance T.K. Whitaker, who produced a series of economic plans. Free second-level education was introduced by Donnchadh O'Malley as Minister for Education in 1968. From the early 1960s, the Republic sought admission to the European Economic Community but, because 90% of the export economy still depended on the United Kingdom market, it could not do so until the UK did, in 1973.
Global economic problems in the 1970s, augmented by a set of misjudged economic policies followed by governments, including that of Taoiseach Jack Lynch, caused the Irish economy to stagnate. The Troubles in Northern Ireland discouraged foreign investment. Devaluation was enabled when the Irish Pound, or Punt, was established as a truly separate currency in 1979, breaking the link with the UK's sterling. However, economic reforms in the late 1980s and the end of the Troubles, helped by investment from the European Community, led to the emergence of one of the world's highest economic growth rates, with mass immigration (particularly of people from Asia and Eastern Europe) as a feature of the late 1990s. This period came to be known as the Celtic Tiger and was focused on as a model for economic development in the former Eastern Bloc states, which entered the European Union in the early 2000s. Property values rose by a factor of between four and ten between 1993 and 2006, in part fueling the boom.
Irish society also adopted relatively liberal social policies during this period. Divorce was legalised, homosexuality decriminalised, while abortion in limited cases was allowed by the Irish Supreme Court in the X Case legal judgment. Major scandals in the Roman Catholic Church, both sexual and financial, coincided with a widespread decline in religious practice, with weekly attendance at Roman Catholic Mass halving in twenty years. A series of tribunals set up from the 1990s have investigated alleged malpractices by politicians, the Catholic clergy, judges, hospitals and the Gardaí (police).
Northern Ireland
"A Protestant State" (1921-1971)[edit | edit source]
From 1921 to 1971, Northern Ireland was governed by the Ulster Unionist Party government, based at Stormont in East Belfast. The founding Prime Minister, James Craig, proudly declared that it would be "a Protestant State for a Protestant People" (in contrast to the anticipated "Papist" state to the south). Discrimination against the minority nationalist community in jobs and housing, and their total exclusion from political power due to the majoritarian electoral system, led to the emergence of the Northern Ireland Civil Rights Association in the late 1960s, inspired by Martin Luther King's civil rights movement in the United States of America. A violent counter-reaction from conservative unionists and the Royal Ulster Constabulary (RUC) led to civil disorder, notably the Battle of the Bogside and the Northern Ireland riots of August 1969. To restore order, British troops were deployed to the streets of Northern Ireland at this time.
Tensions came to a head with the events of Bloody Sunday and Bloody Friday, and the worst years (early 1970s) of what became known as the Troubles resulted. The Stormont parliament was prorogued in 1971 and abolished in 1972. Paramilitary private armies such as the Provisional Irish Republican Army, the Official IRA, the Irish National Liberation Army, the Ulster Defence Association and the Ulster Volunteer Force fought each other and the British army and the (largely Unionist) RUC, resulting in the deaths of well over three thousand men, women and children, civilians and military. Most of the violence took place in Northern Ireland, but some also spread to England and across the Irish border.
Direct rule (1971-1998)
For the next 27 years, Northern Ireland was under "direct rule" with a Secretary of State for Northern Ireland in the British Cabinet responsible for the departments of the Northern Ireland executive/government. Principal acts were passed by the Parliament of the United Kingdom in the same way as for much of the rest of the UK, but many smaller measures were dealt with by Order in Council with minimal parliamentary scrutiny. Throughout this time the aim was to restore devolution, but three attempts - the power-sharing executive established by the Northern Ireland Constitution Act and the Sunningdale Agreement, the 1975 Northern Ireland Constitutional Convention and Jim Prior's 1982 assembly - all failed to either reach consensus or operate in the longer term.
During the 1970s British policy concentrated on defeating the Provisional Irish Republican Army (IRA) by military means, including the policy of Ulsterisation (requiring the RUC and the British Army's reserve Ulster Defence Regiment to be at the forefront of combating the IRA). Although IRA violence decreased, it was obvious that no military victory was at hand in either the short or medium term. Even Catholics who generally rejected the IRA were unwilling to offer support to a state that seemed to remain mired in sectarian discrimination, and the Unionists plainly were not interested in Catholic participation in running the state in any case. In the 1980s the IRA attempted to secure a decisive military victory based on massive arms shipments from Libya. When this failed, probably because of MI5's penetration of the IRA's senior commands, senior republican figures began to look to broaden the struggle beyond purely military means. In time this began a move towards military cessation. In 1985 the British and Irish governments signed the Anglo-Irish Agreement, signalling a formal partnership in seeking a political solution. Socially and economically, Northern Ireland suffered the worst levels of unemployment in the UK, and although high levels of public spending ensured a slow modernisation of public services and moves towards equality, progress was slow in the 1970s and 1980s; only in the 1990s, when progress towards peace became tangible, did the economic situation brighten. By then, too, the demographics of Northern Ireland had undergone significant change, with more than 40% of the population being Catholic.
Devolution and direct rule (1998-present)
More recently, the Belfast Agreement ("Good Friday Agreement") of 10 April 1998 brought a degree of power sharing to Northern Ireland, giving both unionists and nationalists control of limited areas of government. However, both the power-sharing Executive and the elected Assembly have been suspended since October 2002 following a breakdown in trust between the political parties. Efforts to resolve outstanding issues, including the "decommissioning" of paramilitary weapons, policing reform and the removal of British army bases, are continuing. Recent elections have not helped towards compromise, with the moderate Ulster Unionist Party and the (nationalist) Social Democratic and Labour Party being substantially displaced by the hard-line Democratic Unionist Party and (nationalist) Sinn Féin.
Flags in Ireland
The state flag of the Republic of Ireland is the Irish Tricolour. This flag, which bears the colours green for Roman Catholics, orange for Protestants, and white for the desired peace between them, dates back to the middle of the 19th century.
The state flag applying to Northern Ireland is the Union Flag of the United Kingdom of Great Britain and Northern Ireland. The Ulster Banner is sometimes used as a de facto regional flag for Northern Ireland.
The Tricolour was first unfurled in public by Young Irelander Thomas Francis Meagher who, using the symbolism of the flag, explained his vision as follows: "The white in the centre signifies a lasting truce between the "Orange" and the "Green," and I trust that beneath its folds the hands of the Irish Protestant and the Irish Catholic may be clasped in generous and heroic brotherhood". Fellow nationalist John Mitchel said of it: "I hope to see that flag one day waving as our national banner."
In 1937 when the Constitution of Ireland was introduced, the Tricolour was formally confirmed as the national flag: "The national flag is the tricolour of green, white and orange." While the Tricolour today is the official flag of the Republic of Ireland, as a state flag it does not apply to the entire island of Ireland.
Since Partition, there has been no universally-accepted flag to represent the entire island. As a provisional solution for certain sports fixtures, the Flag of the Four Provinces enjoys a certain amount of general acceptance and popularity.
Historically a number of flags have been used, including:
- Saint Patrick's Flag (St Patrick's Saltire, St Patrick's Cross), which was the flag sometimes used for the Kingdom of Ireland and which represented Ireland on the Union Flag after the Act of Union,
- a green flag with a harp (used by most nationalists in the 19th century and which is also the flag of Leinster),
- a blue flag with a harp used from the 18th century onwards by many nationalists (now the standard of the President of Ireland), and
- the Irish Tricolour.
St Patrick's Saltire was formerly used to represent the island of Ireland by the all-island Irish Rugby Football Union (IRFU), before adoption of the four-provinces flag. The Gaelic Athletic Association (GAA) uses the Tricolour to represent the whole island.
See also
Footnotes
- ^ Moody, T.W. & Martin, F.X., eds. (1995). The Course of Irish History. Roberts Rinehart. pp. 31-32. ISBN 1-56833-175-4.
- ^ Geneticists find Celtic links to Spain and Portugal www.breakingnews.ie, 2004-09-09. Retrieved 2007-04-01.
- ^ Myths of British ancestry Stephen Oppenheimer. October 2006, Special report. Retrieved 2007-04-01.
- ^ Y-chromosome variation and Irish origins (pdf)
- ^ "Yes, the Romans did invade Ireland". British Archaeology. http://www.britarch.ac.uk/BA/ba14/ba14feat.html.
- ^ Philip Rance, 'Attacotti, Déisi and Magnus Maximus: the Case for Irish Federates in Late Roman Britain', Britannia 32 (2001), pp. 243-270
- ^ Carmel McCaffrey and Leo Eaton, In Search of Ancient Ireland, Ivan R Dee (2002); PBS, 2002
- ^ M.E. Collins, Ireland 1868-1966 (1993), p. 431
- ^ The Irish State www.irlgov.ie
References
- Irish History, Séamus Mac Annaidh, Bath: Paragon, 1999, ISBN 0-76256-139-1
- Irish Kings and High Kings, Francis John Byrne, Dublin, 1973.
- A New History of Ireland: I - PreHistoric and Early Ireland, ed. Daibhi O Croinin. 2005
- A New History of Ireland: II- Medieval Ireland 1169-1534, ed. Art Cosgrove. 1987.
- Braudel, Fernand, The Perspective of the World, vol III of Civilization and Capitalism (1979, in English 1985)
- Plumb, J.H., England in the 18th Century, 1973: "The Irish Empire"
- Murray N. Rothbard, For a New Liberty, 1973, online.
Further reading
- S.J. Connolly (editor) The Oxford Companion to Irish History (Oxford University Press, 2000)
- Tim Pat Coogan De Valera (Hutchinson, 1993)
- Norman Davies The Isles: A History (Macmillan, 1999)
- Nancy Edwards, The archaeology of early medieval Ireland (London, Batsford 1990).
- R. F. Foster Modern Ireland, 1600-1972
- J.J.Lee The Modernisation of Irish Society 1848-1918 (Gill and Macmillan)
- FSL Lyons Ireland Since the Famine
- Dorothy McCardle The Irish Republic
- T.W. Moody and F.X. Martin "The Course of Irish History" Fourth Edition (Lanham, Maryland: Roberts Rinehart Publishers, 2001).
- James H. Murphy Abject Loyalty: Nationalism and Monarchy in Ireland During the Reign of Queen Victoria (Cork University Press, 2001)
- http://www.ucc.ie/celt/published/E900003-001/ - the 1921 Treaty debates online.
- John A. Murphy Ireland in the Twentieth Century (Gill and Macmillan)
- Frank Packenham (Lord Longford) Peace by Ordeal
- Alan J. Ward The Irish Constitutional Tradition: Responsible Government & Modern Ireland 1782-1992 (Irish Academic Press, 1994)
- Robert Kee The Green Flag Volumes 1-3 (The Most Distressful Country, The Bold Fenian Men, Ourselves Alone)
- Carmel McCaffrey and Leo Eaton In Search of Ancient Ireland: the origins of the Irish from Neolithic Times to the Coming of the English (Ivan R Dee, 2002)
- Carmel McCaffrey In Search of Ireland's Heroes: the Story of the Irish from the English Invasion to the Present Day (Ivan R Dee, 2006)
- Hugh F. Kearney Ireland: Contested Ideas of Nationalism and History (NYU Press, 2007)
- Nicholas Canny "The Elizabethan Conquest of Ireland"(London, 1976) ISBN 0-85527-034-9.
External links
- History of Ireland: Primary Documents
- History of Ireland guide
- Ireland Under Coercion - "The diary of an American", by William Henry Hurlbert, published 1888, from Project Gutenberg
- The Story of Ireland by Emily Lawless, 1896 (Project Gutenberg)
- Timeline of Irish History 1840-1916 (1916 Rebellion Walking Tour)
- A Concise History of Ireland by P. W. Joyce
This page uses Creative Commons Licensed content from Wikipedia (view authors).
Democracy (Greek: δημοκρατία dēmokratía, literally "rule by people") is a form of government in which the people have the authority to choose their governing legislation. Who people are and how authority is shared among them are core issues for democratic development and constitution. Some cornerstones of these issues are freedom of assembly and speech, inclusiveness and equality, membership, consent, voting, right to life and minority rights.
Generally, there are two types of democracy: direct and representative. In a direct democracy, the people directly deliberate and decide on legislation. In a representative democracy, the people elect representatives to deliberate and decide on legislation, such as in parliamentary or presidential democracy. Liquid democracy combines elements of these two basic types.
In the common variant of liberal democracy the powers of the majority are exercised within the framework of a representative democracy, but the constitution limits the majority and protects the minority, usually through the enjoyment by all of certain individual rights, e.g. freedom of speech, or freedom of association. Beside these general types of democracy there have been a wealth of further types (see below). Republics, though often associated with democracy because of the shared principle of rule by consent of the governed, are not necessarily democracies, as republicanism does not specify how the people are to rule.
Democracy is a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes. The uncertainty of outcomes is inherent in democracy. Democracy makes all forces struggle repeatedly to realize their interests and devolves power from groups of people to sets of rules. Western democracy, as distinct from that which existed in pre-modern societies, is generally considered to have originated in city-states such as Classical Athens and the Roman Republic, where various schemes and degrees of enfranchisement of the free male population were observed before the form disappeared in the West at the beginning of late antiquity. The English word dates back to the 16th century, from the older Middle French and Middle Latin equivalents.
According to American political scientist Larry Diamond, democracy consists of four key elements: a political system for choosing and replacing the government through free and fair elections; the active participation of the people, as citizens, in politics and civic life; protection of the human rights of all citizens; a rule of law, in which the laws and procedures apply equally to all citizens. Todd Landman, nevertheless, draws our attention to the fact that democracy and human rights are two different concepts and that "there must be greater specificity in the conceptualisation and operationalisation of democracy and human rights".
The term appeared in the 5th century BC to denote the political systems then existing in Greek city-states, notably Athens, to mean "rule of the people", in contrast to aristocracy (ἀριστοκρατία, aristokratía), meaning "rule of an elite". While theoretically these definitions are in opposition, in practice the distinction has been blurred historically. The political system of Classical Athens, for example, granted democratic citizenship to free men and excluded slaves and women from political participation. In virtually all democratic governments throughout ancient and modern history, democratic citizenship consisted of an elite class, until full enfranchisement was won for all adult citizens in most modern democracies through the suffrage movements of the 19th and 20th centuries.
Democracy contrasts with forms of government where power is either held by an individual, as in an absolute monarchy, or where power is held by a small number of individuals, as in an oligarchy. Nevertheless, these oppositions, inherited from Greek philosophy, are now ambiguous because contemporary governments have mixed democratic, oligarchic and monarchic elements. Karl Popper defined democracy in contrast to dictatorship or tyranny, thus focusing on opportunities for the people to control their leaders and to oust them without the need for a revolution.
No consensus exists on how to define democracy, but legal equality, political freedom and rule of law have been identified as important characteristics. These principles are reflected in all eligible citizens being equal before the law and having equal access to legislative processes. For example, in a representative democracy, every vote has equal weight, no unreasonable restrictions can apply to anyone seeking to become a representative, and the freedom of its eligible citizens is secured by legitimised rights and liberties which are typically protected by a constitution. Other uses of "democracy" include that of direct democracy.
One theory holds that democracy requires three fundamental principles: upward control (sovereignty residing at the lowest levels of authority), political equality, and social norms by which individuals and institutions only consider acceptable acts that reflect the first two principles of upward control and political equality.
The term "democracy" is sometimes used as shorthand for liberal democracy, which is a variant of representative democracy that may include elements such as political pluralism; equality before the law; the right to petition elected officials for redress of grievances; due process; civil liberties; human rights; and elements of civil society outside the government. Roger Scruton argues that democracy alone cannot provide personal and political freedom unless the institutions of civil society are also present.
In some countries, notably in the United Kingdom which originated the Westminster system, the dominant principle is that of parliamentary sovereignty, while maintaining judicial independence. In the United States, separation of powers is often cited as a central attribute. In India, parliamentary sovereignty is subject to the Constitution of India which includes judicial review. Though the term "democracy" is typically used in the context of a political state, the principles also are applicable to private organisations.
There are many decision-making methods used in democracies, but majority rule is the dominant form. Without compensating measures, such as legal protections of individual or group rights, political minorities can be oppressed by the "tyranny of the majority". Majority rule is a competitive approach, opposed to consensus democracy, creating the need that elections, and generally deliberation, be substantively and procedurally "fair", i.e., just and equitable. In some countries, freedom of political expression, freedom of speech, freedom of the press, and internet democracy are considered important to ensure that voters are well informed, enabling them to vote according to their own interests.
It has also been suggested that a basic feature of democracy is the capacity of all voters to participate freely and fully in the life of their society. With its emphasis on notions of social contract and the collective will of all the voters, democracy can also be characterised as a form of political collectivism because it is defined as a form of government in which all eligible citizens have an equal say in lawmaking.
While representative democracy is sometimes equated with the republican form of government, the term "republic" classically has encompassed both democracies and aristocracies. Many democracies are constitutional monarchies, such as the United Kingdom.
Historic origins and proto-democratic societies
Retrospectively, different polities outside of declared democracies have been described as proto-democratic (see History of democracy).
The term "democracy" first appeared in ancient Greek political and philosophical thought in the city-state of Athens during classical antiquity. The word comes from demos, "common people" and kratos, "strength". Led by Cleisthenes, Athenians established what is generally held as the first democracy in 508–507 BC. Cleisthenes is referred to as "the father of Athenian democracy."
Athenian democracy took the form of a direct democracy, and it had two distinguishing features: the random selection of ordinary citizens to fill the few existing government administrative and judicial offices, and a legislative assembly consisting of all Athenian citizens. All eligible citizens were allowed to speak and vote in the assembly, which set the laws of the city state. However, Athenian citizenship excluded women, slaves, foreigners (μέτοικοι / métoikoi), non-landowners, and men under 20 years of age. The exclusion of large parts of the population from the citizen body is closely related to the ancient understanding of citizenship. In most of antiquity the benefit of citizenship was tied to the obligation to fight war campaigns.
Athenian democracy was not only direct in the sense that decisions were made by the assembled people, but also the most direct in the sense that the people through the assembly, boule and courts of law controlled the entire political process and a large proportion of citizens were involved constantly in the public business. Even though the rights of the individual were not secured by the Athenian constitution in the modern sense (the ancient Greeks had no word for "rights"), the Athenians enjoyed their liberties not in opposition to the government but by living in a city that was not subject to another power and by not being subjects themselves to the rule of another person.
Range voting appeared in Sparta as early as 700 BC. The Apella was an assembly of the people, held once a month, in which every male citizen of at least 30 years of age could participate. In the Apella, Spartans elected leaders and cast votes by range voting and shouting. Aristotle called this "childish", as compared with the stone voting ballots used by the Athenians. Sparta adopted it because of its simplicity, and to prevent the biased voting, vote-buying, and cheating that were predominant in early democratic elections.
Even though the Roman Republic contributed significantly to many aspects of democracy, only a minority of Romans were citizens with votes in elections for representatives. The votes of the powerful were given more weight through a system of gerrymandering, so most high officials, including members of the Senate, came from a few wealthy and noble families. In addition, the Roman Republic was the first government in the western world to establish a republic as a nation-state, although it did not have much of a democracy. The Roman model of governance inspired many political thinkers over the centuries, and today's modern representative democracies imitate the Roman more than the Greek models because it was a state in which supreme power was held by the people and their elected representatives, and which had an elected or nominated leader. Other cultures, such as the Iroquois Nation in the Americas between around 1450 and 1600 AD, also developed a form of democratic society before they came in contact with the Europeans. This indicates that forms of democracy may have been invented in other societies around the world.
During the Middle Ages, there were various systems involving elections or assemblies, although often only involving a small part of the population. These included:
- the Things of Scandinavia,
- The 1061 Papal election,
- the Althing in Iceland,
- the Løgting in the Faeroe Islands,
- Papal conclaves, Elections of Bishops, Abbots, Abbesses carried on, and evolved from their classical roots.
- the election of Uthman in the Rashidun Caliphate,
- the South Indian Kingdom of the Chola in the state of Tamil Nadu in the Indian Subcontinent, which had an electoral system around 920 A.D., about 1,100 years ago,
- Carantania, old Slavic/Slovenian principality, the Ducal Inauguration from 7th to 15th century,
- the upper-caste election of the Gopala in the Bengal region of the Indian Subcontinent,
- the Holy Roman Empire's Hoftag and Imperial Diets (mostly Nobles and Clergy but 100 Free Cities were included),
- Frisia in the 10th–15th century (weight of vote based on landownership), including the peasant republic of the Dithmarschen,
- the Polish–Lithuanian Commonwealth (10% of population),
- certain medieval Italian city-states such as Venice, Genoa, Florence, Pisa, Lucca, Amalfi, Siena and San Marino
- the 200+ Royal and Imperial Free Cities of Central and Northern Europe, including Strasbourg, Cologne, Frankfurt, Lübeck, Hamburg, Bremen, Nuremberg, Bruges, Ghent, Augsburg, Amsterdam, Prague, Krakow, and Gdansk, organized under Stadtrecht or German Town Law.
- the Hansetag of the Hanseatic League.
- the various permanent town leagues or Städtebund such as the Lusatian League, the Decapole, and the Pentapolitana
- the Republic of Ragusa (now Dubrovnik) on the Dalmatian coast in what is today Croatia.
- the free pirate groups of the Baltic such as the Victual Brothers.
- the Cortes of León,
- the tuatha system in early medieval Ireland,
- the Veche in Novgorod and Pskov Republics of medieval Russia,
- The States in Tirol and the Old Swiss Confederacy in Switzerland,
- the autonomous merchant city of Sakai in the 16th century in Japan,
- Volta-Nigeric societies such as Igbo.
- the Mekhk-Khel system of the Nakh peoples of the North Caucasus, by which representatives to the Council of Elders for each teip (clan) were popularly elected by that teip's members.
- The tenth Sikh Guru, Gobind Singh (Nanak X), who established the world's first Sikh democratic republic, ending the aristocracy, on the day of 1st Vaisakh 1699, with Gurbani as the sole constitution of this Sikh republic on the Indian subcontinent.
Most regions in medieval Europe were ruled by clergy or feudal lords.
The Kouroukan Fouga divided the Mali Empire into ruling clans (lineages) that were represented at a great assembly called the Gbara. However, the charter made Mali more similar to a constitutional monarchy than a democratic republic.
The Parliament of England had its roots in the restrictions on the power of kings written into Magna Carta (1215), which explicitly protected certain rights of the King's subjects and implicitly supported what became the English writ of habeas corpus, safeguarding individual freedom against unlawful imprisonment with right to appeal. The first representative national assembly in England was Simon de Montfort's Parliament in 1265. The emergence of petitioning is some of the earliest evidence of parliament being used as a forum to address the general grievances of ordinary people. However, the power to call parliament remained at the pleasure of the monarch.
Early modern period
In 17th century England, there was renewed interest in Magna Carta. The Parliament of England passed the Petition of Right in 1628 which established certain liberties for subjects. The English Civil War (1642–1651) was fought between the King and an oligarchic but elected Parliament, during which the idea of a political party took form with groups debating rights to political representation during the Putney Debates of 1647. Subsequently, the Protectorate (1653–59) and the English Restoration (1660) restored more autocratic rule, although Parliament passed the Habeas Corpus Act in 1679 which strengthened the convention that forbade detention lacking sufficient cause or evidence. After the Glorious Revolution of 1688, the Bill of Rights was enacted in 1689 which codified certain rights and liberties, and is still in effect. The Bill set out the requirement for regular elections, rules for freedom of speech in Parliament and limited the power of the monarch, ensuring that, unlike much of Europe at the time, royal absolutism would not prevail.
In the Cossack republics of Ukraine in the 16th and 17th centuries, the Cossack Hetmanate and Zaporizhian Sich, the holder of the highest post of Hetman was elected by the representatives from the country's districts.
In North America, representative government began in Jamestown, Virginia, with the election of the House of Burgesses (forerunner of the Virginia General Assembly) in 1619. English Puritans who migrated from 1620 established colonies in New England whose local governance was democratic and which contributed to the democratic development of the United States; although these local assemblies had some small amounts of devolved power, the ultimate authority was held by the Crown and the English Parliament. The Puritans (Pilgrim Fathers), Baptists, and Quakers who founded these colonies applied the democratic organisation of their congregations also to the administration of their communities in worldly matters.
18th and 19th centuries
The first Parliament of Great Britain was established in 1707, after the merger of the Kingdom of England and the Kingdom of Scotland under the Acts of Union. Although the monarch increasingly became a figurehead, only a small minority actually had a voice; Parliament was elected by only a few percent of the population (less than 3% as late as 1780). During the Age of Liberty in Sweden (1718–1772), civil rights were expanded and power shifted from the monarch to parliament. The taxed peasantry was represented in parliament, although with little influence, but commoners without taxed property had no suffrage.
The creation of the short-lived Corsican Republic in 1755 marked the first nation in modern history to adopt a democratic constitution (all men and women above age of 25 could vote). This Corsican Constitution was the first based on Enlightenment principles and included female suffrage, something that was not granted in most other democracies until the 20th century.
In the American colonial period before 1776, and for some time after, often only adult white male property owners could vote; enslaved Africans, most free black people and most women were not extended the franchise. This changed state by state, beginning with the republican State of New Connecticut, soon after called Vermont, which, on declaring independence of Great Britain in 1777, adopted a constitution modelled on Pennsylvania's with citizenship and democratic suffrage for males with or without property, and went on to abolish slavery. On the American frontier, democracy became a way of life, with more widespread social, economic and political equality. Although not described as a democracy by the founding fathers, they shared a determination to root the American experiment in the principles of natural freedom and equality.
The American Revolution led to the adoption of the United States Constitution in 1787, the oldest surviving, still active, governmental codified constitution. The Constitution provided for an elected government and protected civil rights and liberties for some, but did not end slavery nor extend voting rights in the United States, instead leaving the issue of suffrage to the individual states. Generally, suffrage was limited to white male property owners and taxpayers, of whom between 60% and 90% were eligible to vote by the end of the 1780s. The Bill of Rights in 1791 set limits on government power to protect personal freedoms but had little impact on judgements by the courts for the first 130 years after ratification.
Constitution of 3 May 1791: first page of the original manuscript, registered (upper right corner) on 5 May 1791. Created 6 October 1788 – 3 May 1791; ratified 3 May 1791; held in the Central Archives of Historical Records, Warsaw.
The Polish Constitution of 3 May 1791 (Polish: Konstytucja Trzeciego Maja), called "the first constitution of its kind in Europe" by historian Norman Davies, was instituted by the Government Act (Polish: Ustawa Rządowa, "Governance Act") adopted on that date by the Great Sejm ("Four-Year Sejm", meeting in 1788–92) of the Polish–Lithuanian Commonwealth, a dual monarchy comprising the Crown of the Kingdom of Poland and the Grand Duchy of Lithuania. Short-lived due to aggression by the Commonwealth's neighbours, the Constitution was designed to correct the Commonwealth's political flaws and had been preceded by a period of agitation for, and gradual introduction of, reforms, beginning with the Convocation Sejm of 1764 and the consequent election that year of Stanisław August Poniatowski as the Commonwealth's last king.
The Constitution sought to implement a more effective constitutional monarchy, introduced political equality between townspeople and nobility, and placed the peasants under the protection of the government, mitigating the worst abuses of serfdom. It banned pernicious parliamentary institutions such as the liberum veto, which had put the Sejm at the mercy of any single deputy, who could veto and thus undo all the legislation that had been adopted by that Sejm. The Commonwealth's neighbours reacted with hostility to the adoption of the Constitution. King Frederick William II broke Prussia's alliance with the Polish-Lithuanian Commonwealth and joined with Catherine the Great's Imperial Russia and the Targowica Confederation of anti-reform Polish magnates to defeat the Commonwealth in the Polish–Russian War of 1792.
The 1791 Constitution was in force for less than 19 months. It was declared null and void by the Grodno Sejm that met in 1793, though the Sejm's legal power to do so was questionable. The Second and Third Partitions of Poland (1793, 1795) ultimately ended Poland's sovereign existence until the close of World War I in 1918. Over that 123-year period, the 1791 Constitution helped keep alive Polish aspirations for the eventual restoration of the country's sovereignty. In the words of two of its principal authors, Ignacy Potocki and Hugo Kołłątaj, the 1791 Constitution was "the last will and testament of the expiring Homeland."
The Constitution of 3 May 1791 combined a monarchic republic with a clear division of executive, legislative, and judiciary powers. It is generally considered Europe's first, and the world's second, modern written national constitution, after the United States Constitution that had come into force in 1789.
In 1789, Revolutionary France adopted the Declaration of the Rights of Man and of the Citizen and, although short-lived, the National Convention was elected by all men in 1792. However, in the early 19th century, little of democracy—as theory, practice, or even as word—remained in the North Atlantic world.
During this period, slavery remained a social and economic institution in places around the world. This was particularly the case in the United States, and especially in the last fifteen slave states that kept slavery legal in the American South until the Civil War. A variety of organisations were established advocating the movement of black people from the United States to locations where they would enjoy greater freedom and equality.
The United Kingdom's Slave Trade Act 1807 banned the trade across the British Empire, which was enforced internationally by the Royal Navy under treaties Britain negotiated with other nations. As the voting franchise in the U.K. was increased, it also was made more uniform in a series of reforms beginning with the Reform Act 1832, although the United Kingdom did not become a complete democracy until well into the 20th century. In 1833, the United Kingdom passed the Slavery Abolition Act which took effect across the British Empire.
Universal male suffrage was established in France in March 1848 in the wake of the French Revolution of 1848. In 1848, several revolutions broke out in Europe as rulers were confronted with popular demands for liberal constitutions and more democratic government.
In the 1860 United States Census, the slave population in the United States had grown to four million, and in Reconstruction after the Civil War (late 1860s), the newly freed slaves became citizens with a nominal right to vote for men. Full enfranchisement of citizens was not secured until after the Civil Rights Movement gained passage by the United States Congress of the Voting Rights Act of 1965.
In 1876 the Ottoman Empire transitioned from an absolute monarchy to a constitutional one, and held two elections the next year to elect members to its newly formed parliament. Provisional Electoral Regulations were issued on 29 October 1876, stating that the elected members of the Provincial Administrative Councils would elect members to the first Parliament. On 24 December a new constitution was promulgated, which provided for a bicameral Parliament with a Senate appointed by the Sultan and a popularly elected Chamber of Deputies. Only men above the age of 30 who were competent in Turkish and had full civil rights were allowed to stand for election. Reasons for disqualification included holding dual citizenship, being employed by a foreign government, being bankrupt, being employed as a servant, or having "notoriety for ill deeds". Full universal suffrage was achieved in 1934.
20th and 21st centuries
20th-century transitions to liberal democracy have come in successive "waves of democracy", variously resulting from wars, revolutions, decolonisation, and religious and economic circumstances. Global waves of "democratic regression" reversing democratization, have also occurred in the 1920s and 30s, in the 1960s and 1970s, and in the 2010s.
In the 1920s democracy flourished and women's suffrage advanced, but the Great Depression brought disenchantment and most of the countries of Europe, Latin America, and Asia turned to strong-man rule or dictatorships. Fascism and dictatorships flourished in Nazi Germany, Italy, Spain and Portugal, as well as non-democratic governments in the Baltics, the Balkans, Brazil, Cuba, China, and Japan, among others.
World War II brought a definitive reversal of this trend in western Europe. The democratisation of the American, British, and French sectors of occupied Germany (disputed), Austria, Italy, and the occupied Japan served as a model for the later theory of government change. However, most of Eastern Europe, including the Soviet sector of Germany fell into the non-democratic Soviet bloc.
The war was followed by decolonisation, and again most of the new independent states had nominally democratic constitutions. India emerged as the world's largest democracy and continues to be so. Countries that were once part of the British Empire often adopted the British Westminster system.
By 1960, the vast majority of countries were nominally democracies, although most of the world's population lived in nations that experienced sham elections and other forms of subterfuge (particularly in "Communist" nations and the former colonies).
A subsequent wave of democratisation brought substantial gains toward true liberal democracy for many nations. Spain, Portugal (1974), and several of the military dictatorships in South America returned to civilian rule in the late 1970s and early 1980s (Argentina in 1983, Bolivia, Uruguay in 1984, Brazil in 1985, and Chile in the early 1990s). This was followed by nations in East and South Asia by the mid-to-late 1980s.
Economic malaise in the 1980s, along with resentment of Soviet oppression, contributed to the collapse of the Soviet Union, the associated end of the Cold War, and the democratisation and liberalisation of the former Eastern bloc countries. The most successful of the new democracies were those geographically and culturally closest to western Europe, and they are now members or candidate members of the European Union. In 1986, after the toppling of the most prominent Asian dictatorship, the only democratic state of its kind at the time emerged in the Philippines with the rise of Corazon Aquino, who would later be known as the Mother of Asian Democracy.
The liberal trend spread to some nations in Africa in the 1990s, most prominently in South Africa. Some recent examples of attempts of liberalisation include the Indonesian Revolution of 1998, the Bulldozer Revolution in Yugoslavia, the Rose Revolution in Georgia, the Orange Revolution in Ukraine, the Cedar Revolution in Lebanon, the Tulip Revolution in Kyrgyzstan, and the Jasmine Revolution in Tunisia.
According to Freedom House, in 2007 there were 123 electoral democracies (up from 40 in 1972). According to World Forum on Democracy, electoral democracies now represent 120 of the 192 existing countries and constitute 58.2 percent of the world's population. At the same time liberal democracies i.e. countries Freedom House regards as free and respectful of basic human rights and the rule of law are 85 in number and represent 38 percent of the global population.
Most electoral democracies continue to exclude those younger than 18 from voting. The voting age has been lowered to 16 for national elections in a number of countries, including Brazil, Austria, Cuba, and Nicaragua. In California, a 2004 proposal to permit a quarter vote at 14 and a half vote at 16 was ultimately defeated. In 2008, the German parliament proposed but shelved a bill that would grant the vote to each citizen at birth, to be used by a parent until the child claims it for themselves.
According to Freedom House, starting in 2005, there have been eleven consecutive years in which declines in political rights and civil liberties throughout the world have outnumbered improvements, as populist and nationalist political forces have gained ground everywhere from Poland (under the Law and Justice Party) to the Philippines (under Rodrigo Duterte).
In a Freedom House report released in 2018, Democracy Scores for most countries declined for the 12th consecutive year. The Christian Science Monitor reported that nationalist and populist political ideologies were gaining ground, at the expense of the rule of law, in countries like Poland, Turkey and Hungary. For example, in Poland, the President appointed 27 new Supreme Court judges over objections from the European Union. In Turkey, thousands of judges were removed from their positions during a government crackdown following a failed coup attempt.
Measurement of democracy
Several freedom indices are published by various organisations according to their own definitions of the term and relying on different types of data:
- Freedom in the World published each year since 1972 by the U.S.-based Freedom House ranks countries by political rights and civil liberties that are derived in large measure from the Universal Declaration of Human Rights. Countries are assessed as free, partly free, or unfree.
- Worldwide Press Freedom Index is published each year since 2002 (except that 2011 was combined with 2012) by France-based Reporters Without Borders. Countries are assessed as having a good situation, a satisfactory situation, noticeable problems, a difficult situation, or a very serious situation.
- The Index of Freedom in the World is an index measuring classical civil liberties published by Canada's Fraser Institute, Germany's Liberales Institute, and the U.S. Cato Institute. It is not currently included in the table below.
- The CIRI Human Rights Data Project measures a range of human, civil, women's and workers rights. It is now hosted by the University of Connecticut. It was created in 1994. In its 2011 report, the U.S. was ranked 38th in overall human rights.
- The Democracy Index, published by the U.K.-based Economist Intelligence Unit, is an assessment of countries' democracy. Countries are rated to be either Full Democracies, Flawed Democracies, Hybrid Regimes, or Authoritarian regimes. Full democracies, flawed democracies, and hybrid regimes are considered to be democracies, and the authoritarian nations are considered to be dictatorial. The index is based on 60 indicators grouped in five different categories.
- The U.S.-based Polity data series is a widely used data series in political science research. It contains coded annual information on regime authority characteristics and transitions for all independent states with greater than 500,000 total population and covers the years 1800–2006. Polity's conclusions about a state's level of democracy are based on an evaluation of that state's elections for competitiveness, openness and level of participation. Data from this series is not currently included in the table below. The Polity work is sponsored by the Political Instability Task Force (PITF) which is funded by the U.S. Central Intelligence Agency. However, the views expressed in the reports are the authors' alone and do not represent the views of the US Government.
- MaxRange, a dataset defining the level of democracy and institutional structure (regime type) on a 100-graded scale where every value represents a unique regime type. Values are sorted from 1–100 based on level of democracy and political accountability. MaxRange defines the value corresponding to all states and every month from 1789 to 2015 and is continuously updated. MaxRange was created and developed by Max Range and is now associated with the University of Halmstad, Sweden.
Dieter Fuchs and Edeltraud Roller suggest that, in order to truly measure the quality of democracy, objective measurements need to be complemented by "subjective measurements based on the perspective of citizens". Similarly, Quinton Mayne and Brigitte Geißel also defend that the quality of democracy does not depend exclusively on the performance of institutions, but also on the citizens' own dispositions and commitment.
Difficulties in measuring democracy
Because democracy is an overarching concept that includes the functioning of diverse institutions which are not easy to measure, strong limitations exist in quantifying and econometrically measuring the potential effects of democracy or its relationship with other phenomena such as inequality, poverty, or education. Given the constraints in acquiring reliable data on within-country variation in aspects of democracy, academics have largely studied cross-country variation. Yet variation in democratic institutions across countries is very large, which constrains meaningful statistical comparison. Since democracy is typically measured in aggregate as a macro variable, with a single observation for each country and each year, the study of democracy faces a range of econometric constraints and is limited to basic correlations. Cross-country comparison of a composite, comprehensive and qualitative concept like democracy may thus not always be methodologically rigorous or useful.
Types of governmental democracies
Democracy has taken a number of forms, both in theory and practice. Some varieties of democracy provide better representation and more freedom for their citizens than others. However, if a democracy is not structured to prohibit the government from excluding the people from the legislative process, or to prevent any branch of government from altering the separation of powers in its own favour, then a branch of the system can accumulate too much power and destroy the democracy.
The following kinds of democracy are not exclusive of one another: many specify details of aspects that are independent of one another and can co-exist in a single system.
Several variants of democracy exist, but there are two basic forms, both of which concern how the whole body of eligible citizens executes its will. One form is direct democracy, in which all eligible citizens participate actively in political decision-making, for example by voting on policy initiatives directly. In most modern democracies the whole body of eligible citizens remains the sovereign power, but political power is exercised indirectly through elected representatives; this is called representative democracy.
Direct democracy is a political system in which citizens participate in decision-making personally, rather than relying on intermediaries or representatives. The use of a lot system, a characteristic of Athenian democracy, is unique to direct democracies. In this system, important governmental and administrative tasks are performed by citizens selected by lottery. A direct democracy gives the voting population the power to:
- Change constitutional laws,
- Put forth initiatives, referendums and suggestions for laws,
- Give binding orders to elected officials, such as recalling them before the end of their term, or initiating a lawsuit for breaking a campaign promise.
Within modern-day representative governments, certain electoral tools like referendums, citizens' initiatives and recall elections are referred to as forms of direct democracy. However, some advocates of direct democracy argue for local assemblies of face-to-face discussion. Direct democracy as a government system currently exists in the Swiss cantons of Appenzell Innerrhoden and Glarus, the Rebel Zapatista Autonomous Municipalities, communities affiliated with the CIPO-RFM, the Bolivian city councils of FEJUVE, and Kurdish cantons of Rojava.
Representative democracy involves the election of government officials by the people being represented. If the head of state is also democratically elected then it is called a democratic republic. The most common mechanisms involve election of the candidate with a majority or a plurality of the votes. Most western countries have representative systems.
Representatives may be elected by a particular district (or constituency), or may represent the entire electorate through proportional systems, with some systems using a combination of the two. Some representative democracies also incorporate elements of direct democracy, such as referendums. A characteristic of representative democracy is that while representatives are elected by the people to act in the people's interest, they retain the freedom to exercise their own judgement as to how best to do so. This discretion has drawn criticism of representative democracy, pointing to the tension between representation mechanisms and democracy itself.
Parliamentary democracy is a representative democracy where government is appointed by, or can be dismissed by, representatives as opposed to a "presidential rule" wherein the president is both head of state and the head of government and is elected by the voters. Under a parliamentary democracy, government is exercised by delegation to an executive ministry and subject to ongoing review, checks and balances by the legislative parliament elected by the people.
In a parliamentary system, the legislature can dismiss a Prime Minister at any point when it feels he or she is not doing the job to its expectations. This is done through a vote of no confidence, in which the legislature decides by majority whether to remove the Prime Minister from office. In some countries the Prime Minister can also call an election whenever he or she chooses, and will typically do so when in good favour with the public, so as to be re-elected. In other parliamentary democracies extra elections are virtually never held, a minority government being preferred until the next ordinary elections. An important feature of parliamentary democracy is the concept of the "loyal opposition": the second largest political party (or coalition) opposes the governing party (or coalition) while remaining loyal to the state and its democratic principles.
Presidential democracy is a system in which the public elects the president through free and fair elections. The president serves as both head of state and head of government, controlling most of the executive powers. The president serves for a fixed term and cannot exceed it. Elections typically have a fixed date and are not easily changed. The president has direct control over the cabinet, specifically appointing the cabinet members.
The president cannot be easily removed from office by the legislature, but he or she cannot remove members of the legislative branch any more easily. This provides some measure of separation of powers. In consequence however, the president and the legislature may end up in the control of separate parties, allowing one to block the other and thereby interfere with the orderly operation of the state. This may be the reason why presidential democracy is not very common outside the Americas, Africa, and Central and Southeast Asia.
A semi-presidential system is a system of democracy in which the government includes both a prime minister and a president. The particular powers held by the prime minister and president vary by country.
Hybrid or semi-direct
Some modern democracies that are predominantly representative in nature also heavily rely upon forms of political action that are directly democratic. These democracies, which combine elements of representative democracy and direct democracy, are termed hybrid democracies, semi-direct democracies or participatory democracies. Examples include Switzerland and some U.S. states, where frequent use is made of referendums and initiatives.
The Swiss confederation is a semi-direct democracy. At the federal level, citizens can propose changes to the constitution (federal popular initiative) or ask for a referendum to be held on any law voted by the parliament. Between January 1995 and June 2005, Swiss citizens voted 31 times on 103 questions (during the same period, French citizens participated in only two referendums). In the past 120 years, however, fewer than 250 initiatives have been put to referendum. The populace has been conservative, approving only about 10% of the initiatives put before them; in addition, they have often opted for a version of the initiative rewritten by the government.
In the United States, no mechanism of direct democracy exists at the federal level, but over half of the states and many localities provide for citizen-sponsored ballot initiatives (also called "ballot measures", "ballot questions" or "propositions"), and the vast majority of states allow for referendums. Examples include the extensive use of referendums in California, a state with more than 20 million voters.
In New England, Town meetings are often used, especially in rural areas, to manage local government. This creates a hybrid form of government, with a local direct democracy and a state government which is representative. For example, most Vermont towns hold annual town meetings in March in which town officers are elected, budgets for the town and schools are voted on, and citizens have the opportunity to speak and be heard on political matters.
Many countries such as the United Kingdom, Spain, the Netherlands, Belgium, Scandinavian countries, Thailand, Japan and Bhutan turned powerful monarchs into constitutional monarchs with limited or, often gradually, merely symbolic roles. For example, in the predecessor states to the United Kingdom, constitutional monarchy began to emerge and has continued uninterrupted since the Glorious Revolution of 1688 and passage of the Bill of Rights 1689.
In other countries, the monarchy was abolished along with the aristocratic system (as in France, China, Russia, Germany, Austria, Hungary, Italy, Greece and Egypt). An elected president, with or without significant powers, became the head of state in these countries.
Elite upper houses of legislatures, which often had lifetime or hereditary tenure, were common in many nations. Over time, these either had their powers limited (as with the British House of Lords) or else became elective and remained powerful (as with the Australian Senate).
The term republic has many different meanings, but today often refers to a representative democracy with an elected head of state, such as a president, serving for a limited term, in contrast to states with a hereditary monarch as a head of state, even if these states also are representative democracies with an elected or appointed head of government such as a prime minister.
The Founding Fathers of the United States rarely praised and often criticised democracy, which in their time tended to mean direct democracy, often without the protection of a constitution enshrining basic rights. James Madison argued, especially in The Federalist No. 10, that what distinguished a direct democracy from a republic was that the former became weaker as it got larger and suffered more violently from the effects of faction, whereas a republic could get stronger as it got larger and combat faction by its very structure.
What was critical to American values, John Adams insisted, was that the government be "bound by fixed laws, which the people have a voice in making, and a right to defend." As Benjamin Franklin was leaving the convention that drafted the U.S. Constitution, a woman asked him, "Well, Doctor, what have we got—a republic or a monarchy?" He replied, "A republic—if you can keep it."
A liberal democracy is a representative democracy in which the ability of the elected representatives to exercise decision-making power is subject to the rule of law, and moderated by a constitution or laws that emphasise the protection of the rights and freedoms of individuals, and which places constraints on the leaders and on the extent to which the will of the majority can be exercised against the rights of minorities (see civil liberties).
In a liberal democracy, it is possible for some large-scale decisions to emerge from the many individual decisions that citizens are free to make. In other words, citizens can "vote with their feet" or "vote with their dollars", resulting in significant informal government-by-the-masses that exercises many "powers" associated with formal government elsewhere.
Socialist thought has several different views on democracy. Social democracy, democratic socialism, and the dictatorship of the proletariat (usually exercised through Soviet democracy) are some examples. Many democratic socialists and social democrats believe in a form of participatory, industrial, economic and/or workplace democracy combined with a representative democracy.
Within Marxist orthodoxy there is hostility to what is commonly called "liberal democracy", which Marxists refer to simply as parliamentary democracy because of its often centralised nature. Because of orthodox Marxists' desire to eliminate the political elitism they see in capitalism, Marxists, Leninists and Trotskyists believe in direct democracy implemented through a system of communes (which are sometimes called soviets). This system ultimately manifests itself as council democracy and begins with workplace democracy.
Democracy cannot consist solely of elections that are nearly always fictitious and managed by rich landowners and professional politicians.
— Che Guevara, Speech, Uruguay, 1961
Anarchists are split in this domain, depending on whether they believe that majority rule is tyrannical or not. To many anarchists, the only acceptable form of democracy is direct democracy. Pierre-Joseph Proudhon argued that the only acceptable form of direct democracy is one in which it is recognised that majority decisions are not binding on the minority, even when unanimous. However, anarcho-communist Murray Bookchin criticised individualist anarchists for opposing democracy, and said that "majority rule" is consistent with anarchism.
Some anarcho-communists oppose the majoritarian nature of direct democracy, feeling that it can impede individual liberty, and opt in favour of a non-majoritarian form of consensus democracy, similar to Proudhon's position on direct democracy. Henry David Thoreau, who did not self-identify as an anarchist but argued for "a better government" and is cited as an inspiration by some anarchists, held that people should not be in the position of ruling others, or of being ruled, when there is no consent.
Sometimes called "democracy without elections", sortition chooses decision makers via a random process. The intention is that those chosen will be representative of the opinions and interests of the people at large, and be more fair and impartial than an elected official. The technique was in widespread use in Athenian Democracy and Renaissance Florence and is still used in modern jury selection.
A consociational democracy allows for simultaneous majority votes in two or more ethno-religious constituencies, and policies are enacted only if they gain majority support from both or all of them.
A consensus democracy, in contrast, would not be dichotomous. Instead, decisions would be based on a multi-option approach, and policies would be enacted if they gained sufficient support, either in a purely verbal agreement or via a consensus vote—a multi-option preference vote. If the threshold of support were set at a sufficiently high level, minorities would, as it were, be protected automatically. Furthermore, any voting would be ethno-colour blind.
Qualified majority voting is designed by the Treaty of Rome to be the principal method of reaching decisions in the European Council of Ministers. This system allocates votes to member states in part according to their population, but heavily weighted in favour of the smaller states. This might be seen as a form of representative democracy, but representatives to the Council might be appointed rather than directly elected.
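A minimal sketch of weighted qualified-majority counting follows; the member weights and the 71% passing threshold are invented for illustration and are not the actual treaty allocations, which have changed over time and which, as noted above, favour smaller states relative to their populations.

```python
# Weighted qualified-majority voting sketch. Member weights and the 71%
# threshold are illustrative assumptions, not actual treaty values.

weights = {"A": 29, "B": 29, "C": 13, "D": 10, "E": 7, "F": 4, "G": 3}
THRESHOLD = 0.71  # assumed share of total weighted votes needed to pass

def passes(votes_in_favour):
    """Return True if the weighted votes in favour reach the threshold."""
    total = sum(weights.values())
    in_favour = sum(weights[m] for m in votes_in_favour)
    return in_favour / total >= THRESHOLD

print(passes({"A", "B", "C", "D"}))  # 81/95 ~ 0.85 -> True
print(passes({"A", "B"}))            # 58/95 ~ 0.61 -> False
```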
Inclusive democracy is a political theory and political project that aims for direct democracy in all fields of social life: political democracy in the form of face-to-face assemblies which are confederated, economic democracy in a stateless, moneyless and marketless economy, democracy in the social realm, i.e. self-management in places of work and education, and ecological democracy which aims to reintegrate society and nature. The theoretical project of inclusive democracy emerged from the work of political philosopher Takis Fotopoulos in "Towards An Inclusive Democracy" and was further developed in the journal Democracy & Nature and its successor The International Journal of Inclusive Democracy.
The basic unit of decision making in an inclusive democracy is the demotic assembly, i.e. the assembly of demos, the citizen body in a given geographical area which may encompass a town and the surrounding villages, or even neighbourhoods of large cities. An inclusive democracy today can only take the form of a confederal democracy that is based on a network of administrative councils whose members or delegates are elected from popular face-to-face democratic assemblies in the various demoi. Thus, their role is purely administrative and practical, not one of policy-making like that of representatives in representative democracy.
The citizen body is advised by experts but it is the citizen body which functions as the ultimate decision-taker. Authority can be delegated to a segment of the citizen body to carry out specific duties, for example to serve as members of popular courts, or of regional and confederal councils. Such delegation is made, in principle, by lot, on a rotation basis, and is always recallable by the citizen body. Delegates to regional and confederal bodies should have specific mandates.
A Parpolity or Participatory Polity is a theoretical form of democracy that is ruled by a nested council structure. The guiding philosophy is that people should have decision-making power in proportion to how much they are affected by the decision. Local councils of 25–50 people are completely autonomous on issues that affect only them, and these councils send delegates to higher-level councils, which are again autonomous regarding issues that affect only the population covered by that council.
A council court of randomly chosen citizens serves as a check on the tyranny of the majority, and rules on which body gets to vote on which issue. Delegates may vote differently from how their sending council might wish, but are mandated to communicate the wishes of their sending council. Delegates are recallable at any time. Referendums are possible at any time via votes of most lower-level councils; however, not every issue is put to a referendum, as that would likely be a waste of time. A parpolity is meant to work in tandem with a participatory economy.
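As a rough illustration of how shallow such a nested structure can remain, the sketch below assumes councils of 50 members each sending one delegate upward; both figures are assumptions drawn loosely from the description above.

```python
import math

# Nested-council sketch: each council has at most `council_size` members and
# sends one delegate to the next tier. The council size and the one-delegate
# rule are illustrative assumptions.

def council_tiers(population, council_size=50):
    """Number of council tiers needed until a single top council remains."""
    tiers = 1          # the local councils themselves
    members = population
    while members > council_size:
        members = math.ceil(members / council_size)  # delegates at next tier
        tiers += 1
    return tiers

print(council_tiers(10_000_000))  # 5 tiers cover ten million citizens
```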
Cosmopolitan democracy, also known as Global democracy or World Federalism, is a political system in which democracy is implemented on a global scale, either directly or through representatives. An important justification for this kind of system is that the decisions made in national or regional democracies often affect people outside the constituency who, by definition, cannot vote. By contrast, in a cosmopolitan democracy, the people who are affected by decisions also have a say in them.
According to its supporters, any attempt to solve global problems is undemocratic without some form of cosmopolitan democracy. The general principle of cosmopolitan democracy is to expand some or all of the values and norms of democracy, including the rule of law; the non-violent resolution of conflicts; and equality among citizens, beyond the limits of the state. To be fully implemented, this would require reforming existing international organisations, e.g. the United Nations, as well as the creation of new institutions such as a World Parliament, which ideally would enhance public control over, and accountability in, international politics.
Cosmopolitan Democracy has been promoted, among others, by physicist Albert Einstein, writer Kurt Vonnegut, columnist George Monbiot, and professors David Held and Daniele Archibugi. The creation of the International Criminal Court in 2003 was seen as a major step forward by many supporters of this type of cosmopolitan democracy.
Creative Democracy is advocated by American philosopher John Dewey. The main idea of Creative Democracy is that democracy encourages individual capacity-building and interaction within society. In his work "Creative Democracy: The Task Before Us", Dewey argues that democracy is a way of life and an experience built on faith in human nature, faith in human beings, and faith in working with others. Democracy, in Dewey's view, is a moral ideal requiring actual effort and work by people; it is not an institutional concept that exists outside of ourselves. "The task of democracy", Dewey concludes, "is forever that of creation of a freer and more humane experience in which all share and to which all contribute".
Guided democracy is a form of democracy which incorporates regular popular elections, but which often carefully "guides" the choices offered to the electorate in a manner which may reduce the ability of the electorate to truly determine the type of government exercised over them. Such democracies typically have only one central authority which is often not subject to meaningful public review by any other governmental authority. Russian-style democracy has often been referred to as a "guided democracy". Russian politicians have referred to their government as having only one centre of power/authority, as opposed to most other forms of democracy which usually attempt to incorporate two or more naturally competing sources of authority within the same government.
Aside from the public sphere, similar democratic principles and mechanisms of voting and representation have been used to govern other kinds of groups. Many non-governmental organisations decide policy and leadership by voting. Most trade unions and cooperatives are governed by democratic elections. Corporations are controlled by shareholders on the principle of one share, one vote—sometimes supplemented by workplace democracy. Amitai Etzioni has postulated a system that fuses elements of democracy with sharia law, termed islamocracy.
Aristotle contrasted rule by the many (democracy/timocracy), with rule by the few (oligarchy/aristocracy), and with rule by a single person (tyranny or today autocracy/absolute monarchy). He also thought that there was a good and a bad variant of each system (he considered democracy to be the degenerate counterpart to timocracy).
For Aristotle the underlying principle of democracy is freedom, since only in a democracy can the citizens have a share in freedom. In essence, he argues that this is what every democracy should make its aim. There are two main aspects of freedom: being ruled and ruling in turn, since everyone is equal according to number, not merit, and to be able to live as one pleases.
But one factor of liberty is to govern and be governed in turn; for the popular principle of justice is to have equality according to number, not worth, ... And one is for a man to live as he likes; for they say that this is the function of liberty, inasmuch as to live not as one likes is the life of a man that is a slave.
Early Republican theory
A common view among early and renaissance Republican theorists was that democracy could only survive in small political communities. Heeding the lessons of the Roman Republic's shift to monarchism as it grew larger, these Republican theorists held that the expansion of territory and population inevitably led to tyranny. Democracy was therefore highly fragile and rare historically, as it could only survive in small political units, which due to their size were vulnerable to conquest by larger political units. Montesquieu famously said, "if a republic is small, it is destroyed by an outside force; if it is large, it is destroyed by an internal vice." Rousseau asserted, "It is, therefore the natural property of small states to be governed as a republic, of middling ones to be subject to a monarch, and of large empires to be swayed by a despotic prince."
The theory of aggregative democracy claims that the aim of democratic processes is to solicit citizens' preferences and aggregate them to determine what social policies society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented.
Different variants of aggregative democracy exist. Under minimalism, democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book Capitalism, Socialism, and Democracy. Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner.
According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself, it socialises and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not really rule themselves unless they directly decide laws and policies.
Governments will tend to produce laws and policies that are close to the views of the median voter—with half of voters to their left and the other half to their right. This is not a desirable outcome in itself, as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individuals and governments. Downs laid out this view in his 1957 book An Economic Theory of Democracy.
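This median-voter logic can be illustrated with a small worked example, assuming a single left–right policy axis on which each voter supports the nearer of two platforms; all positions below are invented.

```python
import statistics

# Median-voter illustration: on a one-dimensional policy axis, the platform
# nearer to the median voter wins a pairwise majority vote. All positions
# are invented for illustration.

voters = [0.1, 0.2, 0.35, 0.4, 0.55, 0.7, 0.9]  # positions on a 0-1 axis
median = statistics.median(voters)

def votes_for(platform, rival):
    """Count voters strictly closer to `platform` than to `rival`."""
    return sum(abs(v - platform) < abs(v - rival) for v in voters)

a, b = 0.45, 0.75  # platform a sits nearer the median voter
print(f"median voter at {median}")       # 0.4
print(votes_for(a, b), votes_for(b, a))  # 5 to 2: the nearer-to-median platform wins
```

The pull towards the median is what Downs's two-party model predicts for vote-maximising parties, which is why, in his account, durable ideological parties are needed to keep platforms distinguishable.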
Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his/her interests be given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation. Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal."
Deliberative democracy is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups. If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule.
Radical democracy is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision making processes.
Some economists have criticised the efficiency of democracy, citing the premise of the irrational voter, or a voter who makes decisions without all of the facts or information necessary to make a truly informed decision. Another argument is that democracy slows down processes because of the amount of input and participation needed to go forward with a decision. A common example quoted to substantiate this point is the high economic development achieved by China (a non-democratic country) compared with India (a democratic country). According to these economists, the lack of democratic participation in countries like China allows for unfettered economic growth.
Socrates, on the other hand, believed that democracy without educated masses (educated in the broader sense of being knowledgeable and responsible) would only lead to populism, rather than competence, becoming the criterion for electing a leader, and would ultimately lead to the demise of the nation. Plato recounts this in Book 6 of the Republic, in Socrates' conversation with Adeimantus. Socrates held that the right to vote must not be an indiscriminate right (for example by birth or citizenship), but should be given only to those who have thought sufficiently about their choice.
Popular rule as a façade
The 20th-century Italian thinkers Vilfredo Pareto and Gaetano Mosca (independently) argued that democracy was illusory, and served only to mask the reality of elite rule. Indeed, they argued that elite oligarchy is the unbendable law of human nature, due largely to the apathy and division of the masses (as opposed to the drive, initiative and unity of the elites), and that democratic institutions would do no more than shift the exercise of power from oppression to manipulation. As Louis Brandeis once professed, "We may have democracy, or we may have wealth concentrated in the hands of a few, but we can't have both." British writer Ivo Mosley, grandson of blackshirt Oswald Mosley, describes in In the Name of the People: Pseudo-Democracy and the Spoiling of Our World how and why current forms of electoral governance are destined to fall short of their promise.
Plato's The Republic presents a critical view of democracy through the narration of Socrates: "Democracy, which is a charming form of government, full of variety and disorder, and dispensing a sort of equality to equals and unequals alike." In the work, Plato lists five forms of government from best to worst. Assuming that the Republic was intended to be a serious critique of the political thought in Athens, Plato argues that only Kallipolis, an aristocracy led by the unwilling philosopher-kings (the wisest men), is a just form of government.
James Madison critiqued direct democracy (which he referred to simply as "democracy") in Federalist No. 10, arguing that representative democracy—which he described using the term "republic"—is a preferable form of government, saying: "... democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths." Madison offered that republics were superior to democracies because republics safeguarded against tyranny of the majority, stating in Federalist No. 10: "the same advantage which a republic has over a democracy, in controlling the effects of faction, is enjoyed by a large over a small republic".
More recently, democracy has been criticised for not offering enough political stability. As governments are frequently elected in and out of office, there tend to be frequent changes in the policies of democratic countries both domestically and internationally. Even if a political party maintains power, vociferous, headline-grabbing protests and harsh criticism from the popular media are often enough to force sudden, unexpected political change. Frequent policy changes with regard to business and immigration are likely to deter investment and so hinder economic growth. For this reason, many people have put forward the idea that democracy is undesirable for a developing country in which economic growth and the reduction of poverty are top priorities.
Where no single party wins a majority and government rests on an opportunist coalition alliance, the alliance not only has the handicap of having to cater to too many ideologically opposing factions, but is usually short-lived, since any perceived or actual imbalance in the treatment of coalition partners, or changes to leadership within the coalition partners themselves, can very easily result in a coalition partner withdrawing its support from the government.
In representative democracies, it may not benefit incumbents to conduct fair elections. A study showed that incumbents who rig elections stay in office 2.5 times as long as those who permit fair elections. Democracies in countries with high per capita income have been found to be less prone to violence, but in countries with low incomes the tendency is the reverse. Election misconduct is more likely in countries with low per capita incomes, small populations, abundant natural resources, and a lack of institutional checks and balances. Sub-Saharan African countries, as well as Afghanistan, tend to fall into that category.
Governments that have frequent elections tend to have significantly more stable economic policies than those that have infrequent elections. However, this trend does not apply to governments where fraudulent elections are common.
Democracy in modern times has almost always faced opposition from the previously existing government, and many times it has faced opposition from social elites. The implementation of a democratic government within a non-democratic state is typically brought about by democratic revolution.
Several philosophers and researchers have outlined historical and social factors seen as supporting the evolution of democracy.
Other commentators have mentioned the influence of economic development. In a related theory, Ronald Inglehart suggests that improved living-standards in modern developed countries can convince people that they can take their basic survival for granted, leading to increased emphasis on self-expression values, which correlates closely with democracy.
Douglas M. Gibler and Andrew Owsiak in their study argued about the importance of peace and stable borders for the development of democracy. It has often been assumed that democracy causes peace, but this study shows that, historically, peace has almost always predated the establishment of democracy.
Carroll Quigley concludes that the characteristics of weapons are the main predictor of democracy: democracy, in this scenario, tends to emerge only when the best weapons available are easy for individuals to obtain and use. By the 1800s, guns were the best personal weapons available, and in the United States of America (already nominally democratic) almost everyone could afford to buy a gun and could learn how to use it fairly easily. Governments couldn't do any better: it became the age of mass armies of citizen soldiers with guns. Similarly, Periclean Greece was an age of the citizen soldier and democracy.
Other theories stressed the relevance of education and of human capital—and within them of cognitive ability to increasing tolerance, rationality, political literacy and participation. Two effects of education and cognitive ability are distinguished:
- a cognitive effect (competence to make rational choices, better information-processing)
- an ethical effect (support of democratic values, freedom, human rights etc.), which itself depends on intelligence.
Evidence consistent with conventional theories of why democracy emerges and is sustained has been hard to come by. Statistical analyses have challenged modernisation theory by demonstrating that there is no reliable evidence for the claim that democracy is more likely to emerge when countries become wealthier, more educated, or less unequal. Neither is there convincing evidence that increased reliance on oil revenues prevents democratisation, despite a vast theoretical literature on "the Resource Curse" that asserts that oil revenues sever the link between citizen taxation and government accountability, seen as the key to representative democracy. The lack of evidence for these conventional theories of democratisation has led researchers to search for the "deep" determinants of contemporary political institutions, be they geographical or demographic. On this view, more inclusive institutions lead towards democracy because, as people gain more power, they are able to demand more from the elites, who in turn have to concede more to keep their position; this virtuous circle may end in democracy.
An example of this is the disease environment. Places with different mortality rates had different populations and productivity levels around the world. For example, in Africa, the tsetse fly—which afflicts humans and livestock—reduced the ability of Africans to plow the land. This made Africa less settled. As a consequence, political power was less concentrated. This also affected the colonial institutions European countries established in Africa. Whether colonial settlers could live or not in a place made them develop different institutions which led to different economic and social paths. This also affected the distribution of power and the collective actions people could take. As a result, some African countries ended up having democracies and others autocracies.
An example of a geographical determinant of democracy is access to coastal areas and rivers. This natural endowment has a positive relation with economic development thanks to the benefits of trade. Trade brought economic development, which in turn broadened power. Rulers wanting to increase revenues had to protect property rights to create incentives for people to invest. As more people had more power, more concessions had to be made by the ruler, and in many places this process led to democracy. These determinants defined the structure of society, shifting the balance of political power.
In the 21st century, democracy has become such a popular method of reaching decisions that its application beyond politics to other areas such as entertainment, food and fashion, consumerism, urban planning, education, art, literature, science and theology has been criticised as "the reigning dogma of our time". The argument suggests that applying a populist or market-driven approach to art and literature (for example) means that innovative creative work goes unpublished or unproduced. In education, the argument is that essential but more difficult studies are not undertaken. Science, as a truth-based discipline, is particularly corrupted by the idea that the correct conclusion can be arrived at by popular vote. However, more recently, theorists have also advanced the concept of epistemic democracy to assert that democracy actually does a good job of tracking the truth.
Robert Michels asserts that although democracy can never be fully realised, democracy may be developed automatically in the act of striving for democracy:
The peasant in the fable, when on his death-bed, tells his sons that a treasure is buried in the field. After the old man's death the sons dig everywhere in order to discover the treasure. They do not find it. But their indefatigable labor improves the soil and secures for them a comparative well-being. The treasure in the fable may well symbolise democracy.
Dr. Harald Wydra, in his book Communism and The Emergence of Democracy (2007), maintains that the development of democracy should not be viewed as a purely procedural or static concept but rather as an ongoing "process of meaning formation". Drawing on Claude Lefort's idea of the empty place of power, that "power emanates from the people [...] but is the power of nobody", he remarks that democracy is reverence to a symbolic mythical authority—as in reality, there is no such thing as the people or demos. Democratic political figures are not supreme rulers but rather temporary guardians of an empty place. Any claim to substance such as the collective good, the public interest or the will of the nation is subject to the competitive struggle for gaining the authority of office and government. The essence of the democratic system is an empty place, void of real people, which can only be temporarily filled and never be appropriated. The seat of power is there, but remains open to constant change. As such, people's definitions of "democracy" or of "democratic" progress throughout history as a continual and potentially never-ending process of social construction.
- Consent of the governed
- Constitutional liberalism
- Democracy Index
- Democracy Ranking
- Democratic peace theory
- Empowered democracy
- Foucault–Habermas debate
- Good governance
- Parliament in the Making
- Power to the people
- The Establishment
- Shadow government (conspiracy)
- Spatial citizenship
- Piotr Machnikowski renders the Polish "Ojczyzna" as "Fatherland". The "literal" English translation of "ojczyzna" is indeed "fatherland": both these words are calques of the Latin "patria," which itself derives from the Latin "pater" ("father"). The English translation of the Constitution of 3 May 1791, by Christopher Kasparek, reproduced in Wikisource (e.g. at the end of section II, "The Landed Nobility") renders "ojczyzna" as "country", which is the usual English-language equivalent of the expression. In this particular context, "Homeland" may be the most natural rendering.
- The claims of "first" and "second constitution" have been disputed. The U.S. and Polish-Lithuanian constitutions had been preceded by earlier documents that had not introduced the clear division of executive, legislative, and judiciary powers advocated by Enlightenment thinkers such as Montesquieu. According to Koenigsberger, the Corsican Constitution of 1755 had not separated the executive from the judiciary. See history of the constitution.
- "Definition of DEMOCRACY". www.merriam-webster.com. Retrieved 5 July 2018.
- Locke, John. Two Treatises on Government: a Translation into Modern English. Quote:"There is no practical alternative to majority political rule – i.e., to taking the consent of the majority as the act of the whole and binding every individual. It would be next to impossible to obtain the consent of every individual before acting collectively ... No rational people could desire and constitute a society that had to dissolve straightaway because the majority was unable to make the final decision and the society was incapable of acting as one body."Google Books.
- Oxford English Dictionary: "democracy".
- Watkins, Frederick (1970). "Democracy". Encyclopædia Britannica. 7 (Expo '70 hardcover ed.). William Benton. pp. 215–23. ISBN 978-0-85229-135-1.
- R. R. Palmer, The Age of the Democratic Revolution: Political History of Europe and America, 1760–1800 (1959)
- Przeworski, Adam (1991). Democracy and the Market. Cambridge University Press. pp. 10–14.
- Diamond, L., Lecture at Hilla University for Humanistic Studies 21 January 2004: "What is Democracy"; Diamond, L. and Morlino, L., The quality of democracy (2016). In Diamond, L., In Search of Democracy. London: Routledge. ISBN 978-0-415-78128-2.
- Landman, Todd (2018). "Democracy and Human Rights: Concepts, Measures, and Relationships". Politics and Governance. 6 (1): 48. doi:10.17645/pag.v6i1.1186.
- Wilson, N.G. (2006). Encyclopedia of ancient Greece. New York: Routledge. p. 511. ISBN 0-415-97334-1.
- Barker, Ernest (1906). The Political Thought of Plato and Aristotle. Chapter VII, Section 2: G.P. Putnam's Sons.
- Jarvie, 2006, pp. 218–19
- "Democracy Index 2017 – Economist Intelligence Unit" (PDF). EIU.com. Archived from the original (PDF) on 18 February 2018. Retrieved 17 February 2018.
- Staff writer (22 August 2007). "Liberty and justice for some". The Economist. Economist Group.
- O'Donnell, Guillermo (2005), "Why the rule of law matters", in Diamond, Larry; Morlino, Leonardo (eds.), Assessing the quality of democracy, Baltimore: Johns Hopkins University Press, pp. 3–17, ISBN 978-0-8018-8287-6. Preview.
- Dahl, Robert A.; Shapiro, Ian; Cheibub, José Antônio (2003). The democracy sourcebook. Cambridge, Massachusetts: MIT Press. ISBN 978-0-262-54147-3. Details.
- Hénaff, Marcel; Strong, Tracy B. (2001). Public space and democracy. Minneapolis: University of Minnesota Press. ISBN 978-0-8166-3388-3.
- Kimber, Richard (September 1989). "On democracy". Scandinavian Political Studies. 12 (3): 201, 199–219. doi:10.1111/j.1467-9477.1989.tb00090.x. Full text.
- Scruton, Roger (9 August 2013). "A Point of View: Is democracy overrated?". BBC News. BBC.
- Kopstein, Jeffrey; Lichbach, Mark; Hanson, Stephen E., eds. (2014). Comparative Politics: Interests, Identities, and Institutions in a Changing Global Order (4, revised ed.). Cambridge University Press. pp. 37–39. ISBN 978-1-139-99138-4.
- "Parliamentary sovereignty". UK Parliament. Retrieved 18 August 2014; "Independence". Courts and Tribunals Judiciary. Retrieved 9 November 2014.
- Daily Express News (2 August 2013). "All-party meet vows to uphold Parliament supremacy". The New Indian Express. Express Publications (Madurai) Limited. Retrieved 18 August 2013.
- Barak, Aharon (2006), "Protecting the constitution and democracy", in Barak, Aharon (ed.), The judge in a democracy, Princeton, New Jersey: Princeton University Press, p. 27, ISBN 978-0-691-12017-1. Preview.
- Kelsen, Hans (October 1955). "Foundations of democracy". Ethics. 66 (1): 1–101. doi:10.1086/291036. JSTOR 2378551.
- Nussbaum, Martha (2000). Women and human development: the capabilities approach. Cambridge New York: Cambridge University Press. ISBN 978-0-521-00385-8.
- Snyder, Richard; Samuels, David (2006), "Devaluing the vote in Latin America", in Diamond, Larry; Plattner, Marc F. (eds.), Electoral systems and democracy, Baltimore: Johns Hopkins University Press, p. 168, ISBN 978-0-8018-8475-7.
- Montesquieu, Spirit of the Laws, Bk. II, ch. 2–3.
- Everdell, William R. (2000) . The end of kings: a history of republics and republicans (2nd ed.). Chicago: University of Chicago Press. ISBN 978-0-226-22482-4.
- "Pericles' Funeral Oration". the-athenaeum.org.
- John Dunn, Democracy: the unfinished journey 508 BC – 1993 AD, Oxford University Press, 1994, ISBN 0-19-827934-5
- Raaflaub, Ober & Wallace 2007, p. .
- "Democracy". Online Etymology Dictionary.
- R. Po-chia Hsia, Lynn Hunt, Thomas R. Martin, Barbara H. Rosenwein, and Bonnie G. Smith, The Making of the West, Peoples and Cultures, A Concise History, Volume I: To 1740 (Boston and New York: Bedford/St. Martin's, 2007), 44.
- Aristotle Book 6
- Grinin, Leonid E. (2004). The Early State, Its Alternatives and Analogues. Uchitel' Publishing House.
- "Women and Family in Athenian Law". www.stoa.org. Retrieved 1 March 2018.
- Susan Lape, Reproducing Athens: Menander's Comedy, Democratic Culture, and the Hellenistic City, Princeton University Press, 2009, p. 4, ISBN 1-4008-2591-1
- Raaflaub, Ober & Wallace 2007, p. 5.
- Ober & Hedrick 1996, p. 107.
- Clarke, 2001, pp. 194–201
- "Full historical description of the Spartan government". Rangevoting.org. Retrieved 28 September 2013.
- Terrence A. Boring, Literacy in Ancient Sparta, Leiden Netherlands (1979). ISBN 90-04-05971-7
- "Ancient Rome from the earliest times down to 476 A.D". Annourbis.com. Retrieved 22 August 2010.
- Livy 2002, p. 34
- Watson 2005, p. 271
- "Constitution 1,000 years ago". The Hindu. Chennai, India. 11 July 2008.
- "Magna Carta: an introduction". The British Library. Retrieved 28 January 2015.
Magna Carta is sometimes regarded as the foundation of democracy in England. ...Revised versions of Magna Carta were issued by King Henry III (in 1216, 1217 and 1225), and the text of the 1225 version was entered onto the statute roll in 1297. ...The 1225 version of Magna Carta had been granted explicitly in return for a payment of tax by the whole kingdom, and this paved the way for the first summons of Parliament in 1265, to approve the granting of taxation.
- "Citizen or Subject?". The National Archives. Retrieved 17 November 2013.
- Jobson, Adrian (2012). The First English Revolution: Simon de Montfort, Henry III and the Barons' War. Bloomsbury. pp. 173–74. ISBN 978-1-84725-226-5.
- "Simon de Montfort: The turning point for democracy that gets overlooked". BBC. 19 January 2015. Retrieved 19 January 2015; "The January Parliament and how it defined Britain". The Telegraph. 20 January 2015. Retrieved 28 January 2015.
- "Origins and growth of Parliament". The National Archives. Retrieved 17 November 2013.
- "From legal document to public myth: Magna Carta in the 17th century". The British Library. Retrieved 16 October 2017; "Magna Carta: Magna Carta in the 17th Century". The Society of Antiquaries of London. Retrieved 16 October 2017.
- "Origins and growth of Parliament". The National Archives. Retrieved 7 April 2015.
- "Rise of Parliament". The National Archives. Retrieved 7 April 2015.
- "Putney debates". The British Library. Retrieved 22 December 2016.
- "Britain's unwritten constitution". British Library. Retrieved 27 November 2015.
The key landmark is the Bill of Rights (1689), which established the supremacy of Parliament over the Crown.... The Bill of Rights (1689) then settled the primacy of Parliament over the monarch’s prerogatives, providing for the regular meeting of Parliament, free elections to the Commons, free speech in parliamentary debates, and some basic human rights, most famously freedom from ‘cruel or unusual punishment’.
- "Constitutionalism: America & Beyond". Bureau of International Information Programs (IIP), U.S. Department of State. Archived from the original on 24 October 2014. Retrieved 30 October 2014.
The earliest, and perhaps greatest, victory for liberalism was achieved in England. The rising commercial class that had supported the Tudor monarchy in the 16th century led the revolutionary battle in the 17th, and succeeded in establishing the supremacy of Parliament and, eventually, of the House of Commons. What emerged as the distinctive feature of modern constitutionalism was not the insistence on the idea that the king is subject to law (although this concept is an essential attribute of all constitutionalism). This notion was already well established in the Middle Ages. What was distinctive was the establishment of effective means of political control whereby the rule of law might be enforced. Modern constitutionalism was born with the political requirement that representative government depended upon the consent of citizen subjects.... However, as can be seen through provisions in the 1689 Bill of Rights, the English Revolution was fought not just to protect the rights of property (in the narrow sense) but to establish those liberties which liberals believed essential to human dignity and moral worth. The "rights of man" enumerated in the English Bill of Rights gradually were proclaimed beyond the boundaries of England, notably in the American Declaration of Independence of 1776 and in the French Declaration of the Rights of Man in 1789.
- Tocqueville, Alexis de (2003). Democracy in America. Barnes & Noble. pp. 11, 18–19. ISBN 0-7607-5230-3.
- Allen Weinstein and David Rubel (2002), The Story of America: Freedom and Crisis from Settlement to Superpower, DK Publishing, Inc., New York, ISBN 0-7894-8903-1, p. 61
- Clifton E. Olmstead (1960), History of Religion in the United States, Prentice-Hall, Englewood Cliffs, NJ, pp. 63–65, 74–75, 102–05, 114–15
- Christopher Fennell (1998), Plymouth Colony Legal Structure
- "Citizenship 1625–1789". The National Archives. Retrieved 17 November 2013.
- "Getting the vote". The National Archives. Retrieved 22 August 2010.
- Gregory, Desmond (1985). The ungovernable rock: a history of the Anglo-Corsican Kingdom and its role in Britain's Mediterranean strategy during the Revolutionary War, 1793–1797. London: Fairleigh Dickinson University Press. p. 31. ISBN 978-0-8386-3225-3.
- "Voting in Early America". Colonial Williamsburg. Spring 2007. Retrieved 21 April 2015.
- Ray Allen Billington, America's Frontier Heritage (1974) 117–58. ISBN 0-8263-0310-2
- Johnston, Douglas M.; Reisman, W. Michael (2008). The Historical Foundations of World Order. Leiden: Martinus Nijhoff Publishers. p. 544. ISBN 978-90-474-2393-5.
- Jacqueline Newmyer, "Present from the start: John Adams and America" Archived 26 November 2013 at the Wayback Machine, Oxonian Review of Books, 2005, vol 4 issue 2
- Ratcliffe, Donald (Summer 2013). "The Right to Vote and the Rise of Democracy, 1787-1828" (PDF). Journal of the Early Republic. 33: 231.
- Ratcliffe, Donald (Summer 2013). "The Right to Vote and the Rise of Democracy, 1787-1828" (PDF). Journal of the Early Republic. 33: 225–229.
- Dinkin, Robert (1982). Voting in Revolutionary America: A Study of Elections in the Original Thirteen States, 1776-1789. USA: Greenwood Publishing. pp. 37–42. ISBN 978-0313230912.
- "The Bill Of Rights: A Brief History". ACLU. Retrieved 21 April 2015.
- Deacy, Susan (2008). Athena. London and New York: Routledge. pp. 145–49. ISBN 978-0-415-30066-7.
- Norman Davies (15 May 1991). The Third of May 1791 (PDF). Minda de Gunzburg Center for European Studies, Harvard University. Archived from the original (PDF) on 5 September 2019. Retrieved 5 September 2019.
- (Polish: Konstytucja 3 maja, Belarusian: Канстытуцыя 3 мая (official) / 3 траўня (Taraškievica), Lithuanian: Gegužės trečiosios konstitucija)
- Piotr Machnikowski (1 December 2010). Contract Law in Poland. Kluwer Law International. p. 20. ISBN 978-90-411-3396-0. Retrieved 12 July 2011.
- Jan Ligeza (2017). Preambuła Prawa [The Preamble of Law] (in Polish). Polish Scientific Publishers PWN. p. 12. ISBN 978-83-945455-0-5.
- H. G. Koenigsberger (1986). Politicians and Virtuosi: Essays on Early Modern History (Vol. 49). A&C Black. ISBN 978-0-90-762865-1. Retrieved 10 December 2017.
- Dorothy Carrington (July 1973). "The Corsican constitution of Pasquale Paoli (1755–1769)". The English Historical Review. 88 (348): 481–503. JSTOR 564654.
- "The French Revolution II". Mars.wnec.edu. Archived from the original on 27 August 2008. Retrieved 22 August 2010.
- Michael Denning (2004). Culture in the Age of Three Worlds. Verso. p. 212. ISBN 978-1-85984-449-6. Retrieved 10 July 2013.
- Lovejoy, Paul E. (2000). Transformations in slavery: a history of slavery in Africa (2nd ed.). New York: Cambridge University Press. p. 290. ISBN 978-0-521-78012-4.
- French National Assembly. "1848 " Désormais le bulletin de vote doit remplacer le fusil "". Retrieved 26 September 2009.
- "Movement toward greater democracy in Europe". Indiana University Northwest.
- "Introduction – Social Aspects of the Civil War". Itd.nps.gov. Archived from the original on 14 July 2007. Retrieved 22 August 2010.
- Transcript of Voting Rights Act (1965) U.S. National Archives.
- The Constitution: The 24th Amendment Time.
- Hasan Kayalı (1995) "Elections and the Electoral Process in the Ottoman Empire, 1876–1919" International Journal of Middle East Studies, Vol. 27, No. 3, pp 265–286
- Diamond, Larry (15 September 2015). "Timeline: Democracy in Recession". The New York Times. Retrieved 25 January 2016.
- Kurlantzick, Joshua (11 May 2017). "Mini-Trumps Are Running for Election All Over the World". Bloomberg.com. Retrieved 16 May 2017.
- Mounk, Yascha (January 2017). "The Signs of Deconsolidation". Journal of Democracy. Retrieved 16 May 2017.
- "Age of Dictators: Totalitarianism in the inter-war period". Archived from the original on 7 September 2006. Retrieved 7 September 2006.CS1 maint: BOT: original-url status unknown (link)
- "Did the United States Create Democracy in Germany?: The Independent Review: The Independent Institute". Independent.org. Retrieved 22 August 2010.
- "World | South Asia | Country profiles | Country profile: India". BBC News. 7 June 2010. Retrieved 22 August 2010.
- Julian Go (2007). "A Globalizing Constitutionalism?, Views from the Postcolony, 1945–2000". In Arjomand, Saïd Amir (ed.). Constitutionalism and political reconstruction. Brill. pp. 92–94. ISBN 978-90-04-15174-1.
- "How the Westminster Parliamentary System was exported around the World". University of Cambridge. 2 December 2013. Retrieved 16 December 2013.
- "Tables and Charts". Freedomhouse.org. 10 May 2004. Archived from the original on 13 July 2009. Retrieved 22 August 2010.
- List of Electoral Democracies fordemocracy.net
- Wall, John (October 2014). "Democratising democracy: the road from women's to children's suffrage" (PDF). The International Journal of Human Rights. 18:6: 646–59 – via Rutgers University.
- "General Assembly declares 15 September International Day of Democracy; Also elects 18 Members to Economic and Social Council". Un.org. Retrieved 22 August 2010.
- "Freedom in the Word 2017". freedomhouse.org. 2016. Retrieved 16 May 2017.
- "Freedom House: Democracy Scores for Most Countries Decline for 12th Consecutive Year", VOA News, 16 January 2018. Retrieved 21 January 2018.
- "As populism rises, fragile democracies move to weaken their courts". Christian Science Monitor. 13 November 2018. ISSN 0882-7729. Retrieved 14 November 2018.
- Freedom in The World 2017 – Populists and Autocrats: The Dual Threat to Global Democracy by Freedom House, 31 January 2017
- Freedom in The World 2017 report (PDF)
- Skaaning, Svend-Erik (2018). "Different Types of Data and the Validity of Democracy Measures". Politics and Governance. 6 (1): 105. doi:10.17645/pag.v6i1.1183.
- "Press Freedom Index 2014" Archived 14 February 2014 at the Wayback Machine, Reporters Without Borders, 11 May 2014
- " World Freedom Index 2013: Canadian Fraser Institute Ranks Countries ", Ryan Craggs, Huffington Post, 14 January 2013
- "CIRI Human Rights Data Project", website. Retrieved 25 October 2013.
- Michael Kirk (10 December 2010). "Annual International Human Rights Ratings Announced". University of Connecticut.
- "Human Rights in 2011: The CIRI Report". CIRI Human Rights Data Project. 29 August 2013.
- "Democracy index 2012: Democracy at a standstill". Economist Intelligence Unit. 14 March 2013. Retrieved 24 March 2013.
- "MaxRange". Archived from the original on 17 August 2018. Retrieved 28 April 2015.
- Fuchs, Dieter; Roller, Edeltraud (2018). "Conceptualizing and Measuring the Quality of Democracy: The Citizens' Perspective". Politics and Governance. 6 (1): 22. doi:10.17645/pag.v6i1.1188.
- Mayne, Quinton; Geißel, Brigitte (2018). "Don't Good Democracies Need "Good" Citizens? Citizen Dispositions and the Study of Democratic Quality". Politics and Governance. 6 (1): 33. doi:10.17645/pag.v6i1.1216.
- Alexander Krauss, 2016. The scientific limits of understanding the (potential) relationship between complex social phenomena: the case of democracy and inequality. Vol. 23(1). Journal of Economic Methodology.
- G.F. Gaus, C. Kukathas, Handbook of Political Theory, SAGE, 2004, pp. 143–45, ISBN 0-7619-6787-7, Google Books link
- The Judge in a Democracy, Princeton University Press, 2006, p. 26, ISBN 0-691-12017-X, Google Books link
- A. Barak, The Judge in a Democracy, Princeton University Press, 2006, p. 40, ISBN 0-691-12017-X, Google Books link
- T.R. Williamson, Problems in American Democracy, Kessinger Publishing, 2004, p. 36, ISBN 1-4191-4316-6, Google Books link
- U.K. Preuss, "Perspectives of Democracy and the Rule of Law." Journal of Law and Society, 18:3 (1991). pp. 353–64
- Budge, Ian (2001). "Direct democracy". In Clarke, Paul A.B.; Foweraker, Joe (eds.). Encyclopedia of Political Thought. Taylor & Francis. ISBN 978-0-415-19396-2.
- Bernard Manin. Principles of Representative Government. pp. 8–11 (1997).
- Beramendi, Virginia, and Jennifer Somalie. Angeyo. Direct Democracy: The International Idea Handbook. Stockholm, Sweden: International IDEA, 2008. Print.
- Vincent Golay and Mix et Remix, Swiss political institutions, Éditions loisirs et pédagogie, 2008. ISBN 978-2-606-01295-3.
- Niels Barmeyer, Developing Zapatista Autonomy, Chapter Three: Who is Running the Show? The Workings of Zapatista Government.
- Denham, Diana (2008). Teaching Rebellion: Stories from the Grassroots Mobilization in Oaxaca.
- Zibechi, Raul (2013). Dispersing Power: Social Movements as Anti-State Forces in Latin America.
- "A Very Different Ideology in the Middle East". Rudaw.
- "Radical Revolution – The Thermidorean Reaction". Wsu.edu. 6 June 1999. Archived from the original on 3 February 1999. Retrieved 22 August 2010.
- Köchler, Hans (1987). The Crisis of Representative Democracy. Frankfurt/M., Bern, New York. ISBN 978-3-8204-8843-2.
- Urbinati, Nadia (1 October 2008). "2". Representative Democracy: Principles and Genealogy. ISBN 978-0-226-84279-0.
- Fenichel Pitkin, Hanna (September 2004). "Representation and democracy: uneasy alliance". Scandinavian Political Studies. 27 (3): 335–42. doi:10.1111/j.1467-9477.2004.00109.x.
- Aristotle. "Ch. 9". Politics. Book 4.
- Keen, Benjamin, A History of Latin America. Boston: Houghton Mifflin, 1980.
- Kuykendall, Ralph, Hawaii: A History. New York: Prentice Hall, 1948.
- Brown, Charles H., The Correspondents' War. New York: Charles Scribners' Sons, 1967.
- Taussig, Capt. J.K., "Experiences during the Boxer Rebellion," in Quarterdeck and Fo'c'sle. Chicago: Rand McNally & Company, 1963
- O'Neil, Patrick H. Essentials of Comparative Politics. 3rd ed. New York: W.W. Norton 2010. Print
- Garret, Elizabeth (13 October 2005). "The Promise and Perils of Hybrid Democracy" (PDF). The Henry Lecture, University of Oklahoma Law School. Archived from the original (PDF) on 9 October 2017. Retrieved 7 August 2012.
- "Article on direct democracy by Imraan Buccus". Themercury.co.za. Archived from the original on 17 January 2010. Retrieved 22 August 2010.
- "A Citizen's Guide To Vermont Town Meeting". July 2008. Archived from the original on 5 August 2012. Retrieved 12 October 2012.
- "Republic – Definition from the Merriam-Webster Online Dictionary". M-W.com. 25 April 2007. Retrieved 22 August 2010.
- Novanglus, no. 7. 6 March 1775
- "The Founders' Constitution: Volume 1, Chapter 18, Introduction, "Epilogue: Securing the Republic"". Press-pubs.uchicago.edu. Retrieved 22 August 2010.
- "Economics Cannot be Separated from Politics" speech by Che Guevara to the ministerial meeting of the Inter-American Economic and Social Council (CIES), in Punta del Este, Uruguay on August 8, 1961
- Pierre-Joseph Proudhon. General Idea of the Revolution See also commentary by Graham, Robert. The General Idea of Proudhon's Revolution
- Bookchin, Murray. Communalism: The Democratic Dimensions of Social Anarchism. Anarchism, Marxism and the Future of the Left: Interviews and Essays, 1993–1998, AK Press 1999, p. 155
- Bookchin, Murray. Social Anarchism or Lifestyle Anarchism: An Unbridgeable Chasm
- Graeber, David and Grubacic, Andrej. Anarchism, Or The Revolutionary Movement Of The Twenty-first Century
- Thoreau, H.D. On the Duty of Civil Disobedience
- Dowlen, Oliver (2008). The Political Potential of Sortition: A study of the random selection of citizens for public office. Imprint Academic.
- "Article on Cosmopolitan democracy by Daniele Archibugi" (PDF). Archived from the original (PDF) on 25 July 2011. Retrieved 22 August 2010.
- "letter by Einstein – "To the General Assembly of the United Nations"". Retrieved 2 July 2013., first published in United Nations World New York, Oct 1947, pp. 13–14
- Daniele Archibugi & David Held, eds., Cosmopolitan Democracy. An Agenda for a New World Order, Polity Press, Cambridge, 1995; David Held, Democracy and the Global Order, Polity Press, Cambridge, 1995, Daniele Archibugi, The Global Commonwealth of Citizens. Toward Cosmopolitan Democracy, Princeton University Press, Princeton, 2008
- "Archived copy" (PDF). Archived from the original (PDF) on 12 February 2015. Retrieved 12 February 2015.CS1 maint: archived copy as title (link)
- Ten Years After the Soviet Breakup: From Democratization to "Guided Democracy" Journal of Democracy. By Archie Brown. Oct. 2001. Downloaded 28 April 2017.
- Putin’s Rule: Its Main Features and the Current Diarchy Johnson's Russia List. By Peter Reddaway. 18 February 2009. Downloaded 28 April 2017.
- Compare: Tibi, Bassam (2013). The Sharia State: Arab Spring and Democratization. p. 161. ISBN 978-1-135-92468-3.
- "Aristotle, Nicomachean Ethics, Book VIII, Chapter 10 (1160a.31-1161a.9)". Internet Classics Archive. Retrieved 21 June 2018.
- "Aristotle". Internet Encyclopedia of Philosophy.
- "Deudney, D.: Bounding Power: Republican Security Theory from the Polis to the Global Village. (eBook and Paperback)". press.princeton.edu. Retrieved 14 March 2017.
- Springer, Simon (2011). "Public Space as Emancipation: Meditations on Anarchism, Radical Democracy, Neoliberalism and Violence". Antipode. 43 (2): 525–62. doi:10.1111/j.1467-8330.2010.00827.x.
- Joseph Schumpeter, (1950). Capitalism, Socialism, and Democracy. Harper Perennial. ISBN 0-06-133008-6.
- Anthony Downs, (1957). An Economic Theory of Democracy. Harper Collins College. ISBN 0-06-041750-1.
- Dahl, Robert, (1989). Democracy and its Critics. New Haven: Yale University Press. ISBN 0-300-04938-2
- Dworkin, Ronald (2006). Is Democracy Possible Here? Princeton: Princeton University Press. ISBN 978-0-691-13872-5, p. 134.
- Gutmann, Amy, and Dennis Thompson (2002). Why Deliberative Democracy? Princeton University Press. ISBN 978-0-691-12019-5
- Joshua Cohen, "Deliberation and Democratic Legitimacy" in Essays on Reason and Politics: Deliberative Democracy Ed. James Bohman and William Rehg (The MIT Press: Cambridge) 1997, 72–73.
- Ethan J. "Can Direct Democracy Be Made Deliberative?", Buffalo Law Review, Vol. 54, 2006
- "Is Democracy a Pre-Condition in Economic Growth? A Perspective from the Rise of Modern China". UN Chronicle. Retrieved 24 January 2017.
- Conversation of Socrates, Plato; H, Translated by Spens. The Republic of Plato – Book ten – A conversation between Socrates and Admimantus.
- Femia, Joseph V. (2001). Against the masses : varieties of anti-democratic thought since the French Revolution. Oxford: Oxford University Press. ISBN 978-0-19-828063-7. OCLC 46641885.
- Dilliard, Irving (1941). Mr. Justice Brandeis, great American;press opinion and public appraisal. Saint Louis. hdl:2027/mdp.39015009170443.
- "Book Review, In the Name of the People". Publishers Weekly. 3 April 2013.
- Plato, the Republic of Plato (London: J.M Dent & Sons LTD.; New York: E.P. Dutton & Co. Inc.), 558-C.
- The contrast between Plato's theory of philosopher-kings, arresting change, and Aristotle's embrace of change, is the historical tension espoused by Karl Raimund Popper in his WWII treatise, The Open Society and its Enemies (1943).
- "Head to head: African democracy". BBC News. 16 October 2008. Retrieved 1 April 2010.
- The Review of Policy Research, Volume 22, Issues 1–3, Policy Studies Organization, Potomac Institute for Policy Studies. Blackwell Publishing, 2005. p. 28
- Paul Collier (8 November 2009). "5 myths about the beauty of the ballot box". Washington Post. Washington Post. p. B2.
- For example: Lipset, Seymour Martin. (1959). "Some Social Requisites of Democracy: Economic Development and Political Legitimacy". American Political Science Review. 53 (1): 69–105. doi:10.2307/1951731. JSTOR 1951731.
- Inglehart, Ronald. Welzel, Christian Modernisation, Cultural Change and Democracy: The Human Development Sequence, 2005. Cambridge: Cambridge University Press
- Inglehart, Ronald F. (2018). Cultural Evolution: People's Motivations Are Changing, and Reshaping the World. Cambridge University Press. doi:10.1017/9781108613880. ISBN 978-1-108-61388-0.
- Gibler, Douglas M.; Owsiak, Andrew (2017). "Democracy and the Settlement of International Borders, 1919–2001". Journal of Conflict Resolution. 62 (9): 1847–75. doi:10.1177/0022002717708599.
- Foreword, written by historian Harry J Hogan Archived 1 September 2013 at the Wayback Machine in 1982, to Quigley's Weapons Systems and Political Stability
- see also Chester G Starr, Review of Weapons Systems and Political Stability, American Historical Review, Feb 1984, p. 98, available at carrollquigley.net
- Carroll Quigley (1983). Weapons systems and political stability: a history. University Press of America. pp. 38–39. ISBN 978-0-8191-2947-5. Retrieved 20 May 2013.
- Carroll Quigley (1983). Weapons systems and political stability: a history. University Press of America. p. 307. ISBN 978-0-8191-2947-5. Retrieved 20 May 2013.
- Glaeser, E.; Ponzetto, G.; Shleifer, A. (2007). "Why does democracy need education?". Journal of Economic Growth. 12 (2): 77–99. doi:10.1007/s10887-007-9015-1. Retrieved 3 July 2017.
- Deary, I.J.; Batty, G.D.; Gale, C.R. (2008). "Bright children become enlightened adults" (PDF). Psychological Science. 19 (1): 1–6. doi:10.1111/j.1467-9280.2008.02036.x. PMID 18181782.
- Compare: Rindermann, H (2008). "Relevance of education and intelligence for the political development of nations: Democracy, rule of law and political liberty". Intelligence. 36 (4): 306–22. doi:10.1016/j.intell.2007.09.003.
Political theory has described a positive linkage between education, cognitive ability and democracy. This assumption is confirmed by positive correlations between education, cognitive ability, and positively valued political conditions (N = 183 − 130). [...] It is shown that in the second half of the 20th century, education and intelligence had a strong positive impact on democracy, rule of law and political liberty independent from wealth (GDP) and chosen country sample. One possible mediator of these relationships is the attainment of higher stages of moral judgment fostered by cognitive ability, which is necessary for the function of democratic rules in society. The other mediators for citizens as well as for leaders could be the increased competence and willingness to process and seek information necessary for political decisions due to greater cognitive ability. There are also weaker and less stable reverse effects of the rule of law and political freedom on cognitive ability.
- Albertus, Michael; Menaldo, Victor (2012). "Coercive Capacity and the Prospects for Democratisation". Comparative Politics. 44 (2): 151–69. doi:10.5129/001041512798838003.
- "The Resource Curse: Does the Emperor Have no Clothes?".
- Acemoglu, Daron; Robinson, James A. (2006). Economic Origins of Dictatorship and Democracy. Cambridge University Press. ISBN 978-0-521-85526-6.
- "Rainfall and Democracy".
- Alsan, Marcella (2015). "The Effect of the TseTse Fly on African Development" (PDF). American Economic Review. 105 (1): 382–410. CiteSeerX 10.1.1.1010.2955. doi:10.1257/aer.20130604.
- Acemoglu, Daron; Johnson, Simon; Robinson, James (2005). "Institutions as a fundamental cause of long-run growth". Handbook of Economic Growth. Handbook of Economic Growth. 1. pp. 385–472, Sections 1 to 4. doi:10.1016/S1574-0684(05)01006-3. ISBN 978-0-444-52041-8.
- Mellinger, Andrew D., Jeffrey Sachs, and John L. Gallup. (1999). "Climate, Water Navigability, and Economic Development". Working Paper.
- Acemoglu, Daron; Johnson, Simon; Robinson, James (2005). "Institutions as a fundamental cause of long-run growth". Handbook of Economic Growth. Handbook of Economic Growth. 1. pp. 385–472, Sections 5 to 10. doi:10.1016/S1574-0684(05)01006-3. ISBN 978-0-444-52041-8.
- Farrelly, Elizabeth (15 September 2011). "Deafened by the roar of the crowd". The Sydney Morning Herald. Archived from the original on 30 December 2011. Retrieved 17 September 2011.
- Robert Michels (1999) [1962 by Crowell-Collier]. Political Parties. Transaction Publishers. p. 243. ISBN 978-1-4128-3116-1. Retrieved 5 June 2013.
- Harald Wydra, Communism and the Emergence of Democracy, Cambridge: Cambridge University Press, 2007, pp. 22–27.
- Compare: Wydra, Harald (2007). "Democracy as a process of meaning-formation". Communism and the Emergence of Democracy. Cambridge University Press. pp. 244–68. ISBN 978-1-139-46218-1. Retrieved 11 August 2018.
- Abbott, Lewis. (2006). British Democracy: Its Restoration and Extension. ISR/Google Books.
- Appleby, Joyce. (1992). Liberalism and Republicanism in the Historical Imagination. Harvard University Press.
- Archibugi, Daniele, The Global Commonwealth of Citizens. Toward Cosmopolitan Democracy, Princeton University Press ISBN 978-0-691-13490-1
- Becker, Peter, Heideking, Juergen, & Henretta, James A. (2002). Republicanism and Liberalism in America and the German States, 1750–1850. Cambridge University Press. ISBN 978-0-521-80066-2
- Benhabib, Seyla. (1996). Democracy and Difference: Contesting the Boundaries of the Political. Princeton University Press. ISBN 978-0-691-04478-1
- Blattberg, Charles. (2000). From Pluralist to Patriotic Politics: Putting Practice First, Oxford University Press, ISBN 978-0-19-829688-1.
- Birch, Anthony H. (1993). The Concepts and Theories of Modern Democracy. London: Routledge. ISBN 978-0-415-41463-0
- Bittar, Eduardo C.B. (2016). "Democracy, Justice and Human Rights: Studies of Critical Theory and Social Philosophy of Law". Saarbrücken: LAP, 2016. ISBN 978-3-659-86065-2
- Castiglione, Dario. (2005). "Republicanism and its Legacy." European Journal of Political Theory. pp. 453–65.
- Copp, David, Jean Hampton, & John E. Roemer. (1993). The Idea of Democracy. Cambridge University Press. ISBN 978-0-521-43254-2
- Caputo, Nicholas. (2005). America's Bible of Democracy: Returning to the Constitution. SterlingHouse Publisher, Inc. ISBN 978-1-58501-092-9
- Dahl, Robert A. (1991). Democracy and its Critics. Yale University Press. ISBN 978-0-300-04938-1
- Dahl, Robert A. (2000). On Democracy. Yale University Press. ISBN 978-0-300-08455-9
- Dahl, Robert A. Ian Shapiro & Jose Antonio Cheibub. (2003). The Democracy Sourcebook. MIT Press. ISBN 978-0-262-54147-3
- Dahl, Robert A. (1963). A Preface to Democratic Theory. University of Chicago Press. ISBN 978-0-226-13426-0
- Davenport, Christian. (2007). State Repression and the Domestic Democratic Peace. Cambridge University Press. ISBN 978-0-521-86490-9
- Diamond, Larry & Marc Plattner. (1996). The Global Resurgence of Democracy. Johns Hopkins University Press. ISBN 978-0-8018-5304-3
- Diamond, Larry & Richard Gunther. (2001). Political Parties and Democracy. JHU Press. ISBN 978-0-8018-6863-4
- Diamond, Larry & Leonardo Morlino. (2005). Assessing the Quality of Democracy. JHU Press. ISBN 978-0-8018-8287-6
- Diamond, Larry, Marc F. Plattner & Philip J. Costopoulos. (2005). World Religions and Democracy. JHU Press. ISBN 978-0-8018-8080-3
- Diamond, Larry, Marc F. Plattner & Daniel Brumberg. (2003). Islam and Democracy in the Middle East. JHU Press. ISBN 978-0-8018-7847-3
- Elster, Jon. (1998). Deliberative Democracy. Cambridge University Press. ISBN 978-0-521-59696-1
- Emerson, Peter (2007) "Designing an All-Inclusive Democracy." Springer. ISBN 978-3-540-33163-6
- Emerson, Peter (2012) "Defining Democracy." Springer. ISBN 978-3-642-20903-1
- Everdell, William R. (2003) The End of Kings: A History of Republics and Republicans. Chicago: University of Chicago Press. ISBN 0-226-22482-1.
- Fuller, Roslyn (2015). Beasts and Gods: How Democracy Changed Its Meaning and Lost its Purpose. United Kingdom: Zed Books. p. 371. ISBN 978-1-78360-542-2.
- Gabardi, Wayne. (2001). Contemporary Models of Democracy. Polity.
- Gutmann, Amy, and Dennis Thompson. (1996). Democracy and Disagreement. Princeton University Press. ISBN 978-0-674-19766-4
- Gutmann, Amy, and Dennis Thompson. (2002). Why Deliberative Democracy? Princeton University Press. ISBN 978-0-691-12019-5
- Haldane, Robert Burdone (1918). . London: Headley Bros. Publishers Ltd.
- Halperin, M.H., Siegle, J.T. & Weinstein, M.M. (2005). The Democracy Advantage: How Democracies Promote Prosperity and Peace. Routledge. ISBN 978-0-415-95052-7
- Hansen, Mogens Herman. (1991). The Athenian Democracy in the Age of Demosthenes. Oxford: Blackwell. ISBN 978-0-631-18017-3
- Held, David. (2006). Models of Democracy. Stanford University Press. ISBN 978-0-8047-5472-9
- Inglehart, Ronald. (1997). Modernisation and Postmodernisation. Cultural, Economic, and Political Change in 43 Societies. Princeton University Press. ISBN 978-0-691-01180-6
- Isakhan, Ben and Stockwell, Stephen (co-editors). (2011) The Secret History of Democracy. Palgrave MacMillan. ISBN 978-0-230-24421-4
- Jarvie, I.C.; Milford, K. (2006). Karl Popper: Life and time, and values in a world of facts Volume 1 of Karl Popper: A Centenary Assessment. Ashgate Publishing, Ltd. ISBN 978-0-7546-5375-2.
- Khan, L. Ali. (2003). A Theory of Universal Democracy: Beyond the End of History. Martinus Nijhoff Publishers. ISBN 978-90-411-2003-8
- Köchler, Hans. (1987). The Crisis of Representative Democracy. Peter Lang. ISBN 978-3-8204-8843-2
- Lijphart, Arend. (1999). Patterns of Democracy: Government Forms and Performance in Thirty-Six Countries. Yale University Press. ISBN 978-0-300-07893-0
- Lipset, Seymour Martin (1959). "Some Social Requisites of Democracy: Economic Development and Political Legitimacy". American Political Science Review. 53 (1): 69–105. doi:10.2307/1951731. JSTOR 1951731.
- Macpherson, C.B. (1977). The Life and Times of Liberal Democracy. Oxford University Press. ISBN 978-0-19-289106-8
- Morgan, Edmund. (1989). Inventing the People: The Rise of Popular Sovereignty in England and America. Norton. ISBN 978-0-393-30623-1
- Mosley, Ivo (2003). Democracy, Fascism, and the New World Order. Imprint Academic. ISBN 978-0-907845-64-5.
- Mosley, Ivo (2013). In The Name Of The People. Imprint Academic. ISBN 978-1-84540-262-4.
- Ober, J.; Hedrick, C.W. (1996). Dēmokratia: a conversation on democracies, ancient and modern. Princeton University Press. ISBN 978-0-691-01108-0.
- Plattner, Marc F. & Aleksander Smolar. (2000). Globalisation, Power, and Democracy. JHU Press. ISBN 978-0-8018-6568-8
- Plattner, Marc F. & João Carlos Espada. (2000). The Democratic Invention. Johns Hopkins University Press. ISBN 978-0-8018-6419-3
- Putnam, Robert. (2001). Making Democracy Work. Princeton University Press. ISBN 978-5-551-09103-5
- Raaflaub, Kurt A.; Ober, Josiah; Wallace, Robert W (2007). Origins of Democracy in Ancient Greece. University of California Press. ISBN 978-0-520-24562-4.
- Riker, William H.. (1962). The Theory of Political Coalitions. Yale University Press.
- Sen, Amartya K. (1999). "Democracy as a Universal Value". Journal of Democracy. 10 (3): 3–17. doi:10.1353/jod.1999.0055.
- Tannsjo, Torbjorn. (2008). Global Democracy: The Case for a World Government. Edinburgh University Press. ISBN 978-0-7486-3499-6. Argues that not only is world government necessary if we want to deal successfully with global problems it is also, pace Kant and Rawls, desirable in its own right.
- Thompson, Dennis (1970). The Democratic Citizen: Social Science and Democratic Theory in the 20th Century. Cambridge University Press. ISBN 978-0-521-13173-5
- Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the environmental problem remains fundamentally unaddressed.... Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly.... Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.)
- Vinje, Victor Condorcet (2014). The Versatile Farmers of the North; The Struggle of Norwegian Yeomen for Economic Reforms and Political Power, 1750–1814. Nisus Publications.
- Volk, Kyle G. (2014). Moral Minorities and the Making of American Democracy. New York: Oxford University Press.
- Weingast, Barry. (1997). "The Political Foundations of the Rule of Law and Democracy". American Political Science Review. 91 (2): 245–63. doi:10.2307/2952354. JSTOR 2952354.
- Weatherford, Jack. (1990). Indian Givers: How the Indians Transformed the World. New York: Fawcett Columbine. ISBN 978-0-449-90496-1
- Whitehead, Laurence. (2002). Emerging Market Democracies: East Asia and Latin America. JHU Press. ISBN 978-0-8018-7219-8
- Willard, Charles Arthur. (1996). Liberalism and the Problem of Knowledge: A New Rhetoric for Modern Democracy. University of Chicago Press. ISBN 978-0-226-89845-2
- Wood, E. M. (1995). Democracy Against Capitalism: Renewing historical materialism. Cambridge University Press. ISBN 978-0-521-47682-9
- Wood, Gordon S. (1991). The Radicalism of the American Revolution. Vintage Books. ISBN 978-0-679-73688-2 examines democratic dimensions of republicanism
|Library resources about |
|Wikimedia Commons has media related to Democracy.|
|Wikiquote has quotations related to: Democracy|
|Look up democracy in Wiktionary, the free dictionary.|
- Democracy at the Stanford Encyclopedia of Philosophy
- Dictionary of the History of Ideas: Democracy
- The Economist Intelligence Unit's index of democracy
- Alexis de Tocqueville, Democracy in America Full hypertext with critical essays on America in 1831–32 from American Studies at the University of Virginia
- The Varieties of Democracy project. Indicators of hundreds of attributes of democracy and non-democracy for most countries from 1900 to 2018, and from as early as 1789 for dozens of countries, with many interactive online graphics tools
- Data visualizations of data on democratisation and list of data sources on political regimes on 'Our World in Data', by Max Roser.
- MaxRange: Analyzing political regimes and democratization processes—Classifying political regime type and democracy level to all states and months 1789–2015
- "Democracy", BBC Radio 4 discussion with Melissa Lane, David Wootton and Tim Winter (In Our Time, 18 October 2001)
- Democracy (1945) on YouTube Encyclopædia Britannica Films | https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Democracy | 21 |
35 | Ionic bonding. A full outer energy level is the most stable possible arrangement of electrons, and chemical bonding is the means by which an atom attaches itself to other atoms in order to reach that arrangement. The main categories are ionic, polar covalent, nonpolar covalent and hydrogen bonds, along with metallic bonding; a few other types of bonding are not common enough to go into in this lesson. Ionic bonding generally involves a metal (found on the left-hand side of the periodic table) combining with a non-metal: electrons are transferred, producing anions (atoms that have gained one or more electrons and are negatively charged) and cations (atoms that have lost one or more electrons and are positively charged), and the resulting ions are held together by electrostatic attraction, with the positive and negative charges in balance. Sodium chloride is a classic example: sodium (Na), a metal, and chlorine (Cl), a non-metal, form an ionic bond to make NaCl. Iron(III) oxide (Fe2O3), which has two iron atoms and three oxygen atoms, is likewise ionic, and the oxidation states of iron in its oxides are +3 and +2. Covalent bonds form when two or more non-metals combine: the atoms are bound by shared pairs of electrons, and the sharing allows each atom to attain the equivalent of a full outer shell, a stable noble-gas configuration. Oxygen forms a covalent bond with itself, since oxygen gas (O2) is just two oxygen atoms bonded together; in a structural drawing each shared pair is drawn as a line, so the double bond joining the two oxygen atoms is shown as two lines, representing a total of four shared electrons. Hydrogen and oxygen, both non-metals, combine covalently to form water (2H2 + O2 → 2H2O). Because one atom of nitrogen needs three atoms of chlorine to complete its octet, the formula of nitrogen trichloride is NCl3, and the methane molecule, CH4, exhibits single covalent bonds. Ozone (O3) also contains a dative (coordinate) bond, in which one atom supplies both electrons of a shared pair, something carbon dioxide does not have. In organic chemistry, covalent bonds are much more common than ionic bonds.
There is some gray area, however, and the usual measure of it is electronegativity. In a covalent bond between different elements, the atom with the larger electronegativity attracts the shared electron pair more towards itself and becomes partially negative, while the other atom becomes partially positive; such a bond is polar covalent. In water, for example, the difference in electronegativity between oxygen and hydrogen is too small for the oxygen atom to pull the electrons completely off the hydrogen atoms, which is why water is covalent rather than ionic. A nonpolar covalent bond occurs mainly between atoms of the same element (O2, H2, N2), where the pair is shared equally. The greater the electronegativity difference, the more ionic and the less covalent the bond becomes: atoms with similar electronegativities (a difference of less than about 0.5 units) form nonpolar covalent bonds, while Pauling estimated that a difference of 1.7 corresponds to 50% ionic character, so a greater difference corresponds to a bond that is predominantly ionic. As a quantum-mechanical description, Pauling proposed that the wave function for a polar molecule AB is a linear combination of wave functions for covalent and ionic molecules: ψ = aψ(A:B) + bψ(A + B −). An early clue to this division is that compounds fall into two broad classes by their behaviour in water: one class, the electrolytes, dissolve to give solutions that conduct electricity.
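To make those rule-of-thumb cut-offs concrete, here is a rough Python sketch that classifies a bond from the difference between two Pauling electronegativity values; the 0.5 and 1.7 thresholds are the figures quoted above rather than sharp physical boundaries, and the element values are standard Pauling electronegativities.

# Rough bond classification from the difference in Pauling electronegativity.
PAULING = {"H": 2.20, "O": 3.44, "N": 3.04, "Na": 0.93, "Cl": 3.16}

def bond_type(element_a, element_b):
    """Classify a bond as nonpolar covalent, polar covalent or ionic."""
    diff = abs(PAULING[element_a] - PAULING[element_b])
    if diff < 0.5:    # electrons shared almost equally
        return "nonpolar covalent"
    if diff < 1.7:    # unequal sharing gives partial charges
        return "polar covalent"
    return "ionic"    # roughly 50% or more ionic character

print(bond_type("O", "O"))    # O2   -> nonpolar covalent
print(bond_type("H", "O"))    # H2O  -> polar covalent
print(bond_type("Na", "Cl"))  # NaCl -> ionic

Borderline compounds such as iron(III) oxide sit close to the dividing line, which is exactly the gray area described above.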
When naming covalent compounds, a prefix tells you the number of atoms of that element in the compound: mono- for 1, di- for 2, tri- for 3, tetra- for 4, penta- for 5 and hexa- for 6. Some substances contain both kinds of bond: calcium hydroxide is ionic between the Ca2+ and OH- ions, but within the hydroxide ion the oxygen and hydrogen atoms share electrons covalently, and in phosphoric acid there are no purely ionic or purely covalent bonds. Iron(II) hydroxide, or ferrous hydroxide, Fe(OH)2, is produced when iron(II) ions, from a compound such as iron(II) sulfate, react with hydroxide ions; it is practically white, but even traces of oxygen impart a greenish tinge. There are also a few other ways of diagramming molecules, such as dot (Lewis) notation, that better illustrate covalent and ionic bonding. Finally, molecules may have either ionic or covalent bonds, whereas compounds may have ionic, metallic or covalent bonds. | https://kessho-coating.com/k7vv4jd/is-o2-covalent-or-ionic-8bc5fd | 21
16 | In stereochemistry, stereoisomerism, or spatial isomerism, is a form of isomerism in which molecules have the same molecular formula and sequence of bonded atoms (constitution), but differ in the three-dimensional orientations of their atoms in space. This contrasts with structural isomers, which share the same molecular formula, but the bond connections or their order differs. By definition, molecules that are stereoisomers of each other represent the same structural isomer.
Enantiomers, also known as optical isomers, are two stereoisomers that are related to each other by a reflection: they are mirror images of each other that are non-superposable. Human hands are a macroscopic analog of this. Every stereogenic center in one has the opposite configuration in the other. Two compounds that are enantiomers of each other have the same physical properties, except for the direction in which they rotate polarized light and how they interact with different optical isomers of other compounds. As a result, different enantiomers of a compound may have substantially different biological effects. Pure enantiomers also exhibit the phenomenon of optical activity and can be separated only with the use of a chiral agent. In nature, only one enantiomer of most chiral biological compounds, such as amino acids (except glycine, which is achiral), is present. An optically active compound shows two forms: D-(+) form and L-(−) form.
Diastereomers are stereoisomers not related through a reflection operation. They are not mirror images of each other. These include meso compounds, cis–trans isomers, E-Z isomers, and non-enantiomeric optical isomers. Diastereomers seldom have the same physical properties. In the example shown below, the meso form of tartaric acid forms a diastereomeric pair with both levo and dextro tartaric acids, which form an enantiomeric pair.
[Structures of the tartaric acid stereoisomers, including the natural form]
The D- and L- labeling of the isomers above is not the same as the d- and l- labeling more commonly seen, explaining why these may appear reversed to those familiar with only the latter naming convention.
Cis–trans and E-Z isomerism
Stereoisomerism about double bonds arises because rotation about the double bond is restricted, keeping the substituents fixed relative to each other. If the two substituents on at least one end of a double bond are the same, then there is no stereoisomer and the double bond is not a stereocenter, e.g. propene, CH3CH=CH2 where the two substituents at one end are both H.
Traditionally, double bond stereochemistry was described as either cis (Latin, on this side) or trans (Latin, across), in reference to the relative position of substituents on either side of a double bond. The simplest examples of cis-trans isomerism are the 1,2-disubstituted ethenes, like the dichloroethene (C2H2Cl2) isomers shown below.
Molecule I is cis-1,2-dichloroethene and molecule II is trans-1,2-dichloroethene. Due to occasional ambiguity, IUPAC adopted a more rigorous system wherein the substituents at each end of the double bond are assigned priority based on their atomic number. If the high-priority substituents are on the same side of the bond, it is assigned Z (Ger. zusammen, together). If they are on opposite sides, it is E (Ger. entgegen, opposite). Since chlorine has a larger atomic number than hydrogen, it is the highest-priority group. Using this notation to name the above pictured molecules, molecule I is (Z)-1,2-dichloroethene and molecule II is (E)-1,2-dichloroethene. It is not the case that Z and cis or E and trans are always interchangeable. Consider the following fluoromethylpentene:
The proper name for this molecule is either trans-2-fluoro-3-methylpent-2-ene because the alkyl groups that form the backbone chain (i.e., methyl and ethyl) reside across the double bond from each other, or (Z)-2-fluoro-3-methylpent-2-ene because the highest-priority groups on each side of the double bond are on the same side of the double bond. Fluoro is the highest-priority group on the left side of the double bond, and ethyl is the highest-priority group on the right side of the molecule.
The terms cis and trans are also used to describe the relative position of two substituents on a ring; cis if on the same side, otherwise trans.
Conformational isomerism is a form of isomerism that describes the phenomenon of molecules with the same structural formula but with different shapes due to rotations about one or more bonds. Different conformations can have different energies, can usually interconvert, and are very rarely isolatable. For example, cyclohexane can exist in a variety of different conformations including a chair conformation and a boat conformation, but, for cyclohexane itself, these can never be separated. The boat conformation represents the energy maximum on a conformational itinerary between the two equivalent chair forms; however, it does not represent the transition state for this process, because there are lower-energy pathways.
There are some molecules that can be isolated in several conformations, due to the large energy barriers between different conformations. 2,2',6,6'-Tetrasubstituted biphenyls can fit into this latter category.
Anomerism arises in single-bonded ring structures where the cis/Z and trans/E labels of geometric isomerism are needed to name substituents on a carbon atom that is also a center of chirality; anomers therefore have one or more ring carbons that display both geometric isomerism and optical isomerism (enantiomerism). Anomers are named "alpha" (axial) or "beta" (equatorial) when a substituent is attached to a cyclic ring whose carbon atoms are joined by single bonds; typical single-bond substituents include a hydroxyl group, a hydroxymethyl group, a methoxy group or another pyranose or furanose group, but they are not limited to these. An axial substituent is perpendicular (90 degrees) to the reference plane of the ring, while an equatorial substituent lies 120 degrees away from the axial bond, deviating about 30 degrees from the reference plane.
- A configurational stereoisomer is a stereoisomer of a reference molecule that has the opposite configuration at a stereocenter (e.g., R- vs S- or E- vs Z-). This means that configurational isomers can be interconverted only by breaking covalent bonds to the stereocenter, for example, by inverting the configurations of some or all of the stereocenters in a compound.
- An epimer is a diastereoisomer that has the opposite configuration at only one of the stereocenters.
Le Bel-van't Hoff rule
Le Bel-van't Hoff rule states that for a structure with n asymmetric carbon atoms, there is a maximum of 2^n different stereoisomers possible. As an example, D-glucose is an aldohexose and has the formula C6H12O6. Four of its six carbon atoms are stereogenic, which means D-glucose is one of 2^4 = 16 possible stereoisomers.
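As a quick illustrative sketch, the upper bound can be computed directly; the actual number of stereoisomers can be smaller when internal symmetry makes some forms superimposable, as with the meso form of tartaric acid discussed above.

# Upper bound on stereoisomers from the Le Bel-van't Hoff rule: 2**n for n stereocenters.
def max_stereoisomers(n_stereocenters):
    return 2 ** n_stereocenters

print(max_stereoisomers(4))  # D-glucose: 4 stereogenic carbons -> at most 16 stereoisomers
print(max_stereoisomers(2))  # tartaric acid: bound of 4, but only 3 distinct forms exist,
                             # because the meso form is identical to its mirror image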
- IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "stereoisomerism". doi:10.1351/goldbook.S05983
- Columbia Encyclopedia. "Stereoisomers" in Encyclopedia.com, n.l., 2005.
- Morrison and Boyd, Organic Chemistry, 6th ed., pp. 1170-1171. ISBN 0-13-643669-2 | https://en.wikipedia.org/wiki/Stereoisomerism | 21
23 | Economic inequality typically describes conditions that separate individuals in terms of wealth or income. All nations and economic systems have some type of inequality. The biggest factors affecting it include demographic, political, and macroeconomic conditions. Not always a bad thing, economic inequality helps create an environment where individuals desire to reach the top rung of the economic ladder. The presence of inequality factors, and how much they suppress an economy, can dictate the environment in which individuals succeed or fail.
Demographic factors are among the most common sources of inequality. These factors may be sex, age, education, race, or any other type of demographic in a region. Inequality can exist when one or more of these factors are present. Essentially, demographic factors shape the labor side of the overall economic environment. For example, when a particular group is made up largely of working-class members, that group may have a lower probability of sharing in economic growth.
Political factors also play a large role in economic inequality. Command or planned economies may restrict the growth of individuals, creating inequality. This occurs when one group is favored over another, allowing the favored group to succeed better economically. Market economies can have this problem as well, though a freer market can help restrict government intervention and, with it, one possibility for economic inequality. Another problem is that a particular political group may pander more to individuals in a specific economic category, allowing inequality to fester.
Macroeconomics represents the larger policies and constructs a nation implements to help grow its economy. Poor fiscal or monetary policy, however, can create economic inequality through misguided intent. For example, allowing increases to the money supply through loose central banking can create rampant inflation, which eats away at the purchasing power of a nation's currency. Lower-income individuals can experience more problems with inflation because they have fewer dollars with which to maintain a standard of living. Forced inequality can result from this and other macroeconomic policy problems.
Again, economic inequality is not always a bad thing. It can create a desire to improve one's life and move from one economic class to another. On the other hand, it may also drive individuals into the political arena, where they become involved in voting and changing poor macroeconomic policies that restrict economic freedom. | https://www.infobloom.com/what-factors-affect-economic-inequality.htm | 21
68 | Note: This lesson was originally published on an older version of The Learning Network; the link to the related Times article will take you to a page on the old site.
Teaching ideas based on New York Times content.
Overview of Lesson Plan: Students investigate the positive and negative aspects of the currency switch to the euro in eleven European countries as of January 1, 1999. After discussing the euro and the
potential economic outcomes and setbacks of this currency change, students research the previous currencies of the eleven countries and create posters illustrating their findings.
Alison Zimbalist, The New York Times Learning Network
Suggested Time Allowance: 1 hour
1. Write a journal entry discussing how the prices of goods and services are determined, why countries have different currencies, and why some countries may want to share a currency system.
2. Read and discuss “Primer on Euro: From Birth To Growth as Unifying Force”; maintain a “pros/cons” list about issues of the switch to the euro as discussed in the article.
3. Work in small groups to research the currencies preceding the euro of the eleven European countries switching to the euro, focusing on the name and current exchange rate of the currency, the images found on this currency, and the relation of these images to the values and history of the country.
4. Create a poster about the currency of the researched European country.
Resources / Materials:
-copies of “Primer on Euro: From Birth To Growth as Unifying Force” (one per student)
-resource materials containing information on the currencies of Ireland, Finland, Netherlands, Germany, Austria, Luxembourg, Belgium, France, Italy, Spain and Portugal prior to the euro (e.g., encyclopedias, travel guides, and books on the specific countries)
-computers with Internet access (optional)
-poster board or large pieces of construction paper
-markers, crayons, and colored pencils
-glue or tape
Activities / Procedures:
1. WARM-UP/ DO-NOW: In their journals, students respond to the following questions (written on the board prior to class): How are the prices of goods and services determined? Why do countries have different currencies? Why might some countries share a currency? Students then share their answers.
2. Read and discuss “Primer on Euro: From Birth To Growth as Unifying Force,” focusing on the questions below. While reading through the article, students should point out the positive and negative aspects
of the euro system mentioned in the article, and the teacher should keep a “pros/cons” list on the board.
a. What is the euro, and how have European countries planned for its arrival?
b. How does this common economic policy reflect upon visions of unification of the eleven European nations to be joined by the euro?
c. How will the changing of the currencies in Europe occur over time?
d. What do economists predict will happen to economies worldwide when the euro is officially instituted as the dominant currency of Europe?
e. Why is Great Britain not changing their currency to the euro, and what potential is there for switching currencies in the future?
f. What is the history behind the notion of unifying European countries through their currency system?
g. How can the switch to the euro “work like a reform program for Europe,” as stated by Bundesbank vice president Jurgen Stark?
h. How are European banks and merchants dealing with the dramatic shift in currency?
i. What negative outcomes could result from the currency change?
j. What political changes have occurred since the euro was originally conceived, and what effects might these changes have on the economic success of the countries that will be using the euro?
3. Students divide into pairs or small groups (if possible, divide students into eleven pairings or groups so that all eleven countries switching currencies to the euro are represented). Assign each group one of the
European nations switching to the euro (Ireland, Finland, Netherlands, Germany, Austria, Luxembourg, Belgium, France, Italy, Spain, and Portugal). Using any available resources, groups then research the past currencies
of their assigned country, responding to the following:
–What was the name of the currency of this country that preceded the euro, and what was its exchange rate prior to switching to the euro in American dollars? (Visit the Currency Converter at //www2.travelocity.com/converter; a short conversion sketch also follows this list.)
–What did units of this currency look like? What people, places, symbols, and words are on this currency?
–What do the images on the currency illustrate about the values and history of this country?
–What previous currencies existed in this country?
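For teachers who want to show the arithmetic behind those exchange rates, the sketch below uses the official legacy-currency-per-euro conversion rates fixed on 31 December 1998; the dollar figure is only an illustrative value close to the euro's launch price, so students should look up a current rate rather than rely on it.

# Official units of each legacy currency per 1 euro (fixed 31 December 1998).
RATES_PER_EURO = {
    "ATS": 13.7603,   # Austrian schilling
    "BEF": 40.3399,   # Belgian franc
    "DEM": 1.95583,   # German mark
    "ESP": 166.386,   # Spanish peseta
    "FIM": 5.94573,   # Finnish markka
    "FRF": 6.55957,   # French franc
    "IEP": 0.787564,  # Irish pound
    "ITL": 1936.27,   # Italian lira
    "LUF": 40.3399,   # Luxembourg franc
    "NLG": 2.20371,   # Dutch guilder
    "PTE": 200.482,   # Portuguese escudo
}

USD_PER_EURO = 1.17  # illustrative rate near the euro's launch; replace with a current quote

def legacy_to_usd(amount, currency):
    """Convert an amount of a legacy currency to U.S. dollars by way of the euro."""
    euros = amount / RATES_PER_EURO[currency]
    return euros * USD_PER_EURO

print(round(legacy_to_usd(100, "FRF"), 2))  # 100 French francs in dollars
print(round(legacy_to_usd(100, "DEM"), 2))  # 100 German marks in dollars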
4. WRAP-UP/ HOMEWORK: Groups create a poster about the currency of the researched European country, including all of the answers of the research questions. Students should include on their posters either a color copy of pictures of the appropriate bills and coins or small denominations of the currency (obtained through some banks, trade exchanges, and airports). Students should be encouraged to make posters as colorful and interesting as possible. In a future class, students can present their information to the class or simply hang their posters in the classroom.
Further Questions for Discussion:
–How are the prices of goods and services determined?
–Why do countries have different currencies?
–Why might some countries share a currency?
–In order for countries to share a type of currency, what similarities among them must exist?
–What are some of the names of currencies used in different countries?
–Why do prices of the same good vary so much among different countries? How will the prices be affected by the change to the euro, and how will these price changes impact the economy of the countries switching to the euro?
–What is the European Union, and what economic responsibilities and powers does it have?
–How does the common economic policy of the euro and thet European Central Bank reflect upon visions of unification of the eleven European nations to be joined by the euro?
–How will the changing of the currencies in Europe occur over time?
–What do economists predict will happen to economies worldwide when the euro is officially instituted as the dominant currency of Europe?
–What is the history behind the notion of unifying European countries through their currency system?
–How can the switch to the euro “work like a reform program for Europe,” as stated by Bundesbank vice president Jurgen Stark?
–How are European banks and merchants dealing with the dramatic shift in currency?
–What negative outcomes could result from the currency change?
–What political changes have occurred since the euro was originally conceived, and what effects might these changes have on the economic success of the countries that will be using the euro?
–How are global economies interconnected, and how might the euro affect these economic relationships?
–How do the images on currency illustrate the values and history of the country minting and/or using that currency?
Evaluation / Assessment:
Students will be evaluated based on journal entry response, participation in class discussions, participation group research on a European currency, and creation of a poster demonstrating an understanding of the history of the researched currency.
placid, mecca, monetary, conversion, reined, inflation, circulation, denominated, transactions, fiscal, merging, precursor, impetus, reunification, technocratic, integration, proponents, tariffs, obsolete, confounded, clamoring
1. Refer to the graphic “A Euro Guide.” How will the euro affect trade within Europe and globally?
2. Research the European Union, focusing on its development, goals, and projects.
3. Investigate the different denominations of the euro. What will each denomination look like? What people, places, symbols and words will appear on the denominations? In what way will these illustrations represent the notion of economic unification of Europe?
4. Learn about how international banks operate.
5. Study different economic systems. Create a world map that is color-coded to represent where different economic systems are currently in place.
6. Write a brief profile of the economies of the eleven European countries switching currencies to the euro. Include in the profile the economic system of the country, the major imports and exports, and any other pertinent information.
7. Obtain copies of the United Nations 1998 Human Development Index found online at (//www.undp.org/undp/hdro/98hdi.htm). Compare the statistics for the eleven countries switching to the euro, and illustrate the statistics graphically.
8. Read the editorial “The Euro’s Promise and Peril” from December 29, 1998’s New York Times. How do the statements in the editorial mirror those in the featured article? How do they differ? What is the opinion of the writer of the editorial, and how do you know?
American History- Research the history of currencies in the United States. What different types of currencies have existed in the United States? What did they look like? How did the images and words on these currencies represent the ideals or beliefs of the United States at the time of their minting and use?
Foreign Language- Research the currencies used in countries that speak the language of study.
Geography- Research the ways in which the eleven European countries switching to the euro are economically compatible. Learn about other ways in which they may be compatible (politically, socially, ethnically, linguistically, and religiously).
-Follow coverage of the euro as it begins to be used in the beginning of 1999.
-Read and discuss newspaper articles about the euro from the countries in which it is used. Visit Ecola Newsstand at (//www.ecola.com) for links to English-speaking newspapers worldwide.
Technology- Learn about how coins and paper money are minted.
Other Information on the Web:
To read a discussion of the euro from a British perspective, visit EmuNet at (//www.euro-emu.co.uk).
Learn more about the euro and the European Union at The Association for the Monetary Union of Europe Web site, found at (//amue.lf.net).
See the European Monetary Institute site at (//www.ecb.int).
Travelocity’s Currency Converter is located at (//www2.travelocity.com/converter).
Academic Content Standards:
World History Standard 44- Understands the search for community, stability, and peace in an interdependent world. Benchmarks: Understands influences on economic development around the world; Understands the emergence of a global culture
World History Standard 45- Understands major global trends since World War II. Benchmark: Understands the causes and consequences of the world’s shift from bipolar to multipolar centers of economic, political, and military power
Economics Standard 10- Understands basic concepts about international economics. Benchmarks: Knows that an exchange rate is the price of one nation’s currency in terms of another nation’s currency, and that exchange rates are determined by the forces of supply and demand; Understands that extensive international trade requires an organized system for exchanging money between nations (i.e., a foreign exchange market); Knows that despite the advantages of international trade (e.g., broader range of choices in buying goods and services), many nations restrict the free flow of goods and services through a variety of devices known as “barriers to trade” (e.g., tariffs, quotas) for national defense reasons or because some companies and workers are hurt by free trade; Understands that increasing international interdependence causes economic conditions and policies in one nation to affect economic conditions in many other nations
Geography Standard 11- Understands the patterns and networks of economic interdependence on Earth’s surface. Benchmarks: Understands issues related to the spatial distribution of economic activities; Understands the primary geographic causes for world trade; Understands historic and contemporary economic trade networks
Language Arts Standard 4- Gathers and uses information for research purposes. Benchmarks: Uses a variety of resource materials to gather information for research topics; Determines the appropriateness of an information source for a research topic; Organizes information and ideas from multiple sources in systematic ways
World History Standard 44- Understands the search for community, stability, and peace in an interdependent world. Benchmarks: Understands rates of economic development and the emergence of different economic systems around the globe; Understands how global political change has altered the world economy
World History Standard 45- Understands major global trends since World War II. Benchmark: Understands connections between globalizing trends in economy, technology, and culture and dynamic assertions of traditional cultural identity and distinctiveness
Economics Standard 10- Understands basic concepts about international economics. Benchmarks: Understands that trade between nations would not occur if nations had the same kinds of productive resources and could produce all goods and services at the same real costs; Knows that a nation has an absolute advantage if it can produce more of a product with the same amount of resources than another nation, and it has a comparative advantage when it can produce a product at a lower opportunity cost than another nation; Knows that comparative advantages change over time because of changes in resource prices and events that occur in other nations; Understands that a change in exchange rates changes the relative price of goods and services traded by the two countries and can have a significant effect on the flow of trade between nations and on a nation’s domestic economy
Geography Standard 11- Understands the patterns and networks of economic interdependence on Earth’s surface. Benchmarks: Knows the spatial distribution of major economic systems and their relative merits in terms of productivity and the social welfare of workers; Understands the advantages and disadvantages of international economic patterns
Language Arts Standard 4- Gathers and uses information for research purposes. Benchmarks: Uses a variety of news sources to gather information for research topics; Uses telephone information services found in public libraries to gather information for research topics; Synthesizes a variety of types of visual information, including pictures and symbols, for research topics
This lesson plan may be used to address the academic standards listed above. These standards are drawn from Content Knowledge: A Compendium of Standards and Benchmarks for K-12 Education; 3rd and 4th Editions and have been provided courtesy of the Mid-continent Research for Education and Learning in Aurora, Colorado.
What are some examples of format codes?
With the TEXT function you can change the way numbers are displayed by applying formatting to them with format codes. This is useful when you want to display numbers in a more readable format, or combine them with text or symbols.
Note: The TEXT function converts numbers to text, which can make them difficult to refer to in later calculations. It is best to keep the original value in one cell and use the TEXT function in another cell. If you then have to create other formulas, you can always refer to the original value and not the result of the TEXT function.
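As a minimal sketch of that advice (the cell references are only assumptions for this example): keep the raw number in A2 and build the display text in a separate cell, for instance
=TEXT(A2;"#.##0,00 €")
Later formulas such as =A2*12 can then keep referring to the numeric value in A2, while the other cell only holds the formatted text.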
TEXT(value; format_text)
The syntax of the TEXT function has the following arguments:
value - a numeric value to be converted to text
format_text - a text string that defines the formatting to be applied to the specified value
In its simplest form, the TEXT function says:
TEXT (value you want to format; "format code to be applied")
Below are some common examples that you can copy right into Excel and then experiment with yourself. Make sure that the format codes are enclosed in quotation marks.
=TEXT(1234,567;"#.##0,00 €")
Currency with a thousands separator and 2 decimal places, e.g. € 1,234.57. Note that Excel rounds the value to 2 decimal places.
=TEXT(TODAY();"DD.MM.YY")
Current date in the format DD.MM.YY, e.g. 14.03.12
=TEXT(TODAY();"DDDD")
Current day of the week, e.g. Monday
=TEXT(NOW();"H:MM AM/PM")
Current time, e.g. 1:29 PM
=TEXT(0.285;"0,0 %")
Percentage, e.g. 28.5%
=TEXT(4.34;"# ?/?")
Fraction, e.g. 4 1/3
=TRIM(TEXT(0.34;"# ?/?"))
Fraction, e.g. 1/3. In this case, the TRIM function removes the leading space created by the decimal value.
=TEXT(12200000;"0.00E+00")
Scientific notation, e.g. 1.22E+07
=TEXT(1234567898;"[<=9999999]###-####;(###) ###-####")
Special format (telephone number), e.g. (123) 456-7898
=TEXT(1234;"0000000")
Adding leading zeros (0), e.g. 0001234
=TEXT(123456;"##0° 00' 00''")
Custom - Latitude / Longitude
Note: While you can use the TEXT function to change the formatting, this is not the only option. You can also change the format without a formula: press CTRL+1 (or Cmd+1 on a Mac), and then select the desired format on the Numbers tab of the Format Cells dialog box.
Download our examples
You can download a sample workbook that contains all of the TEXT functionality examples found in this article, as well as other examples. You can follow these examples or create your own format codes for the TEXT function.
Download examples of the TEXT function in Excel
Other format codes available
You can find more available format codes in the Format Cells dialog box:
Press CTRL+1 (Cmd+1 on a Mac) to open the Format Cells dialog box.
On the Numbers tab, select the desired format.
Select the Custom option.
The desired format code is now displayed in the Type box. In this case, select everything in the Type box except the comma (,) and the @ symbol. In the following example, only DD.MM.YYYY was selected and copied.
Press CTRL+C to copy the format code, then press Cancel to close the Format Cells dialog box.
Now all you need to do is press CTRL+V to paste the format code into your TEXT formula, e.g.: =TEXT(B2;"DD.MM.YYYY"). Remember to put the format code in quotation marks ("format code"), otherwise Excel will issue an error message.
Format codes by category
Below are some examples of how different number formats could be applied to your values. To do this, use the Format Cells dialog box and its Custom option to copy format codes into your TEXT function.
- Select a number format
- Leading zeros (000)
- Show a thousand separator
- Number, currency and accounting formats
- Scientific notation
- Special formats
Why does Excel delete leading zeros?
Excel is designed to look for numbers entered in cells, not for numbers that look like text (such as part or item numbers). To keep leading zeros, format the input range as text before inserting or entering values. Select the column or range in which you want to enter the values, press CTRL+1 to bring up the Format Cells dialog box, and on the Numbers tab select the Text option. Now Excel keeps the leading zeros.
If Excel has removed leading zeros after you entered data, you can use the TEXT function to add them back. Refer to the top cell of the values and use =TEXT(value;"00000"), where the number of zeros in the formula must match the number of characters you want. Then copy the formula down over the rest of the range.
If for some reason the text values need to be converted back to numbers, you can multiply by 1, e.g. =D4*1, or use the double unary operator (--), e.g. =--D4.
Excel separates thousands with periods when the format includes a period (.) enclosed by number signs (#) or zeros. With the format string "#.###", for example, Excel displays the number 12200000 as 12.200.000.
A period that follows a digit placeholder scales the number by a thousand. With the format string "#.###,0.", for example, Excel displays the number 12200000 as 12.200,0.
The thousands separator depends on the country settings: it is a comma in the US and a period (.) in many other locales.
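As a hedged illustration of that locale difference (exact behavior depends on your regional settings): in a US-English installation, where the argument separator is a comma rather than a semicolon, the equivalent of the example above would be
=TEXT(12200000,"#,##0")
which displays 12,200,000, while the period-based code "#.###" shown above is what a comma-decimal locale such as German expects.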
The thousand separator is available for number, currency, and accounting formats.
The following are examples of standard number formats (thousands separators and decimal numbers only), as well as currency and accounting formats. You can add the desired currency symbol to currency formats and align it next to the value, while in accounting formats the currency symbol and the decimal places are aligned in the column. Notice the differences between currency and accounting formats below. The accounting format uses a trailing space to separate the symbol from the value.
If you are looking for the format code for a currency symbol, first press CTRL+1 (or Cmd+1 on a Mac), select the format you want, and then choose the symbol from the Symbol drop-down list:
Then, in the Category section on the left, click Custom and copy the format code, including the currency symbol.
Note: The TEXT function does not support color formatting. So if you copy a number format code from the Format Cells dialog box that contains a color, e.g. "#.##0,00 €;[Red]-#.##0,00 €", the TEXT function accepts the format code but does not display the color.
You can change how a date is displayed using a combination of "D" for day, "M" for month, and "Y" for year.
The format codes of the TEXT function are not case-sensitive, so you can use "D" or "d", "M" or "m", "Y" or "y".
You can change the time display using a combination of "H" for hours, "M" for minutes, "S" for seconds, and "AM/PM" for a 12-hour clock display.
If you omit "AM/PM" or "A/P", the time is displayed in 24-hour format.
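If it helps to see the 24-hour case spelled out (the sample time is only illustrative and follows the 1:29 PM example above):
=TEXT(NOW();"HH:MM")
displays, for example, 13:29 instead of 1:29 PM.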
The format codes of the TEXT function are not case-sensitive. So you can use "H" or "h", "M" or "m", "S" or "s".
You can change the display from decimal values to percentages (%).
You can change the display of decimal values to fractions (? /?).
With scientific notation, numbers can be given as a decimal number between 1 and 10 with a power of ten. This notation is often used to shorten the display of large numbers.
Excel already contains some special formats:
Postal code - "00000"
Postal code D - "D-00000"
Telephone number - "[<= 9999999] ### - ####; (###) ### - ####"
Social Security Number - "0000-00 00 00"
The special formats depend on the locale. If there are no custom formats for your locale, or if the custom formats do not suit your needs, you can create your own special formats in the Format Cells dialog box under Custom.
The TEXT function is rarely used alone; it mostly appears in combination with other content. For example, suppose you want to combine text and a numeric value, such as "Report printed 3/14/12" or "Weekly sales: $66,348.72". You could type this into Excel manually, but that doesn't make much sense, as Excel can do this job for you. Unfortunately, when it comes to combining text with formatted numbers such as dates, times, or currency, Excel doesn't know how you want them displayed, so it discards the number format. This is where the TEXT function is invaluable, because it allows you to force Excel to format the values however you want, for example by using a format code such as "DD.MM.YY" for a date.
The following example shows what happens if you try to combine text and a number without the TEXT function. In this case, an ampersand (&) is used to concatenate a text string, a space, and a value with =A2&" "&B2.
As you can see, Excel has removed the formatting of the date in cell B2. The next example shows how to use the TEXT function to apply the desired format.
The updated formula is:
Cell C2: =A2&" "&TEXT(B2;"DD.MM.YY") - date format
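The "Weekly sales" example mentioned above works the same way. Assuming the sales figure sits in cell B3 (an assumption for this sketch) and reusing the currency format code from the earlier examples, the formula might look like:
="Weekly sales: "&TEXT(B3;"#.##0,00 €")
The TEXT part keeps the currency formatting that plain concatenation would otherwise discard.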
Frequently asked questions
How can I convert numbers to text, such as 123 to one hundred and twenty three?
Can I change the case of text?
Yes. You can use the UPPER, LOWER, and PROPER functions for this purpose. For example, =UPPER("hello") returns "HELLO".
Can I insert a new line (line break) in a cell as with ALT + ENTER in the TEXT function?
Yes, but it takes a few steps. First, select the cell or cells in question and press CTRL+1 to open the Format Cells dialog box, then enable the Wrap text option under Alignment > Text control. Next, adjust the finished TEXT function: add the ASCII function CHAR(10) at the point where the line break should occur. You may need to adjust the column width so that the end result lines up as intended.
In this case the following was used: ="Today's date:"&CHAR(10)&TEXT(TODAY();"DD.MM.YY")
Why does Excel convert my number entries to 1.22E+07 or something similar?
This is scientific notation. Excel automatically converts numbers with more than 12 digits when a cell is formatted as General, and numbers with more than 15 digits when a cell is formatted as Number. If you need to enter long numeric strings but don't want them converted, format the cells in question as Text before entering or pasting the values in Excel.
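A small sketch of the workaround using this article's own function (the sample number is arbitrary and, at 13 digits, stays within Excel's 15-digit precision):
=TEXT(1234567890123;"0")
returns the full digit string "1234567890123" as text, whereas, as described above, the same number typed into a General-formatted cell is converted to scientific notation.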
Dates in different languages
Create or delete a custom number format
Convert numbers saved as text
All Excel functions (by category)
African-American culture, also known as Black American culture, refers to the contributions of African Americans to the culture of the United States, either as part of or distinct from mainstream American culture. The distinct identity of African-American culture is rooted in the historical experience of the African-American people, including the Middle Passage. The culture is both distinct and enormously influential on American and global culture as a whole.
African-American culture is primarily rooted in West and Central Africa. Understood within the culture of the United States, it is, in the anthropological sense, conscious of its origins in a blend of largely West and Central African cultures. Although slavery greatly restricted the ability of African Americans to practice their original cultural traditions, many practices, values and beliefs survived, and over time have modified and/or blended with European cultures and other cultures such as that of Native Americans. African-American identity was established during the slavery period, producing a dynamic culture that has had and continues to have a profound impact on American culture as a whole, as well as that of the broader world.
Elaborate rituals and ceremonies were a significant part of African Americans' ancestral culture. Many West African societies traditionally believed that spirits dwelled in their surrounding nature. From this disposition, they treated their environment with mindful care. They also generally believed that a spiritual life source existed after death, and that ancestors in this spiritual realm could then mediate between the supreme creator and the living. Honor and prayer was displayed to these "ancient ones", the spirit of those past. West Africans also believed in spiritual possession.
In the beginning of the eighteenth century, Christianity began to spread across North Africa; this shift in religion began displacing traditional African spiritual practices. The enslaved Africans brought this complex religious dynamic within their culture to America. This fusion of traditional African beliefs with Christianity provided a common place for those practicing religion in Africa and America.
After emancipation, unique African-American traditions continued to flourish, as distinctive traditions or radical innovations in music, art, literature, religion, cuisine, and other fields. 20th-century sociologists, such as Gunnar Myrdal, believed that African Americans had lost most of their cultural ties with Africa. But, anthropological field research by Melville Herskovits and others demonstrated that there has been a continuum of African traditions among Africans of the diaspora. The greatest influence of African cultural practices on European culture is found below the Mason-Dixon line in the American South.
For many years African-American culture developed separately from European-American culture, both because of slavery and the persistence of racial discrimination in America, as well as African-American slave descendants' desire to create and maintain their own traditions. Today, African-American culture has become a significant part of American culture and yet, at the same time, remains a distinct cultural body.
- 1 African-American cultural history
- 2 Music
- 3 The arts
- 4 Museums
- 5 Language
- 6 Fashion and aesthetics
- 7 Religion
- 8 Life events
- 9 Cuisine
- 10 Holidays and observances
- 11 Names
- 12 Family
- 13 Politics and social issues
- 14 African-American population centers
- 15 See also
- 16 References
- 17 Bibliography
- 18 External links
African-American cultural history
From the earliest days of American slavery in the 17th century, slave owners sought to exercise control over their slaves by attempting to strip them of their African culture. The physical isolation and societal marginalization of African slaves and, later, of their free progeny, however, facilitated the retention of significant elements of traditional culture among Africans in the New World generally, and in the United States in particular. Slave owners deliberately tried to repress independent political or cultural organization in order to deal with the many slave rebellions or acts of resistance that took place in the United States, Brazil, Haiti, and the Dutch Guyanas.
African cultures, slavery, slave rebellions, and the civil rights movement have shaped African-American religious, familial, political, and economic behaviors. The imprint of Africa is evident in a myriad of ways: in politics, economics, language, music, hairstyles, fashion, dance, religion, cuisine, and worldview.
In turn, African-American culture has had a pervasive, transformative impact on many elements of mainstream American culture. This process of mutual creative exchange is called creolization. Over time, the culture of African slaves and their descendants has been ubiquitous in its impact on not only the dominant American culture, but on world culture as well.
Oral tradition
Slaveholders limited or prohibited education of enslaved African Americans because they feared it might empower their chattel and inspire or enable emancipatory ambitions. In the United States, the legislation that denied slaves formal education likely contributed to their maintaining a strong oral tradition, a common feature of indigenous African cultures. African-based oral traditions became the primary means of preserving history, mores, and other cultural information among the people. This was consistent with the griot practices of oral history in many African and other cultures that did not rely on the written word. Many of these cultural elements have been passed from generation to generation through storytelling. The folktales provided African Americans the opportunity to inspire and educate one another.
Examples of African-American folktales include trickster tales of Br'er Rabbit and heroic tales such as that of John Henry. The Uncle Remus stories by Joel Chandler Harris helped to bring African-American folk tales into mainstream adoption. Harris did not appreciate the complexity of the stories nor their potential for a lasting impact on society. Other narratives that appear as important, recurring motifs in African-American culture are the "Signifying Monkey", "The Ballad of Shine", and the legend of Stagger Lee.
The legacy of the African-American oral tradition manifests in diverse forms. African-American preachers tend to perform rather than simply speak. The emotion of the subject is carried through the speaker's tone, volume, and cadence, which tend to mirror the rising action, climax, and descending action of the sermon. Often song, dance, verse, and structured pauses are placed throughout the sermon. Call and response is another pervasive element of the African-American oral tradition. It manifests in worship in what is commonly referred to as the "amen corner". In direct contrast to recent tradition in other American and Western cultures, it is an acceptable and common audience reaction to interrupt and affirm the speaker. This pattern of interaction is also in evidence in music, particularly in blues and jazz forms. Hyperbolic and provocative, even incendiary, rhetoric is another aspect of African-American oral tradition often evident in the pulpit in a tradition sometimes referred to as "prophetic speech".
Modernity and the migration of black communities to the North have historically placed strain on the retention of black cultural practices and traditions. The urban and radically different spaces in which black culture was being produced raised fears among anthropologists and sociologists that the southern black folk aspect of black popular culture was at risk of being lost to history. The fear of losing black popular cultural roots from the South has been a topic of interest to many anthropologists, among them Zora Neale Hurston. Through her extensive studies of Southern folklore and cultural practices, Hurston claimed that the popular Southern folklore traditions and practices were not dying off. Instead they were evolving, developing, and re-creating themselves in different regions.
Other aspects of African-American oral tradition include the dozens, signifying, trash talk, rhyming, semantic inversion and word play, many of which have found their way into mainstream American popular culture and become international phenomena.
Spoken word artistry is another example of how the African-American oral tradition has influenced modern popular culture. Spoken word artists employ the same techniques as African-American preachers including movement, rhythm, and audience participation. Rap music from the 1980s and beyond has been seen as an extension of oral culture.
Harlem Renaissance
The first major public recognition of African-American culture occurred during the Harlem Renaissance pioneered by Alain Locke. In the 1920s and 1930s, African-American music, literature, and art gained wide notice. Authors such as Zora Neale Hurston and Nella Larsen and poets such as Langston Hughes, Claude McKay, and Countee Cullen wrote works describing the African-American experience. Jazz, swing, blues and other musical forms entered American popular music. African-American artists such as William H. Johnson and Palmer Hayden created unique works of art featuring African Americans.
The Harlem Renaissance was also a time of increased political involvement for African Americans. Among the notable African-American political movements founded in the early 20th century are the Universal Negro Improvement Association and the National Association for the Advancement of Colored People. The Nation of Islam, a notable quasi-Islamic religious movement, also began in the early 1930s.
African-American cultural movement
The Black Power movement of the 1960s and 1970s followed in the wake of the non-violent Civil Rights Movement. The movement promoted racial pride and ethnic cohesion in contrast to the focus on integration of the Civil Rights Movement, and adopted a more militant posture in the face of racism. It also inspired a new renaissance in African-American literary and artistic expression generally referred to as the African-American or "Black Arts Movement".
The works of popular recording artists such as Nina Simone ("Young, Gifted and Black") and The Impressions ("Keep On Pushing"), as well as the poetry, fine arts, and literature of the time, shaped and reflected the growing racial and political consciousness. Among the most prominent writers of the African-American Arts Movement were poet Nikki Giovanni; poet and publisher Don L. Lee, who later became known as Haki Madhubuti; poet and playwright Leroi Jones, later known as Amiri Baraka; and Sonia Sanchez. Other influential writers were Ed Bullins, Dudley Randall, Mari Evans, June Jordan, Larry Neal, and Ahmos Zu-Bolton.
Another major aspect of the African-American Arts Movement was the infusion of the African aesthetic, a return to a collective cultural sensibility and ethnic pride that was much in evidence during the Harlem Renaissance and in the celebration of Négritude among the artistic and literary circles in the US, Caribbean, and the African continent nearly four decades earlier: the idea that "black is beautiful". During this time, there was a resurgence of interest in, and an embrace of, elements of African culture within African-American culture that had been suppressed or devalued to conform to Eurocentric America. Natural hairstyles, such as the afro, and African clothing, such as the dashiki, gained popularity. More importantly, the African-American aesthetic encouraged personal pride and political awareness among African Americans.
Music
African-American music is rooted in the typically polyrhythmic music of the ethnic groups of Africa, specifically those in the Western, Sahelean, and Sub-Saharan regions. African oral traditions, nurtured in slavery, encouraged the use of music to pass on history, teach lessons, ease suffering, and relay messages. The African pedigree of African-American music is evident in some common elements: call and response, syncopation, percussion, improvisation, swung notes, blue notes, the use of falsetto, melisma, and complex multi-part harmony. During slavery, Africans in America blended traditional European hymns with African elements to create spirituals.
Many African Americans sing "Lift Every Voice and Sing" in addition to the American national anthem, "The Star-Spangled Banner", or in lieu of it. Written by James Weldon Johnson and John Rosamond Johnson in 1900 to be performed for the birthday of Abraham Lincoln, the song was, and continues to be, a popular way for African Americans to recall past struggles and express ethnic solidarity, faith, and hope for the future. The song was adopted as the "Negro National Anthem" by the NAACP in 1919. Many African-American children are taught the song at school, church or by their families. "Lift Ev'ry Voice and Sing" traditionally is sung immediately following, or instead of, "The Star-Spangled Banner" at events hosted by African-American churches, schools, and other organizations.
In the 19th century, as the result of the blackface minstrel show, African-American music entered mainstream American society. By the early 20th century, several musical forms with origins in the African-American community had transformed American popular music. Aided by the technological innovations of radio and phonograph records, ragtime, jazz, blues, and swing also became popular overseas, and the 1920s became known as the Jazz Age. The early 20th century also saw the creation of the first African-American Broadway shows, films such as King Vidor's Hallelujah!, and operas such as George Gershwin's Porgy and Bess.
Rock and roll, doo wop, soul, and R&B developed in the mid-20th century. These genres became very popular in white audiences and were influences for other genres such as surf. During the 1970s, the dozens, an urban African-American tradition of using rhyming slang to put down one's enemies (or friends), and the West Indian tradition of toasting developed into a new form of music. In the South Bronx the half speaking, half singing rhythmic street talk of "rapping" grew into the hugely successful cultural force known as hip hop.
Contemporary
Hip hop would become a multicultural movement, however, it still remained important to many African Americans. The African-American Cultural Movement of the 1960s and 1970s also fueled the growth of funk and later hip-hop forms such as rap, hip house, new jack swing, and go-go. House music was created in black communities in Chicago in the 1980s. African-American music has experienced far more widespread acceptance in American popular music in the 21st century than ever before. In addition to continuing to develop newer musical forms, modern artists have also started a rebirth of older genres in the form of genres such as neo soul and modern funk-inspired groups.
In contemporary art, black subject matter has been used as raw material to portray the Black experience and aesthetics. The stereotypical ways in which Black facial features were once conveyed in media and entertainment continue to influence art. Dichotomies arise around artworks such as Dana Schutz's Open Casket, a painting based on the murder of Emmett Till that prompted calls to remove and destroy it over the way it conveyed Black pain. Meanwhile, Black artists such as Kerry James Marshall portray the Black body in terms of empowerment and Black invisibility.
The arts
Dance
African-American dance, like other aspects of African-American culture, finds its earliest roots in the dances of the hundreds of African ethnic groups that made up African slaves in the Americas as well as influences from European sources in the United States. Dance in the African tradition, and thus in the tradition of slaves, was a part of both everyday life and special occasions. Many of these traditions such as get down, ring shouts, and other elements of African body language survive as elements of modern dance.
In the 19th century, African-American dance began to appear in minstrel shows. These shows often presented African Americans as caricatures for ridicule to large audiences. The first African-American dance to become popular with white dancers was the cakewalk in 1891. Later dances to follow in this tradition include the Charleston, the Lindy Hop, the Jitterbug and the swing.
During the Harlem Renaissance, African-American Broadway shows such as Shuffle Along helped to establish and legitimize African-American dancers. African-American dance forms such as tap, a combination of African and European influences, gained widespread popularity thanks to dancers such as Bill Robinson and were used by leading white choreographers, who often hired African-American dancers.
Contemporary African-American dance is descended from these earlier forms and also draws influence from African and Caribbean dance forms. Groups such as the Alvin Ailey American Dance Theater have continued to contribute to the growth of this form. Modern popular dance in America is also greatly influenced by African-American dance. American popular dance has also drawn many influences from African-American dance most notably in the hip-hop genre.
One of the uniquely African American forms of dancing, turfing, emerged from social and political movements in the East Bay in the San Francisco Bay Area. Turfing is a hood dance and a response to the loss of African American lives, police brutality, and race relations in Oakland, California. The dance is an expression of Blackness, and one that integrates concepts of solidarity, social support, peace, and the discourse of the state of black people in our current social structures.
Art
From its early origins in slave communities, through the end of the 20th century, African-American art has made a vital contribution to the art of the United States. During the period between the 17th century and the early 19th century, art took the form of small drums, quilts, wrought-iron figures, and ceramic vessels in the southern United States. These artifacts have similarities with comparable crafts in West and Central Africa. In contrast, African-American artisans like the New England–based engraver Scipio Moorhead and the Baltimore portrait painter Joshua Johnson created art that was conceived in a thoroughly western European fashion.
During the 19th century, Harriet Powers made quilts in rural Georgia, United States that are now considered among the finest examples of 19th-century Southern quilting. Later in the 20th century, the women of Gee's Bend developed a distinctive, bold, and sophisticated quilting style based on traditional African-American quilts with a geometric simplicity that developed separately but was like that of Amish quilts and modern art.
After the American Civil War, museums and galleries began more frequently to display the work of African-American artists. Cultural expression in mainstream venues was still limited by the dominant European aesthetic and by racial prejudice. To increase the visibility of their work, many African-American artists traveled to Europe where they had greater freedom. It was not until the Harlem Renaissance that more European Americans began to pay attention to African-American art in America.
During the 1920s, artists such as Raymond Barthé, Aaron Douglas, Augusta Savage, and photographer James Van Der Zee became well known for their work. During the Great Depression, new opportunities arose for these and other African-American artists under the WPA. In later years, other programs and institutions, such as the New York City-based Harmon Foundation, helped to foster African-American artistic talent. Augusta Savage, Elizabeth Catlett, Lois Mailou Jones, Romare Bearden, Jacob Lawrence, and others exhibited in museums and juried art shows, and built reputations and followings for themselves.
In the 1950s and 1960s, there were very few widely accepted African-American artists. Despite this, The Highwaymen, a loose association of 27 African-American artists from Ft. Pierce, Florida, created idyllic, quickly realized images of the Florida landscape and peddled some 50,000 of them from the trunks of their cars. They sold their art directly to the public rather than through galleries and art agents, thus receiving the name "The Highwaymen". Rediscovered in the mid-1990s, today they are recognized as an important part of American folk history. Their artwork is widely collected by enthusiasts and original pieces can easily fetch thousands of dollars in auctions and sales.
The Black Arts Movement of the 1960s and 1970s was another period of resurgent interest in African-American art. During this period, several African-American artists gained national prominence, among them Lou Stovall, Ed Love, Charles White, and Jeff Donaldson. Donaldson and a group of African-American artists formed the Afrocentric collective AfriCOBRA, which remains in existence today. The sculptor Martin Puryear, whose work has been acclaimed for years, was being honored with a 30-year retrospective of his work at the Museum of Modern Art in New York in November 2007. Notable contemporary African-American artists include Willie Cole, David Hammons, Eugene J. Martin, Mose Tolliver, Reynold Ruffins, the late William Tolliver, and Kara Walker.
Literature
African-American literature has its roots in the oral traditions of African slaves in America. The slaves used stories and fables in much the same way as they used music. These stories influenced the earliest African-American writers and poets in the 18th century such as Phillis Wheatley and Olaudah Equiano. These authors reached early high points by telling slave narratives.
During the early 20th century Harlem Renaissance, numerous authors and poets, such as Langston Hughes, W. E. B. Du Bois, and Booker T. Washington, grappled with how to respond to discrimination in America. Authors during the Civil Rights Movement, such as Richard Wright, James Baldwin, and Gwendolyn Brooks wrote about issues of racial segregation, oppression, and other aspects of African-American life. This tradition continues today with authors who have been accepted as an integral part of American literature, with works such as Roots: The Saga of an American Family by Alex Haley, The Color Purple by Alice Walker, Beloved by Nobel Prize-winning Toni Morrison, and fiction works by Octavia Butler and Walter Mosley. Such works have achieved both best-selling and/or award-winning status.
Museums
The African-American Museum Movement emerged during the 1950s and 1960s to preserve the heritage of the African-American experience and to ensure its proper interpretation in American history. Museums devoted to African-American history are found in many African-American neighborhoods. Institutions such as the African American Museum and Library at Oakland, The African American Museum in Cleveland and the Natchez Museum of African American History and Culture were created by African Americans to teach and investigate cultural history that, until recent decades was primarily preserved through oral traditions. Other prominent museums include Chicago's DuSable Museum of African American History and the National Museum of African American History and Culture, part of the Smithsonian Institution in Washington, D.C.
Language
Generations of hardships imposed on the African-American community created distinctive language patterns. Slave owners often intentionally mixed people who spoke different African languages to discourage communication in any language other than English. This, combined with prohibitions against education, led to the development of pidgins, simplified mixtures of two or more languages that speakers of different languages could use to communicate. Examples of pidgins that became fully developed languages include Creole, common to Louisiana, and Gullah, common to the Sea Islands off the coast of South Carolina and Georgia.
African American Vernacular English (AAVE) is a variety (dialect, ethnolect, and sociolect) of the American English language closely associated with the speech of, but not exclusive to, African Americans. While AAVE is academically considered a legitimate dialect because of its logical structure, some of both whites and African Americans consider it slang or the result of a poor command of Standard American English. Many African Americans who were born outside the American South still speak with hints of AAVE or southern dialect. Inner-city African-American children who are isolated by speaking only AAVE sometimes have more difficulty with standardized testing and, after school, moving to the mainstream world for work. It is common for many speakers of AAVE to code switch between AAVE and Standard American English depending on the setting.
Fashion and aesthetics
Attire
The Black Arts Movement, a cultural explosion of the 1960s, saw the incorporation of surviving cultural dress with elements from modern fashion and West African traditional clothing to create a uniquely African-American traditional style. Kente cloth is the best known African textile. These colorful woven patterns, which exist in numerous varieties, were originally made by the Ashanti and Ewe peoples of Ghana and Togo. Kente fabric also appears in a number of Western style fashions ranging from casual T-shirts to formal bow ties and cummerbunds. Kente strips are often sewn into liturgical and academic robes or worn as stoles. Since the Black Arts Movement, traditional African clothing has been popular amongst African Americans for both formal and informal occasions. Other manifestations of traditional African dress in common evidence in African-American culture are vibrant colors, mud cloth, trade beads and the use of Adinkra motifs in jewelry and in couture and decorator fabrics.
Another common aspect of fashion in African-American culture involves the appropriate dress for worship in the Black church. It is expected in most churches that an individual present their best appearance for worship. African-American women in particular are known for wearing vibrant dresses and suits. An interpretation of a passage from the Christian Bible, "...every woman who prays or prophesies with her head uncovered dishonors her head...", has led to the tradition of wearing elaborate Sunday hats, sometimes known as "crowns".
Hair
Hair styling in African-American culture is greatly varied. African-American hair is typically composed of coiled curls, which range from tight to wavy. Many women choose to wear their hair in its natural state. Natural hair can be styled in a variety of ways, including the afro, twist outs, braid outs, and wash and go styles. It is a myth that natural hair presents styling problems or is hard to manage; this myth seems prevalent because mainstream culture has, for decades, attempted to get African American women to conform to its standard of beauty (i.e., straight hair). To that end, some women prefer straightening of the hair through the application of heat or chemical processes. Although this can be a matter of personal preference, the choice is often affected by straight hair being a beauty standard in the West and the fact that hair type can affect employment. However, more and more women are wearing their hair in its natural state and receiving positive feedback. Alternatively, the predominant and most socially acceptable practice for men is to leave one's hair natural.
Often, as men age and begin to lose their hair, the hair is either closely cropped, or the head is shaved completely free of hair. However, since the 1960s, natural hairstyles, such as the afro, braids, and dreadlocks, have been growing in popularity. Despite their association with radical political movements and their vast difference from mainstream Western hairstyles, the styles have attained considerable, but certainly limited, social acceptance.
Maintaining facial hair is more prevalent among African-American men than in other male populations in the US. In fact, the soul patch is so named because African-American men, particularly jazz musicians, popularized the style. The preference for facial hair among African-American men is due partly to personal taste, but also to the fact that they are more prone than other ethnic groups to develop a condition known as pseudofolliculitis barbae, commonly referred to as razor bumps, so many prefer not to shave.
Body image
European-Americans have sometimes appropriated different hair braiding techniques and other forms of African-American hair. There are also individuals and groups who are working towards raising the standing of the African aesthetic among African Americans and internationally as well. This includes efforts toward promoting as models those with clearly defined African features; the mainstreaming of natural hairstyles; and, in women, fuller, more voluptuous body types.
Religion
Christianity
The religious institutions of African-American Christians commonly are referred to collectively as the black church. During slavery, many slaves were stripped of their African belief systems and typically denied free religious practice, forced to become Christian. Slaves managed, however, to hang on to some practices by integrating them into Christian worship in secret meetings. These practices, including dance, shouts, African rhythms, and enthusiastic singing, remain a large part of worship in the African-American church.
African-American churches taught that all people were equal in God's eyes and viewed the doctrine of obedience to one's master taught in white churches as hypocritical, yet they accepted and propagated internal hierarchies and support for corporal punishment of children, among other things. Instead, the African-American church focused on the message of equality and hopes for a better future. Before and after emancipation, racial segregation in America prompted the development of organized African-American denominations. The first of these was the AME Church founded by Richard Allen in 1787.
After the Civil War, the merger of three smaller Baptist groups formed the National Baptist Convention. This organization is the largest African-American Christian denomination and the second largest Baptist denomination in the United States. An African-American church is not necessarily a separate denomination; several predominantly African-American churches exist as members of predominantly white denominations. African-American churches have served to provide African-American people with leadership positions and opportunities to organize that were denied in mainstream American society. Because of this, African-American pastors became the bridge between the African-American and European-American communities and thus played a crucial role in the Civil Rights Movement.
Like many Christians, African-American Christians sometimes participate in or attend a Christmas play. Black Nativity by Langston Hughes is a re-telling of the classic Nativity story with gospel music. Productions can be found in African-American theaters and churches all over the country.
Islam
Generations before the advent of the Atlantic slave trade, Islam was a thriving religion in West Africa due to its peaceful introduction via the lucrative trans-Saharan trade between prominent tribes in the southern Sahara and the Arabs and Berbers in North Africa. Attesting to this fact, the West African scholar Cheikh Anta Diop explained: "The primary reason for the success of Islam in Black Africa [...] consequently stems from the fact that it was propagated peacefully at first by solitary Arabo-Berber travelers to certain Black kings and notables, who then spread it about them to those under their jurisdiction". While many first-generation slaves were often able to retain their Muslim identity, their descendants were not. Slaves were either forcibly converted to Christianity, as was the case in the Catholic lands, or were besieged with gross inconveniences to their religious practice, as in the case of the Protestant American mainland.
In the decades after slavery and particularly during the depression era, Islam reemerged in the form of highly visible and sometimes controversial movements in the African-American community. The first of these of note was the Moorish Science Temple of America, founded by Noble Drew Ali. Ali had a profound influence on Wallace Fard, who later founded the Black nationalist Nation of Islam in 1930. Elijah Muhammad became head of the organization in 1934. Much like Malcolm X, who left the Nation of Islam in 1964, many African-American Muslims now follow traditional Islam.
Many former members of the Nation of Islam converted to Sunni Islam when Warith Deen Mohammed took control of the organization after his father's death in 1975 and taught its members the traditional form of Islam based on the Qur'an. A survey by the Council on American-Islamic Relations shows that 30% of Sunni Mosque attendees are African Americans. In fact, most African-American Muslims are orthodox Muslims, as only 2% are of the Nation of Islam.
Judaism
There are 150,000 African Americans in the United States who practice Judaism. Some of these are members of mainstream Jewish groups like the Reform, Conservative, or Orthodox branches of Judaism; others belong to non-mainstream Jewish groups like the Black Hebrew Israelites. The Black Hebrew Israelites are a collection of African-American religious organizations whose practices and beliefs are derived to some extent from Judaism. Their varied teachings often include the belief that African Americans are descended from the Biblical Israelites.
Studies have shown in the last 10 to 15 years there has been major increase in African-Americans identifying as Jewish. Rabbi Capers Funnye, the first cousin of Michelle Obama, says in response to skepticism by some on people being African-American and Jewish at the same time, "I am a Jew, and that breaks through all color and ethnic barriers."
Other religions
Aside from Christianity, Islam, and Judaism, there are also African Americans who follow Buddhism and a number of other religions. There is a small but growing number of African Americans who participate in African traditional religions, such as West African Vodun, Santería, Ifá, and diasporic traditions like the Rastafari movement. Many of them are immigrants or descendants of immigrants from the Caribbean and South America, where these are practiced. Because of religious practices such as animal sacrifice, which are no longer common among the larger American religions, these groups may be viewed negatively and are sometimes the victims of harassment. Since the Supreme Court's 1993 ruling in favor of the Lukumi Babalu Aye church of Florida, however, there has been no major legal challenge to their right to function as they see fit.
Irreligious beliefs
Life events
For most African Americans, the observance of life events follows the pattern of mainstream American culture. While African Americans and whites often lived apart from one another for much of American history, both groups generally had the same perspective on American culture. There are some traditions, however, that are unique to African Americans.
Some African Americans have created new rites of passage that are linked to African traditions. Some pre-teen and teenage boys and girls take classes to prepare them for adulthood. These classes tend to focus on spirituality, responsibility, and leadership. Many of these programs are modeled after traditional African ceremonies, with the focus largely on embracing African cultures.
To this day, some African-American couples choose to "jump the broom" as a part of their wedding ceremony. Although the practice, which can be traced back to Ghana, fell out of favor in the African-American community after the end of slavery, it has experienced a slight resurgence in recent years as some couples seek to reaffirm their African heritage.
Funeral traditions tend to vary based on a number of factors, including religion and location, but there are a number of commonalities. Probably the most important part of death and dying in the African-American culture is the gathering of family and friends. Either in the last days before death or shortly after death, typically any friends and family members that can be reached are notified. This gathering helps to provide spiritual and emotional support, as well as assistance in making decisions and accomplishing everyday tasks.
The spirituality of death is very important in African-American culture. A member of the clergy or members of the religious community, or both, are typically present with the family through the entire process. Death is often viewed as transitory rather than final. Many services are called homegoings or homecomings, instead of funerals, based on the belief that the person is going home to the afterlife; "Returning to god" or the Earth (also see Euphemism as well as Connotation). The entire end of life process is generally treated as a celebration of the person's life, deeds and accomplishments – the "good things" rather than a mourning of loss. This is most notably demonstrated in the New Orleans jazz funeral tradition where upbeat music, dancing, and food encourage those gathered to be happy and celebrate the homegoing of a beloved friend.
Cuisine
In studying African-American culture, food cannot be left out as one of the mediums for understanding the traditions, religion, interactions, and social and cultural structures of the community. Observing the ways African Americans have prepared and eaten their food since the era of enslavement reveals much about the nature and identity of African-American culture in the United States. Derek Hicks examines the origins of "gumbo", which is considered a soul food by African Americans, in his reference to the intertwinement of food and culture in the African-American community. No written historical evidence of gumbo or its recipes has been found, so through the African-American practice of passing stories and recipes down orally, gumbo came to represent a truly communal dish. Gumbo is said to be "an invention of enslaved Africans and African Americans". By mixing and cooking leftover ingredients from their White owners (often less desirable cuts of meats and vegetables) together into a dish with a consistency between stew and soup, African Americans took the detestable and turned it into something desirable. Through sharing this food in churches with gatherings of their people, they shared not only the food but also experience, feelings, attachment, and a sense of unity that brings the community together.
The cultivation and use of many agricultural products in the United States, such as yams, peanuts, rice, okra, sorghum, grits, indigo dyes, and cotton, can be traced to African influences. African-American foods reflect creative responses to racial and economic oppression and poverty. Under slavery, African Americans were not allowed to eat better cuts of meat, and after emancipation many were often too poor to afford them.
Soul food, a hearty cuisine commonly associated with African Americans in the South (but also common to African Americans nationwide), makes creative use of inexpensive products procured through farming and subsistence hunting and fishing. Pig intestines are boiled and sometimes battered and fried to make chitterlings, also known as "chitlins". Ham hocks and neck bones provide seasoning to soups, beans and boiled greens (turnip greens, collard greens, and mustard greens).
Other common foods, such as fried chicken and fish, macaroni and cheese, cornbread, and hoppin' john (black-eyed peas and rice) are prepared simply. When the African-American population was considerably more rural than it generally is today, rabbit, opossum, squirrel, and waterfowl were important additions to the diet. Many of these food traditions are especially predominant in many parts of the rural South.
Traditionally prepared soul food is often high in fat, sodium, and starch. Highly suited to the physically demanding lives of laborers, farmhands and rural lifestyles generally, it is now a contributing factor to obesity, heart disease, and diabetes in a population that has become increasingly more urban and sedentary. As a result, more health-conscious African Americans are using alternative methods of preparation, eschewing trans fats in favor of natural vegetable oils and substituting smoked turkey for fatback and other, cured pork products; limiting the amount of refined sugar in desserts; and emphasizing the consumption of more fruits and vegetables than animal protein. There is some resistance to such changes, however, as they involve deviating from long culinary tradition.
Holidays and observances
As with other American racial and ethnic groups, African Americans observe ethnic holidays alongside traditional American holidays. Holidays observed in African-American culture are not only observed by African Americans but are widely considered American holidays. The birthday of noted American civil rights leader Martin Luther King, Jr has been observed nationally since 1983. It is one of three federal holidays named for an individual.
Black History Month is another example of another African-American observance that has been adopted nationally and its teaching is even required by law in some states. Black History Month is an attempt to focus attention on previously neglected aspects of the American history, chiefly the lives and stories of African Americans. It is observed during the month of February to coincide with the founding of the NAACP and the birthdays of Frederick Douglass, a prominent African-American abolitionist, and Abraham Lincoln, the United States president who signed the Emancipation Proclamation.
On June 7, 1979 President Jimmy Carter decreed that June would be the month of black music. For the past 28 years, presidents have announced to Americans that Black Music Month (also called African-American Music Month) should be recognized as a critical part of American heritage. Black Music Month is highlighted with various events urging citizens to revel in the many forms of music from gospel to hip-hop. African-American musicians, singers, and composers are also highlighted for their contributions to the nation's history and culture.
Less-widely observed outside of the African-American community is Emancipation Day popularly known as Juneteenth or Freedom Day, in recognition of the official reading of the Emancipation Proclamation on June 19, 1865, in Texas. Juneteenth is a day when African Americans reflect on their unique history and heritage. It is one of the fastest growing African-American holidays with observances in the United States. Another holiday not widely observed outside of the African-American community is the birthday of Malcolm X. The day is observed on May 19 in American cities with a significant African-American population, including Washington, D.C.
Another noted African-American holiday is Kwanzaa. Like Emancipation Day, it is not widely observed outside of the African-American community, although it is growing in popularity with both African-American and African communities. African-American scholar and activist "Maulana" Ron Karenga invented the festival of Kwanzaa in 1966, as an alternative to the increasing commercialization of Christmas. Derived from the harvest rituals of Africans, Kwanzaa is observed each year from December 26 through January 1. Participants in Kwanzaa celebrations affirm their African heritage and the importance of family and community by drinking from a unity cup; lighting red, black, and green candles; exchanging heritage symbols, such as African art; and recounting the lives of people who struggled for African and African-American freedom.
Negro Election Day is another festival derived from African rituals, specifically those of West Africa; it revolves around the election of a black official in the New England colonies during the 18th century.
Names
Although many African-American names are common among the larger population of the United States, distinct naming trends have emerged within African-American culture. Prior to the 1950s and 1960s, most African-American names closely resembled those used within European American culture. A dramatic shift in naming traditions began to take shape in the 1960s and 1970s in America. With the rise of the mid-century Civil Rights Movement, there was a marked increase in names of various origins. The practice of adopting neo-African or Islamic names gained popularity during that era. Efforts to recover African heritage inspired the selection of names with deeper cultural significance. Before this, using African names was uncommon because African Americans were several generations removed from the last ancestor to have an African name, as slaves were often given European names and most surnames are of Anglo origin.
African-American names have origins in many languages including French, Latin, English, Arabic, and African languages. One very notable influence on African-American names is the Muslim religion. Islamic names entered the popular culture with the rise of The Nation of Islam among Black Americans with its focus on civil rights. The popular name "Aisha" has origins in the Qur'an. Despite the origins of these names in the Muslim religion and the place of the Nation of Islam in the civil rights movement, many Muslim names such as Jamal and Malik entered popular usage among Black Americans simply because they were fashionable, and many Islamic names are now commonly used by African Americans regardless of their religion. Names of African origin began to crop up as well. Names like Ashanti, Tanisha, Aaliyah, Malaika have origins in the continent of Africa.
By the 1970s and 1980s, it had become common within the culture to invent new names, although many of the invented names took elements from popular existing names. Prefixes such as La/Le-, Da/De-, Ra/Re-, or Ja/Je- and suffixes such as -ique/iqua, -isha, and -aun/-awn are common, as well as inventive spellings for common names.
Family
When slavery was practiced in the United States, it was common for families to be separated through sale. Even during slavery, however, many African-American families managed to maintain strong familial bonds. Free African men and women, who managed to buy their own freedom by being hired out, who were emancipated, or who had escaped their masters, often worked long and hard to buy the members of their families who remained in bondage and send for them.
Others, separated from blood kin, formed close bonds based on fictive kin; play relations, play aunts, cousins, and the like. This practice, a holdover from African oral traditions such as sanankouya, survived Emancipation, with non-blood family friends commonly accorded the status and titles of blood relations. This broader, more African concept of what constitutes family and community, and the deeply rooted respect for elders that is part of African traditional societies, may be the genesis of the common use of the terms like "cousin" (or "cuz"), "aunt", "uncle", "brother", "sister", "Mother", and "Mama" when addressing other African-American people, some of whom may be complete strangers.
African-American family structure
Immediately after slavery, African-American families struggled to reunite and rebuild what had been taken. As late as 1960, when most African Americans lived under some form of segregation, 78 percent of African-American families were headed by married couples. This number steadily declined during the latter half of the 20th century. For the first time since slavery, a majority of African-American children live in a household with only one parent, typically the mother.
This apparent weakness is balanced by mutual-aid systems established by extended family members to provide emotional and economic support. Older family members pass on social and cultural traditions such as religion and manners to younger family members. In turn, the older family members are cared for by younger family members when they cannot care for themselves. These relationships exist at all economic levels in the African-American community, providing strength and support both to the African-American family and the community.
Politics and social issues
Since the passing of the Voting Rights Act, African Americans are voting and being elected to public office in increasing numbers. As of 2008 there were approximately 10,000 African-American elected officials in America. African Americans are overwhelmingly Democratic. Only 11 percent of African Americans voted for George W. Bush in the 2004 Presidential Election.
Social issues such as racial profiling, the racial disparity in sentencing, higher rates of poverty, lower access to health care and institutional racism in general are important to the African-American community. While the divide between black and white Americans on racial and fiscal issues has remained consistently wide for decades, African Americans tend to hold the same optimism and concern for America as any other ethnic group.
These political and social sentiments have been expressed through hip-hop culture, including graffiti, break-dancing, rapping, and more. This cultural movement makes statements about historical as well as present-day topics like street culture and incarceration, and oftentimes expresses a call for change. Hip-hop artists play a prominent role in activism and in fighting social injustices, and the genre has a cultural role in defining and reflecting on political and social issues.
An area where African Americans in general outstrip whites is in their condemnation of homosexuality. Prominent leaders in the Black church have demonstrated against gay rights issues such as gay marriage. This stands in stark contrast to the down-low phenomenon of covert male–male sexual acts. There are those within the community who take a different position, notably the late Coretta Scott King and the Reverend Al Sharpton, the latter of whom, when asked in 2003 whether he supported gay marriage, replied that he might as well have been asked if he supported black marriage or white marriage.
African-American population centers
African-American neighborhoods are types of ethnic enclaves found in many cities in the United States. The formation of African-American neighborhoods is closely linked to the history of segregation in the United States, either through formal laws, or as a product of social norms. Despite this, African-American neighborhoods have played an important role in the development of nearly all aspects of both African-American culture and broader American culture.
Wealthy African-American communities
Many affluent African-American communities exist today, including the following: Woodmore, Maryland; Hillcrest, Rockland County, New York; Redan and Cascade Heights, Georgia; Mitchellville, Maryland; Desoto, Texas; Quinby, South Carolina; Forest Park, Oklahoma; Mount Airy, Philadelphia, Pennsylvania.
Ghettos
Due to segregated conditions and widespread poverty, some African-American neighborhoods in the United States have been called "ghettos". The use of this term is controversial and, depending on the context, potentially offensive. Despite mainstream America's use of the term "ghetto" to signify a poor urban area populated by ethnic minorities, those living in the area often used it to signify something positive. The African-American ghettos did not always contain dilapidated houses and deteriorating projects, nor were all of their residents poverty-stricken. For many African Americans, the ghetto was "home", a place representing authentic "blackness" and a feeling, passion, or emotion derived from rising above the struggle and suffering of being of African descent in America.
Langston Hughes relays in the "Negro Ghetto" (1931) and "The Heart of Harlem" (1945): "The buildings in Harlem are brick and stone/And the streets are long and wide,/But Harlem's much more than these alone,/Harlem is what's inside." Playwright August Wilson used the term "ghetto" in Ma Rainey's Black Bottom (1984) and Fences (1987), both of which draw upon the author's experience growing up in the Hill District of Pittsburgh, an African-American ghetto.
Although African-American neighborhoods may suffer from civic disinvestment, with lower-quality schools, less-effective policing and fire protection, there are institutions such as churches and museums and political organizations that help to improve the physical and social capital of African-American neighborhoods. In African-American neighborhoods the churches may be important sources of social cohesion. For some African Americans, the kind spirituality learned through these churches works as a protective factor against the corrosive forces of racism. Museums devoted to African-American history are also found in many African-American neighborhoods.
Many African-American neighborhoods are located in inner cities, and these are the mostly residential neighborhoods located closest to the central business district. The built environment is often row houses or brownstones, mixed with older single-family homes that may be converted to multi-family homes. In some areas there are larger apartment buildings. Shotgun houses are an important part of the built environment of some southern African-American neighborhoods. The houses consist of three to five rooms in a row with no hallways. This African-American house design is found in both rural and urban southern areas, mainly in African-American communities and neighborhoods.
In Black Rednecks and White Liberals, Thomas Sowell suggested that modern urban black ghetto culture is rooted in the white Cracker culture of the North Britons and Scots-Irish who migrated from the generally lawless border regions of Britain to the American South, where they formed a redneck culture common to both blacks and whites in the antebellum South. According to Sowell, characteristics of this culture included lively music and dance, violence, unbridled emotions, flamboyant imagery, illegitimacy, religious oratory marked by strident rhetoric, and a lack of emphasis on education and intellectual interests. Because redneck culture proved counterproductive, "that culture long ago died out ... among both white and black Southerners, while still surviving today in the poorest and worst of the urban black ghettos", which Sowell described as being characterized by "brawling, braggadocio, self-indulgence, [and] disregard of the future", and where "belligerence is considered being manly and crudity is considered cool, while being civilized is regarded as 'acting white'." Sowell asserts that white liberal Americans have perpetuated this "counterproductive and self-destructive lifestyle" among black Americans living in urban ghettos through "the welfare state, and look-the-other-way policing, and smiling at 'gangsta rap'".
See also
- African-American Civil Rights Movement (1865–95)
- African-American Civil Rights Movement (1896–1954)
- African-American Civil Rights Movement (1954–68) in popular culture
- Cool (aesthetic) § African Americans
- Culture of the Southern United States
- Historically black colleges and universities
- Imaging Blackness
- Mythology and commemorations of Benjamin Banneker
References
- ^ Gomez, Michael Angelo (1998). Exchanging Our Country Marks : The Transformation of African Identities in the Colonial and Antebellum South: The Transformation of African Identities in the Colonial and Antebellum South. University of North Carolina Press. p. 12. ISBN 0807861715.
- ^ a b Clayborn Carson, Emma J. Lapsansky-Werner, and Gary B. Nash, The Struggle for Freedom: A History of African Americans, Vol 1 to 1877 ( Prentice Hall, 2012) p.18
- ^ James, Jessica S. (June 2008). "What Neighborhood Poverty Studies Can Learn from African American Studies" (PDF). The Journal of Pan African Studies 2 (4).
- ^ Herskovits, Melville (1990). The Myth of the Negro Past. Sidney Mintz. Beacon Press. p. 368. ISBN 0-8070-0905-9. http://ann.sagepub.com/cgi/framedreprint/222/1/226.
- ^ Opala, Joseph. "The Gullah: Rice, Slavery, and the Sierra Leone Connection". Yale University. Archived from the original on 2008-05-18. https://web.archive.org/web/20080518031713/http://www.yale.edu/glc/gullah/04.htm. Retrieved 2008-05-22.
- ^ "South Carolina – African American Culture, Heritage". South Carolina Information Highway. http://www.sciway.net/afam/culture.html. Retrieved 2008-05-21.
- ^ a b "African American Voices: Slave Culture". University of Houston. 2007-06-02. Archived from the original on 2008-05-07. https://web.archive.org/web/20080507214116/http://www.digitalhistory.uh.edu/black_voices/voices_display.cfm?id=23. Retrieved 2007-06-02.
- ^ Price, Richard (1996). Maroon Societies: Rebel Slave Communities in the Americas. Anchor Books. pp. 1–33.
- ^ Geneviève Fabre, Robert G. O'Meally (1994). History and Memory in African-American Culture. Oxford University Press. pp. 12–208.
- ^ a b c d e Maggie Papa; Amy Gerber; Abeer Mohamed. "African American Culture through Oral Tradition". George Washington University. Archived from the original on 2008-05-27. https://web.archive.org/web/20080527050107/http://www.gwu.edu/~e73afram/ag-am-mp.html. Retrieved May 17, 2007.
- ^ "Editor's Analysis of "The Wonderful Tar Baby Story"". University of Virginia. http://xroads.virginia.edu/~ug97/remus/anatar.html. Retrieved 2007-10-07.
- ^ "John Henry: The Steel Driving Man". ibiblio. http://www.ibiblio.org/john_henry/index.html. Retrieved 2007-10-07.
- ^ "Uncle Remus". UncleRemus.com. 2003. http://www.uncleremus.com/index.html. Retrieved 2007-10-10.
- ^ "EDITOR'S PREFACES". UncleRemus.com. 2003. http://www.uncleremus.com/preface.html. Retrieved 2007-10-10.
- ^ Raboteau, Albert J. (1995). A Fire in the Bones: Reflections on African-American Religious History. Beacon Press. ISBN 0-8070-0933-4. https://books.google.com/?id=RvSIg9of6TYC. Retrieved 2007-10-07.
- ^ Fabre and O'Meally, pp. 219–244.
- ^ DUNBAR, EVE E., ed (2013-01-01). Black Regions of the Imagination. African American Writers between the Nation and the World. Temple University Press. pp. 16–57. ISBN 9781439909423.
- ^ a b Michael L. Hecht, Ronald L. Jackson, Sidney A. Ribeau (2003). African American Communication: Exploring Identity and Culture. Routledge. pp. 3–245.
- ^ Miazga, Mark (1998-12-15). "The Spoken Word Movement of 1990s". Michigan State University. http://www.msu.edu/~miazgama/spokenword.htm. Retrieved 2007-10-07.
- ^ Johnson, William H.. "The Harlem Renaissance". fatherryan.org. Archived from the original on 2007-06-01. https://web.archive.org/web/20070601152141/http://www.fatherryan.org/harlemrenaissance/. Retrieved 2007-06-01.
- ^ "Black Power". King Encyclopedia. Stanford University. http://mlk-kpp01.stanford.edu/. Retrieved 2007-06-02.
- ^ "Black Power". Black Arts Movement. University of Michigan. Archived from the original on 2008-02-27. https://web.archive.org/web/20080227210607/http://www.umich.edu/~eng499/concepts/power.html. Retrieved 2007-06-02.
- ^ "Nikki Giovanni". Black Arts Movement. University of Michigan. Archived from the original on 2008-03-03. https://web.archive.org/web/20080303085941/http://www.umich.edu/~eng499/people/giovanni.html. Retrieved 2007-06-02.
- ^ "Black Aesthetic". Black Arts Movement. University of Michigan. Archived from the original on 2008-01-27. https://web.archive.org/web/20080127061217/http://www.umich.edu/~eng499/concepts/blaes.html. Retrieved 2007-06-02.
- ^ Stewart, Earl L. (August 1, 1998). African American Music: An Introduction. Prentice Hall International. pp. 5–15. ISBN 0-02-860294-3. https://books.google.com/?id=fLIJAAAACAAJ&dq=.
- ^ Bond, Julian, ed (2000). Lift Every Voice and Sing: A Celebration of the Negro National Anthem; 100 Years, 100 Voices. Random House. ISBN 0-679-46315-1. https://books.google.com/?id=s0YKAAAACAAJ. Retrieved 2007-10-14.
- ^ "Lift Every Voice and Sing". National Public Radio. 2002-02-04. https://www.npr.org/programs/morning/features/patc/liftvoice/index.html. Retrieved 2007-06-01.
- ^ McIntyre, Dean B. (2000-01-20). "Lift Every Voice -- 100 Years Old". General Board of Discipleship. Archived from the original on 2008-05-07. https://web.archive.org/web/20080507152240/http://www.gbod.org/worship/default.asp?act=reader&item_id=1786. Retrieved 2007-06-01.
- ^ (1986) "The Roots of Hip Hop". RM Hip Hop Magazine. Retrieved on 2007-11-06.
- ^ Southern., Eileen (1997). The Music of Black Americans: A History (3rd ed.). W. W. Norton & Company. ISBN 0-393-97141-4.
- ^ Crocker, Lizzie (2017-03-23). "The Controversial Painting of Emmett Till Stays on Show at The Whitney". http://www.thedailybeast.com/articles/2017/03/23/the-controversial-painting-of-emmett-till-stays-on-show-at-the-whitney.html.
- ^ Wood, Peter H.. ""Gimmie de Knee Bone Bent":African Body Language and the Evolution of American Dance Forms". Free to Dance: Behind the Dance. PBS. https://www.pbs.org/wnet/freetodance/behind/behind_gimme2.html. Retrieved 2007-10-30.
- ^ "Cakewalk Dance". Streetswing Dance History Archive. http://www.streetswing.com/histmain/z3cake1.htm. Retrieved 2007-04-01.
- ^ a b Ballroom, Boogie, Shimmy Sham, Shake: A Social and Popular Dance Reader. Julie Malnig. Edition: illustrated. University of Illinois Press. 2009. pp. 19-23.
- ^ "African American Dance, a history!". The African American Registry. Archived from the original on May 5, 2007. https://web.archive.org/web/20070505054635/http://www.aaregistry.com/african_american_history/749/African_American_Dance_a_history. Retrieved 2007-06-02.
- ^ Bragin, Naomi Elizabeth. "Black Street Movement: Turf Dance, YAK Films and Politics of Sitation in Oakland, California." ["Collected Work: Dance and the social city. Published by: Birmingham, Ala: Society of Dance History Scholars, 2012. Pages: 51–57.
- ^ "Shot and Captured." Tdr-The Drama Review-The Journal of Performance Studies, vol. 58, no. 2, n.d., pp. 99–114.
- ^ "From Streets To Stage, Two Dance Worlds See Harmonization And Chaos." Weekend Edition Saturday, 23 Jan. 2016. Literature Resource Center, go.galegroup.com.libproxy.berkeley.edu/ps/i.do?p=LitRC&sw=w&u=ucberkeley&v=2.1&it=r&id=GALE%7CA442019322&asid=3532d05de97e18b5b3e990197018a880. Accessed 8 Nov. 2017.
- ^ Simms, Renee. "Immortal Dance in the Age of Michael Brown." Southwest Review, no. 1, 2017, p. 74.
- ^ "Conscious Quiet as a Mode of Black Visual Culture." Black Camera: The New Series, vol. 8, no. 1, Fall 2016, pp. 146–154.
- ^ Patton., Sharon F. (1998). African-American Art. Oxford University Press. ISBN 0-19-284213-7. https://books.google.com/?id=2598QQgoRP8C. Retrieved 2007-10-14.
- ^ Powell, Richard (April 2005). African American Art. Oxford University Press. ISBN 0-465-00071-1. http://www.aawc.com/Submission_Art.html. Retrieved 2007-10-14.
- ^ "Harriet Powers". Early Women Masters. Archived from the original on 2007-10-18. https://web.archive.org/web/20071018043355/http://earlywomenmasters.net/powers/index.html. Retrieved 2007-10-14.
- ^ "The Quilts of Gees Bend". Tinwood Ventures. 2004. http://www.quiltsofgeesbend.com/history/. Retrieved 2007-10-14.
- ^ Southern, Eileen. Music of Negro Americans: A History. New York: Norton, 1997. pp. 404–409.
- ^ "Aaron Douglas (1898–1979)". University of Michigan. Archived from the original on 2006-07-13. https://web.archive.org/web/20060713062733/http://www.si.umich.edu/chico/Harlem/text/adouglas.html. Retrieved 2007-10-04.
- ^ "Augusta Fells Savage (1882–1962)". University of Michigan. Archived from the original on 2007-07-13. https://web.archive.org/web/20070713041014/http://www.si.umich.edu/CHICO/Harlem/text/asavage.html. Retrieved 2007-10-04.
- ^ "James Van Der Zee Biography (1886–1983)". biography.com. http://www.biography.com/search/article.do?id=9515411. Retrieved 2007-10-04.
- ^ Hall, Ken (2004). "The Highwaymen". McElreath Printing & Publishing, Inc. Archived from the original on 2007-10-18. https://web.archive.org/web/20071018072750/http://go-star.com/framer/highwaymen.htm. Retrieved 2007-10-14.
- ^ "Updates & Snapshots 2006". James Gibson. 2000. Archived from the original on 2008-03-11. https://web.archive.org/web/20080311011426/http://www.gibson-highwaymen.com/generic81.html. Retrieved 2007-10-14.
- ^ Painting by a Florida Highwayman
- ^ Smith, Roberta (September 9, 2007). "Solo Museum Shows: Not the Usual Suspects". The New York Times. https://www.nytimes.com/2007/09/09/arts/design/09smith.html. Retrieved 2007-11-06.
- ^ "African Americans in the Visual Arts". Long Island University. Archived from the original on May 9, 2007. https://web.archive.org/web/20070509035918/http://www.liunet.edu/cwis/cwp/library/aavaahp.htm. Retrieved 2007-06-02.
- ^ Ward, Jr., Jerry W. (April 7, 1998). M. Graham. ed. To Shatter Innocence: Teaching African American Poetry. Teaching African American Literature. Routledge. p. 146. ISBN 0-415-91695-X.
- ^ African American Museums Association: History.
- ^ Natchez Museum Showcases African American Heritage Today in Mississippi, accessed March 2, 2016
- ^ "African-American Museums, History, and the American Ideal" by John E. Fleming. Journal of American History, Vol. 81, No. 3, The Practice of American History: A Special Issue (December 1994), pp. 1020–1026.
- ^ "Slavery in America: Historical Overview". slaveryinamerica.org. Archived from the original on 2008-01-21. https://web.archive.org/web/20080121084057/http://www.slaveryinamerica.org/history/hs_es_overview.htm. Retrieved May 17, 2007.
- ^ "Creole language". Columbia Electronic Encyclopedia, 6th ed. Columbia University Press. 2007. http://www.infoplease.com/ce6/society/A0813989.html. Retrieved 2007-06-02.
- ^ "Gullah". Columbia Electronic Encyclopedia, 6th ed. Columbia University Press. 2007. http://www.infoplease.com/ce6/society/A0822152.html. Retrieved 2007-06-02.
- ^ Labov, William (1972). Language in the Inner City: Studies in Black English Vernacular. Philadelphia: University of Pennsylvania Press. ISBN 0-8122-1051-4. https://books.google.com/?id=snEEdFKLJ5cC. Retrieved 2007-10-14.
- ^ Oubré, Alondra (1997). "Black English Vernacular (Ebonics) and Educability A Cross-Cultural Perspective on Language, Cognition, and Schooling". African American Web Connection. http://www.aawc.com/ebonicsarticle.html. Retrieved 2007-06-02.
- ^ "What lies ahead?". Do you speak American?. PBS. 2005. https://www.pbs.org/speak/ahead/. Retrieved 2007-10-30.
- ^ Coulmas, Florian (2005). Sociolinguistics: The Study of Speakers' Choices. Cambridge University Press. p. 177. ISBN 1-397-80521-8. https://books.google.com/?id=yPUGpzAUOwsC. Retrieved 2007-10-30.
- ^ Dewey, William Joseph; Dele Jẹgẹdẹ; Rosalind I. J. Hackett (2003). The World Moves, We Follow: Celebrating African Art. Knoxville, Tenn.: Frank H. McClung Museum, The University of Tennessee. p. 23. ISBN 1-880174-05-7.
- ^ "Wrapped in Pride: Ghanaian Kente and African American Identity". National Museum of African Art. http://www.nmafa.si.edu/exhibits/kente/about.htm. Retrieved May 17, 2007.
- ^ 1 Corinthians 11:5–6 (NIV).
- ^ "Fashion". Dickinson College. Archived from the original on 2008-08-01. https://web.archive.org/web/20080801083322/http://alpha.dickinson.edu/departments/amos/mosaic01steel/je/fashion.html. Retrieved 2007-10-14.
- ^ "Tradition of Hats in the African-American Church". PBS. https://www.pbs.org/wnet/religionandethics/week724/feature.html. Retrieved 2007-10-14.
- ^ Byrd, Ayana; Tharps, Lori (January 12, 2002). Hair Story: Untangling the Roots of Black Hair in America. New York: St. Martin's Press. p. 162. ISBN 0-312-28322-9.
- ^ Washington, Darren Taylor (2007-05-22). "Film Encourages Africans and African Americans to Cultivate Natural Hair". Voice of America. http://www.voanews.com/english/archive/2007-05/Film-Encourages-Africans-and-African-Americans-to-Cultivate-Natural-Hair.cfm?CFID=4575142&CFTOKEN=69491174. Retrieved 2008-06-24.
- ^ McDonald, Ashley (2008-04-07). "The Rise of Natural Hair". The Meter. Archived from the original on 2008-04-11. https://web.archive.org/web/20080411062102/http://media.www.tsumeter.com/media/storage/paper956/news/2008/04/07/ArtsCulture/The-Rise.Of.Natural.Hair-3306856.shtml. Retrieved 2008-06-24.
- ^ a b "African American Hairstyles". Dickinson College. Archived from the original on 2008-07-31. https://web.archive.org/web/20080731094158/http://alpha.dickinson.edu/departments/amos/mosaic01steel/je/hair.html. Retrieved 2007-10-14.
- ^ Lacy, D. Aaron. "The Most Endangered Title VII Plaintiff?: African-American Males and Intersectional Claims." Nebraska Law Review, Vol. 86, No. 3, 2008, pp. 14–15. Retrieved 11-08-2007.
- ^ Green, Penelope."Ranting; Stubble trouble." The New York Times, November 8, 2007. Retrieved 11-08-2007.
- ^ Lacy, op. cit.
- ^ Jones, LaMont (April 23, 2007). "Black and beautiful: African-American women haven't had an easy time in the fashion world". Pittsburgh Post-Gazette. http://www.post-gazette.com/pg/07113/780189-314.stm. Retrieved 2007-10-14.
- ^ "The Study of African American Religion". Harvard University. http://my.hds.harvard.edu/icb/icb.do?keyword=k5867. Retrieved 2007-06-01.
- ^ a b c Maffly-Kipp, Laurie. "African American Religion, Pt. I: To the Civil War". University of North Carolina at Chapel Hill. http://www.nhc.rtp.nc.us/tserve/nineteen/nkeyinfo/aareligion.htm. Retrieved May 15, 2007.
- ^ Maffly-Kipp, Laurie F. (May 2001). "The Church in the Southern Black Community". University of North Carolina. http://docsouth.unc.edu/church/intro.html. Retrieved May 21, 2007.
- ^ "Amazing grace: 50 years of the Black church". Ebony. April 1995. http://findarticles.com/p/articles/mi_m1077/is_n6_v50/ai_16749588. Retrieved 2007-10-14.
- ^ Abdul Alkalimat and Associates. Religion and the Black Church. Introduction to Afro-American Studies (6th ed.). Chicago: Twenty-first Century Books and Publications. http://eblackstudies.org/intro/chapter10.htm.
- ^ "Intiman Theater: Black Nativity". Intiman Theater. Archived from the original on 2008-01-09. https://web.archive.org/web/20080109055932/http://www.intiman.org/2007season/nativity.html. Retrieved 2007-10-13.
- ^ "Black Nativity". The National Center of African American Artists. 2004. Archived from the original on 2007-10-09. https://web.archive.org/web/20071009075118/http://www.ncaaa.org/nativity.html. Retrieved 2007-10-13.
- ^ Cheikh Anta Diop, Precolonial Black Africa, p. 163.
- ^ Sylvaine Diouf, Servants of Allah
- ^ Huda. "African-American Muslims". About.com. http://islam.about.com/library/weekly/aa012601a.htm. Retrieved 2007-06-02.
- ^ Wood, Daniel B. (February 14, 2002). "America's black Muslims close a rift". Christian Science Monitor. http://www.csmonitor.com/2002/0214/p03s01-ussc.html. Retrieved 2007-11-13.
- ^ Wood, Daniel B. (February 14, 2002). "America's black Muslims close a rift". Christian Science Monitor. Archived from the original on April 26, 2006. https://web.archive.org/web/20060426232004/http://www.csmonitor.com/2002/0214/p03s01-ussc.html. Retrieved 2007-11-13.
- ^ a b Rachel Pomerance, Judaism Drawing More Black Americans, The Atlanta Journal-Constitution, June 18, 2008.
- ^ Angell, Stephen W. (May 2001). "Black Zion: African American Religious Encounters with Judaism". The North Star 4 (2). ISSN 1094-902X. Retrieved on 2007-10-19.
- ^ Niko Koppel, Black Rabbi Reaches Out to Mainstream of His Faith, The New York Times, March 16, 2008.
- ^ Dale, Maryclaire (August 9, 2003). "African Religions Attracting Americans". African Traditional Religion. afgen.com. http://afgen.com/african_religions.html. Retrieved 2007-06-02.
- ^ "A Religious Portrait of African-Americans". 30 January 2009. http://www.pewforum.org/A-Religious-Portrait-of-African-Americans.aspx. Retrieved 22 October 2017.
- ^ Grimes, Ronald L. (2002). Deeply Into the Bone: Re-Inventing Rites of Passage. University of California Press. pp. 145–146. ISBN 0-520-23675-0. https://books.google.com/?id=v_AXM_qgwTAC. Retrieved 2007-10-14.
- ^ "'Jumping The Broom' a short history..". African American Registry. July 15, 2005. Archived from the original on October 27, 2006. https://web.archive.org/web/20061027020054/http://www.aaregistry.com/african_american_history/2905/Jumping_The_Broom_a_short_history__. Retrieved 2007-10-14.
- ^ Anyiam, Thony. "Who should jump the broom?". Anyiams Creations International. http://www.anyiams.com/jumping_the_broom.htm. Retrieved 2007-10-14.
- ^ "Death and Dying in the Black Experience: An Interview with Ronald K. Barrett, PhD". Education Development Center, Inc.. 2001-09-25. Archived from the original on 2007-10-09. https://web.archive.org/web/20071009215354/http://www2.edc.org/lastacts/archives/archivesSept01/intlpersp.asp. Retrieved 2007-10-13.
- ^ "Jazz Funerals". PBS. 2004-01-30. https://www.pbs.org/wnet/religionandethics/week722/feature.html. Retrieved 2007-10-13.
- ^ Hicks, Derek S. "An Unusual Feast: Gumbo and the Complex Brew of Black Religion." In Religion, Food, and Eating in North America, edited by Benjamin E. Zeller, Marie W. Dallam, Reid L. Neilson, and Nora L. Rubel, 134-154. New York: Columbia University Press, 2014.
- ^ Hicks, Derek S. "An Unusual Feast: Gumbo and the Complex Brew of Black Religion." In Religion, Food, and Eating in North America, edited by Benjamin E. Zeller, Marie W. Dallam, Reid L. Neilson, and Nora L. Rubel, 136. New York: Columbia University Press, 2014.
- ^ Holloway, Joseph E. (2005). Africanisms in American Culture. Bloomington, Ind.: Indiana University Press. p. 48. ISBN 0-253-34479-4.
- ^ a b "A History of Soul Food". 20th Century Fox. Archived from the original on 2008-06-11. https://web.archive.org/web/20080611094806/http://www.foxhome.com/soulfood/htmls/soulfood.html. Retrieved 2007-06-02.
- ^ Jonsson, Patrik (February 6, 2006). "Backstory: Southern discomfort food". The Christian Science Monitor. http://www.csmonitor.com/2006/0206/p20s01-lifo.html. Retrieved 2007-06-02.
- ^ a b CNN Student News (2007-01-31). "Extra!: History of Black History Month". CNN. http://www.cnn.com/2006/EDUCATION/01/30/extra.black.history.month/index.html. Retrieved 2007-06-01.
- ^ "5 USC 6103". Cornell Law School. https://www.law.cornell.edu/uscode/text/05/6103-. Retrieved 2007-06-01.
- ^ "Black Music Month". http://www.classbrain.com/artholiday/publish/article_141.shtml. Retrieved 22 October 2017.
- ^ "History of Juneteenth". juneteenth.com. 2005. http://www.juneteenth.com/history.htm. Retrieved March 15, 2007.
- ^ "Malcolm X's Birthday". University of Kansas Medical Center. 2003. Archived from the original on June 1, 2007. https://web.archive.org/web/20070601172051/http://www3.kumc.edu/diversity/ethnic_relig/malcolm.html. Retrieved May 15, 2007.
- ^ "Fundamental Questions About Kwanzaa". OfficialKwanzaaWebsite.org. http://www.officialkwanzaawebsite.org/faq.shtml. Retrieved May 15, 2007.
- ^ a b c Wattenberg, Laura (May 7, 2013). The Baby Name Wizard, Revised 3rd Edition: A Magical Method for Finding the Perfect Name for Your Baby. Harmony. ISBN 0770436471.
- ^ Moskowitz, Clara (November 30, 2010). "Baby Names Reveal More About Parents Than Ever Before". Live Science. http://www.livescience.com/9027-baby-names-reveal-parents.html.
- ^ "Finding Our History: African-American Names". Family Education. http://life.familyeducation.com/baby/baby-names/45480.html. Retrieved 2007-06-05.
- ^ Zax, David (Aug 25, 2008). "What's up with black names, anyway?". Salon.com. http://www.salon.com/2008/08/25/creative_black_names/.
- ^ Rosenkrantz, Linda; Satran, Paula Redmond (August 16, 2001). Baby Names Now: From Classic to Cool--The Very Last Word on First Names. St. Martin's Griffin. ISBN 0312267576. https://www.amazon.com/Baby-Names-Now-Classic-Cool--The/dp/B0009X1MMS/ref=sr_1_1?ie=UTF8&qid=1393738023&sr=8-1&keywords=Baby+Names+Now%3A+From+Classic+to+Cool--The+Very+Last+Word+on+First+Names.
- ^ Lack, Evonne. "Popular African American Names". http://www.babycenter.com/0_popular-african-american-names_10329236.bc. Retrieved 12 February 2014.
- ^ Conley, Dalton (March 10, 2010). "Raising E and Yo...". Psychology Today.
- ^ Thomas Sowell, Affirmative Action around the World, 2004. Basic Books. pp. 115–156.
- ^ Wilder-Hamilton, Elonda R. (2002). "Uncovering the Truth: Understanding the Impact of American Culture on the Black Male Black Female Relationship". The Black Agenda. Archived from the original on 2008-04-07. https://web.archive.org/web/20080407000815/http://www.blackagenda.com/conferences/2002nbfc/wilderhamilton.htm. Retrieved 2007-06-03.
- ^ Martin, Elmer P. (1980). The Black Extended Family. University of Chicago Press. ISBN 0-226-50797-1. https://books.google.com/?id=8xSQEZejTk4C.
- ^ Scott, Janny (2008-03-23). "What Politicians Say When They Talk About Race". The New York Times. https://www.nytimes.com/2008/03/23/weekinreview/23scott.html. Retrieved 2008-06-24.
- ^ Bositis, David (2001). "The Black Vote in 2004" (PDF). The Joint Center for Political and Economic Studies. Archived from the original on 2007-06-20. https://web.archive.org/web/20070620035118/http://www.jointcenter.org/publications1/publication-PDFs/BlackVote.pdf. Retrieved 2007-05-18.
- ^ "Threat and Humiliation: Racial Profiling, Domestic Security, and Human Rights in the United States" (PDF). Amnesty International. Archived from the original on 2007-06-20. https://web.archive.org/web/20070620035119/http://www.amnestyusa.org/racial_profiling/report/rp_report.pdf. Retrieved 2007-06-01.
- ^ Kansal, Tushar (2005). "Racial Disparity in Sentencing: A Review of the Literature". In Mauer, Marc (PDF). The Sentencing Project. Archived from the original on 2008-06-26. https://web.archive.org/web/20080626014120/http://www.sentencingproject.org/Admin/Documents/publications/rd_reducingrdmanual.pdf. Retrieved 2007-06-01.
- ^ "Poverty in the United States: Frequently Asked Questions". National Poverty Center. 2006. http://www.npc.umich.edu/poverty/. Retrieved 2007-06-01.
- ^ Payne, January W. (2004-12-21). "Dying for Basic Care". Washington Post. https://www.washingtonpost.com/wp-dyn/articles/A13690-2004Dec20.html. Retrieved 2007-06-01.
- ^ Randall, Vernellia (2007-03-25). "Institutional Racism". University of Dayton. Archived from the original on 2007-05-19. https://web.archive.org/web/20070519065534/http://academic.udayton.edu/race/intro.htm. Retrieved 2007-06-01.
- ^ Richardson, Elaine and Gwendolyn Pough. "Hiphop Literacies and the Globalization of Black Popular Culture." Social Identities, vol. 22, no. 2, Mar. 2016, pp. 129–132.
- ^ Nelson, Angela M. "Black Popular Culture (US)." Encyclopedia of Race and Racism, edited by Patrick L. Mason, 2nd ed., vol. 1, Macmillan Reference USA, 2013, pp. 275–284.
- ^ Dodds, Sherril. "Hip Hop Battles and Facial Intertexts." Dance Research, vol. 34, no. 1, May 2016, pp. 63–83.
- ^ Kitwana, Bakari. The Hip Hop Generation : Young Blacks and the Crisis in African American Culture. New York : Basic Civitas, c2002, 2002.
- ^ Porfilio, Brad J.1, et al. "Ending the 'War against Youth:' Social Media and Hip-Hop Culture as Sites of Resistance, Transformation and (Re) Conceptualization." Journal for Critical Education Policy Studies (JCEPS), vol. 11, no. 4, Nov. 2013, pp. 85–105.
- ^ a b DeFrantz, Thomas. Dancing Revelations: Alvin Ailey's Embodiment of African American Culture. Oxford University Press, 2004. acls humanities e-book.
- ^ Hutchinson, Earl Ofari (December 14, 2004). "King would not have marched against gay marriage". The San Francisco Chronicle. http://www.sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2004/12/14/EDGEBAB7KB1.DTL. Retrieved 2007-10-22.
- ^ Sandalow, Marc (July 16, 2003). "Democrats divided on gay marriage". The San Francisco Chronicle. http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2003/07/16/MN75663.DTL. Retrieved 2008-01-11.
- ^ Smitherman, Geneva. Black Talk: Words and Phrases from the Hood to the Amen Corner. New York: Houghton Mifflin Company, 2000.
- ^ "GHETTO". Archived from the original on 2008-05-11. https://web.archive.org/web/20080511224527/http://kpearson.faculty.tcnj.edu/Dictionary/ghetto.htm. Retrieved 2008-05-11. Kim Pearson
- ^ "Root shock: The consequences of African American dispossession" Journal of Urban Health. Springer, New York. Volume 78, Number 1 / March 2001. doi:10.1093/jurban/78.1.72
- ^ Wachtel, Paul L. (1999). Race in the Mind of America: Breaking the Vicious Circle Between Blacks and Whites. New York: Routledge. p. 219. ISBN 0-415-92000-0.
- ^ Douglas A. Smith, "The Neighborhood Context of Police Behavior", Crime and Justice, Vol. 8, Communities and Crime (1986), pp. 313–341.
- ^ Thabit, Walter; Frances Fox Piven (2003). How East New York Became a Ghetto. New York: New York University Press. p. 80. ISBN 0-8147-8267-1.
- ^ Rubin, Irene S. (1982). Running in the Red: The Political Dynamics of Urban Fiscal Stress. Albany, NY: State University of New York Press. p. 126. ISBN 0-87395-564-1.
- ^ "Church Culture as a Strategy of Action in the Black Community", Mary Pattillo-McCoy, American Sociological Review, Vol. 63, No. 6 (December 1998), pp. 767–784.
- ^ "'Gathering the Spirit' at First Baptist Church: Spirituality as a Protective Factor in the Lives of African American Children" by Wendy L. Haight; Social Work, Vol. 43, 1998.
- ^ "Black architecture still standing, the Shotgun House"', The Great Buildings Collection on CD-ROM Kevin Matthews. African American Registry.
- ^ a b Sowell, Thomas (May 16, 2015). "Black Rednecks and White Liberals". Capitalism Magazine.
- ^ a b c Nordlinger, Jay (September 9, 2005). ""Black Rednecks and White Liberals", by Thomas Sowell". National Review.
Bibliography
- Hamilton, Marybeth: In Search of the Blues.
- William Ferris; Give My Poor Heart Ease: Voices of the Mississippi Blues – The University of North Carolina Press; (2009) ISBN 978-0-8078-3325-4 (with CD and DVD)
- William Ferris; Glenn Hinson The New Encyclopedia of Southern Culture: Volume 14: Folklife, University of North Carolina Press (2009) ISBN 978-0-8078-3346-9 (Cover :photo of James Son Thomas)
- William Ferris; Blues From The Delta – Da Capo Press; revised edition (1988) ISBN 978-0-306-80327-7
- Ted Gioia; Delta Blues: The Life and Times of the Mississippi Masters Who Revolutionized American Music – W. W. Norton & Company (2009) ISBN 978-0-393-33750-1
- Sheldon Harris; Blues Who's Who Da Capo Press, 1979
- Robert Nicholson; Mississippi Blues Today! Da Capo Press (1999) ISBN 978-0-306-80883-8
- Robert Palmer; Deep Blues: A Musical and Cultural History of the Mississippi Delta – Penguin Reprint edition (1982) ISBN 978-0-14-006223-6
- Frederic Ramsey Jr.; Been Here And Gone – 1st edition (1960) Rutgers University Press – London Cassell (UK) and New Brunswick, New Jersey; 2nd printing (1969) Rutgers University Press New Brunswick, New Jersey; (2000) University of Georgia Press
- Wiggins, David K. and Ryan A. Swanson, eds. Separate Games: African American Sport behind the Walls of Segregation. University of Arkansas Press, 2016. xvi, 272 pp.
- Charles Reagan Wilson, William Ferris, Ann J. Adadie; Encyclopedia of Southern Culture (1656 pp) University of North Carolina Press; 2nd edition (1989) – ISBN 978-0-8078-1823-7
External links
- "Encyclopedia Smithsonian: African American History and Culture". Archived from the original on 2008-06-21. https://web.archive.org/web/20080621063349/http://www.si.edu/Encyclopedia_SI/History_and_Culture/AfricanAmerican_History.htm. | https://familypedia.wikia.org/wiki/African-American_culture | 21 |
28 | Demand-Pull Inflation occurs when aggregate demand outpaces aggregate supply in an economy. The idea behind this concept is that as inflation rises, real gross domestic product also rises and the unemployment rate falls as the economy moves along the Phillips curve (Miller, 1997). In essence, it is a case of too much money chasing too few goods. Cost-Push Inflation occurs when there is a substantial increase in the cost of important goods or services for which no suitable alternative is available (Miller, 1997). Cost-push inflation is, for example, what the economy experienced in the recent oil crisis.
There was a decrease in the production of oil, and that caused the price of oil to increase. Gasoline companies that relied on the oil then had to pay more for the oil they needed, and they in turn passed the increase on to consumers by raising gas prices at the pumps. The basic difference between the two types of inflation is that demand-pull inflation is seen as conducive to a faster rate of economic growth, since the excess demand and favourable market conditions stimulate investment and expansion, while cost-push inflation is more of a "supply shock" form of inflation.
Inflation is measured using what are called indexes. One kind of index that is used to measure inflation is the consumer price index (CPI). The CPI is a measurement of the average price of consumer goods and services purchased by households, and the percent change in the CPI is a measure of inflation. Two basic types of data are needed when using the CPI: price data and weighting data. The price data are collected for a sample of goods and services from a sample of sales outlets in a sample of locations for a sample of times.
The weighting data are estimates of the shares of the different types of expenditure as fractions of the total expenditure covered by the index. These weights are usually based upon expenditure data obtained for sampled periods from a sample of households. Together, the two types of data give a clear picture of how inflation is affecting the economy, as the sketch below illustrates.
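To make the weighted-average idea concrete, here is a minimal sketch (in Python) of how a fixed-basket price index can be built from price data and weighting data. The two goods, their prices, and the expenditure weights are invented purely for illustration; an actual CPI basket contains thousands of items and far more elaborate sampling.

# Minimal sketch of a fixed-basket price index (hypothetical data, not an official basket).
base_prices = {"bread": 2.00, "gasoline": 3.00}     # prices in the base period
current_prices = {"bread": 2.20, "gasoline": 3.60}  # prices in the current period
weights = {"bread": 0.4, "gasoline": 0.6}           # expenditure shares, summing to 1

def price_index(prices, base_prices, weights):
    """Weighted average of price relatives, scaled so the base period equals 100."""
    return 100 * sum(weights[g] * prices[g] / base_prices[g] for g in weights)

base_index = price_index(base_prices, base_prices, weights)        # 100.0 by construction
current_index = price_index(current_prices, base_prices, weights)  # 116.0 with these numbers

# The percent change in the index is the measured inflation rate.
inflation = (current_index - base_index) / base_index * 100
print(f"index = {current_index:.1f}, inflation = {inflation:.1f}%")

The same weighted-average arithmetic, applied to prices received by producers rather than prices paid by households, underlies the PPI described next.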
The PPI is a measurement of the average changes in prices received by domestic producers for their output. PPI measures the pressure being put on producers by the costs of their raw materials. This could be "passed on" to consumers, or it could be absorbed by profits, or offset by increasing productivity. In this way, the PPI gives a clearer idea of cost-push inflation, while the CPI gives a clearer indication of demand-pull inflation (Miller, 1997). | https://lawaspect.com/u-s-government-4/ | 21
60 |
Middle English had a long close front vowel /i:/, and two long mid front vowels: the close-mid /e:/ and the open-mid /ɛ:/. The three vowels generally correspond to the modern spellings ⟨i⟩, ⟨ee⟩ and ⟨ea⟩ respectively, but other spellings are also possible. The spellings that became established in Early Modern English are mostly still used today, but the qualities of the sounds have changed significantly.
The /i:/ and /e:/ generally corresponded to similar Old English vowels, and /ɛ:/ came from Old English /æ:/. For other possible histories, see English historical vowel correspondences. In particular, the long vowels sometimes arose from short vowels by Middle English open syllable lengthening or other processes. For example, team comes from an originally-long Old English vowel, and eat comes from an originally-short vowel that underwent lengthening. The distinction between both groups of words is still preserved in a few dialects, as is noted in the following section.
Middle English /ɛ:/ was shortened in certain words. Both long and short forms of such words often existed alongside each other during Middle English. In Modern English the short form has generally become standard, but the spelling ⟨ea⟩ reflects the formerly-longer pronunciation. The words that were affected include several ending in d, such as bread, head, spread, and various others including breath, weather, and threat. For example, bread was /brɛ:d/ in earlier Middle English, but came to be shortened and rhymed with bed.
During the Great Vowel Shift, the normal outcome of /i:/ was a diphthong, which developed into Modern English /aɪ/, as in mine and find. Meanwhile, /e:/ became /i:/, as in feed, and /ɛ:/ of words like meat became /e:/, which later merged with /i:/ in nearly all dialects, as is described in the following section.
The meet-meat merger or the fleece merger is the merger of the Early Modern English vowel /e:/ (as in meat) into the vowel /i:/ (as in meet). The merger was complete in standard accents of English by about 1700.
As noted in the previous section, the Early Modern/New English (ENE) vowel /e:/ developed from Middle English /ɛ:/ via the Great Vowel Shift, and ENE /i:/ was usually the result of Middle English /e:/ (the effect in both cases was a raising of the vowel). The merger saw ENE /e:/ raised further to become identical to /i:/ and so Middle English /ɛ:/ and /e:/ have become /i:/ in standard Modern English, and meat and meet are now homophones. The merger did not affect the words in which /ɛ:/ had undergone shortening (see section above), and a handful of other words (such as break, steak, great) also escaped the merger in the standard accents and so acquired the same vowel as brake, stake, grate. Hence, the words meat, threat (which was shortened), and great now have three different vowels although all three words once rhymed.
The merger results in the FLEECE lexical set, as defined by John Wells. Words in the set that had ENE /i:/ (Middle English /e:/) are mostly spelled ⟨ee⟩ (meet, green, etc.), with a single ⟨e⟩ in monosyllables (be, me) or followed by a single consonant and a vowel letter (these, Peter), sometimes ⟨ie⟩ or ⟨ei⟩ (believe, ceiling), or irregularly (key, people). Most of those that had ENE /e:/ (Middle English /?:/) are spelled ⟨ea⟩ (meat, team, eat, etc.), but some borrowed words have a single ⟨e⟩ (legal, decent, complete), ⟨ei⟩, or otherwise (receive, seize, phoenix, quay). There are also some loanwords in which /i:/ is spelled ⟨i⟩ (police, machine, ski), most of which entered the language later.
There are still some dialects in the British Isles that do not have the merger. Some speakers in Northern England have /i:/ or a similar close vowel in the first group of words (those that had ENE /i:/, like meet), but a distinct vowel or diphthong in the second group (those that had ENE /e:/, like meat). In Staffordshire, the distinction might rather be between /əi/ in the first group and /i:/ in the second group. In some (particularly rural and lower-class) varieties of Irish English, the first group has /i/, and the second preserves /e:/. A similar contrast has been reported in parts of Southern and Western England, but it is now rarely encountered there.
In some Yorkshire dialects, an additional distinction may be preserved within the meat set. Words that originally had long vowels, such as team and cream (which come from Old English tēam and Old French creme), may have /ɪə/, and those that had an original short vowel, which underwent open syllable lengthening in Middle English (see previous section), like eat and meat (from Old English etan and mete), have a sound resembling /ɛɪ/, similar to the sound that is heard in some dialects in words like eight and weight that lost a velar fricative.
In Alexander's book (2001) about the traditional Sheffield dialect, the spelling "eigh" is used for the vowel of eat and meat, but "eea" is used for the vowel of team and cream. However, a 1999 survey in Sheffield found the /ɪə/ pronunciation to be almost extinct there.
In certain accents, when the FLEECE vowel was followed by /r/, it acquired a laxer pronunciation. In General American, words like near and beer now have the sequence /ir/, and nearer rhymes with mirror (the mirror-nearer merger). In Received Pronunciation, a diphthong /ɪə/ has developed (and by non-rhoticity, the /r/ is generally lost, unless there is another vowel after it), so beer and near are /bɪə/ and /nɪə/, and nearer (with /ɪə/) remains distinct from mirror (with /ɪ/). Several pronunciations are found in other accents, but outside North America, the nearer-mirror opposition is always preserved. For example, some conservative accents in Northern England have the sequence /i:ə/ in words like near, which is reduced to /i:/ before a pronounced /r/, as in serious.
Another development is that bisyllabic /i:ə/ may become smoothed to the diphthong /ɪə/ in certain words, which leads to pronunciations like /ˈvɪəkəl/, /ˈθɪətə/ and /aɪˈdɪə/ for vehicle, theatre/theater and idea, respectively. That is not restricted to any variety of English. It happens in both British English and (less noticeably or often) American English as well as other varieties although it is far more common for Britons, and many Americans do not have the phoneme /ɪə/. The words that have /ɪə/ may vary depending on dialect. Dialects that have the smoothing usually also have the diphthong /ɪə/ in words like beer, deer, and fear, and the smoothing causes idea, Korea, etc. to rhyme with those words.
In Geordie, the FLEECE vowel undergoes an allophonic split, with the monophthong being used in morphologically-closed syllables (as in freeze [fɹi:z]) and the diphthong [ei] being used in morphologically-open syllables not only word-finally (as in free [fɹei]) but also word-internally at the end of a morpheme (as in frees [fɹeiz]).
Most dialects of English turn /i:/ into a diphthong, and the monophthongal [i:] is in free variation with the diphthongal [ɪi ~ əi] (with the former diphthong being the same as Geordie [ei], the only difference lying in the transcription), particularly word-internally. However, word-finally, diphthongs are more common.
Compare the identical development of the close back GOOSE vowel.
Middle English short /i/ has developed into a lax, near-close near-front unrounded vowel, /ɪ/, in Modern English, as found in words like kit. (Similarly, short /u/ has become /ʊ/.) According to Roger Lass, the laxing occurred in the 17th century, but other linguists have suggested that it took place potentially much earlier.
The short mid vowels have also undergone lowering and so the continuation of Middle English /e/ (as in words like dress) now has a quality closer to [ɛ] in most accents. Again, however, it is not clear whether the vowel already had a lower value in Middle English.
The pin-pen merger is a conditional merger of /ɪ/ and /ɛ/ before the nasal consonants [m], [n], and [ŋ]. The merged vowel is usually closer to [ɪ] than to [ɛ]. Examples of homophones resulting from the merger include pin-pen, kin-ken and him-hem. The merger is widespread in Southern American English and is also found in many speakers in the Midland region immediately north of the South and in areas settled by migrants from Oklahoma and Texas who settled in the Western United States during the Dust Bowl. It is also a characteristic of African-American Vernacular English.
The pin-pen merger is one of the most widely recognized features of Southern speech. A study of the written responses of American Civil War veterans from Tennessee, together with data from the Linguistic Atlas of the Gulf States and the Linguistic Atlas of the Middle South Atlantic States, shows that the prevalence of the merger was very low up to 1860 but then rose steeply to 90% in the mid-20th century. There is now very little variation throughout the South in general except that Savannah, Austin, Miami, and New Orleans are excluded from the merger. The area of consistent merger includes southern Virginia and most of the South Midland and extends westward to include much of Texas. The northern limit of the merged area shows a number of irregular curves. Central and southern Indiana is dominated by the merger, but there is very little evidence of it in Ohio, and northern Kentucky shows a solid area of distinction around Louisville.
Outside the South, most speakers of North American English maintain a clear distinction in perception and production. However, in the West, there is sporadic representation of merged speakers in Washington, Idaho, Kansas, Nebraska, and Colorado. However, the most striking concentration of merged speakers in the west is around Bakersfield, California, a pattern that may reflect the trajectory of migrant workers from the Ozarks westward.
The raising of /ɛ/ to /ɪ/ was formerly widespread in Irish English and was not limited to positions before nasals. Apparently, it came to be restricted to those positions in the late 19th and the early 20th centuries. The pin-pen merger is now commonly found only in Southern and South-West Irish English.
A complete merger of /ɪ/ and /ɛ/, not restricted to positions before nasals, is found in many speakers of Newfoundland English. The pronunciation in words like bit and bet is [ɪ], but before /r/, in words like beer and bear, it is [ɛ]. The merger is common in Irish-settled parts of Newfoundland and is thought to be a relic of the former Irish pronunciation.
|center/centre||sinner||ˈsɪnə(r)||With intervocalic alveolar flapping.|
|engine||Injun||ˈɪndʒən||With weak-vowel merger.|
|enter||inner||ˈɪnə(r)||With intervocalic alveolar flapping.|
|lender||Linda||ˈlɪndə||In non-rhotic accents.|
|Lennon||linen||ˈlɪnən||With weak-vowel merger.|
|lentil||lintel||ˈlɪntəl||lentil may also be /ˈlɛntɪl/, which becomes /ˈlɪntɪl/ and does not merge with lintel.|
|many||minty||ˈmɪni||With intervocalic alveolar flapping.|
|meant it||minute||ˈmɪnɪt||With intervocalic alveolar flapping.|
|tenting||tinning||ˈtɪnɪŋ||With intervocalic alveolar flapping.|
|whence||wince||ˈwɪns||With wine-whine merger.|
|when||win||ˈwɪn||With wine-whine merger.|
|when's||winds||ˈwɪn(d)z||With wine-whine merger.|
|when's||wins||ˈwɪnz||With wine-whine merger.|
Different phonemic analyses of these vowels are possible. In one view, the front and centralized qualities are in complementary distribution and should therefore still be regarded as allophones of one phoneme. Wells, however, suggests that the non-rhyming of words like kit and bit, which is particularly marked in the broader accents, makes it more satisfactory to consider the centralized vowel to constitute a different phoneme from [ɪ ~ i]; the centralized vowel and schwa can be regarded as comprising a single phoneme except for speakers who maintain the contrast in weak syllables. There is also the issue of the weak vowel merger in most non-conservative speakers, which means that rabbit /ˈræbət/ (conservative /ˈræbɪt/) rhymes with abbott /ˈæbət/. This weak vowel is consistently written ⟨ə⟩ in South African English dialectology, regardless of its precise quality.
The thank-think merger is the lowering of /ɪ/ to /æ/ before the velar nasal /ŋ/ that can be found in the speech of speakers of African American Vernacular English, Appalachian English, and (rarely) Southern American English. For speakers with the lowering, think and thank, sing and sang etc. can sound alike. It is reflected in the colloquial variant spelling thang of thing.
The weak vowel merger is the loss of contrast between /ə/ (schwa) and unstressed /ɪ/, which occurs in certain dialects of English: notably Southern Hemisphere, North American, many 21st-century (but not older) standard Southern British, and Irish accents. In speakers with this merger, the words abbot and rabbit rhyme, and Lennon and Lenin are pronounced identically, as are addition and edition. However, it is possible among these merged speakers (such as General American speakers) that a distinction is still maintained in certain contexts, such as in the pronunciation of Rosa's versus roses, due to the morpheme break in Rosa's. (Speakers without the merger generally have [ɪ] in the final syllables of rabbit, Lenin, roses and the first syllable of edition, distinct from the schwa [ə] heard in the corresponding syllables of abbot, Lennon, Rosa's and addition.) If an accent with the merger is also non-rhotic, then for example chatted and chattered will be homophones. The merger also affects the weak forms of some words, causing unstressed it, for instance, to be pronounced with a schwa, so that dig it would rhyme with bigot.
The merger is very common in the Southern Hemisphere accents. Most speakers of Australian English (as well as recent Southern England English) replace weak /ɪ/ with schwa [ə], although in -ing the pronunciation is frequently [ɪ]; and where there is a following /k/, as in paddock or nomadic, some speakers maintain the contrast, while some who have the merger use [ɪ] as the merged vowel. In New Zealand English the merger is complete, and indeed /ɪ/ is very centralized even in stressed syllables, so that it is usually regarded as the same phoneme as /ə/. In South African English most speakers have the merger, but in more conservative accents the contrast may be retained (as [ɪ] vs. [ə]; a kit split also exists; see above).
The merger is also commonly found in American and Canadian English; however, the realization of the merged vowel varies according to syllable type, with [ə] appearing in word-final or open-syllable word-initial positions (such as drama or cilantro), but often a closer, [ɪ]-like quality in other positions (abbot and exhaust). In traditional Southern American English, the merger is generally not present, and /ɪ/ is also heard in some words that have schwa in RP, such as salad. In Caribbean English schwa is often not used at all, with unreduced vowels being preferred, but if there is a schwa, then /ɪ/ remains distinct from it.
In traditional RP, the contrast between /ɪ/ and weak /ə/ is maintained; however, this may be declining among modern standard speakers of southern England, who increasingly prefer a merger, with a schwa-like realization. In other accents of the British Isles behavior may be variable; in Irish English the merger is almost universal.
The merger is not complete in Scottish English, where speakers typically distinguish except from accept, but the latter can be phonemicized with an unstressed STRUT: /ʌkˈsɛpt/ (as can the word-final schwa in comma /ˈkɔmʌ/), and the former with /ɪ/: /ɪkˈsɛpt/. In other environments KIT and COMMA are mostly merged to a quality around [ə], often even when stressed (Wells transcribes this merged vowel with ⟨ɪ⟩; here, ⟨ə⟩ is used for the sake of consistency and accuracy) and when before /r/, as in fir /fər/ and letter /ˈlɛtər/ (but not fern /fɛrn/ and fur /fʌr/ - see nurse mergers). The HAPPY vowel is /e/: /ˈhape/.
Even in accents that do not have the merger, there may be certain words in which traditional /ɪ/ is replaced by /ə/ by many speakers (here the two sounds may be considered to be in free variation). In RP, /ə/ is now often heard in place of /ɪ/ in endings such as -ace (as in palace), -ate (as in senate), -less, -let, for the ⟨i⟩ in -ily, -ity, -ible, and in initial weak be-, de-, re-, and e-.
Final /əl/, and also /ən/ and /əm/, are commonly realized as syllabic consonants. In accents without the merger, use of /ɪ/ rather than /ə/ prevents syllabic consonant formation. Hence in RP, for example, the second syllable of Barton is pronounced as a syllabic [n̩], while that of Martin is [ɪn].
Particularly in American linguistic tradition, the unmerged weak [ɪ]-type vowel is often transcribed with the barred i ⟨ɨ⟩, the IPA symbol for the close central unrounded vowel. Another symbol sometimes used is ⟨ᵻ⟩, the non-IPA symbol for a near-close central unrounded vowel; in the third edition of the OED this symbol is used in the transcription of words (of the types listed above) that have free variation between /ɪ/ and /ə/ in RP.
|Word 1|Word 2|IPA (merged form)|Notes|
|Aaron|Erin|ˈɛrən|With Mary-marry-merry merger.|
|barrel|beryl|ˈbɛrəl|With Mary-marry-merry merger.|
|modern|modding|ˈmɒdən|Non-rhotic with G-dropping.|
|pattern|patting|ˈpætən|Non-rhotic with G-dropping.|
The merger of /ɪ/ with the word-internal variety of /ə/ in abbot (not called COMMA on purpose, since word-final and sometimes also word-initial COMMA can be analyzed as STRUT - see above), which in non-rhotic varieties also encompasses the unstressed syllable of letters, occurs when the stressed variant of /ɪ/ is realized with a schwa-like quality [ə], for example in some Inland Northern American English varieties (where the final stage of the Northern Cities Vowel Shift has been completed), New Zealand English, Scottish English and partially also South African English (see kit-bit split). As a result, the vowels in kit /kət/, lid /ləd/, and miss /məs/ belong to the same phoneme as the unstressed vowel in balance /ˈbæləns/.
There are no homophonous pairs apart from those caused by the weak vowel merger, but a central KIT tends to sound like STRUT to speakers of other dialects, which is why Australians accuse New Zealanders of saying "fush and chups" instead of "fish and chips" (which, in an Australian accent, sounds close to "feesh and cheeps"). This is not accurate, as the STRUT vowel is always more open than the central KIT; in other words, there is no strut-comma merger (though a kit-strut merger is possible in some Glaswegian speech). This means that varieties of English with this merger effectively contrast two stressable unrounded schwas, which is very similar to the contrast between /ə/ and /ɨ/ in Romanian, as in the minimal pair râu /rɨw/ 'river' vs. rău /rəw/ 'bad'.
Most dialects with this merger feature happy tensing, which means that pretty is best analyzed as /ˈprətiː/ in those accents. In Scotland, the HAPPY vowel is commonly a close-mid [e], identified phonemically as FACE: /ˈprəte/.
The name kit-comma merger is appropriate in the case of those dialects in which the quality of STRUT is far removed from [ə] (the word-final allophone of /ə/), such as Inland Northern American English. It can be misleading in the case of other accents.
Happy tensing is a process whereby a final unstressed i-type vowel becomes tense [i] rather than lax [ɪ]. It affects the final vowels of words such as happy, city, hurry, taxi, movie, Charlie, coffee, money, Chelsea. It may also apply in inflected forms of such words containing an additional final consonant sound, such as cities, Charlie's and hurried. It can also affect words such as me, he and she when used as clitics, as in show me, would he?
Until the 17th century, words like happy could end with the vowel of my (originally [iː] but diphthongized in the Great Vowel Shift); this alternated with a short i sound, which led to the present-day realizations. (Many words spelt -ee, -ea, -ey formerly had the vowel of day; there is still alternation between that vowel and the happy vowel in words such as Sunday, Monday.) It is not entirely clear when the vowel underwent the transition. The fact that tensing is uniformly present in South African English, Australian English, and New Zealand English implies that it was present in southern British English already at the beginning of the 19th century. Yet it is not mentioned by descriptive phoneticians until the early 20th century, and even then at first only in American English. The British phonetician Jack Windsor Lewis believes that the vowel moved from [i] to [ɪ] in Britain in the second quarter of the 19th century before reverting to [i] in non-conservative British accents towards the last quarter of the 20th century.
Conservative RP has the laxer [ɪ] pronunciation. This is also found in Southern American English, in much of the north of England, and in Jamaica. In Scottish English an [e] sound, similar to the Scottish realization of the vowel of day, may be used. The tense [i] variant, however, is now established in General American, and is also the usual form in Canada, Australia, New Zealand and South Africa, in the south of England and in some northern cities (e.g. Liverpool, Newcastle). It is also becoming more common in modern RP.
The lax and tense variants of the happy vowel may be identified with the phonemes /ɪ/ and /iː/ respectively. They may also be considered to represent a neutralization between the two phonemes, although for speakers with the tense variant, there is the possibility of contrast in such pairs as taxis and taxes (see English phonology - vowels in unstressed syllables). Modern British dictionaries represent the happy vowel with the symbol ⟨i⟩ (distinct from both ⟨ɪ⟩ and ⟨iː⟩).
Roach (2009) considers the tensing to be a neutralization between /ɪ/ and /iː/, while Cruttenden (2014) regards the tense variant in modern RP as still an allophone of /ɪ/ on the basis that it is shorter and more resistant to diphthongization than /iː/. Lindsey (2019) regards the phenomenon as a mere substitution of /iː/ for /ɪ/ and criticizes the notation ⟨i⟩ for causing "widespread belief in a specific 'happY vowel'" that "never existed".
Old English had the short vowel /y/ and the long vowel /yː/, which were spelled orthographically with ⟨y⟩, contrasting with the short vowel /i/ and the long vowel /iː/, which were spelled orthographically with ⟨i⟩. By Middle English the two vowels /y/ and /yː/ had merged with /i/ and /iː/, leaving only the short-long pair /i/-/iː/. Modern spelling therefore uses both ⟨y⟩ and ⟨i⟩ for the modern KIT and PRICE vowels. Modern spelling with ⟨i⟩ vs. ⟨y⟩ is not an indicator of the Old English distinction between the four sounds, as spelling has been revised since the merger occurred. After the merger occurred, the name of the letter ⟨y⟩ acquired an initial [w] sound in it, to keep it distinct from the name of the letter ⟨i⟩.
The mitt-meet merger is a phenomenon occurring in Malaysian English and Singaporean English in which the phonemes /iː/ and /ɪ/ are both pronounced /i/. As a result, pairs like mitt and meet, bit and beat, and bid and bead are homophones.
The met-mat merger is a phenomenon occurring in Malaysian English, Singaporean English and Hong Kong English in which the phonemes /ɛ/ and /æ/ are both pronounced /ɛ/. For some speakers, it occurs only in front of voiceless consonants, and pairs like met, mat, bet, bat are homophones, but bed, bad or med, mad are kept distinct. For others, it occurs in all positions.
I. BEFORE 1556
From their first appearance in the history of the world the Germans represented the principle of unchecked individualism, as opposed to the Roman principle of an all-embracing authority. German history in the Middle Ages was strongly influenced by two opposing principles: universalism and individualism. After Arminius had fought for German freedom in the Teutoburg Forest the idea that the race was entitled to be independent gradually became a powerful factor in its historical development. This conception first took form when the Germanic states grew out of the Roman Empire. Even Theodoric the Great thought of uniting the discordant barbarian countries with the aid of the leges gentium into a great confederation of the Mediterranean. Although in these Mediterranean countries the Roman principle finally prevailed, being that of a more advanced civilization, still the individualistic forces which contributed to found these states were not wasted. By them the world-embracing empire of Rome was overthrown and the way prepared for the national principle. It was not until after the fall of the Western Empire that a great Frankish kingdom became possible and the Franks, no longer held in check by the Roman Empire, were able to draw together the tribes of the old Teutonic stock and to lay the foundation of a German empire. Before this the Germanic tribes had been continually at variance; no tie bound them together; even the common language failed to produce unity. On the other hand, the so-called Lautverschiebung , or shifting of the consonants, in German, separated the North and South Germans. Nor was German mythology a source of union, for the tribal centres of worship rather increased the already existing particularism. The Germans had not even a common name. Since the eighth century most probably the designations Franks and Frankish extended beyond the boundaries of the Frankish tribe. It was not, however, until the ninth century that the expression theodisk (later German Deutsch ), signifying "popular," or "belonging to people" made its appearance and a great stretch of time divided this beginning from the use of the word as a name of the nation.
The work of uniting Germany was not begun by a tribe living in the interior but by one on the outskirts of the country. The people called Franks suddenly appear in history in the third century. They represented no single tribe, but consisted of a combination of Low and High German tribes. Under the leadership of Clovis (Chlodwig) the Franks overthrew the remains of the Roman power in Gaul and built up the Frankish state on a Germano-Romanic foundation. The German tribes were conquered one after another and colonized in the Roman manner. Large extents of territory were marked out as belonging to the king, and on these military colonies were founded. The commanders of these military colonies gradually became administrative functionaries, and the colonies themselves grew into peaceful agricultural village communities. For a long time political expressions, such as Hundreds , recalled the original military character of the people. From that time the Frankish ruler became the German overlord, but the centrifugal tendency of the Germanic tribes reacted against this sovereignty as soon as the Merovingian Dynasty began slowly to decline, owing to internal feuds. In each of the tribes after this the duke rose to supremacy over his fellow tribesmen. From the seventh century the tribal duke became an almost independent sovereign. These ducal states originated in the supreme command of large bodies of troops, and then in the administration of large territories by dukes. At the same time the disintegration was aided by the bad administration of the counts, the officials in charge of the territorial districts ( Gau ), who were no longer supervised by the central authority. But what was most disastrous was that an unruly aristocracy sought to control all the economical interests and to exercise arbitrary powers over politics. These sovereign nobles had become powerful through the feudal system, a form of government which gave to medieval Germany its peculiar character. Caesar in his day found that it was customary among the Gauls for a freeman, the "client," voluntarily to enter into a relation of dependence on a "senior." This surrender ( commendatio ) took place in order to obtain the protection of the lord or to gain the usufruct of land. From this Gallic system of clientship there developed, in Frankish times, the conception of the "lord's man " ( homagium or hominium ), who by an oath swore fealty to his suzerain and became a vassus , or gasindus , or homo . The result of the growth of this idea was that finally there appeared, throughout the kingdom, along with royalty, powerful territorial lords with their vassi or vassalli , as their followers were called from the eighth century. The vassals received as fief ( beneficium ) a piece of land of which they enjoyed the use for life. The struggle of the Franks with the Arabs quickened the development of the feudal system, for the necessity of creating an army of horsemen then became evident. Moreover the poorer freemen, depressed in condition by the frequent wars, could not be required to do service as horsemen, a duty that could only be demanded from the vassals of the great landowners. In order to force these territorial lords to do military service fiefs were granted from the already existing public domain, and in their turn the great lords granted part of these fiefs to their retainers. 
Thus the Frankish king was gradually transformed from a lord of the land and people to a feudal lord over the beneficiaries directly and indirectly dependent upon him by feudal tenure. By the end of the ninth century the feudal system had bound together the greater part of the population.
While in this way the secular aristocracy grew into a power, at the same time the Church was equally strengthened by feudalism. The Christian Church during this era -- a fact of the greatest importance -- was the guardian of the remains of classical culture. With this culture the Church was to endow the Germans. Moreover it was to bring them a great fund of new moral conceptions and principles, much increase in knowledge, and skill in art and handicrafts. The well-knit organization of the Church, the convincing logic of dogma, the grandeur of the doctrine of salvation, the sweet poetry of the liturgy, all these captured the understanding of the simple-minded but fine-natured primitive German. It was the Church, in fact, that first brought the exaggerated individualism of the race under control and developed in it gradually, by means of asceticism, those social virtues essential to the State. The country was converted to Christianity very slowly for the Church had here a difficult problem to solve, namely, to replace the natural conception of life by an entirely different one that appeared strange to the people. The acceptance of the Christian name and ideas was at first a purely mechanical one, but it became an inner conviction. No people has shown a more logical or deeper comprehension of the organization and saving aims of the Christian Church. None has exhibited a like devotion to the idea of the Church nor did any people contribute more in the Middle Ages to the greatness of the Church than the German. In the conversion of Germany much credit is due the Irish and Scotch, but the real founders of Christianity in Germany are the Anglo-Saxons, above all St. Boniface . Among the early missionaries were: St. Columbanus, the first to come to the Continent (about 583), who laboured in Swabia; Fridolin, the founder of Saeckingen; Pirminius, who established the monastery of Reichenau in 724; and Gallus (d. 645), the founder of St. Gall. The cause of Christianity was furthered in Bavaria by Rupert of Worms (beginning of the seventh century), Corbinian (d. 730), and Emmeram (d. 715). The great organizer of the Church of Bavaria was St. Boniface. The chief herald of the Faith among the Franks was the Scotchman, St. Kilian (end of the seventh century); the Frisians received Christianity through Willibrord (d. 739). The real Apostle of Germany was St. Boniface, whose chief work was in Central Germany and Bavaria. Acting in conjunction with Rome he organized the German Church, and finally in 755 met the death of a martyr at the hands of the Frisians. After the Church had thus obtained a good foothold it soon reached a position of much importance in the eyes of the youthful German peoples. By grants of land the princes gave it an economic power which was greatly increased when many freemen voluntarily became dependents of these new spiritual lords; thus, besides the secular territorial aristocracy, there developed a second power, that of the ecclesiastical princes. Antagonism between these two elements was perceptible at an early date. Pepin sought to remove the difficulty by strengthening the Frankish Church and placing between the secular and spiritual lords the new Carlovingian king, who, by the assumption of the title Dei gratia , obtained a somewhat religious character.
The Augustinian conception of the Kingdom of God early influenced the Frankish State; political and religious theories unconsciously blended. The union of Church and State seemed the ideal which was to be realized. Each needed the other; the State needed the Church as the only source of real order and true education ; the Church needed for its activities the protection of the secular authority . In return for the training in morals and learning that the Church gave, the State granted it large privileges, such as: the privilegium fori or freedom from the jurisdiction of the State; immunity, that is exemption from taxes and services to the State, from which gradually grew the right to receive the taxes of the tenants residing on the exempt lands and the right to administer justice to them; further, release from military service; and, finally, the granting of great fiefs that formed the basis of the later ecclesiastical sovereignties. The reverse of this picture soon became apparent; the ecclesiastics to whom had been given lands and offices in fief became dependent on secular lords. Thus the State at an early date had a share in the making of ecclesiastical laws, exercised the right of patronage, appointed to dioceses, and soon undertook, especially in the time of Charles Martel , the secularization of church lands. Consequently the question of the relation of Church and State soon claimed attention; it was the most important question in the history of the German Middle Ages. Under the first German emperor this problem seemed to find its solution.
Real German history begins with Charlemagne (768-814). The war with the Saxons was the most important one he carried on, and the result of this struggle, of fundamental importance for German history, was that the Saxons were brought into connexion with the other Germanic tribes and did not fall under Scandinavian influence. The lasting union of the Franks, Saxons, Frisians, Thuringians, Hessians, Alamanni, and Bavarians, that Charlemagne effected, formed the basis of a national combination which gradually lost sight of the fact that it was the product of compulsion. From the time of Charlemagne the above-named German tribes lived under Frankish constitution retaining their own old laws the leges barbarorum , which Charlemagne codified. Another point of importance for German development was that Charlemagne fixed the boundary between his domain and the Slavs, including the Wends, on the farther side of the Elbe and Saale Rivers. It is true that Charlemagne did not do all this according to a deliberate plan, but mainly in the endeavour to win these related Germanic peoples over to Christianity. Charlemagne's German policy, therefore, was not a mere brute conquest, but a union which was to be strengthened by the ties of morality and culture to be created by the Christian religion. The amalgamation of the ecclesiastical with the secular elements that had begun in the reign of Pepin reached its completion under Charlemagne. The fact that Pepin obtained papal approval of his kingdom strengthened the bond between the Church and the Frankish kingdom. The consciousness of being the champion of Christianity against the Arabs, moreover, gave to the King of the Franks the religious character of the predestined protectors of the Church ; thus he attained a position of great importance in the Kingdom of God. Charlemagne was filled with these ideas ; like St. Augustine he hated the supremacy of the heathen empire. The type of God's Kingdom to Charlemagne and his councillors was not the Roman Empire but the Jewish theocracy. This type was kept in view when Charlemagne undertook to give reality to the Kingdom of God. The Frankish king desired like Solomon to be a great ecclesiastical and secular potentate, a royal priest. He was conscious that his conception of his position as the head of the Kingdom of God, according to the German ideas, was opposed to the essence of Roman Caesarism, and for reason he objected to being crowned emperor by the Pope on Christmas Day, 800. On this day the Germanic idea of the Kingdom of God, of which Charlemagne was the representative, bowed to the Roman idea, which regards Rome as its centre, Rome the seat of the old empire and the most sacred place of the Christian world . Charlemagne when emperor still regarded himself as the real leader of the Church. Although in 774 he confirmed the gift of his father to the Roman res publica , nevertheless he saw to it that Rome remained connected with the Frankish State; in return it had a claim to Frankish protection. He even interfered in dogmatic questions.
Charlemagne looked upon the revived Roman Empire from the ancient point of view inasmuch as he greatly desired recognition by the Eastern Empire. He regarded his possession of the empire as resulting solely from his own power, consequently he himself crowned his son Louis. Yet on the other hand he looked upon his empire only as a Christian one, whose most noble calling it was to train up the various races within its borders to the service of God and thus to unify them. The empire rapidly declined under his weak and nerveless son, Louis the Pious (814-40). The decay was hastened by the prevailing idea that this State was the personal property of the sovereign, a view that contained the germ of constant quarrels and necessitated the division of the empire when there were several sons. Louis sought to prevent the dangers of such division by law of hereditary succession published in 817, by which the sovereign power and the imperial crown were to be passed to the oldest son. This law was probably enacted through the influence of the Church, which maintained positively this unity of the supreme power and the Crown, as being in harmony with the idea of the Kingdom of God, and as besides required by the hierarchical economy of the church organization. When Louis had a fourth son, by his second wife, Judith, he immediately set aside law of partition of 817 for the benefit of the new heir. An odious struggle broke out between father and sons, and among the sons themselves. In 833 the emperor was captured by his sons at the battle of Luegenfeld (field of lies) near Colmar. Pope Gregory IV was at the time in the camp of the sons. The demeanour of the pope and the humiliating ecclesiastical penance that Louis was compelled to undergo at Soissons made apparent the change that had come about since Charlemagne in the theory of the relations of Church and State. Gregory's view that the Church was under the rule of the representative of Christ, and that it was a higher authority, not only spiritually but also substantially, and therefore politically, had before this found learned defenders in France. In opposition to the oldest son Lothair, Louis and Pepin, sons of Louis the Pious, restored the father to his throne (834), but new rebellions followed, when the sons once more grew dissatisfied.
In 840 the emperor died near Ingelheim. The quarrels of the sons went on after the death of the father, and in 841 Lothair was completely defeated near Fontenay (Fontanetum) by Louis the German and Charles the Bald. The empire now fell apart, not from the force of national hatreds, but in consequence of the partition now made and known as the Treaty of Verdun (August, 843), which divided the territory between the sons of Louis the Pious: Lothair, Louis the German (843-76), and Charles the Bald, and which finally resulted in the complete overthrow of the Carlovingian monarchy.
As the imperial power grew weaker, the Church gradually raised itself above the State. The scandalous behaviour of Lothair II, who, divorced himself from his lawful wife in order to marry his concubine, brought deep disgrace on his kingdom. The Church however, now an imposing and well-organized power, sat in judgment on the adulterous king. When Lothair II died, his uncles divided his possessions between them; by the Treaty of Ribemont (Mersen), Lorraine, which lay between the East Frankish Kingdom of Louis the German and the West Frankish Kingdom of Charles the Bald, was assigned to the East Frankish Kingdom. In this way a long-enduring boundary was definitely drawn between the growing powers of Germany and France. By a curious chance this boundary coincided almost exactly with the linguistic dividing line. Charles the Fat (876-87), the last son of Louis the German, united once more the entire empire. But according to old Germanic ideas the weak emperor forfeited his sovereignty by his cowardice when the dreaded Northmen appeared before Paris on one of their frequent incursions into France, and by his incapacity as a ruler. Consequently the Eastern Franks made his nephew Arnulf (887-99) king. This change was brought about by a revolt of the laity against the bishops in alliance with the emperor. The danger of Norman invasion Arnulf ended once and for all by his victory in 891 at Louvain on the Dyle. In the East also he was victorious after the death (894) of Swatopluk, the great King of Moravia. The conduct of some of the great nobles forced him to turn for aid to the bishops ; supported by the Church, he was crowned emperor at Rome in 896. Theoretically his rule extended over the West Frankish Kingdom, but the sway of his son, Louis the Child (899-911), the last descendant of the male line of the German Carlovingians, was limited entirely to the East Frankish Kingdom. Both in the East and West Frankish Kingdoms, in this era of confusion, the nobility grew steadily stronger, and freemen in increasing numbers became vassals in order to escape the burdens that the State laid on them; the illusion of the imperial title could no longer give strength to the empire. Vassal princes like Guido and Lamberto of Spoleto, and Berengar of Friuli, were permitted to wear the diadem of the Caesars.
As the idea of political unity declined, that of the unity of the Church increased in power. The Kingdom of God, which the royal priest, Charlemagne, by his overshadowing personality had, in his own opinion, made a fact, proved to be an impossibility. Church and State, which for a short time were united in Charlemagne, had, as early as the reign of Louis the Pious, become separated. The Kingdom of God was now identified with the Church. Pope Nicholas I asserted that the head of the one and indivisible Church could not be subordinate to any secular power, that only the pope could rule the Church, that it was obligatory on princes to obey the pope in spiritual things, and finally that the Carlovingians had received their right to rule from the pope. This grand idea of unity, this all-controlling sentiment of a common bond, could not be annihilated even in these troubled times when the papacy was humiliated by petty Italian rulers. The idea of her unity gave the Church the strength to raise herself rapidly to a position higher than that of the State. From the age of St. Boniface the Church in the East Frankish Kingdom had direct relations with Rome, while numerous new churches and monasteries gave her a firm hold in this region. At an early date the Church here controlled the entire religious life and, as the depositary of all culture, the entire intellectual life. She had also gained frequently decisive influence over German economic life, for she disseminated much of the skill and many of the crafts of antiquity. Moreover the Church itself had grown into an economic power in the East Frankish Kingdom. Piety led many to place themselves and their lands under the control of the Church.
There was also in this period a change in social life that was followed by important social consequences. The old militia composed of every freeman capable of bearing arms went to pieces, because the freemen constantly decreased in number. In its stead there arose a higher order in the State, which alone was called on for military service. In this chaotic era the German people made no important advance in civilization. Nevertheless the union that had been formed between Roman and German elements and Christianity prepared the way for a development of the East Frankish Kingdom in civilization from which great results might be expected. At the close of the Carlovingian period the external position of the kingdom was a very precarious one. The piratic Northmen boldly advanced far into the empire; Danes and Slavs continually crossed its borders; but the most dangerous incursions were those of the Magyars, who in 907 brought terrible suffering upon Bavaria ; in their marauding expeditions they also ravaged Saxony, Thuringia, and Swabia. It was then that salvation came from the empire itself. The weak authority of the last of the Carlovingians, Louis the Child, an infant in years, fell to pieces altogether, and the old ducal form of government revived in the several tribes. This was in accordance with the desires of the people. In these critical times the dukes sought to save the country; still they saw clearly that only a union of all the duchies could successfully ward off the danger from without; the royal power was to find its entire support in the laity. Once more, it is true, the attempt was made by King Conrad I (911-18) to make the Church the basis of the royal power, but the centralizing clerical policy of the king was successfully resisted by the subordinate powers. Henry I (919-36) was the free choice of the lay powers at Fritzlar. On the day he was elected the old theory of the State as the personal estate of the sovereign was finally done away with, and the Frankish realm was transformed into a German one. The manner of his election made plain to Henry the course to be pursued. It was necessary to yield to the wish of the several tribes to have their separate existence with a measure of self-government under the imperial power recognized. Thus the duchies were strengthened at the expense of the Crown. The fame of Henry I was assured by his victory over the Magyars near Merseburg (933). By regaining Lorraine, that had been lost during the reign of Conrad, he secured a bulwark on the side towards France that permitted the uninterrupted consolidation of his realm. The same result was attained on other frontiers by his successful campaigns against the Wends and Bohemians. Henry's kingdom was made up of a confederation of tribes, for the idea of a "King of the Germans" did not yet exist. It was only as the "Holy Roman Empire of the German Nation" that Germany could develop from a union of German tribes to a compact nation. As supporters of the supreme power, as vassals of the emperor, the Germans were united.
This imperial policy was continued by Otto I, the Great (936-73). During his long reign Otto sought to found a strong central power in Germany, an effort at once opposed by the particularistic powers of Germany, who took advantage of disputes in the royal family. Otto proved the necessity of a strong government by his victory over the Magyars near Augsburg (955), one result of which was the reestablishment of the East Mark. After this he was called to Rome by John XII, who had been threatened by Berengarius II of Italy, and by making a treaty that secured to the imperial dignity a share in the election of the pope, he attained the imperial crown, 2 February, 962. It was necessary for Otto to obtain imperial power in order to carry out his politico-ecclesiastical policy. His intention was to make the Church an organic feature of the German constitution. This he could only do if the Church was absolutely under his control, and this could not be attained unless the papacy and Italy were included within the sphere of his power. The emperor's aim was to found his royal power among the Germans, who were strongly inclined to particularism, upon a close union of Church and State. The Germans had now revived the empire and had freed the papacy from its unfortunate entanglement with the nobility of the city of Rome. The papacy rapidly regained strength and quickly renewed the policy of Nicholas I . By safeguarding the unity of the Church of Western Europe the Germans protected both the peaceful development of civilization, which was dependent upon religion, and the progress of culture which the Church spread. Thus the Germans, in union with the Church, founded the civilization of Western Europe. For Germany itself the heroic age of the medieval emperors was a period of progress in learning. The renaissance of antiquity during the era of the Ottos was hardly more than superficial. Nevertheless it denoted a development in learning, throughout ecclesiastical in character, in marked contrast to the tendencies in the same age of the grammarian Wilgard at Ravenna, who sought to revive not only the literature of ancient times, but also the ideas of antiquity, even when they opposed Christian ideas. Germany now boldly assumed the leadership of Western Europe and thus prevented any other power from claiming the supremacy. Moreover the new empire sought to assert its universal character in France, as well as in Burgundy and Italy. Otto also fixed his eyes on Lower Italy, which was in the hands of the Greeks, but he preferred a peaceful policy with Byzantium. He therefore married his son Otto II, in 972, to the Greek Princess Theophano.
Otto II (973-83) and his son Otto III (983-1002) firmly upheld the union with the Church inaugurated by Otto I. Otto II aimed at a great development of his power along the Mediterranean; these plans naturally turned his mind from a national German policy. His campaign against the Saracens, however, came to a disastrous end in Calabria in 982, and he did not long survive the calamity. His romantic son sought to bring about a complete revival of the ancient empire, the centre of which was to be Rome, as in ancient times. There, in union with the pope, he wished to establish the true Kingdom of God. The pope and the emperor were to be the wielders of a power one and indivisible. This idealistic policy, full of vague abstractions, led to severe German losses in the east, for the Poles and Hungarians once more gained their independence. In Italy Arduin of Ivrea founded a new kingdom; naturally enough the Apennine Peninsula revolted against the German imperial policy. Without possession of Italy, however, the empire was impossible, and the blessings of the Ottonian theory of government were now manifest. The Church became the champion of the unity and legitimacy of the empire.
After the death of Otto III and the collapse of imperialism the Church raised Henry II (1002-24) to the throne. Henry, reviving the policy of Otto I which had been abandoned by Otto III, made Germany and the German Church the basis of his imperial system; he intended to rule the Church as Otto I had done. In 1014 he defeated Arduin and thus attained the Imperial crown. The sickly ruler, whose nervousness caused him to take up projects of which he quickly tired, did his best to repair the losses of the empire on its eastern frontier. He was not able, however, to defeat the Polish King Boleslaw II: all he could do was to strengthen the position of the Germans on the Elbe River by an alliance with the Lusici, a Slavonic tribe. Towards the end of his reign a bitter dispute broke out between the emperor and the bishops. At the Synod of Seligenstadt, in 1023, Archbishop Aribo of Mainz, who was an opponent of the Reform of Cluny, made an appeal to the pope without the permission of the bishop. This ecclesiastical policy of Aribo's would have led in the end to the founding of a national German Church independent of Rome. The greater part of the clergy supported Aribo, but the emperor held to the party of reform. Henry, however, did not live to see the quarrel settled.
With Conrad II (1024-39) began the sway of the Franconian (Salian) emperors. The sovereigns of this line were vigorous, vehement, and autocratic rulers. Conrad had natural political ability and his reign is the most flourishing era of medieval imperialism. The international position of the empire was excellent. In Italy Conrad strengthened the German power, and his relations with King Canute of Denmark were friendly. Internal disputes kept the Kingdom of Poland from becoming dangerous; moreover, by regaining Lusatia the Germans recovered the old preponderance against the Poles. Important gains were also made in Burgundy, whereby the old Romanic states, France and Italy, were for a long time separated and the great passes of the Alps controlled by the Germans. The close connexion with the empire enabled the German population of north-western Burgundy to preserve its nationality. Conrad had also kept up the close union of the State with the Church and had maintained his authority over the latter. He claimed for himself the same right of ruling the Church that his predecessors had exercised, and like them appointed bishops and abbots ; he also reserved to himself the entire control of the property of the Church . Conrad's ecclesiastical policy, however, lacked definiteness; he failed to understand the most important interests of the Church, nor did he grasp the necessity of reform. Neither did he do anything to raise the papacy, discredited by John XIX and Benedict IX, from its dependence on the civil rulers of Rome. The aim of his financial policy was economic emancipation from the Church ; royal financial officials took their place alongside of the ministeriales , or financial agents, of the bishops and monasteries. Conrad sought to rest his kingdom in Germany on these royal officials and on the petty vassals. In this way the laity was to be the guarantee of the emperor's independence of the episcopate. As he pursued the same methods in Italy, he was able to maintain an independent position between the bishops and the petty Italian despots who were at strife with one another. Thus the ecclesiastical influence in Conrad's theory of government becomes less prominent.
This statesmanlike sovereign was followed by his son, the youthful Henry III (1039-56). Unlike his father Henry had a good education ; he had also been trained from an early age in State affairs. He was a born ruler and allowed himself to be influenced by no one; to force of character and courage he added a strong sense of duty. His foreign policy was at first successful. He established the suzerainty of the empire over Hungary, without, however, being always able to maintain it; Bohemia also remained a dependent state. The empire gained a dominant position in Western Europe, and a sense of national pride was awakened in the Germans that opened the way for a national spirit. But the aim of these national aspirations, the hegemony in Western Europe, was a mere phantom. Each time an emperor went to Italy to be crowned that country had to be reconquered. Even at this very time the imperial supremacy was in great danger from the threatened conflict between the imperial and the sacerdotal power, between Church and State. The Church, the only guide on earth to salvation, had attained dominion over mankind, whom it strove to wean from the earthly and to lead to the spiritual. The glaring contrast between the ideal and the reality awoke in thousands the desire to leave the world. A spirit of asceticism, which first appeared in France, took possession of many hearts. As early as the era of the first Saxon emperors the attempt was made to introduce the reform movement of Cluny into Germany, and in the reign of Henry III this reform had become powerful. Henry himself laid much more stress than his predecessors on the ecclesiastical side of his royal position. His religious views led him to side with the men of Cluny. The great mistake of his ecclesiastical policy was the belief that it was possible to promote this reform of the Church by laying stress on his suzerain authority. He repeatedly called and presided over synods and issued many decisions in Church affairs. His fundamental mistake, the thought that he could transform the Church in the manner desired by the party of reform and at the same time maintain his dominion over it, was also evident in his relations with the papacy. He sought to put an end to the disorder at Rome, caused by the unfortunate schism, by the energetic measure of deposing the three contending popes and raising Clement II to the Apostolic See. Clement crowned him emperor and made him Patrician of Rome. Thus Henry seemed to have regained the same control over the Church that Otto had exercised. But the papacy, purified by the elevated conceptions of the party of reform and freed by Henry from the influence of the degenerate Roman aristocracy, strove to be absolutely independent. The Church was now to be released from all human bonds. The chief aims of the papal policy were the celibacy of the clergy, the presentation of ecclesiastical offices by the Church alone, and the attainment by these means of as great a centralization as possible. Henry had acted with absolute honesty in raising the papacy, but he did not intend that it should outgrow his control. Sincerely pious, he was convinced of the possibility and necessity of complete accord between empire and papacy. His fanciful policy became an unpractical idealism. Consequently the monarchical power began rapidly to decline in strength. Hungary regained freedom, the southern part of Italy was held by the Normans, and the Duchy of Lorraine, already long a source of trouble, maintained its hostility to the king. 
By the close of the reign of Henry III discontent was universal in the empire, thus permitting a growth of the particularistic powers, especially of the dukes.
When Henry III died Germany had reached a turning-point in its history. His wife Agnes assumed the regency for their four-year-old son, Henry IV (1056-1106), and at once showed her incompetence for the position by granting the great duchies to opponents of the crown. She also sought the support of the lesser nobility and thus excited the hatred of the great princes. A conspiracy of the more powerful nobles, led by Archbishop Anno (Hanno) of Cologne, obtained possession of the royal child by a stratagem at Kaiserswert and took control of the imperial power. Henry IV, however, preferred the guidance of Adalbert, Archbishop of Bremen, who was able for the moment to give the governmental policy a more national character. Thus in 1063 he restored German influence over Hungary, and the aim of his internal policy was to strengthen the central power. At the Diet of Tribur, 1066, however, he was overthrown by the particularists, but the king by now was able to assume control for himself. In the meantime the papacy had been rapidly advancing towards absolute independence. The Curia now extended the meaning of simony to the granting of an ecclesiastical office by a layman and thus demanded an entire change in the conditions of the empire and placed itself in opposition to the imperial power. The ordinances passed in 1059 for the regulation of the papal elections excluded all imperial rights in the same. Conditions in Italy grew continually more unfavourable for the empire. The chief supporters of the papal policy were the Normans, over whom the pope claimed feudal suzerainty. The German bishops also yielded more and more to the authority of Rome ; the Ottonian theory of government was already undermined. The question was now raised: In the Kingdom of God on earth who is to rule, the emperor or the pope ? In Rome this question had long been settled. The powerful opponent of Henry, Gregory VII , claimed that the princes should acknowledge the supremacy of the Kingdom of God, and that the laws of God should be everywhere obeyed and carried out. The struggle which now broke out was in principle a conflict concerning the respective rights of the empire and the papacy. But the conflict soon shifted from the spiritual to the secular domain; at last it became a conflict for the possession of Italy, and during the struggle the spiritual and the secular were often confounded. Henry was not a match for the genius of Gregory. He was courageous and intelligent and, though of a passionate nature, fought with dogged obstinacy for the rights of his monarchical power. But Gregory as the representative of the reform movement in the Church, demanding complete liberty for the Church, was too powerful for him. Aided by the inferior nobility, Henry sought to make himself absolute. The particularistic powers, however, insisted upon the maintenance of the constitutional limits of the monarchy. The revolt of the Saxons against the royal authority was led both by spiritual and secular princes, and it was not until after many humiliations that Henry was able to conquer them in the battle on the Unstrut (1075). Directly after this began his conflict with the papacy. The occasion was the appointment of an Archbishop of Milan by the emperor without regard to the election already held by the ecclesiastical party. Gregory VII at once sent a threatening letter to Henry. Angry at this, Henry had the deposition of the pope declared at the Synod of Worms, 24 January, 1076. 
Gregory now felt himself released from all restraint and excommunicated the emperor. On 16 October, 1076, the German princes decided that the pope should pronounce judgment on the king and that unless Henry were released from excommunication within a year and a day he should lose his crown. Henry now sought to break the alliance between the particularists and the pope by a clever stroke. The German princes he could not win back to his cause, but he might gain over the pope. By a penitential pilgrimage he forced the pope to grant him absolution. Henry appealed to the priest, and Gregory showed his greatness. He released the king from the ban, although by so doing he injured his own interests, which required that he should keep his agreement to act in union with the German princes.
Thus the day of Canossa (2 and 3 February, 1077) was a victory for Henry. It did not, however, mean the coming of peace, for the German confederates of the pope did not recognize the reconciliation at Canossa, and elected Duke Rudolf of Swabia as king at Forchheim, 13 March, 1077. A civil war now broke out in Germany. After long hesitation Gregory finally took the side of Rudolf and once more excommunicated Henry. Soon after this however, Rudolf lost both throne and life in the battle of Hohenmoelsen not far from Merseburg. Henry now abandoned his policy of absolutism, recognizing its impracticability. He returned to the Ottonian theory of government, and the German episcopate, which was embittered by the severity of the ecclesiastical administration of Rome, now came over to the side of the king. Relying upon this strife within the Church, Henry caused Gregory to be deposed by a synod held at Brixen and Guibert of Ravenna to be elected pope as Clement III . Accompanied by this pope, he went to Rome and was crowned emperor there in 1084. Love for the rights of the Church drove the great Gregory into exile where he soon after died. After the death of his mighty opponent Henry was more powerful than the particularists who had elected a new rival king, Herman of Luxembourg. In 1090 Henry went again to Italy to defend his rights against the two powerful allies of the papacy, the Normans in the south and the Countess Matilda of Tuscany in the north. While he was in Italy his own son Conrad declared himself king in opposition to him. Overwhelmed by this blow, Henry remained inactive in Italy, and it was not until 1097 that he returned to Germany. No reconciliation had been effected between him and Pope Urban II . In Germany Henry sought to restore internal peace, and this popular policy intensified the particularism of the princes. In union with these the king's son, young Henry, rebelled against his father. The pope supported the revolt, and the emperor was unable to cope with so many opponents. In 1105 he abdicated. After this he once more asserted his rights, but death soon closed (1106) this troubled life filled with so many thrilling and tr
Some of the slaves remained where they were and went to work for the masters that they had previously slaved under. They were paid wages instead of working for free, but they remained because they had gotten along well with their masters and knew that if they remained there they would be able to work and eventually buy land so that they and their family could have their own place to live. Sometimes the masters would even give the freed individuals that they actually liked a small piece of their land so that they could build something. This was one of the other ways that they were able to acquire land from Caucasians.
Land grants from the government also gave them a chance to build churches and other buildings, as they were still not allowed to share any of these with Caucasians. Many people believe that the Emancipation Proclamation worked to make African-Americans more equal, but the only thing that it did was give them their freedom. There was still no equality, and many Caucasians still had a very strong hatred of African-Americans which extended far beyond what any government statement could have removed from them. In other words, these people believed that African-Americans were worthless and really no better than animals, regardless of the Emancipation Proclamation or anything else that could be made into law.
Most African-Americans took jobs that did not give them an opportunity to better themselves, and very few of them could read or write. The opposition to their freedom was extremely strong in the South, and Caucasian individuals who lived in the Southern states worked to keep African-Americans down as far as they possibly could and not allow them to look for and find a way up the ladder of success. It was made a crime to educate African-Americans, and this helped Caucasians to keep African-Americans from gaining further ground once they were freed.
This did not last long, however, and schools for African-Americans were eventually built so that more individuals could be educated. The phrase 'separate but equal' came about during this time, as many individuals struggled to stay within the confines of the laws while still ensuring that African-Americans were kept from doing many of the things that Caucasian individuals enjoyed. They were subject to Jim Crow laws and could not sit on the same train cars or ride on the same buses as Caucasian individuals. Some likely believed that they were better off as slaves because at least they had food and a place to live.
However, most African-Americans worked extremely hard even though it was clear that Caucasians were trying to hold them down, and some of them actually succeeded in getting quite far, although there was still a lack of equality that has carried throughout history. It still remains today in the hearts and minds of many individuals in this country. However, it is clear that Reconstruction in the South was badly needed and that it was important that it happened, because African-Americans had been treated cruelly and used as slaves who were assumed to be little better than animals for many years.
Naturally, not all masters were cruel to their slaves, and not all Caucasians hated African-Americans, but this was the pervading theme, especially in the South, where slavery remained longer than it did in the North. The huge plantations and the need for people to keep them running smoothly likely contributed to this.
As it was, the federal government left most of the enforcement up to the states, and the Southern states were not interested in doing any of that. Still angry and bruised after losing the war to the North, they rebelled and tried to keep African-Americans beaten down and in slavery as long as they could.
Freedmen in the North could find better jobs than those in the South, but they did not get enough of the jobs to greatly alter the racial makeup of the workforce. Most people were still reluctant to hire African-Americans, although employers were more open to it in the North than they were in the South, even once slavery in both areas of the country had been completely wiped out. The Reconstruction after the Civil War provided much insight into how the country works and what the more obvious shared values are, and it also provided a basis for some of the laws and ideals that are present today.
African-Americans in many parts of the world are still not treated equally. There are laws to protect them, just as there are laws to protect other groups, but the opinions held by other individuals have much to do with whether African-Americans are really accepted or not. In light of this, it would seem that some parts of the Reconstruction period have never really ended and are still going on today. Had it not been for the secession of the Southern states and the fighting that started the Civil War, who knows where this country would be today? The society of that time was far different than the society of this time, and the economy is much different as well. There are some who feel that the United States grew too fast economically, though, and could not sustain it, which is why the economic and stock market collapses that have plagued its history continue to be seen to some degree.
During the years of secession, civil war, and Reconstruction, so much was happening that many people had no time to take stock of the changes occurring in their country. They took a great deal for granted and gave little thought to building their economic, political, and social ideals on a strong foundation; instead they rushed into actions that seemed like good ideas at the time and paid the price later. Even the secession of the Southern states from the Union was poorly planned, and in the long term it did not work. The South lost the war, conceded defeat, and rejoined the Union, but real harmony was a long time coming.
There was bitterness, disappointment, and a desire to go on doing things their own way. That attitude was only natural for the Southern states, yet few traces remain of the disharmony that once plagued this country. Most of the animosity has faded, and the 'rivalry' that remains between North and South is mostly of the friendly kind. It is hard to imagine what the country would be like today if the South had won, or if North and South had remained separate; it would be far different, economically and socially.
Health equity arises from access to the social determinants of health, specifically from wealth, power and prestige. Individuals who have consistently been deprived of these three determinants are significantly disadvantaged by health inequities, and face worse health outcomes than those who are able to access certain resources. It is not equity to simply provide every individual with the same resources; that would be equality. In order to achieve health equity, resources must be allocated based on an individual need-based principle.
According to the World Health Organization, "Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity". The quality of health and how health is distributed among economic and social status in a society can provide insight into the level of development within that society. Health is a basic human right and human need, and all human rights are interconnected. Thus, health must be discussed along with all other basic human rights.
Health equity, sometimes also referred to as health disparity, is defined as differences in the quality of health and healthcare across different populations. Health equity is different from health equality, as it refers to the absence of disparities in controllable or remediable aspects of health. It is not possible to work towards complete equality in health, as there are some factors of health that are beyond human influence. Inequity implies some kinds of social injustice. Thus, if one population dies younger than another because of genetic differences, a non-remediable/controllable factor, we tend to say that there is a health inequality. On the other hand, if a population has a lower life expectancy due to lack of access to medications, the situation would be classified as a health inequity. These inequities may include differences in the "presence of disease, health outcomes, or access to health care" between populations with a different race, ethnicity, gender, sexual orientation, disability, or socioeconomic status. Although it is important to recognize the difference in health equity and equality, having equality in health is essential to begin achieving health equity. The importance of equitable access to healthcare has been cited as crucial to achieving many of the Millennium Development Goals.
Socioeconomic status is both a strong predictor of health and a key factor underlying health inequities across populations. Poor socioeconomic status has the capacity to profoundly limit the capabilities of an individual or population, manifesting itself through deficiencies in both financial and social capital. It is clear how a lack of financial capital can compromise the capacity to maintain good health. In the UK, prior to the institution of the NHS reforms in the early 2000s, it was shown that income was an important determinant of access to healthcare resources. Because one's job or career is a primary conduit for both financial and social capital, work is an important, yet underrepresented, factor in health inequities research and prevention efforts. Maintenance of good health through the utilization of proper healthcare resources can be quite costly and therefore unaffordable to certain populations.
In China, for instance, the collapse of the Cooperative Medical System left many of the rural poor uninsured and unable to access the resources necessary to maintain good health. Increases in the cost of medical treatment made healthcare increasingly unaffordable for these populations. This issue was further perpetuated by the rising income inequality in the Chinese population. Poor Chinese were often unable to undergo necessary hospitalization and failed to complete treatment regimens, resulting in poorer health outcomes.
Similarly, in Tanzania, it was demonstrated that wealthier families were far more likely to bring their children to a healthcare provider: a significant step towards stronger healthcare. Some scholars have noted that unequal income distribution itself can be a cause of poorer health for a society as a result of "underinvestment in social goods, such as public education and health care; disruption of social cohesion and the erosion of social capital".
The role of socioeconomic status in health equity extends beyond simple monetary restrictions on an individual's purchasing power. In fact, social capital plays a significant role in the health of individuals and their communities. It has been shown that those who are better connected to the resources provided by the individuals and communities around them (those with more social capital) live longer lives. The segregation of communities on the basis of income occurs in nations worldwide and has a significant impact on quality of health as a result of a decrease in social capital for those trapped in poor neighborhoods. Social interventions, which seek to improve healthcare by enhancing the social resources of a community, are therefore an effective component of campaigns to improve a community's health. A 1998 epidemiological study showed that community healthcare approaches fared far better than individual approaches in the prevention of heart disease mortality.
Unconditional cash transfers for reducing poverty used by some programs in the developing world appear to lead to a reduction in the likelihood of being sick. Such evidence can guide resource allocations to effective interventions.
Research has shown that the quality of health care does indeed vary among different socioeconomic groups. Children in families of low socioeconomic status are the most susceptible to health inequities. Equity, Social Determinants and Public Health Programmes (2010) is a book edited by Blas and Sivasankara that includes a chapter discussing health equities among children. Gathering information from 100 international surveys, this chapter states that children in poor families under 5 years of age are likely to face health disparities because the quality of their health depends on others providing for them; young children are not capable of maintaining good health on their own. In addition, these children have higher mortality rates than those in richer families due to malnutrition. Because of their low socioeconomic status, these children can find it challenging to receive health care. Children in poor families are less likely to receive health care in general, and when they do have access to care, that care is likely to be of lower quality.
Education is an important factor in healthcare utilization, though it is closely intertwined with economic status. An individual may not seek care from a medical professional if they do not know the risks of going without care or the value of proper treatment. In Tajikistan, since the nation gained its independence, the likelihood of giving birth at home has increased rapidly among women with lower educational status. Education also has a significant impact on the quality of prenatal and maternal healthcare. Mothers with primary education consulted a doctor during pregnancy at significantly lower rates (72%) when compared to those with a secondary education (77%), technical training (88%) or a higher education (100%). There is also evidence for a correlation between socioeconomic status and health literacy; one study showed that wealthier Tanzanian families were more likely to recognize disease in their children than those from lower income backgrounds.
Education inequities are also closely associated with health inequities. Individuals with lower levels of education are more likely to incur greater health risks such as substance abuse, obesity, and injuries both intentional and unintentional. Education is also associated with greater comprehension of health information and services necessary to make the right health decisions, as well as being associated with a longer lifespan. Individuals with high grades have been observed to display better levels of protective health behavior and lower levels of risky health behaviors than their less academically gifted counterparts. Factors such as poor diets, inadequate physical activity, physical and emotional abuse, and teenage pregnancy all have significant impacts on students' academic performance and these factors tend to manifest themselves more frequently in lower-income individuals.
Spatial disparities in health
For some populations, access to healthcare and health resources is physically limited, resulting in health inequities. For instance, an individual might be physically incapable of traveling the distances required to reach healthcare services, or long distances can make seeking regular care unappealing despite the potential benefits.
In 2019, the federal government identified nearly 80 percent of rural America as "medically underserved," lacking in skilled nursing facilities, as well as rehabilitation, psychiatric and intensive care units. In rural areas, there are approximately 68 primary care doctors per 100,000 people, whereas there are 84 doctors per 100,000 in urban centers. According to the National Rural Health Association, almost 10% of rural counties had no doctors in 2017. Rural communities face lower life expectancies and increased rates of diabetes, chronic disease, and obesity.
Costa Rica, for example, has demonstrable health spatial inequities with 12–14% of the population living in areas where healthcare is inaccessible. Inequity has decreased in some areas of the nation as a result of the work of healthcare reform programs, however those regions not served by the programs have experienced a slight increase in inequity.
China experienced a serious decrease in spatial health equity following the Chinese economic revolution in the 1980s as a result of the degradation of the Cooperative Medical System (CMS). The CMS provided an infrastructure for the delivery of healthcare to rural locations, as well as a framework to provide funding based upon communal contributions and government subsidies. In its absence, there was a significant decrease in the quantity of healthcare professionals (35.9%), as well as functioning clinics (from 71% to 55% of villages over 14 years) in rural areas, resulting in inequitable healthcare for rural populations. The significant poverty experienced by rural workers (some earning less than 1 USD per day) further limits access to healthcare, and results in malnutrition and poor general hygiene, compounding the loss of healthcare resources. The loss of the CMS has had noticeable impacts on life expectancy, with rural regions such as areas of Western China experiencing significantly lower life expectancies.
Similarly, populations in rural Tajikistan experience spatial health inequities. A study by Jane Falkingham noted that physical access to healthcare was one of the primary factors influencing quality of maternal healthcare. Further, many women in rural areas of the country did not have adequate access to healthcare resources, resulting in poor maternal and neonatal care. These rural women were, for instance, far more likely to give birth in their homes without medical oversight.
Ethnic and racial disparities
Along with the socioeconomic factor of health disparities, race is another key factor. The United States historically had large disparities in health and access to adequate healthcare between races, and current evidence supports the notion that these racially centered disparities continue to exist and are a significant social health issue. The disparities in access to adequate healthcare include differences in the quality of care based on race and overall insurance coverage based on race. A 2002 study in the Journal of the American Medical Association identifies race as a significant determinant in the level of quality of care, with blacks receiving lower quality care than their white counterparts. This is in part because members of ethnic minorities such as African Americans either earn low incomes or live below the poverty line. According to 2007 Census Bureau data, African American families made an average of $33,916, while their white counterparts made an average of $54,920. Partly due to a lack of affordable health care, African Americans have a higher rate of death from treatable or preventable causes. According to a study conducted in 2005 by the Office of Minority Health, part of the U.S. Department of Health and Human Services, African American men were 30% more likely than white men to die from heart disease. African American women were also 34% more likely to die from breast cancer than their white counterparts. Additionally, among African American and Latino infants, mortality rates are 2 to 3 times higher than among other racial groups.
There are also considerable racial disparities in access to insurance coverage, with ethnic minorities generally having less insurance coverage than non-minorities. For example, Hispanic Americans tend to have less insurance coverage than white Americans and as a result receive less regular medical care. The level of insurance coverage is directly correlated with access to healthcare, including preventive and ambulatory care. A 2010 study on racial and ethnic disparities in health done by the Institute of Medicine has shown that the aforementioned disparities cannot solely be accounted for in terms of certain demographic characteristics such as insurance status, household income, education, age, geographic location and quality of living conditions. Even when the researchers corrected for these factors, the disparities persisted. Slavery has contributed to disparate health outcomes for generations of African Americans in the United States.
Ethnic health inequities also appear in nations across the African continent. A survey of the child mortality of major ethnic groups across 11 African nations (Central African Republic, Côte d'Ivoire, Ghana, Kenya, Mali, Namibia, Niger, Rwanda, Senegal, Uganda, and Zambia) was published in 2000 by the WHO. The study described the presence of significant ethnic disparities in the child mortality rates among children younger than 5 years old, as well as in education and vaccine use. In South Africa, the legacy of apartheid still manifests itself as differential access to social services, including healthcare based upon race and social class, and the resultant health inequities. Further, evidence suggests systematic disregard of indigenous populations in a number of countries. The Pygmies of Congo, for instance, are excluded from government health programs, discriminated against during public health campaigns, and receive poorer overall healthcare.
A 1995 survey of five European countries (Sweden, Switzerland, the UK, Italy, and France) noted that only Sweden provided access to translators for 100% of those who needed it, while the other countries lacked this service, potentially compromising healthcare for non-native populations. Given that non-natives composed a considerable section of these nations (6%, 17%, 3%, 1%, and 6% respectively), this could have significant detrimental effects on the health equity of these nations. In France, an older study noted significant differences in access to healthcare between native French populations and non-French/migrant populations based upon health expenditure; however, this was not fully independent of the poorer economic and working conditions experienced by these populations.
A 1996 study of race-based health inequity in Australia revealed that Aborigines experienced higher rates of mortality than non-Aborigine populations. Aborigine populations experienced 10 times greater mortality in the 30–40 age range, a 2.5 times greater infant mortality rate, and a 3 times greater age-standardized mortality rate. Rates of diarrheal diseases and tuberculosis are also significantly greater in this population (16 and 15 times greater respectively), which is indicative of the poor healthcare of this ethnic group. At this point in time, the disparities in life expectancy at birth between indigenous and non-indigenous peoples were widest in Australia, when compared to the US, Canada and New Zealand. In South America, indigenous populations faced similarly poor health outcomes with maternal and infant mortality rates that were significantly higher (up to 3 to 4 times greater) than the national average. The same pattern of poor indigenous healthcare continues in India, where indigenous groups were shown to experience greater mortality at most stages of life, even when corrected for environmental effects.
On February 5, 2021, the head of the World Health Organization (WHO), Tedros Adhanom Ghebreyesus, noted regarding the global inequity in the access to COVID-19 vaccines, that almost 130 countries had not yet given a single dose. In early April 2021, the WHO reported that 87% of existing vaccines had been distributed to the wealthiest countries, while only 0.2% had been distributed to the poorest countries. As a result, one-quarter of the populations of those wealthy countries had already been vaccinated, while only 1 in 500 residents of the poor countries had been vaccinated.
LGBT health disparities
Sexuality is a basis of health discrimination and inequity throughout the world. Homosexual, bisexual, transgender, and gender-variant populations around the world experience a range of health problems related to their sexuality and gender identity, some of which are complicated further by limited research.
In spite of recent advances, LGBT populations in China, India, and Chile continue to face significant discrimination and barriers to care. The World Health Organization (WHO) recognizes that there is inadequate research data about the effects of LGBT discrimination on morbidity and mortality rates in the patient population. In addition, retrospective epidemiological studies on LGBT populations are difficult to conduct because sexual orientation is typically not recorded on death certificates. WHO has proposed that more research about the LGBT patient population is needed for improved understanding of its unique health needs and barriers to accessing care.
Recognizing the need for LGBT healthcare research, the Director of the National Institute on Minority Health and Health Disparities (NIMHD) at the U.S. Department of Health and Human Services designated sexual and gender minorities (SGMs) as a health disparity population for NIH research in October 2016. For the purposes of this designation, the Director defines SGM as "encompass[ing] lesbian, gay, bisexual, and transgender populations, as well as those whose sexual orientation, gender identity and expressions, or reproductive development varies from traditional, societal, cultural, or physiological norms". This designation has prioritized research into the extent, cause, and potential mitigation of health disparities among SGM populations within the larger LGBT community.
While many aspects of LGBT health disparities have yet to be investigated, it is known at this stage that one of the main forms of healthcare discrimination LGBT individuals face is discrimination from healthcare workers or institutions themselves. A systematic literature review of publications in English and Portuguese from 2004–2014 demonstrated significant difficulties in accessing care secondary to discrimination and homophobia from healthcare professionals. This discrimination can take the form of verbal abuse, disrespectful conduct, refusal of care, the withholding of health information, inadequate treatment, and outright violence. In a study analyzing the quality of healthcare for South African men who have sex with men (MSM), researchers interviewed a cohort of individuals about their health experiences, finding that MSM who identified as homosexual felt their access to healthcare was limited due to an inability to find clinics employing healthcare workers who did not discriminate against their sexuality. They also reportedly faced "homophobic verbal harassment from healthcare workers when presenting for STI treatment". Further, MSM who did not feel comfortable disclosing their sexual activity to healthcare workers declined to identify as homosexual, which limited the quality of the treatment they received.
Additionally, members of the LGBT community contend with health care disparities due, in part, to lack of provider training and awareness of the population's healthcare needs. Transgender individuals often consider it more important to provide gender identity (GI) information to providers than sexual orientation (SO) information, because GI better informs safe and appropriate treatment for these patients. Studies regarding patient-provider communication in the LGBT patient community show that providers themselves report a significant lack of awareness regarding the health issues LGBT-identifying patients face. Relatedly, medical schools do not focus much attention on LGBT health issues in their curriculum; the LGBT-related topics that are discussed tend to be limited to HIV/AIDS, sexual orientation, and gender identity.
Among LGBT-identifying individuals, transgender individuals face especially significant barriers to treatment. Many countries still do not have legal recognition of transgender or non-binary gender individuals leading to placement in mis-gendered hospital wards and medical discrimination. Seventeen European states mandate sterilization of individuals who seek recognition of a gender identity that diverges from their birth gender. In addition to many of the same barriers as the rest of the LGBT community, a WHO bulletin points out that globally, transgender individuals often also face a higher disease burden. A 2010 survey of transgender and gender-variant people in the United States revealed that transgender individuals faced a significant level of discrimination. The survey indicated that 19% of individuals experienced a healthcare worker refusing care because of their gender, 28% faced harassment from a healthcare worker, 2% encountered violence, and 50% saw a doctor who was not able or qualified to provide transgender-sensitive care. In Kuwait, there have been reports of transgender individuals being reported to legal authorities by medical professionals, preventing safe access to care. An updated version of the U.S. survey from 2015 showed little change in terms of healthcare experiences for transgender and gender variant individuals. The updated survey revealed that 23% of individuals reported not seeking necessary medical care out of fear of discrimination, and 33% of individuals who had been to a doctor within a year of taking the survey reported negative encounters with medical professionals related to their transgender status.
The stigmatization represented particularly in the transgender population creates a health disparity for LGBT individuals with regard to mental health. The LGBT community is at increased risk for psychosocial distress, mental health complications, suicidality, homelessness, and substance abuse, often complicated by access-based under-utilization or fear of health services. Transgender and gender-variant individuals have been found to experience higher rates of mental health disparity than LGB individuals. According to the 2015 U.S. Transgender Survey, for example, 39% of respondents reported serious psychological distress, compared to 5% of the general population.
These mental health facts are informed by a history of anti-LGBT bias in health care. The Diagnostic and Statistical Manual of Mental Disorders (DSM) listed homosexuality as a disorder until 1973; transgender status was listed as a disorder until 2012. This was amended in 2013 with the DSM-5 when "gender identity disorder" was replaced with "gender dysphoria", reflecting that simply identifying as transgender is not itself pathological and that the diagnosis is instead for the distress a transgender person may experience as a result of the discordance between assigned gender and gender identity.
LGBT health issues have received disproportionately low levels of medical research, leading to difficulties in assessing appropriate strategies for LGBT treatment. For instance, a review of medical literature regarding LGBT patients revealed significant gaps in the medical understanding of cervical cancer in lesbian and bisexual individuals; it is unclear whether its prevalence in this community is a matter of chance or reflects some preventable cause. LGBT people also report poorer cancer care experiences. It is incorrectly assumed that LGBT women have a lower incidence of cervical cancer than their heterosexual counterparts, resulting in lower rates of screening. Such findings illustrate the need for continued research focused on the circumstances and needs of LGBT individuals and the inclusion in policy frameworks of sexual orientation and gender identity as social determinants of health.
A June 2017 review sponsored by the European Commission, as part of a larger project to identify and diminish health inequities, found that LGB people are at higher risk of some cancers and that LGBTI people are at higher risk of mental illness, and that these risks were not adequately addressed. The causes of health inequities were, according to the review, "i) cultural and social norms that preference and prioritise heterosexuality; ii) minority stress associated with sexual orientation, gender identity and sex characteristics; iii) victimisation; iv) discrimination (individual and institutional), and; v) stigma."
Sex and gender in healthcare equity
Both gender and sex are significant factors that influence health. Sex is characterized by female and male biological differences in regards to gene expression, hormonal concentration, and anatomical characteristics. Gender is an expression of behavior and lifestyle choices. Both sex and gender inform each other, and it is important to note that differences between the two genders influence disease manifestation and associated healthcare approaches. Understanding how the interaction of sex and gender contributes to disparity in the context of health allows providers to ensure quality outcomes for patients. This interaction is complicated by the difficulty of distinguishing between sex and gender given their intertwined nature; sex modifies gender, and gender can modify sex, thereby impacting health. Sex and gender can both be considered sources of health disparity; both contribute to men and women’s susceptibility to various health conditions, including cardiovascular disease and autoimmune disorders.
Health disparities in the male population
As sex and gender are inextricably linked in day-to-day life, their union is apparent in medicine. Gender and sex are both components of health disparity in the male population. In non-Western regions, males tend to have a health advantage over women due to gender discrimination, evidenced by infanticide, early marriage, and domestic abuse for females. In most regions of the world, the mortality rate is higher for adult men than for adult women; for example, adult men suffer from fatal illnesses with more frequency than females. The leading causes of the higher male death rate are accidents, injuries, violence, and cardiovascular diseases. In a number of countries, males also face a heightened risk of mortality as a result of behavior and greater propensity for violence.
Physicians tend to offer invasive procedures to male patients more often than to female patients. Furthermore, men are more likely to smoke than women and to experience smoking-related health complications later in life as a result; this trend is also observed in regard to other substances, such as marijuana, in Jamaica, where the rate of use is two to three times higher for men than for women. Lastly, men are more likely to have severe chronic conditions and a lower life expectancy than women in the United States.
Health disparities in the female population
Gender and sex are also components of health disparity in the female population. The 2012 World Development Report (WDR) noted that women in developing nations experience greater mortality rates than men in developing nations. Additionally, women in developing countries have a much higher risk of maternal death than those in developed countries. The highest risk of dying during childbirth is 1 in 6 in Afghanistan and Sierra Leone, compared to nearly 1 in 30,000 in Sweden—a disparity that is much greater than that for neonatal or child mortality.
While women in the United States tend to live longer than men, they generally are of lower socioeconomic status (SES) and therefore have more barriers to accessing healthcare. Being of lower SES also tends to increase societal pressures, which can lead to higher rates of depression and chronic stress and, in turn, negatively impact health. Women are also more likely than men to suffer from sexual or intimate-partner violence both in the United States and worldwide. In Europe, women who grew up in poverty are more likely to have lower muscle strength and higher disability in old age.
Women have better access to healthcare in the United States than they do in many other places in the world. In one population study conducted in Harlem, New York, 86% of women reported having private or publicly assisted health insurance, while only 74% of men reported having any health insurance. This trend is representative of the general population of the United States.
In addition, women's pain tends to be treated less seriously and initially ignored by clinicians when compared to their treatment of men's pain complaints. Historically, women have not been included in the design or practice of clinical trials, which has slowed the understanding of women's reactions to medications and created a research gap. This has led to post-approval adverse events among women, resulting in several drugs being pulled from the market. However, the clinical research industry is aware of the problem, and has made progress in correcting it.
Health disparities are also due in part to cultural factors that involve practices based not only on sex, but also on gender status. For example, in China, health disparities have distinguished medical treatment for men and women due to the cultural phenomenon of preference for male children. Recently, gender-based disparities have decreased as females have begun to receive higher-quality care. Additionally, a girl's chances of survival are affected by the presence of older siblings; while girls have the same chance of survival as boys if they are the oldest girl, they have a higher probability of being aborted or dying young if they have an older sister.
In India, gender-based health inequities are apparent in early childhood. Many families provide better nutrition for boys in the interest of maximizing future productivity given that boys are generally seen as breadwinners. In addition, boys receive better care than girls and are hospitalized at a greater rate. The magnitude of these disparities increases with the severity of poverty in a given population.
Additionally, the cultural practice of female genital mutilation (FGM) is known to impact women's health, though it is difficult to know the worldwide extent of this practice. While generally thought of as a Sub-Saharan African practice, it may have roots in the Middle East as well. The estimated 3 million girls who are subjected to FGM each year potentially suffer both immediate and lifelong negative effects. Immediately following FGM, girls commonly experience excessive bleeding and urine retention. Long-term consequences include urinary tract infections, bacterial vaginosis, pain during intercourse, and difficulties in childbirth that include prolonged labor, vaginal tears, and excessive bleeding. Women who have undergone FGM also have higher rates of post-traumatic stress disorder (PTSD) and herpes simplex virus 2 (HSV2) than women who have not.
Health inequality and environmental influence
Minority populations have increased exposure to environmental hazards, including a lack of neighborhood resources, structural and community factors, and residential segregation, that together result in a cycle of disease and stress. The environment that surrounds us can influence individual behaviors and lead to poor health choices and therefore poor outcomes. Minority neighborhoods have been continuously noted to have more fast food chains and fewer grocery stores than predominantly white neighborhoods. These food deserts affect a family's ability to have easy access to nutritious food for their children. This lack of nutritious food extends beyond the household into schools, which host a variety of vending machines and serve heavily processed foods. These environmental conditions have social ramifications, and for the first time in US history it is projected that the current generation will live shorter lives than its predecessors.
In addition, minority neighborhoods carry various health hazards that result from living close to highways, toxic waste facilities, and generally dilapidated structures and streets. These environmental conditions create varying degrees of health risk, from noise pollution to carcinogenic exposures such as asbestos and radon, that result in increased chronic disease, morbidity, and mortality. The quality of the residential environment, such as damaged housing, has been shown to increase the risk of adverse birth outcomes, which is reflective of a community's health. This occurs through exposure to lead in paint and lead-contaminated soil as well as indoor air pollutants such as second-hand smoke and fine particulate matter. Housing conditions can create varying degrees of health risk that lead to complications of birth and long-term consequences in the aging population. In addition, occupational hazards can add to the detrimental effects of poor housing conditions. It has been reported that a greater number of minorities work in jobs that have higher rates of exposure to toxic chemicals, dust, and fumes. One example of this is the environmental hazards that poor Latino farmworkers face in the United States. This group is exposed to high levels of particulate matter and pesticides on the job, which have contributed to increased cancer rates, lung conditions, and birth defects in their communities.
Racial segregation is another environmental factor that occurs through the discriminatory actions of organizations and individuals within the real estate industry, whether in housing markets or rentals. Even though residential segregation is noted in all minority groups, blacks tend to be segregated regardless of income level when compared to Latinos and Asians. Thus, segregation results in minorities clustering in poor neighborhoods that have limited employment, medical care, and educational resources, which is associated with high rates of criminal behavior. In addition, segregation affects the health of individual residents because the environment is not conducive to physical exercise: unsafe neighborhoods lack recreational facilities and park space. Racial and ethnic discrimination adds an additional element to the environment that individuals have to interact with daily. Individuals who report discrimination have been shown to have an increased risk of hypertension in addition to other physiological stress-related effects. The combined weight of these environmental, structural, and socioeconomic stressors further compromises psychological and physical well-being, which leads to poor health and disease.
Individuals living in rural areas, especially poor rural areas, have access to fewer health care resources. Although 20 percent of the U.S. population lives in rural areas, only 9 percent of physicians practice in rural settings. Individuals in rural areas typically must travel longer distances for care, experience long waiting times at clinics, or are unable to obtain the necessary health care they need in a timely manner. Rural areas characterized by a largely Hispanic population average 5.3 physicians per 10,000 residents compared with 8.7 physicians per 10,000 residents in nonrural areas. Financial barriers to access, including lack of health insurance, are also common among the urban poor.
Disparities in access to health care
Reasons for disparities in access to health care are many, but can include the following:
- Lack of a regular source of care. Without access to a regular source of care, patients have greater difficulty obtaining care, fewer doctor visits, and more difficulty obtaining prescription drugs. Compared to whites, minority groups in the United States are less likely to have a doctor they go to on a regular basis and are more likely to use emergency rooms and clinics as their regular source of care. In the United Kingdom, which is much more racially harmonious, this issue arises for a different reason; since 2004, NHS GPs have not been responsible for care outside normal GP surgery opening hours, leading to significantly higher attendances in A&E.
- Lack of financial resources. Although the lack of financial resources is a barrier to health care access for many Americans, the impact on access appears to be greater for minority populations.
- Legal barriers. Access to medical care by low-income immigrant minorities can be hindered by legal barriers to public insurance programs. For example, in the United States federal law bars states from providing Medicaid coverage to immigrants who have been in the country fewer than five years. Another example could be when a non-English speaking person attends a clinic where the receptionist does not speak the person's language. This is mostly seen in people who have limited English proficiency, or LEP.
- Structural barriers. These barriers include poor transportation, an inability to schedule appointments quickly or during convenient hours, and excessive time spent in the waiting room, all of which affect a person's ability and willingness to obtain needed care.
- Scarcity of providers. In inner cities, rural areas, and communities with high concentrations of minority populations, access to medical care can be limited due to the scarcity of primary care practitioners, specialists, and diagnostic facilities. This scarcity can also extend to the personnel in the medical laboratory with some geographical regions having significantly diminished access to advanced diagnostic methods and pathology care. In the UK, Monitor (a quango) has a legal obligation to ensure that sufficient provision exists in all parts of the nation.
- The health care financing system. The Institute of Medicine in the United States says fragmentation of the U.S. health care delivery and financing system is a barrier to accessing care. Racial and ethnic minorities are more likely to be enrolled in health insurance plans which place limits on covered services and offer a limited number of health care providers.
- Linguistic barriers. Language differences restrict access to medical care for minorities in the United States who have limited English proficiency.
- Health literacy. This is where patients have problems obtaining, processing, and understanding basic health information. For example, patients with a poor understanding of good health may not know when it is necessary to seek care for certain symptoms. While problems with health literacy are not limited to minority groups, the problem can be more pronounced in these groups than in whites due to socioeconomic and educational factors. A study conducted in Mdantsane, South Africa, found a correlation between maternal education and antenatal visits during pregnancy: mothers with more education tended to use maternal health care services more than those with less education.
- Lack of diversity in the health care workforce. A major reason for disparities in access to care is the cultural difference between predominantly white health care providers and minority patients. Only 4% of physicians in the United States are African American, and Hispanics represent just 5%, percentages that are much lower than these groups' share of the United States population.
- Age. Age can also be a factor in health disparities for a number of reasons. Many older Americans live on fixed incomes, which can make paying for health care expenses difficult. Additionally, they may face other barriers, such as impaired mobility or lack of transportation, that make physically accessing health care services challenging. They may also lack the opportunity to access health information via the internet, as fewer than 15% of Americans over the age of 65 have internet access. This could put older individuals at a disadvantage in terms of accessing valuable information about their health and how to protect it. On the other hand, older individuals in the US (65 or above) are provided with medical care via Medicare.
- Criminalization and lack of research of traditional medicine and mental health treatments. Mental illness accounts for about one-third of adult disability globally. Conventional drug treatments have dominated psychiatry for decades without a breakthrough in mental healthcare. Access to psychedelic-assisted therapy and the decriminalization of psilocybin and other entheogens are questions of health justice.
A major part of the United States' healthcare system is health insurance. The main types of health insurance in the United States include taxpayer-funded health insurance and private health insurance. Funded through state and federal taxes, common examples of taxpayer-funded health insurance include Medicaid, Medicare, and CHIP. Private health insurance is offered in a variety of forms and includes plans such as Health Maintenance Organizations (HMOs) and Preferred Provider Organizations (PPOs). While health insurance increases the affordability of healthcare in the United States, issues of access, along with additional related issues, act as barriers to health equity.
There are many issues due to health insurance that affect health equity, including the following:
- Health Insurance Literacy. Within these health insurance plans, common aspects of the insurance include premiums, deductibles, co-payments, coinsurance, coverage limits, in-network versus out-of-network providers, and prior authorization. According to a United Health survey, only 9% of Americans surveyed understood these health insurance terms. To address issues in finding available insurance plans and confusion around the components of health insurance policies, the Affordable Care Act (ACA) set up state-mandated health insurance marketplaces or health exchanges, where individuals can research and compare different kinds of health care plans and their respective components. Between 2014 and 2020, over 11.4 million people have been able to sign up for health insurance through the Marketplaces. However, most Marketplaces focus more on the presentation of health insurances and their coverages, rather than including detailed explanations of the health insurance terms.
- Lack of universal health care or health insurance coverage. According to the Congressional Budget Office (CBO), 28.9 million people in the United States were uninsured in 2018, and that number was projected to rise to an estimated 35 million people by 2029. Without health insurance, patients are more likely to postpone medical care, go without needed medical care, go without prescription medicines, and be denied access to care. Minority groups in the United States lack insurance coverage at higher rates than whites. This problem does not exist in countries with fully funded public health systems, such as the United Kingdom's NHS.
- Underinsured or inefficient health insurance coverage. While there are many causes of underinsurance, a common one is a plan that combines a low premium, the upfront yearly or monthly amount individuals pay for their insurance policy, with a high deductible, the amount the policyholder must pay out of pocket before the insurer will pay any expenses (illustrated in the sketch below this list). Under the ACA, individuals who could afford health insurance but did not buy it were subject to a fee called the Shared Responsibility Payment. While this mandate was aimed at increasing health insurance rates for Americans, it also led many individuals to sign up for relatively inexpensive health insurance plans that did not provide adequate coverage in order to avoid the penalty. Like those who lack health insurance altogether, these underinsured individuals deal with the consequences of going without needed care.
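To make the premium-deductible trade-off concrete, the following sketch compares an enrollee's total annual cost under two hypothetical plans. Every figure in it (premiums, deductibles, coinsurance rates, out-of-pocket maximums, and the year's billed charges) is an illustrative assumption rather than data from any real insurer, and real plans add further cost-sharing details such as co-payments and network restrictions.

```python
# Illustrative sketch only: all plan parameters and usage figures are hypothetical.
def annual_cost(premium_monthly, deductible, coinsurance, oop_max, billed_charges):
    """Enrollee's rough annual cost: 12 months of premiums plus cost sharing.

    Cost sharing is the deductible paid in full, then the coinsurance share of
    the remaining charges, capped at the plan's out-of-pocket maximum.
    """
    if billed_charges <= deductible:
        cost_sharing = billed_charges
    else:
        cost_sharing = deductible + coinsurance * (billed_charges - deductible)
    cost_sharing = min(cost_sharing, oop_max)
    return 12 * premium_monthly + cost_sharing

bills = 8_000  # hypothetical covered medical bills for the year
low_premium_plan = annual_cost(premium_monthly=150, deductible=7_000,
                               coinsurance=0.30, oop_max=8_500, billed_charges=bills)
higher_premium_plan = annual_cost(premium_monthly=400, deductible=1_000,
                                  coinsurance=0.20, oop_max=4_000, billed_charges=bills)

print(f"Low-premium / high-deductible plan:   ${low_premium_plan:,.0f}")
print(f"Higher-premium / low-deductible plan: ${higher_premium_plan:,.0f}")
```

Under these assumed numbers the low-premium plan costs the enrollee about $9,100 for the year versus about $7,200 for the higher-premium plan, with $7,300 of the former due at the point of care; this is the pattern that can leave nominally insured people underinsured in a year of heavy use.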
In many countries, dental healthcare is less accessible than other kinds of healthcare. In Western countries, dental healthcare providers are present, and private or public healthcare systems typically facilitate access. However, access remains limited for marginalized groups such as the homeless, racial minorities, and those who are homebound or disabled. In Central and Eastern Europe, the privatization of dental healthcare has resulted in a shortage of affordable options for lower-income people. In Eastern Europe, school-age children formerly had access through school programs, but these have been discontinued. Therefore, many children no longer have access to care. Access to services and the breadth of services provided is greatly reduced in developing regions. Such services may be limited to emergency care and pain relief, neglecting preventative or restorative services. Regions like Africa, Asia, and Latin America do not have enough dental health professionals to meet the needs of the populace. In Africa, for example, there is only one dentist for every 150,000 people, compared to industrialized countries which average one dentist per 2,000 people.
Disparities in quality of health care
Health disparities in the quality of care exist along lines of language and ethnicity/race, and include:
Problems with patient-provider communication
Communication is critical for the delivery of appropriate and effective treatment and care, regardless of a patient's race, and miscommunication can lead to incorrect diagnosis, improper use of medications, and failure to receive follow-up care. The patient-provider relationship depends on the ability of both individuals to communicate effectively, and language and culture both play a significant role in communication during a medical visit. Among patients, minorities face greater difficulty communicating with their physicians. In one survey, patients overall reported problems communicating with their providers 19% of the time, including not understanding the doctor, not feeling that the doctor listened, and having questions they did not ask; Hispanic respondents reported the most problems, 33% of the time. Communication has been linked to health outcomes: as communication improves, patient satisfaction increases, which leads to better compliance and, in turn, better health outcomes. Quality of care suffers when patients cannot communicate with their health care providers, so efforts are needed to ensure good communication between patient and provider.

Among patients with limited English proficiency in the United States, the linguistic barrier is even greater. Fewer than half of non-English speakers who say they need an interpreter during clinical visits report having one, and the absence of interpreters adds to the communication barrier. Providers' inability to communicate with limited English proficient patients also leads to more diagnostic procedures, more invasive procedures, and over-prescribing of medications. Language barriers hinder appointment scheduling, prescription filling, and clear communication, and have been associated with declines in health attributable to reduced compliance and delays in seeking care, which can particularly affect refugee health in the United States. Many health-related settings provide interpreter services for their limited English proficient patients, which helps when providers do not speak the same language as the patient. However, there is mounting evidence that patients need to communicate with a language-concordant physician, not simply an interpreter, to receive the best medical care, bond with the physician, and be satisfied with the care experience. Language-discordant patient-physician pairs (for example, a Spanish-speaking patient with an English-speaking physician) may also lead to greater medical expenditures and thus higher costs to the organization.

Additional communication problems result from a decrease or lack of cultural competence among providers. It is important for providers to be aware of patients' health beliefs and practices without being judgmental or reacting negatively. Understanding a patient's view of health and disease is important for diagnosis and treatment, so providers need to assess patients' health beliefs and practices to improve quality of care.
Patient health decisions can be influenced by religious beliefs, mistrust of Western medicine, and familial and hierarchical roles, all of which a white provider may not be familiar with. Other types of communication problems are seen in LGBT health care, including heterosexist attitudes (conscious or unconscious) expressed toward LGBT patients and a lack of understanding of issues such as gynecologic care for lesbians who have not had sex with men.
Provider discrimination occurs when health care providers either unconsciously or consciously treat certain racial and ethnic patients differently from other patients. This may be due to stereotypes that providers hold about ethnic or racial groups. A March 2000 study in Social Science & Medicine suggests that doctors may be more likely to ascribe negative racial stereotypes to their minority patients, even after accounting for education, income, and personality characteristics. Two types of stereotyping may be involved: automatic stereotyping, in which stereotypes are activated automatically and influence judgments and behaviors outside of consciousness, and goal-modified stereotyping, a more conscious process used when a clinician has specific needs (such as time constraints or filling gaps in information) while making complex decisions. Physicians are often unaware of their implicit biases. Some research suggests that ethnic minorities are less likely than whites to receive a kidney transplant once on dialysis or to receive pain medication for bone fractures. Critics question this research and say further studies are needed to determine how doctors and patients make their treatment decisions. Others argue that certain diseases cluster by ethnicity and that clinical decision making does not always reflect these differences.
Lack of preventive care
According to the 2009 National Healthcare Disparities Report, uninsured Americans are less likely to receive preventive services in health care. For example, minorities are not regularly screened for colon cancer, and the death rate for colon cancer has increased among African Americans and Hispanic populations. Furthermore, limited English proficient patients are also less likely to receive preventive health services such as mammograms. Studies have shown that the use of professional interpreters significantly reduces disparities in the rates of fecal occult blood testing, flu immunizations, and Pap smears. In the UK, Public Health England, a universal service free at the point of use that forms part of the NHS, offers regular screening to any member of the population considered to be in an at-risk group (such as individuals over 45) for major diseases such as colon cancer or diabetic retinopathy.
Plans for achieving health equity
There are a multitude of strategies for achieving health equity and reducing disparities outlined in scholarly texts; some examples include:
- Advocacy. Advocacy for health equity has been identified as a key means of promoting favourable policy change. EuroHealthNet carried out a systematic review of the academic and grey literature. It found, amongst other things, that certain kinds of evidence may be more persuasive in advocacy efforts, that practices associated with knowledge transfer and translation can increase the uptake of knowledge, that there are many different potential advocates and targets of advocacy and that advocacy efforts need to be tailored according to context and target. As a result of its work, it produced an online advocacy for health equity toolkit.
- Provider-based incentives to improve healthcare for ethnic populations. One source of health inequity is the unequal treatment of non-white patients in comparison with white patients. Creating provider-based incentives to achieve greater parity between the treatment of white and non-white patients is one proposed solution for eliminating provider bias. These incentives are typically monetary because of money's effectiveness in influencing physician behavior.
- Using evidence-based medicine (EBM). Evidence-based medicine shows promise in reducing healthcare provider bias and, in turn, promoting health equity. In theory, EBM can reduce disparities; however, other research suggests that it might exacerbate them instead. Cited shortcomings include EBM's injection of clinical inflexibility into decision making and its origins as a purely cost-driven measure.
- Increasing awareness. The most frequently cited measure for improving health equity is increasing public awareness. A lack of public awareness is a key reason why there have not been significant gains in reducing health disparities in ethnic and minority populations. Increased public awareness would lead to increased congressional awareness, greater availability of disparity data, and further research into the issue of health disparities.
- The Gradient Evaluation Framework. The evidence base defining which policies and interventions are most effective in reducing health inequalities is extremely weak, so it is important that policies and interventions which seek to influence health inequity be more adequately evaluated. The Gradient Evaluation Framework (GEF) is an action-oriented policy tool that can be applied to assess whether policies will contribute to greater health equity amongst children and their families.
- The AIM framework. In a pilot study, researchers examined the role of AIM (ability, incentives, and management feedback) in reducing the care disparity in pressure-ulcer detection between African American and Caucasian residents. The results showed that, while the program was in place, the provision of (1) training to enhance ability, (2) monetary incentives to enhance motivation, and (3) management feedback to enhance accountability led to a successful reduction in pressure ulcers. Specifically, the detection gap between the two groups decreased. The researchers suggested additional replications of longer duration to assess the effectiveness of the AIM framework.
- Monitoring actions on the social determinants of health. In 2017, citing the need for accountability for the pledges made by countries in the Rio Political Declaration on Social Determinants of Health, the World Health Organization and United Nations Children's Fund called for the monitoring of intersectoral interventions on the social determinants of health that improve health equity.
- Changing the distribution of health services. Health services play a major role in health equity. Health inequities stem from lack of access to care due to poor economic status and the interaction of other social determinants of health. The majority of high-quality health services are distributed among the wealthier members of society, leaving the poor with limited options. To change this and move towards health equity, it is essential to expand health care in areas and neighborhoods with low socioeconomic status families and individuals.
- Prioritizing treatment among the poor. Because of the challenges of accessing health care with low economic status, many illnesses and injuries go untreated or receive insufficient treatment. Making treatment of the poor a priority will give them the resources they need to achieve good health, since health is a basic human right.
Health inequality is the term used in a number of countries to refer to instances in which the health of two demographic groups (not necessarily ethnic or racial groups) differs despite comparable access to health care services. Examples include higher rates of morbidity and mortality for those in lower occupational classes than those in higher occupational classes, and the increased likelihood of those from ethnic minorities being diagnosed with a mental health disorder. In Canada, the issue was brought to public attention by the Lalonde report.
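Such gaps are usually summarised as an absolute rate difference or a relative rate ratio between the groups being compared. The short sketch below shows the arithmetic using invented mortality figures; the numbers are purely illustrative, not taken from any cited study, and a real analysis would use age-standardised rates from vital statistics.

```python
# Hypothetical annual mortality rates (deaths per 100,000) for two occupational classes.
deaths_per_100k = {
    "higher occupational class": 450,
    "lower occupational class": 700,
}

high = deaths_per_100k["higher occupational class"]
low = deaths_per_100k["lower occupational class"]

rate_difference = low - high   # absolute gap in deaths per 100,000 per year
rate_ratio = low / high        # relative gap

print(f"Rate difference: {rate_difference} deaths per 100,000 per year")
print(f"Rate ratio: {rate_ratio:.2f} (lower vs. higher occupational class)")
```

With these made-up figures the lower occupational class experiences 250 excess deaths per 100,000 per year, a rate ratio of about 1.56.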
In the UK, the Black Report was produced in 1980 to highlight inequalities. On 11 February 2010, Sir Michael Marmot, an epidemiologist at University College London, published the Fair Society, Healthy Lives report on the relationship between health and poverty. Marmot described his findings as illustrating a "social gradient in health": life expectancy for the poorest is seven years shorter than for the wealthiest, and the poor are more likely to have a disability. In its report on this study, The Economist argued that the material causes of this health inequality include unhealthful lifestyles: smoking remains more common, and obesity is increasing fastest, amongst the poor in Britain.
In June 2018, the European Commission launched the Joint Action Health Equity in Europe. Forty-nine participants from 25 European Union Member States will work together to address health inequalities and the underlying social determinants of health across Europe. Under the coordination of the Italian Institute of Public Health, the Joint Action aims to achieve greater equity in health in Europe across all social groups while reducing the inter-country heterogeneity in tackling health inequalities.
Poor health and economic inequality
Poor health outcomes appear to be an effect of economic inequality across a population. Nations and regions with greater economic inequality show poorer outcomes in life expectancy, mental health, drug abuse, obesity, educational performance, teenage birthrates, and ill health due to violence. Among developed countries, greater economic equality is positively correlated with longevity, and this association is unrelated to average income per capita in wealthy nations. Economic gain only affects life expectancy to a great degree in countries in which mean per capita annual income is less than approximately $25,000. The United States shows exceptionally poor health outcomes for a developed country, despite having the highest national healthcare expenditure in the world: it ranks 31st in life expectancy, and Americans have a lower life expectancy than their European counterparts even when factors such as race, income, diet, smoking, and education are controlled for.
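The cross-country association described above can be illustrated with a simple correlation calculation. The sketch below uses a handful of made-up (Gini coefficient, life expectancy) pairs purely to show the computation; the country labels and values are hypothetical assumptions, and an actual analysis would use published inequality and longevity figures for developed countries.

```python
# Hypothetical country-level data: (Gini coefficient, life expectancy in years).
# A higher Gini means more income inequality, so a negative correlation with
# life expectancy corresponds to the equality-longevity association in the text.
countries = {
    "A": (0.25, 82.5),
    "B": (0.28, 81.9),
    "C": (0.33, 80.4),
    "D": (0.38, 79.1),
    "E": (0.41, 78.3),
}

gini = [g for g, _ in countries.values()]
life = [e for _, e in countries.values()]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed directly from the definition."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

print(f"Pearson r between inequality and life expectancy: {pearson(gini, life):.2f}")
```

With these illustrative values the coefficient comes out strongly negative (close to -1), mirroring the reported pattern that more economically equal rich countries tend to have longer life expectancies.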
Relative inequality negatively affects health at the international, national, and institutional levels. The patterns seen internationally hold true between more and less economically equal states in the United States: more equal states show more desirable health outcomes. Importantly, inequality can have a negative health impact on members of the lower echelons of institutions. The Whitehall I and II studies looked at the rates of cardiovascular disease and other health risks in British civil servants and found that, even when lifestyle factors were controlled for, members of lower status in the institution showed increased mortality and morbidity on a sliding downward scale relative to their higher status counterparts. The negative effects of inequality are spread across the population. For example, when comparing the United States (a more unequal nation) to England (a less unequal nation), the US shows higher rates of diabetes, hypertension, cancer, lung disease, and heart disease across all income levels. The same holds for the difference in mortality across all occupational classes in highly equal Sweden as compared to less equal England.
Health disparity and genomics
Genomics is used increasingly in clinical and medical applications. Historically, study results have not included underrepresented communities and races. The question of who benefits from publicly funded genomics is an important public health consideration, and attention will be needed to ensure that the implementation of genomic medicine does not further entrench social-equity concerns. The National Human Genome Research Institute currently has a Genomics and Health Disparities Interest Group to tackle issues of accessibility and the application of genomic medicine to communities not normally represented. The director of the group, Vence L. Bonham Jr., leads a team that seeks to better understand these disparities and to reduce gaps in access to genetic counseling, in the inclusion of minority communities in original research, and in access to genetic information to improve health.
- Center for Minority Health
- Drift hypothesis
- Environmental justice
- Environmental racism
- Food Justice Movement
- Global Task Force on Expanded Access to Cancer Care and Control in Developing Countries
- Health-related embarrassment
- Health Disparities Center
- Health inequality in the United Kingdom
- Healthcare and the LGBT community
- Hopkins Center for Health Disparities Solutions
- Immigrant paradox
- Inequality in disease
- Joint Action Health Equity in Europe
- Mental health inequality
- Population health
- Public health
- Publicly funded health care
- Single-payer healthcare
- Social determinants of health
- Social determinants of health in poverty
- Unnatural Causes: Is Inequality Making Us Sick?
- Weathering hypothesis
- Braveman P, Gruskin S (April 2003). "Defining equity in health". Journal of Epidemiology and Community Health. 57 (4): 254–8. doi:10.1136/jech.57.4.254. PMC 1732430. PMID 12646539.
- Goldberg DS (2017). "Justice, Compound Disadvantage, and Health Inequities". Public Health Ethics and the Social Determinants of Health. SpringerBriefs in Public Health. pp. 17–32. doi:10.1007/978-3-319-51347-8_3. ISBN 978-3-319-51345-4.
- Preamble to the Constitution of WHO as adopted by the International Health Conference, New York, 19 June - 22 July 1946; signed on 22 July 1946 by the representatives of 61 States (Official Records of WHO, no. 2, p. 100) and entered into force on 7 April 1948. The definition has not been amended since 1948.
- Marmot M (September 2007). "Achieving health equity: from root causes to fair outcomes". Lancet. 370 (9593): 1153–63. doi:10.1016/S0140-6736(07)61385-3. PMID 17905168. S2CID 7136984.
- "Glossary of a Few Key Public Health Terms". Office of Health Disparities, Colorado Department of Public Health and Environment. Retrieved 3 February 2011.
- "Equity". WHO. Retrieved 27 February 2014.
- Kawachi I, Subramanian SV, Almeida-Filho N (September 2002). "A glossary for health inequalities". Journal of Epidemiology and Community Health. 56 (9): 647–52. doi:10.1136/jech.56.9.647. PMC 1732240. PMID 12177079.
- Goldberg J, Hayes W, Huntley J (November 2004). Understanding Health Disparities. Health Policy Institute of Ohio.
- U.S. Department of Health and Human Services (HHS), Healthy People 2010: National Health Promotion and Disease Prevention Objectives, conference ed. in two vols. (Washington, D.C., January 2000).
- Vandemoortele M (2010). The MDGs and equity (Report). Overseas Development Institute.
- Ben-Shlomo Y, White IR, Marmot M (April 1996). "Does the variation in the socioeconomic characteristics of an area affect mortality?". BMJ. 312 (7037): 1013–4. doi:10.1136/bmj.312.7037.1013. PMC 2350820. PMID 8616348.
- Morris S, Sutton M, Gravelle H (March 2005). "Inequity and inequality in the use of health care in England: an empirical investigation". Social Science & Medicine. 60 (6): 1251–66. doi:10.1016/j.socscimed.2004.07.016. PMID 15626522.
- Ahonen EQ, Fujishiro K, Cunningham T, Flynn M (March 2018). "Work as an Inclusive Part of Population Health Inequities Research and Prevention". American Journal of Public Health. 108 (3): 306–311. doi:10.2105/ajph.2017.304214. PMC 5803801. PMID 29345994.
- Kawachi I, Kennedy BP (April 1997). "Health and social cohesion: why care about income inequality?". BMJ. 314 (7086): 1037–40. doi:10.1136/bmj.314.7086.1037. PMC 2126438. PMID 9112854.
- Shi L, Starfield B, Kennedy B, Kawachi I (April 1999). "Income inequality, primary care, and health indicators". The Journal of Family Practice. 48 (4): 275–84. PMID 10229252.
- Kawachi I, Kennedy BP (April 1999). "Income inequality and health: pathways and mechanisms". Health Services Research. 34 (1 Pt 2): 215–27. PMC 1088996. PMID 10199670.
- Sun X, Jackson S, Carmichael G, Sleigh AC (January 2009). "Catastrophic medical payment and financial protection in rural China: evidence from the New Cooperative Medical Scheme in Shandong Province". Health Economics. 18 (1): 103–19. doi:10.1002/hec.1346. PMID 18283715.
- Zhao Z (2006). "Income Inequality, Unequal Health Care Access, and Mortality in China". Population and Development Review. 32 (3): 461–483. doi:10.1111/j.1728-4457.2006.00133.x.
- Schellenberg JA, Victora CG, Mushi A, de Savigny D, Schellenberg D, Mshinda H, Bryce J (February 2003). "Inequities among the very poor: health care for children in rural southern Tanzania". Lancet. 361 (9357): 561–6. doi:10.1016/S0140-6736(03)12515-9. PMID 12598141. S2CID 6667015.
- House JS, Landis KR, Umberson D (July 1988). "Social relationships and health". Science. 241 (4865): 540–5. Bibcode:1988Sci...241..540H. doi:10.1126/science.3399889. PMID 3399889.
- Musterd S, De Winter M (1998). "Conditions for spatial segregation: some European perspectives". International Journal of Urban and Regional Research. 22 (4): 665–673. doi:10.1111/1468-2427.00168.
- Musterd S (2005). "Social and Ethnic Segregation in Europe: Levels, Causes, and Effects". Journal of Urban Affairs. 27 (3): 331–348. doi:10.1111/j.0735-2166.2005.00239.x. S2CID 153935656.
- Hajnal ZL (1995). "The Nature of Concentrated Urban Poverty in Canada and the United States". Canadian Journal of Sociology. 20 (4): 497–528. doi:10.2307/3341855. JSTOR 3341855.
- Kanbur R, Zhang X (2005). "Spatial inequality in education and health care in China" (PDF). China Economic Review. 16 (2): 189–204. doi:10.1016/j.chieco.2005.02.002. S2CID 7513548.
- Lomas J (November 1998). "Social capital and health: implications for public health and epidemiology". Social Science & Medicine. 47 (9): 1181–8. CiteSeerX 10.1.1.460.596. doi:10.1016/s0277-9536(98)00190-7. PMID 9783861.
- Pega F, Liu SY, Walter S, Pabayo R, Saith R, Lhachimi SK (November 2017). "Unconditional cash transfers for reducing poverty and vulnerabilities: effect on use of health services and health outcomes in low- and middle-income countries". The Cochrane Database of Systematic Reviews. 11: CD011135. doi:10.1002/14651858.CD011135.pub2. PMC 6486161. PMID 29139110.
- Logan RA, Wong WF, Villaire M, Daus G, Parnell TA, Willis E, Paasche-Orlow MK (24 July 2015). "Health Literacy: A Necessary Element for Achieving Health Equity" (PDF). National Academy of Medicine: 1–8.
- World Health Organization (2010). Equity, Social Determinants and Public Health Programmes. World Health Organization. p. 50. ISBN 978-92-4-156397-0.
- Banerjee AV, Duflo E (April 2011). Poor economics : a radical rethinking of the way to fight global poverty (1st ed.). New York: PublicAffairs. ISBN 978-1-61039-160-3.
- Falkingham J (March 2003). "Inequality and changes in women's use of maternal health-care services in Tajikistan". Studies in Family Planning. 34 (1): 32–43. doi:10.1111/j.1728-4465.2003.00032.x. PMID 12772444.
- Healthy people 2010: understanding and improving health. U.S. Dept. of Health and Human Services. Washington, DC: Government Publishing Office. 2000. hdl:10919/18681. ISBN 978-0-16-050260-6.
- Breese PE, Burman WJ, Goldberg S, Weis SE (December 2007). "Education level, primary language, and comprehension of the informed consent process". Journal of Empirical Research on Human Research Ethics. 2 (4): 69–79. doi:10.1525/jer.2007.2.4.69. PMID 19385809. S2CID 28982032.
- Valois RF, MacDonald JM, Bretous L, Fischer MA, Drane JW (1 November 2002). "Risk factors and behaviors associated with adolescent violence and aggression". American Journal of Health Behavior. 26 (6): 454–64. doi:10.5993/ajhb.26.6.6. PMID 12437020.
- Chomitz VR, Slining MM, McGowan RJ, Mitchell SE, Dawson GF, Hacker KA (January 2009). "Is there a relationship between physical fitness and academic achievement? Positive results from public school children in the northeastern United States". The Journal of School Health. 79 (1): 30–7. doi:10.1111/j.1746-1561.2008.00371.x. PMID 19149783.
- Saslow E. "'Out here, it's just me': In the medical desert of rural America, one doctor for 11,000 square miles". Washington Post. Retrieved 2020-06-02.
- "National Healthcare Quality and Disparities Report chartbook on rural health care" (PDF). Agency for Healthcare Research and Quality. Rockville, MD: U.S. Department of Health and Human Services. October 2017.
- Khazan O (2014-08-28). "Would You Want to Move to a Remote Alaskan Village?". The Atlantic. Retrieved 2020-06-02.
- "Medical deserts in America: Why we need to advocate for rural healthcare". globalhealth.harvard.edu. Retrieved 2020-06-02.
- Rosero-Bixby L (April 2004). "Spatial access to health care in Costa Rica and its equity: a GIS-based study". Social Science & Medicine. 58 (7): 1271–84. doi:10.1016/S0277-9536(03)00322-8. PMID 14759675.
- Liu Y, Hsiao WC, Eggleston K (November 1999). "Equity in health and health care: the Chinese experience". Social Science & Medicine. 49 (10): 1349–56. doi:10.1016/S0277-9536(99)00207-5. PMID 10509825.
- Qian Jiwei. (n.d.). Regional Inequality in Healthcare in China. East Asian Institute, National University of Singapore.
- Wang H, Xu T, Xu J (October 2007). "Factors contributing to high costs and inequality in China's health care system". JAMA. 298 (16): 1928–30. doi:10.1001/jama.298.16.1928. PMID 17954544.
- Weinick RM, Zuvekas SH, Cohen JW (2000). "Racial and ethnic differences in access to and use of health care services, 1977 to 1996". Medical Care Research and Review. 57 (Suppl 1): 36–54.
- Copeland CS (Jul–Aug 2013). "Disparate Lives: Health Outcomes Among Ethnic Minorities in New Orleans" (PDF). Healthcare Journal of New Orleans: 10–16.
- Schneider EC, Zaslavsky AM, Epstein AM (March 2002). "Racial disparities in the quality of care for enrollees in medicare managed care". JAMA. 287 (10): 1288–94. doi:10.1001/jama.287.10.1288. PMID 11886320.
- DeNavas-Walt C, Proctor BD, Smith JC (August 2008). Income, Poverty, and Health Insurance Coverage in the United States: 2007 (PDF). U.S. Census Bureau. p. 6.
- Wong WF, LaVeist TA, Sharfstein JM (April 2015). "Achieving health equity by design". JAMA. 313 (14): 1417–8. doi:10.1001/jama.2015.2434. PMID 25751310.
- Nelson A (August 2002). "Unequal treatment: confronting racial and ethnic disparities in health care". Journal of the National Medical Association. 94 (8): 666–8. PMC 2594273. PMID 12152921.
- Gaskin DJ, Headen AE, White-Means SI (December 2004). "Racial Disparities in Health and Wealth: The Effects of Slavery and past Discrimination". The Review of Black Political Economy. 32 (3–4): 95–110. doi:10.1007/s12114-005-1007-9. S2CID 154156857.
- Brockerhoff M, Hewett P (2000). "Inequality of child mortality among ethnic groups in sub-Saharan Africa". Bulletin of the World Health Organization. 78 (1): 30–41. PMC 2560588. PMID 10686731.
- Bloom G, McIntyre D (November 1998). "Towards equity in health in an unequal society". Social Science & Medicine. 47 (10): 1529–38. doi:10.1016/s0277-9536(98)00233-0. PMID 9823048.
- McIntyre D, Gilson L (June 2002). "Putting equity in health back onto the social policy agenda: experience from South Africa". Social Science & Medicine. 54 (11): 1637–56. doi:10.1016/s0277-9536(01)00332-x. PMID 12113446.
- Ohenjo N, Willis R, Jackson D, Nettleton C, Good K, Mugarura B (June 2006). "Health of Indigenous people in Africa". Lancet. 367 (9526): 1937–46. doi:10.1016/S0140-6736(06)68849-1. PMID 16765763. S2CID 7976349.
- Bollini P, Siem H (September 1995). "No real progress towards equity: health of migrants and ethnic minorities on the eve of the year 2000". Social Science & Medicine. 41 (6): 819–28. doi:10.1016/0277-9536(94)00386-8. PMID 8571153.
- Mooney G (1996). "And now for vertical equity? Some concerns arising from aboriginal health in Australia". Health Economics. 5 (2): 99–103. doi:10.1002/(SICI)1099-1050(199603)5:2<99::AID-HEC193>3.0.CO;2-N. PMID 8733102.
- Anderson I, Crengle S, Kamaka ML, Chen TH, Palafox N, Jackson-Pulver L (May 2006). "Indigenous health in Australia, New Zealand, and the Pacific". Lancet. 367 (9524): 1775–85. doi:10.1016/S0140-6736(06)68773-4. PMID 16731273. S2CID 451840.
- Montenegro RA, Stephens C (June 2006). "Indigenous health in Latin America and the Caribbean". Lancet. 367 (9525): 1859–69. doi:10.1016/S0140-6736(06)68808-9. PMID 16753489. S2CID 11607968.
- Subramanian SV, Davey Smith G, Subramanyam M (October 2006). "Indigenous health and socioeconomic status in India". PLOS Medicine. 3 (10): e421. doi:10.1371/journal.pmed.0030421. PMC 1621109. PMID 17076556.
- CDC (2020-02-11). "Community, Work, and School". Centers for Disease Control and Prevention. Retrieved 2021-02-07.
- "Unless COVID is suppressed everywhere, we'll be 'back at square one', Tedros warns". UN News. 2021-02-05. Retrieved 2021-02-07.
- Miao H (2021-04-09). "WHO says more than 87% of the world's Covid vaccine supply has gone to higher-income countries". CNBC. Retrieved 2021-04-20.
- Burke J (20 January 2009). "Understanding the GLBT community". ASHA Leader. Communications and Mass Media Collection. 14: 4–46. doi:10.1044/leader.IN3.14012009.4.
- Gochman DS (1997). Handbook of health behavior research. Springer. pp. 145–147. ISBN 978-0-306-45443-1.
- Meyer JP, Springer SA, Altice FL (July 2011). "Substance abuse, violence, and HIV in women: a literature review of the syndemic". Journal of Women's Health. 20 (7): 991–1006. doi:10.1089/jwh.2010.2328. PMC 3130513. PMID 21668380.
- Burki T (April 2017). "Health and rights challenges for China's LGBT community". Lancet. 389 (10076): 1286. doi:10.1016/S0140-6736(17)30837-1. PMID 28379143. S2CID 45700706.
- Brocchetto M (3 March 2017). "Being gay in Latin America: Legal but deadly". CNN. Retrieved 30 September 2017.
- Soumya E. "Indian transgender healthcare challenges". www.aljazeera.com. Retrieved 2017-10-01.
- Tracy JK, Lydecker AD, Ireland L (February 2010). "Barriers to cervical cancer screening among lesbians". Journal of Women's Health. 19 (2): 229–37. doi:10.1089/jwh.2009.1393. PMC 2834453. PMID 20095905.
- World Health Organization (September 2013). Addressing the causes of disparities in health service access and utilization for lesbian, gay, bisexual and trans (LGBT) persons. 52nd Directing Council. 65th Session of the Regional Committee. Concept Paper. (Report).
- Meads C, Pennant M, McManus J, Bayliss S (2009). A systematic review of lesbian, gay, bisexual and transgender health in the West Midlands region of the UK compared to published UK research. WMHTAC, Department of Public Health and Epidemiology, University of Birmingham. hdl:2438/9756. ISBN 978-0-7044-2722-8.
- Kalra G, Ventriglio A, Bhugra D (3 September 2015). "Sexuality and mental health: Issues and what next?". International Review of Psychiatry. 27 (5): 463–9. doi:10.3109/09540261.2015.1094032. PMID 26552342. S2CID 31375772.
- King M, Semlyen J, Tai SS, Killaspy H, Osborn D, Popelyuk D, Nazareth I (August 2008). "A systematic review of mental disorder, suicide, and deliberate self harm in lesbian, gay and bisexual people". BMC Psychiatry. 8 (1): 70. doi:10.1186/1471-244X-8-70. PMC 2533652. PMID 18706118.
- Alencar Albuquerque G, de Lima Garcia C, da Silva Quirino G, Alves MJ, Belém JM, dos Santos Figueiredo FW, et al. (January 2016). "Access to health services by lesbian, gay, bisexual, and transgender persons: systematic literature review". BMC International Health and Human Rights. 16 (1): 2. doi:10.1186/s12914-015-0072-9. PMC 4714514. PMID 26769484.
- IOM (Institute of Medicine). 2011. The Health of Lesbian, Gay, Bisexual, and Transgender People: Building a Foundation for Better Understanding. Washington, DC: The National Academies Press.
- Lane T, Mogale T, Struthers H, McIntyre J, Kegeles SM (November 2008). ""They see you as a different thing": the experiences of men who have sex with men with healthcare workers in South African township communities". Sexually Transmitted Infections. 84 (6): 430–3. doi:10.1136/sti.2008.031567. PMC 2780345. PMID 19028941.
- Maragh-Bass AC, Torain M, Adler R, Ranjit A, Schneider E, Shields RY, et al. (June 2017). "Is It Okay To Ask: Transgender Patient Perspectives on Sexual Orientation and Gender Identity Collection in Healthcare". Academic Emergency Medicine. 24 (6): 655–667. doi:10.1111/acem.13182. PMID 28235242.
- "Rights in Transition". Human Rights Watch. 2016-01-06. Retrieved 2017-10-01.
- "Transgender people face challenges for adequate health care: study". Reuters. 2016-06-17. Retrieved 2017-10-01.
- Thomas R, Pega F, Khosla R, Verster A, Hana T, Say L (February 2017). "Ensuring an inclusive global health agenda for transgender people". Bulletin of the World Health Organization. 95 (2): 154–156. doi:10.2471/BLT.16.183913. PMC 5327942. PMID 28250518.
- Grant J, Mottet L, Tanis J, Herman JL, Harrison J, Keisling M. National transgender discrimination survey report on health and health care (Report). National Gay and Lesbian Task Force.
- James S, Herman J, Rankin S, Keisling M, Mottet L, Anafi MA. The report of the 2015 US transgender survey (Report). Washington, DC: National Center for Transgender Equality.
- Office of Disease Prevention and Health Promotion. "Lesbian, Gay, Bisexual, and Transgender Health". HealthyPeople.gov. Retrieved September 16, 2017.
- Understanding the Health Needs of LGBT People. (March 2016) National LGBT Health Education Center. The Fenway Institute.
- Parekh, Ranna (February 2016). "What Is Gender Dysphoria?". American Psychiatric Association. Retrieved September 16, 2017.
- Hulbert-Williams NJ, Plumpton CO, Flowers P, McHugh R, Neal RD, Semlyen J, Storey L (July 2017). "The cancer care experiences of gay, lesbian and bisexual patients: A secondary analysis of data from the UK Cancer Patient Experience Survey" (PDF). European Journal of Cancer Care. 26 (4): e12670. doi:10.1111/ecc.12670. PMID 28239936. S2CID 4916798.
- Pega F, Veale JF (March 2015). "The case for the World Health Organization's Commission on Social Determinants of Health to address gender identity". American Journal of Public Health. 105 (3): e58-62. doi:10.2105/ajph.2014.302373. PMC 4330845. PMID 25602894.
- Health4LGBTI (June 2017). "State-of-the-art study focusing on the health inequalities faced by LGBTI people D1.1 State-of-the-Art Synthesis Report (SSR) June, 2017" (PDF).
- Regitz-Zagrosek V (June 2012). "Sex and gender differences in health. Science & Society Series on Sex and Science". EMBO Reports. 13 (7): 596–603. doi:10.1038/embor.2012.87. PMC 3388783. PMID 22699937.
- Fikree FF, Pasha O (April 2004). "Role of gender in health disparity: the South Asian context". BMJ. 328 (7443): 823–6. doi:10.1136/bmj.328.7443.823. PMC 383384. PMID 15070642.
- Barker G (2000). What About Boys? A Literature Review on the Health and Development of Adolescent Boys. Geneva, Switzerland: World Health Organization. hdl:10822/973644.
- Kent JA, Patel V, Varela NA (2012). "Gender disparities in health care". The Mount Sinai Journal of Medicine, New York. 79 (5): 555–9. doi:10.1002/msj.21336. PMID 22976361.
- Courtenay WH (May 2000). "Constructions of masculinity and their influence on men's well-being: a theory of gender and health". Social Science & Medicine. 50 (10): 1385–401. CiteSeerX 10.1.1.462.4452. doi:10.1016/s0277-9536(99)00390-1. PMID 10741575.
- World Bank. (2012). World Development Report on Gender Equality and Development.
- Ronsmans C, Graham WJ (September 2006). "Maternal mortality: who, when, where, and why". Lancet. 368 (9542): 1189–200. doi:10.1016/s0140-6736(06)69380-x. PMID 17011946. S2CID 6990187.
- Read JG, Gorman BK (2010). "Gender and Health Inequality". Annual Review of Sociology. 36 (1): 371–386. doi:10.1146/annurev.soc.012809.102535.
- Cheval B, Boisgontier MP, Orsholits D, Sieber S, Guessous I, Gabriel R, et al. (May 2018). "Association of early- and adult-life socioeconomic circumstances with muscle strength in older age". Age and Ageing. 47 (3): 398–407. doi:10.1093/ageing/afy003. PMC 7189981. PMID 29471364.
- Landös A, von Arx M, Cheval B, Sieber S, Kliegel M, Gabriel R, et al. (February 2019). "Childhood socioeconomic circumstances and disability trajectories in older men and women: a European cohort study". European Journal of Public Health. 29 (1): 50–58. doi:10.1093/eurpub/cky166. PMC 6657275. PMID 30689924.
- Vaidya V, Partha G, Karmakar M (February 2012). "Gender differences in utilization of preventive care services in the United States". Journal of Women's Health. 21 (2): 140–5. doi:10.1089/jwh.2011.2876. PMID 22081983.
- Merzel C (June 2000). "Gender differences in health care access indicators in an urban, low-income community". American Journal of Public Health. 90 (6): 909–16. doi:10.2105/ajph.90.6.909. PMC 1446268. PMID 10846508.
- Hoffmann DE, Tarzian AJ (2001-03-01). "The girl who cried pain: a bias against women in the treatment of pain". The Journal of Law, Medicine & Ethics. 29 (1): 13–27. doi:10.1111/j.1748-720X.2001.tb00037.x. PMID 11521267. S2CID 219952180.
- Liu KA, Mager NA (2016). "Women's involvement in clinical trials: historical perspective and future implications". Pharmacy Practice. 14 (1): 708. doi:10.18549/PharmPract.2016.01.708. PMC 4800017. PMID 27011778.
- ORWH. "Including Women and Minorities in Clinical Research | ORWH". orwh.od.nih.gov. Retrieved 2017-09-29.
- Mu R, Zhang X (January 2011). "Why does the Great Chinese Famine affect the male and female survivors differently? Mortality selection versus son preference". Economics and Human Biology. 9 (1): 92–105. doi:10.1016/j.ehb.2010.07.003. PMID 20732838.
- Anson O, Sun S (September 2002). "Gender and health in rural China: evidence from Hebei Province". Social Science & Medicine. 55 (6): 1039–54. doi:10.1016/s0277-9536(01)00227-1. PMID 12220088.
- Yu MY, Sarri R (December 1997). "Women's health status and gender inequality in China". Social Science & Medicine. 45 (12): 1885–98. doi:10.1016/s0277-9536(97)00127-5. PMID 9447637.
- Gupta MD (September 2005). "Explaining Asia's 'Missing Women': A New Look at the Data". Population and Development Review. 31 (3): 529–535. doi:10.1111/j.1728-4457.2005.00082.x.
- Behrman JR (March 1988). "Intrahousehold Allocation of Nutrients in Rural India: Are Boys Favored? Do Parents Exhibit Inequality Aversion?". Oxford Economic Papers. 40 (1): 32–54. doi:10.1093/oxfordjournals.oep.a041845.
- Asfaw A, Lamanna F, Klasen S (March 2010). "Gender gap in parents' financing strategy for hospitalization of their children: evidence from India". Health Economics. 19 (3): 265–79. doi:10.1002/hec.1468. PMID 19267357.
- von der Osten-Sacken T, Uwer T (2007-01-01). "Is Female Genital Mutilation an Islamic Problem?". Middle East Quarterly.
- "Female genital mutilation (FGM)". World Health Organization. Retrieved 2017-09-29.
- "Immediate health consequences of female genital mutilation | Reproductive Health Matters: reproductive & sexual health and rights". Reproductive Health Matters: reproductive & sexual health and rights. 2015-03-01. Retrieved 2017-09-29.
- "Gynecological consequences of female genital mutilation/cutting (FGM/C)". Nasjonalt kunnskapssenter for helsetjenesten. Retrieved 2017-09-29.
- Berg RC, Underland V (June 10, 2013). "The obstetric consequences of female genital mutilation/cutting: a systematic review and meta-analysis". Obstetrics and Gynecology International. 2013: 496564. doi:10.1155/2013/496564. PMC 3710629. PMID 23878544.
- Behrendt A, Moritz S (May 2005). "Posttraumatic stress disorder and memory problems after female genital mutilation". The American Journal of Psychiatry. 162 (5): 1000–2. doi:10.1176/appi.ajp.162.5.1000. PMID 15863806.
- Morison L, Scherf C, Ekpo G, Paine K, West B, Coleman R, Walraven G (August 2001). "The long-term reproductive health consequences of female genital cutting in rural Gambia: a community-based survey". Tropical Medicine & International Health. 6 (8): 643–53. CiteSeerX 10.1.1.569.744. doi:10.1046/j.1365-3156.2001.00749.x. PMID 11555430. S2CID 11177182.
- Gee GC, Payne-Sturges DC (December 2004). "Environmental health disparities: a framework integrating psychosocial and environmental concepts". Environmental Health Perspectives. 112 (17): 1645–53. doi:10.1289/ehp.7074. PMC 1253653. PMID 15579407.
- Woolf SH, Braveman P (October 2011). "Where health disparities begin: the role of social and economic determinants--and why current policies may make matters worse". Health Affairs. 30 (10): 1852–9. doi:10.1377/hlthaff.2011.0685. PMID 21976326.
- Andersen RM (2007). Challenging the US Health Care System: Key Issues in Health Services Policy and Management. John Wiley & Sons. pp. 45–50.
- Adamkiewicz G, Zota AR, Fabian MP, Chahine T, Julien R, Spengler JD, Levy JI (December 2011). "Moving environmental justice indoors: understanding structural influences on residential exposure patterns in low-income communities". American Journal of Public Health. 101 Suppl 1 (S1): S238-45. doi:10.2105/AJPH.2011.300119. PMC 3222513. PMID 21836112.
- Miranda ML, Messer LC, Kroeger GL (March 2012). "Associations between the quality of the residential built environment and pregnancy outcomes among women in North Carolina". Environmental Health Perspectives. 120 (3): 471–7. doi:10.1289/ehp.1103578. PMC 3295337. PMID 22138639.
- Williams DR, Collins C (August 1995). "US Socioeconomic and Racial Differences in Health: Patterns and Explanations". Annual Review of Sociology. 21 (1): 349–386. doi:10.1146/annurev.soc.21.1.349.
- Núñez M (2019). "Environmental Racism and Latino Farmworker Health in the San Joaquin Valley, California". Harvard Journal of Hispanic Policy. 31: 9–14. ProQuest 2316723312 – via ProQuest.
- Williams DR, Jackson PB (1 March 2005). "Social sources of racial disparities in health". Health Affairs. 24 (2): 325–34. doi:10.1377/hlthaff.24.2.325. PMID 15757915.
- Williams DR, Collins C (2001). "Racial residential segregation: a fundamental cause of racial disparities in health". Public Health Reports. 116 (5): 404–16. doi:10.1093/phr/116.5.404. PMC 1497358. PMID 12042604.
- Mujahid MS, Diez Roux AV, Cooper RC, Shea S, Williams DR (February 2011). "Neighborhood stressors and race/ethnic differences in hypertension prevalence (the Multi-Ethnic Study of Atherosclerosis)". American Journal of Hypertension. 24 (2): 187–93. doi:10.1038/ajh.2010.200. PMC 3319083. PMID 20847728.
- "Field-Based Outreach Workers Facilitate Access to Health Care and Social Services for Underserved Individuals in Rural Areas". Agency for Healthcare Research and Quality. 2013-05-01. Retrieved 2013-05-13.
- "The importance of having a usual source of health care". American Family Physician. 62 (3): 477. August 2000. PMID 18853527.
- "Analysis of Minority Health Reveals Persistent, Widespread Disparities". Commonwealth Fund (CMWF). 14 May 1999.
- Agency for Healthcare Research and Quality (AHRQ), "National Healthcare Disparities Report," U.S. Department of Health and Human Services (July 2003).
- Collins KS, Hughes DL, Doty MM, Ives BL, Edwards JN, Tenney K (March 2002). "Diverse communities, common concerns: assessing health care quality for minority Americans". New York: Commonwealth Fund. Archived from the original on 25 April 2014.
- Lilley CM, Mirza KM (April 2021). "Critical role of pathology and laboratory medicine in the conversation surrounding access to healthcare". Journal of Medical Ethics: medethics-2021-107251. doi:10.1136/medethics-2021-107251. PMID 33863832. S2CID 233278658.
- National Health Law Program and the Access Project (NHeLP), Language Services Action Kit: Interpreter Services in Health Care Settings for People With Limited English Proficiency (February 2004).
- Tsawe M, Susuman AS (October 2014). "Determinants of access to and use of maternal health care services in the Eastern Cape, South Africa: a quantitative and qualitative investigation". BMC Research Notes. 7: 723. doi:10.1186/1756-0500-7-723. PMC 4203863. PMID 25315012.
- Brodie M, Flournoy RE, Altman DE, Blendon RJ, Benson JM, Rosenbaum MD (2000). "Health information, the Internet, and the digital divide". Health Affairs. 19 (6): 255–65. doi:10.1377/hlthaff.19.6.255. PMID 11192412.
- Li R (2017-08-10). "Indigenous identity and traditional medicine: Pharmacy at the crossroads". Canadian Pharmacists Journal. 150 (5): 279–281. doi:10.1177/1715163517725020. PMC 5582679. PMID 28894496.
- Wainberg ML, Scorza P, Shultz JM, Helpman L, Mootz JJ, Johnson KA, et al. (May 2017). "Challenges and Opportunities in Global Mental Health: a Research-to-Practice Perspective". Current Psychiatry Reports. 19 (5): 28. doi:10.1007/s11920-017-0780-z. PMC 5553319. PMID 28425023.
- Lake J, Turner MS (2017-08-11). "Urgent Need for Improved Mental Health Care and a More Collaborative Model of Care". The Permanente Journal. 21: 17–024. doi:10.7812/TPP/17-024. PMC 5593510. PMID 28898197.
- Carhart-Harris R (2020-06-08). "We can no longer ignore the potential of psychedelic drugs to treat depression". The Guardian. Retrieved 2021-02-05.
- "National Insurance", How social security works, Bristol University Press, pp. 67–78, ISBN 978-1-4473-4285-4, retrieved 2021-04-26
- "National Insurance", How social security works, Bristol University Press, pp. 67–78, ISBN 978-1-4473-4285-4, retrieved 2021-04-26
- "UnitedHealth survey: Most Americans don't understand basic health plan terms". Healthcare Dive. Retrieved 2021-04-24.
- Billioux, Alexander; Verlander, Katherine; Anthony, Susan; Alley, Dawn (2017-05-30). "Standardized Screening for Health-Related Social Needs in Clinical Settings: The Accountable Health Communities Screening Tool". NAM Perspectives. 7 (5). doi:10.31478/201705b. ISSN 2578-6865.
- "Marketplace Enrollment, 2014-2020". KFF. 2020-04-07. Retrieved 2021-04-26.
- "Federal Subsidies for Health Insurance Coverage for People Under Age 65: 2019 to 2029 | Congressional Budget Office". www.cbo.gov. 2019-05-02. Retrieved 2021-04-22.
- Tikkanen RS, Woolhandler S, Himmelstein DU, Kressin NR, Hanchate A, Lin MY, et al. (July 2017). "Hospital Payer and Racial/Ethnic Mix at Private Academic Medical Centers in Boston and New York City". International Journal of Health Services. 47 (3): 460–476. doi:10.1177/0020731416689549. PMC 6090544. PMID 28152644.
- Kaiser Commission on Medicaid and the Uninsured (KCMU), "The Uninsured and Their Access to Health Care" (December 2003).
- Sommers, Benjamin D.; Gawande, Atul A.; Baicker, Katherine (2017-08-10). "Health Insurance Coverage and Health — What the Recent Evidence Tells Us". New England Journal of Medicine. 377 (6): 586–593. doi:10.1056/NEJMsb1706645. ISSN 0028-4793.
- "Individual Mandate Penalty You Pay If You Don't Have Health Insurance Coverage". HealthCare.gov. Retrieved 2021-04-26.
- Northridge ME, Kumar A, Kaur R (April 2020). "Disparities in Access to Oral Health Care". Annual Review of Public Health. 41: 513–535. doi:10.1146/annurev-publhealth-040119-094318. PMC 7125002. PMID 31900100.
- "Health Care Quality Survey". The Commonwealth Fund 2001.
- Betancourt JR (2002). Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Institute of Medicine.
- Ku L, Flores G (Mar–Apr 2005). "Pay now or pay later: providing interpreter services in health care". Health Affairs. 24 (2): 435–44. doi:10.1377/hlthaff.24.2.435. PMID 15757928.
- Floyd A, Sakellariou D (November 2017). "Healthcare access for refugee women with limited literacy: layers of disadvantage". International Journal for Equity in Health. 16 (1): 195. doi:10.1186/s12939-017-0694-8. PMC 5681803. PMID 29126420.
- Ng E, Pottie K, Spitzer D (December 2011). "Official language proficiency and self-reported health among immigrants to Canada". Health Reports. 22 (4): 15–23. PMID 22352148.
- Fernandez A, Schillinger D, Grumbach K, Rosenthal A, Stewart AL, Wang F, Pérez-Stable EJ (February 2004). "Physician language ability and cultural competence. An exploratory study of communication with Spanish-speaking patients". Journal of General Internal Medicine. 19 (2): 167–74. doi:10.1111/j.1525-1497.2004.30266.x. PMC 1492135. PMID 15009796.
- Flores G, Laws MB, Mayo SJ, Zuckerman B, Abreu M, Medina L, Hardt EJ (January 2003). "Errors in medical interpretation and their potential clinical consequences in pediatric encounters". Pediatrics. 111 (1): 6–14. CiteSeerX 10.1.1.488.9277. doi:10.1542/peds.111.1.6. PMID 12509547.
- Hampers LC, McNulty JE (November 2002). "Professional interpreters and bilingual physicians in a pediatric emergency department: effect on resource utilization". Archives of Pediatrics & Adolescent Medicine. 156 (11): 1108–13. doi:10.1001/archpedi.156.11.1108. PMID 12413338.
- Kleinman A, Eisenberg L, Good B (February 1978). "Culture, illness, and care: clinical lessons from anthropologic and cross-cultural research". Annals of Internal Medicine. 88 (2): 251–8. doi:10.7326/0003-4819-88-2-251. PMID 626456.
- Gochman DS (1997). Handbook of health behavior research. New York: Plenum Press. ISBN 978-0-306-45443-1.
- van Ryn M, Burke J (March 2000). "The effect of patient race and socio-economic status on physicians' perceptions of patients". Social Science & Medicine. 50 (6): 813–28. doi:10.1016/s0277-9536(99)00338-x. PMID 10695979.
- Burgess DJ, van Ryn M, Crowley-Matoka M, Malat J (March–April 2006). "Understanding the provider contribution to race/ethnicity disparities in pain treatment: insights from dual process models of stereotyping". Pain Medicine. 7 (2): 119–34. doi:10.1111/j.1526-4637.2006.00105.x. PMID 16634725.
- Green AR, Carney DR, Pallin DJ, Ngo LH, Raymond KL, Iezzoni LI, Banaji MR (September 2007). "Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients". Journal of General Internal Medicine. 22 (9): 1231–8. doi:10.1007/s11606-007-0258-5. PMC 2219763. PMID 17594129.
- Smedley B, Stith A, Nelson A (2002). "Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care". Institute of Medicine.
- Habib JL (2010). "Progress lags in infection prevention and health disparities". Drug Benefit Trends. 22 (4): 112.
- Woloshin S, Schwartz LM, Katz SJ, Welch HG (August 1997). "Is language a barrier to the use of preventive services?". Journal of General Internal Medicine. 12 (8): 472–7. doi:10.1046/j.1525-1497.1997.00085.x. PMC 1497155. PMID 9276652.
- Jacobs EA, Lauderdale DS, Meltzer D, Shorey JM, Levinson W, Thisted RA (July 2001). "Impact of interpreter services on delivery of health care to limited-English-proficient patients". Journal of General Internal Medicine. 16 (7): 468–74. doi:10.1046/j.1525-1497.2001.016007468.x. PMC 1495243. PMID 11520385.
- "UK-wide screening programmes". Archived from the original on 2014-06-25. Retrieved 2014-03-25.
- "England-specific programmes". Archived from the original on 2014-03-25. Retrieved 2014-03-25.
- Closing the gap in a generation. WHO. 2008. ISBN 978-92-4-156370-3.
- Farrer L, Marinetti C, Cavaco YK, Costongs C (June 2015). "Advocacy for health equity: a synthesis review". The Milbank Quarterly. 93 (2): 392–437. doi:10.1111/1468-0009.12112. PMC 4462882. PMID 26044634.
- "Health Gradient | EuroHealthNet".
- "A Nation Free of Disparities in Health and Health Care" (PDF). U.S. Department of Health and Human Services.
- Betancourt JR, Maina A (2007). "Barriers to Eliminating Disparities in Clinical Practice". Eliminating Healthcare Disparities in America. pp. 83–97. doi:10.1007/978-1-59745-485-8_5. ISBN 978-1-934115-42-8.
- Maxey RW, Williams RA (2011). "Perspective: Second-Class Medicine – Implications of Evidence-Based Medicine for Improving Minority Access to Health Care". Healthcare Disparities at the Crossroads with Healthcare Reform. pp. 115–134. doi:10.1007/978-1-4419-7136-4_8. ISBN 978-1-4419-7135-7.
- "Health Gradient | EuroHealthNet".
- Pega F, Valentine NB, Rasanathan K, Hosseinpoor AR, Torgersen TP, Ramanathan V, et al. (November 2017). "The need to monitor actions on the social determinants of health". Bulletin of the World Health Organization. 95 (11): 784–787. doi:10.2471/BLT.16.184622. PMC 5677605. PMID 29147060.
- "In sickness and in health". The Economist. 11 February 2010. Retrieved 15 February 2010.
- "IA new Joint Action to tackle health inequalities in Europe". The European Commission. 21–22 June 2018. Retrieved 17 September 2018.
- Wilkinson R, Pickett K (May 2011). The spirit level: Why greater equality makes societies stronger. Bloomsbury Publishing USA.
- In Woolf, S. H., In Aron, L. Y., National Academies (U.S.)., & Institute of Medicine (U.S.). (2013). U.S. health in international perspective: Shorter lives, poorer health.
- West KM, Blacksher E, Burke W (May 2017). "Genomics, Health Disparities, and Missed Opportunities for the Nation's Research Agenda". JAMA. 317 (18): 1831–1832. doi:10.1001/jama.2017.3096. PMC 5636000. PMID 28346599.
- Belcher A, Mangelsdorf M, McDonald F, Curtis C, Waddell N, Hussey K (June 2019). "What does Australia's investment in genomics mean for public health?". Australian and New Zealand Journal of Public Health. 43 (3): 204–206. doi:10.1111/1753-6405.12887. PMID 30830712.
- Jooma S, Hahn MJ, Hindorff LA, Bonham VL (2019). "Defining and Achieving Health Equity in Genomic Medicine". Ethnicity & Disease. 29 (Suppl 1): 173–178. doi:10.18865/ed.29.S1.173. PMC 6428182. PMID 30906166.
- Bleich SN, Jarlenski MP, Bell CN, LaVeist TA (April 2012). "Health inequalities: trends, progress, and policy". Annual Review of Public Health. 33: 7–40. doi:10.1146/annurev-publhealth-031811-124658. PMC 3745020. PMID 22224876.
- Diez Roux AV (April 2012). "Conceptual approaches to the study of health disparities". Annual Review of Public Health. 33: 41–58. doi:10.1146/annurev-publhealth-031811-124534. PMC 3740124. PMID 22224879.
- Goldberg J, Hayes W, Huntley J (November 2004). Understanding health disparities (Report). Health Policy Institute of Ohio. Archived from the original on 2008-05-15.
- "State Policy Agenda to Eliminate Racial and Ethnic Health Disparities". Commonwealth Fund. June 2004.
- Smedley B, Stith A, Nelson A (August 2002). "Unequal treatment: confronting racial and ethnic disparities in health care". Journal of the National Medical Association. 94 (8): 666–8. PMC 2594273. PMID 12152921.
- 2014 Health Disparities Legislation
- Progress in Community Health Partnerships: Research, Education, and Action (PCHP)
- Institute of Medicine Roundtable on Health Disparities was created to enable dialogue and discussion of issues related to the visibility of racial and ethnic disparities in health and health care as a national problem, the development of programs and strategies to reduce disparities and the emergence of new leadership.
- European Portal for Action on Health Inequalities
- Center for Managing Chronic Disease
- Cultural Diversity in Health Care Speaker Series video presentations from expert lecturers, University of Wisconsin School of Medicine and Public Health
- Cultural Diversity in Health Care Research Symposium video presentations from expert lecturers, University of Wisconsin School of Medicine and Public Health
- National Center on Minority Health and Health Disparities
- Journal of Health Care for the Poor and Underserved
- Understanding Health Disparities
- Initiative to Eliminate Racial and Ethnic Disparities in Health United States government minority health initiative
- Health Disparities Collaborative
- EuroHealthNet's European Partnership for Improving Health, Equity and Wellbeing
- Massachusetts General Hospital seeks to bridge healthcare's racial gap
- Diversity Health Institute Clearinghouse
- Case Center for Reducing Health Disparities
- FIU Health Disparity Research Group
- "Kaiser Health Disparities Report: A Weekly Look at Race, Ethnicity and Health", News summary report from kaisernetwork.org
- Health inequality in New Zealand
- BBC News article regarding health inequalities
- EXPORT Project webpage at Tuskegee University
- VIDEO: Health Status Disparities in the US, April 4, 2007, featuring Paula Braveman, Gregg Bloche, George Kaplan, Thomas Ricketts, Mary Lou deLeon Siantz, and David Williams
- UK National Health Service Specialist Library for Ethnicity & Health
- National Rural Health Association
- The National Partnership for Action to End Health Disparities
- The National Partnership for Action Toolkit for Community Action | https://library.kiwix.org/wikipedia_en_top_maxi/A/Health_equity | 21 |
18 | The Dust Bowl. The early 1900s were a time of turmoil for farmers in the United States, especially in the Great Plains region. After the end of World War I, overproduction by farmers resulted in low prices for crops. When farmers first came to the Midwest, they farmed as much wheat as they could because of the high prices and demand. Of the ninety-seven million acres, almost thirty-two million acres were being cultivated. The farmers were careless in their planting of the crop, caring only about profit, and they started plowing grasslands that were not made for planting.
“The dust storms that swept across the southern plains in the 1930s created the most severe environmental catastrophe in the entire history of the white man on this continent.”(Location 445.) Had the area never been over worked and farmed to produce mass quantities of wheat and other crops the dust storms would never had happened. This is much like the economical blunder that caused the stock market crash of 1929 resulting in the Great Depression. Had the banks not been so eager to grow and approve credit and loans, essentially over working the money market, at an unusually high rate the stock market most likely would not have crashed. The causes of both the dust bowl and stock market crash were caused by the American society’s greed for wanting more than what they needed to sustain their way of
As time passed, other economies grew, while the agricultural economy diminished by half in just 50 years and was overtaken by the manufacturing industry. (Doc G) Farmers struggled for success and support, but instead received very little of either. According to the Agricultural Department, the summer of 1894 brought many hardships to crops nationwide. Crops in South Carolina, Georgia, Alabama, Tennessee, Illinois, Wisconsin, and Minnesota were all damaged by droughts, while New Jersey crops suffered from an abundance of rain. Temperatures and insects also devastated the crops.
To try to help out unemployed people, mostly men, the government introduced relief camps. During the 1930s in Prairie Canada, the Great Depression created harsh conditions, and it was a struggle until it ended. The event which triggered the Great Depression was the stock market crash of October 24, 1929 in New York. Another important cause: the wide adoption of the gold exchange standard in many countries was later widely criticized as a great mistake which greatly contributed to the severity and length of the Great Depression. In Canada, wheat, the most important export, was being over-produced around the world, despite the fact that the 1928 supply of wheat was still available in 1929.
After World War I, the price of food began to drop, causing dramatic effects on the United States economy. Americans felt the full impact of the Great Depression, which left millions of people jobless and forced many to leave their farmland. Many people, even those who could not afford it, invested all their money in stocks. Some even borrowed money to buy stocks. Prices of farm products fell sharply, and economic losses were aggravated by a drought.
Mortgages and rent payments could not be met, so people were moving into 'Hoovervilles'. These were 'shanty towns', nicknamed after Herbert Hoover, who was president at the time of the Wall Street crash. Even people who had owned expensive cars before now had nothing; some were even unable to pay for a bus fare. People with jobs and profitable companies also lost out because 5,000 banks went bust and the financial system virtually collapsed. Farmers suffered greatly: thousands of farming families had to sell their farms as it became uneconomical to grow crops.
The Dust Bowl was the name given to the Great Plains area in the 1930s. Much of the region was agricultural and relied on farming for most of its economy. The combination of the Great Depression and the dust storms severely hurt farmers in the Great Plains area. These farmers sought opportunity elsewhere, nearer the Pacific, where they were mistreated by the others already there. The mistreatment was a form of disenfranchisement, excluding and segregating a group of people from the rest of society.
This is a big issue because, on a global scale, we are letting Eastern Africa's people suffer when there is no need for it. The drought in Eastern Africa is causing many conflicts and deaths due to lack of food and water. Meaning of drought: "For most of the history of our species we were helpless to understand how nature works. We took every storm, drought, illness and comet personally. We created myths and spirits in an attempt to explain the patterns of nature (Druyan)." According to Fox, drought can also be seen as a slow-motion train wreck.
This widespread state of poverty had serious social repercussions for the country. America's agricultural economy had already been suffering for a decade when nature conspired against the country to exacerbate the Great Depression. From 1931 through 1939, severe winds tore through the Dust Bowl – the region composed of the western parts of Kansas and Oklahoma, parts of New Mexico and Colorado, and the Texas panhandle. These winds stirred up the dust of a landscape already devastated by drought and continuous, exhaustive farming practices. These dust storms threatened people's health and destroyed whole crops.
According to answers.com, a dust bowl is a region reduced to aridity by drought and dust storms. The best-known dust bowl is doubtless the one that hit the United States between 1933 and 1939. One major cause of that Dust Bowl was severe droughts during the 1930s. The other cause was capitalism. Over-farming and grazing in order to achieve high profits killed off much of the plains' grassland, and when winds approached, nothing was there to hold the devastated soil on the ground. | https://www.123helpme.com/essay/Key-Factors-Of-The-Dust-Bowl-742095 | 21
23 | A clever new design introduces a way to image the vast ocean floor.
- Neither light- nor sound-based imaging devices can penetrate the deep ocean from above.
- Stanford scientists have invented a new system that incorporates both light and sound to overcome the challenge of mapping the ocean floor.
- Deployed from a drone or helicopter, it may finally reveal what lies beneath our planet's seas.
The ocean covers about 70 percent of the Earth, yet a great many areas of the ocean floor remain unmapped. With current technology, mapping them is an extremely arduous and time-consuming task, accomplished only by trawling unmapped areas with sonar equipment dangling from boats. Advanced imaging technologies that work so well on land are stymied by the relative impenetrability of water.
That may be about to change. Scientists at Stanford University have announced an innovative system that combines the strengths of light-based devices and those of sound-based devices to finally make mapping the entire sea floor possible from the sky.
The new system is detailed in a study published in IEEE Xplore.
"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," says lead study author and electrical engineer Amin Arbabian of Stanford's School of Engineering in Stanford News.
One of the most reliable ways to map a terrain is by using sonar, which deduces the features of a surface by analyzing sound waves that bounce off it. However, if one were to project sound waves from above into the sea, more than 99.9 percent of those sound waves would be lost as they passed into water. If they managed to reach the seabed and bounce upward out of the water, another 99.9 percent would be lost.
Electromagnetic devices—using light, microwaves, or radar signals—are also fairly useless for ocean-floor mapping from above. Says first author Aidan Fitzpatrick, "Light also loses some energy from reflection, but the bulk of the energy loss is due to absorption by the water." (Ever try to get phone service underwater? Not gonna happen.)
The solution presented in the study is the Photoacoustic Airborne Sonar System (PASS). Its core idea is the combining of sound and light to get the job done. "If we can use light in the air, where light travels well, and sound in the water, where sound travels well, we can get the best of both worlds," says Fitzpatrick.
An imaging session begins with a laser fired down to the water from a craft above the area to be mapped. When it hits the ocean surface, it's absorbed and converted into fresh sound waves that travel down to the target. When these bounce back up to the surface and out into the air and back to PASS technicians, they do still suffer a loss. However, using light on the way in and sound only on the way out cuts that loss in half.
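As a rough back-of-the-envelope illustration (not from the study itself), those percentages can be restated in decibels: keeping 0.1 percent of the energy at an air-water crossing is roughly a 30 dB loss, so a sound-only round trip loses about 60 dB, while sending light in and sound out removes one of the two crossings.

```python
import math

def fraction_to_db(retained_fraction: float) -> float:
    """Convert a retained energy fraction into decibels (negative values mean loss)."""
    return 10 * math.log10(retained_fraction)

# Illustrative figures from the article: ~99.9% of acoustic energy is lost at each
# air-water boundary crossing, i.e. roughly 0.1% is retained per crossing.
retained_per_crossing = 0.001

sound_only_round_trip = 2 * fraction_to_db(retained_per_crossing)  # sound crosses in and out
pass_one_way = fraction_to_db(retained_per_crossing)               # sound crosses out only

print(f"Sound-only round trip: {sound_only_round_trip:.0f} dB")    # about -60 dB
print(f"PASS (light in, sound out): {pass_one_way:.0f} dB")        # about -30 dB
```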
This means that the PASS transducers that ultimately retrieve the sound waves have plenty to work with. "We have developed a system," says Arbabian, "that is sensitive enough to compensate for a loss of this magnitude and still allow for signal detection and imaging." From there, software assembles a 3D image of the submerged target from the acoustic signals.
PASS was initially designed to help scientists image underground plant roots.
Although its developers are confident that PASS will be able to see down thousands of meters into the ocean, so far it's only been tested in an "ocean" about the size of a fish tank—tiny and obviously free of real-world ocean turbulence.
Fitzpatrick says that, "current experiments use static water but we are currently working toward dealing with water waves. This is a challenging, but we think feasible, problem."
Scaling up, Fitzpatrick adds, "Our vision for this technology is on-board a helicopter or drone. We expect the system to be able to fly at tens of meters above the water."
A small proof-of-concept study shows smartphones could help detect drunkenness based on the way you walk.
- The legal blood alcohol concentration (BAC) limit for driving in the U.S is 0.08 percent. You can measure your BAC 15 minutes after your first drink and your levels will remain safe if you consume no more than one standard drink per hour.
- Portable breathalyzers can be used to measure BAC, but not many people own these devices.
- A small proof-of-concept study suggests that your smartphone could detect your drunkenness based on the way you walk.
The legal limit for driving within the United States is a blood alcohol concentration of 0.08 percent. According to BAC Track, you can measure your BAC (blood alcohol content) as soon as 15 minutes after your first drink. BAC Track suggests your BAC level will remain within safe limits if you consume one standard drink per hour.
According to BAC Track, one standard drink is half an ounce of alcohol, which can be:
- One 12 ounce beer
- One 5 ounce glass of wine
- One 1.5 ounce shot of distilled alcohol
There are many things that influence a person's BAC, including how quickly you drink, your body weight, altitude, how much food you've eaten, whether you're male or female, and what kind of medications you're currently on.
A new study has found that your smartphone could actually tell you if your blood alcohol concentration exceeds the limit of 0.08 percent.
The small study that could mean big things for alcohol testing
Image by gritsalak karalak on Shutterstock
While devices such as portable breath analyzers are available, not many people own them due to how expensive they are and the social stigma surrounding them. This 2020 study suggests smartphones could be an alternative. According to PEW Research, up to 81 percent of people own a smartphone.
For this small-scale study, there were 22 participants who visited the lab to consume a vodka-based drink that would raise their breath alcohol concentration to 0.02 percent.
Dr. Brian Suffoletto of the Stanford Medical School's Department of Emergency Medicine (and corresponding author of the study) explains to Medical News Today: "I lost a close friend to a drinking and driving crash in college," Dr. Suffoletto says. "And as an emergency physician, I have taken care of scores of adults with injuries related to acute alcohol intoxication. Because of this, I have dedicated the past 10 years to testing digital interventions to prevent deaths and injury related to excessive alcohol consumption."
How it works:
Before having the drink, each participant had a smartphone strapped to their back and was asked to walk 10 steps in a straight line and then back again. Every hour for the next 7 hours, the participants repeated this walk.
The sensors on the smartphone measured each person's acceleration and their movements (both from side to side and up and down).
This is not the first study of its kind.
Previous research (such as this 2016 study) has used machine learning to determine whether a person was intoxicated. That data, gathered from 34 'intoxicated' participants, generated time and frequency domain features such as sway area and cadence, which were classified using supervised machine learning.
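The pipelines used in these studies are not reproduced here; as a loose sketch of the general approach they describe (deriving simple gait features from accelerometer traces and fitting a supervised classifier), the following Python example uses synthetic data and illustrative feature choices only.

```python
# Minimal sketch of gait-based impairment classification on synthetic accelerometer
# data; the features, signal model, and labels are illustrative, not those used in
# the studies discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def gait_features(accel_xyz: np.ndarray) -> np.ndarray:
    """Simple time-domain features: lateral sway, vertical bounce, and signal roughness."""
    lateral, vertical = accel_xyz[:, 0], accel_xyz[:, 2]
    sway = np.std(lateral)                       # side-to-side movement
    bounce = np.std(vertical)                    # up-and-down movement
    jerk = np.mean(np.abs(np.diff(lateral)))     # roughness of the lateral signal
    return np.array([sway, bounce, jerk])

def synthetic_walk(impaired: bool, n_samples: int = 500) -> np.ndarray:
    """Fake 3-axis accelerometer trace; 'impaired' walks get noisier lateral motion."""
    noise = 0.6 if impaired else 0.3
    t = np.linspace(0, 10, n_samples)
    lateral = noise * rng.standard_normal(n_samples)
    forward = np.sin(2 * np.pi * 1.8 * t) + 0.1 * rng.standard_normal(n_samples)
    vertical = 0.5 * np.sin(2 * np.pi * 3.6 * t) + noise * rng.standard_normal(n_samples)
    return np.column_stack([lateral, forward, vertical])

X = np.array([gait_features(synthetic_walk(impaired=i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)  # 1 = "impaired" walk

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {classifier.score(X_test, y_test):.2f}")
```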
This 2020 study showed promising results of the smartphone analysis: over 90 percent accuracy.
Researchers found through analyzing the data that 92.5 percent of the time they were able to determine if a participant had exceeded the legal BAC limit.
Of course, the study had some limitations.
In real life, a person is very unlikely to keep their smartphone strapped to their back. Placing the phone in your pocket (or carrying it) could impact the accuracy.
This study also measured breath alcohol concentrations, which are on average 15 percent lower than blood alcohol concentrations.
The implications of this small-scale study are exciting.
While this was a relatively small study, it is being used as a "proof of concept" marker for further research. Researchers on this project explain that future research would ideally be done in real-world settings with more volunteers.
Dr. Brian Suffoletto explains to Medical News Today:
"In 5 years, I would like to imagine a world in which, if people go out with friends and drink at risky levels, they get an alert at the first sign of impairment and are sent strategies to help them stop drinking and protect them from high-risk events, like driving, interpersonal violence, and unprotected sexual encounters."
The Silicon Valley titan has promised scholarships for its tech-focused certificate courses alongside $10 million in job training grants.
American has a "middle skills" gap. Good jobs requiring a high school diploma have contracted since the 1990s, while workers wielding a college education continue to excel. But according to a report out of Georgetown University, two out of three entry-level jobs today require some training and education beyond high school but not a bachelor's degree. This demand for middle-skilled workers has resulted from the assimilation of work by the digital revolution, while people have been outpaced by the technology they rely on.
As Stephane Kasriel, former CEO of Upwork, wrote for the World Economic Forum: "Our current education system adapts to change too slowly and operates too ineffectively for this new world. […] Skills, not college pedigree, will be what matters for the future workforce." To bridge the skills gap, many employers and institutions have turned to online education and other non-traditional models. One such employer is Google.
Unable to find enough qualified candidates to fill necessary positions, the Silicon Valley titan created its own certification course on Coursera to teach people IT support skills. The program proved so successful that earlier this week, Google announced it would expand the program to include three new courses. It's also offering scholarships to help in-need people enroll.
A chart showing the increase and decrease of "good jobs" based on level of education required.
The new suite of courses will train students in skills necessary for data analyst, project manager, and UX design positions. While Google has released no specifics on these courses, they will likely follow the current certificate course template. This means they won't require a degree to enroll, will be entirely online, and will be taught by Google Staff.
Like other massive open online courses (or MOOCs), they will likely be self-paced. According to Coursera, Google's current IT support course takes between three to six months to complete at $49 a month. To offset those costs, Google is also offering 100,000 need-based scholarships.
"College degrees are out of reach for many Americans, and you shouldn't need a college diploma to have economic security. We need new, accessible job-training solutions—from enhanced vocational programs to online education—to help America recover and build," wrote Kent Walker, SVP of Global Affairs at Google, in a release.
By the courses' end, students will have created hands-on projects to build their portfolio and will receive a certificate of completion. In the release, Walker states that Google will consider the certification as the "equivalent of a four-year degree" for job seekers. The current IT support course has a credit recommendation from the American Council on Education, meaning it may be possible for students to translate the certificate into some college credits. No word on whether the new courses will also have credit recommendations.
"Launched in 2018, the Google IT Certificate program has become the single most popular certificate on Coursera, and thousands of people have found new jobs and increased their earnings after completing the course," Walker added.
As part of the initiative, Google.org, the company's charity branch, has committed $10 million in job training grants. The grants will go to Google's nonprofit partners, such as YWCA, JFF, and NPower, to help women, veterans, and underrepresented groups obtain jobs skills relevant to today's in-demand positions.
An improved educational pipeline?
The need for middle-skills will grow as the American workforce continues to digitize at an extraordinary rate. According to the Brookings Institution, in 2002 just 5 percent of jobs studied—which covered 90 percent of the workforce—required high-digital skills while 40 percent required medium-level skills. By 2016, that percentage rose to 23 and 48 respectively. In the same period, jobs requiring low-digital skills fell precipitously, from 56 to 30 percent. Beyond rapid job growth and competitive advantage, those with the skills are set to reap the economic rewards.
But more needs to be done.
As of this writing, more than 275,000 people have enrolled in Google's IT Support course, but it's unclear how many companies will accept the certificate as proof of capability. While Google and its Employer Consortium, a group of employers who connect with Google to find prospective candidates, may consider the certificate equivalent to a four-year degree, MOOC certifications lack the universality of either associate's or bachelor's degrees. Without mainstream acceptance, graduates may be contending with each other within a puddle of prospective companies, not the vast, oceanic marketplace of corporate America.
And the COVID-19 pandemic hasn't halted but accelerated digitalization as companies widely adopt new technological trends to survive. Many of the 20 million unemployed Americans may suddenly need to upskill or even find their jobs outsourced to the digital realm. They'll need a quick, yet employer recognized, means to acquire new skills to help find work.
Ten million dollars will buy Google—a company valued at one trillion dollars—a nice commemorative brick in the path to a solution and hopefully help many lives. But we have many miles of work to go.
The programming giant exits the space due to ethical concerns.
- IBM sent a letter to Congress stating it will no longer research, develop, or sell facial recognition software.
- AI-based facial recognition software remains widely available to law enforcement and private industry.
- Facial recognition software is far from infallible, and often reflects its creators' bias.
In what strikes one as a classic case of shutting the stable door long after the horse has bolted, IBM's CEO Arvind Krishna has announced the company will no longer sell general-purpose facial recognition software, citing ethical concerns, in particular with the technology's potential for use in racial profiling by police. They will also cease research and development of this tech.
While laudable, this announcement arguably arrives about five years later than it might have, as numerous companies sell AI-based facial recognition software, often to law enforcement. Anyone who uses Facebook or Google also knows all about this technology, as we watch both companies tag friends and associates for us. (Facebook recently settled a lawsuit regarding the unlawful use of facial recognition for $550 million.)
It's worth noting that no one other than IBM has offered to cease developing and selling facial recognition software.
Image source: Tada Images/Shutterstock
Krishna made the announcement in a public letter to Senators Cory Booker (D-NJ) and Kamala Harris (D-CA), and Representatives Karen Bass (D-CA), Hakeem Jeffries (D-NY), and Jerrold Nadler (D-NY). Democrats in Congress are considering legislation to ban facial-recognition software as reported abuses pile up.
IBM's letter states:
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."
Prior to their exit entirely from facial recognition, IBM had a mixed record. The company scanned nearly a million Creative Commons images from Flickr without their owners' consent. On the other hand, IBM released a public data set in 2018 in an attempt at transparency.
Image source: Best-Backgrounds/Shutterstock
Privacy issues aside — and there definitely are privacy concerns here — the currently available software is immature and prone to errors. Worse, it often reflects the biases of its programmers, who work for private companies with little regulation or oversight. And since commercial facial recognition software is sold to law enforcement, the frequent identification errors and biases are dangerous: They can ruin the lives of innocent people.
The website Gender Shades offers an enlightening demonstration of the type of inaccuracies to which facial recognition is inclined. The page was put together by Joy Buolamwini and Timnit Gebru in 2018, and doesn't reflect the most recent iterations of the software it tests, from three companies, Microsoft, the now-presumably-late IBM Watson, and Face++. Nonetheless, it's telling. To begin with, all three programs did significantly better at identifying men than women. However, when it came to gender identification — simplified to binary designations for simplicity — and skin color, the unimpressive results were genuinely troubling for the bias they reflected.
Amazon's Rekognition facial recognition software is the one most frequently sold to law enforcement, and an ACLU test run in 2018 revealed it also to be pretty bad: It incorrectly identified 28 members of Congress as people in a public database of 28,000 mugshots.
Update, 6/11/2020: Amazon today announced a 12-month moratorium on law-enforcement use of Rekognition, expressing the company's hope that Congress will in the interim enact "stronger regulations to govern the ethical use of facial recognition technology."
In 2019, a federal study by the National Institute of Standards and Technology reported empirical evidence of bias relating to age, gender, and race in the 189 facial recognition algorithms they analyzed. Members of certain groups of people were 100 times more likely to be misidentified. This study is ongoing.
Facial rec's poster child
Image source: Gian Cescon/Unsplash
The company most infamously associated with privacy-invading facial recognition software has to be Clearview AI, about whom we've previously written. This company scraped over 3 billion social media images without posters' permission to develop identification software sold to law enforcement agencies.
The ACLU sued Clearview AI in May of 2020 for engaging in "unlawful, privacy-destroying surveillance activities" in violation of Illinois' Biometric Information Privacy Act. The organization wrote to CNN, "Clearview is as free to look at online photos as anyone with an internet connection. But what it can't do is capture our faceprints — uniquely identifying biometrics — from those photos without consent." The ACLU's complaint alleges "In capturing these billions of faceprints and continuing to store them in a massive database, Clearview has failed, and continues to fail, to take the basic steps necessary to ensure that its conduct is lawful."
The longer term
Though it undoubtedly sends a chill down the spine, the onrush of facial recognition technologies — encouraged by the software industry's infatuation with AI — suggests that we can't escape being identified by our faces for long, legislation or not. Advertisers want to know who we are, law enforcement wants to know who we are, and as our lives revolve ever more decisively around social media, many will no doubt welcome technology that automatically brings us together with friends and associates old and new. Concerns about the potential for abuse may wind up taking a back seat to convenience.
It's been an open question for some time whether privacy is even an issue for those who've grown up surrounded by connected devices. These generations don't care so much about privacy because they — realistically — don't expect it, particularly in the U.S. where very little is legally private.
IBM's principled stand may ultimately be more pyrrhic than anything else.
Get your finances in shape with this powerful money manager.
- Emma is a personal finance and budgeting app to help you better control your money.
- Emma organizes and analyzes all your financial accounts to save you cash.
- A $299.99 lifetime subscription is on sale now for just $39.
Quick...how many monthly subscriptions do you have? Subscriptions for streaming services and cable; website, newspaper, magazine or app access; subscription boxes; or services like a food prep supplier or the gym? It’s probably even more than that number you just blurted out.
Welcome to the subscription economy, where companies are increasingly moving to charging you monthly or annual fees to continue providing you with goods and services that you may not always need.
That’s just one of the ways money can slip out of your pocket without you even realizing it. Emma is a money management and budgeting app that can help you stem that outgoing cashflow and streamline your expenses so money doesn’t get wasted. Right now, a lifetime subscription to Emma is almost 80 percent off, just $39.
Launched last year and already featured in outlets like TechCrunch, Forbes and the Financial Times, the Emma app is described as a fitness tracker for your money. Emma syncs financial statements from all your bank accounts, credit cards and investments, tracks your payments and analyzes your personal finances to help you make smarter decisions about where your money goes.
With Emma, you can set up budgets for all your regular expenditures like monthly bills, groceries, transportation and more. Once it has an overview of your finances, Emma will point out potential problems like overdrafts or an upcoming payment. It’ll also help you spot waste like subscriptions that you can cancel, all to help you keep a tighter rein on your money.
Emma uses end-to-end 256-bit TLS, bank-grade encryption protection, so your sensitive financial information won't fall into the wrong hands.
Right now, a lifetime of Emma Personal Finance and Budgeting app service, a $299.99 value, is on sale for only $39.
Prices are subject to change.
When you buy something through a link in this article or from our shop, Big Think earns a small commission. Thank you for supporting our team's work. | https://bigthink.com/tag/software | 21 |
39 | In economics, the Gini coefficient ( JEE-nee), sometimes called the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income inequality or wealth inequality within a nation or any other group of people. It was developed by the Italian statistician and sociologist Corrado Gini.
The Gini coefficient measures the inequality among values of a frequency distribution (for example, levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (e.g., for a large number of people where only one person has all the income or consumption and all others have none, the Gini coefficient will be nearly one).
For larger groups, values close to one are unlikely. Given the normalization of both the cumulative population and the cumulative share of income used to calculate the Gini coefficient, the measure is not overly sensitive to the specifics of the income distribution, but rather depends only on how incomes vary relative to the other members of a population. The exception to this is in the redistribution of income resulting in a minimum income for all people. When the population is sorted, if its income distribution were to approximate a well-known function, then some representative values could be calculated.
The Gini coefficient was proposed by Gini as a measure of inequality of income or wealth. For OECD countries, in the late 20th century, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovenia being the lowest and Mexico the highest. African countries had the highest pre-tax Gini coefficients in 2008-2009, with South Africa the world's highest, variously estimated to be 0.63 to 0.7, although this figure drops to 0.52 after social assistance is taken into account, and drops again to 0.47 after taxation. The global income Gini coefficient in 2005 has been estimated to be between 0.61 and 0.68 by various sources.
There are some issues in interpreting a Gini coefficient. The same value may result from many different distribution curves. The demographic structure should be taken into account. Countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remains constant. Scholars have devised over a dozen variants of the Gini coefficient.
The Gini coefficient was developed by the Italian statistician Corrado Gini and published in his 1912 paper Variability and Mutability (Italian: Variabilità e mutabilità). Building on the work of American economist Max Lorenz, Gini proposed that the difference between the hypothetical straight line depicting perfect equality, and the actual line depicting people's incomes, be used as a measure of inequality.
The Gini coefficient is a single number that demonstrates a degree of inequality in a distribution of income/wealth. It is used to estimate how far a country's wealth or income distribution deviates from a totally equal distribution.
In terms of income-ordered population percentiles, the Gini coefficient is the cumulative shortfall from equal share of the total income up to each percentile. That summed shortfall is then divided by the value it would have in the case of complete equality.
The Gini coefficient is usually defined mathematically based on the Lorenz curve, which plots the proportion of the total income of the population (y axis) that is cumulatively earned by the bottom x of the population (see diagram). The line at 45 degrees thus represents perfect equality of incomes. The Gini coefficient can then be thought of as the ratio of the area that lies between the line of equality and the Lorenz curve (marked A in the diagram) over the total area under the line of equality (marked A and B in the diagram); i.e., G = A / (A + B). It is also equal to 2A and to 1 − 2B due to the fact that A + B = 1/2 (since the axes scale from 0 to 1).
If all people have non-negative income (or wealth, as the case may be), the Gini coefficient can theoretically range from 0 (complete equality) to 1 (complete inequality); it is sometimes expressed as a percentage ranging between 0 and 100. In reality, both extreme values are not quite reached. If negative values are possible (such as the negative wealth of people with debts), then the Gini coefficient could theoretically be more than 1. Normally the mean (or total) is assumed positive, which rules out a Gini coefficient less than zero.
An alternative approach is to define the Gini coefficient as half of the relative mean absolute difference, which is mathematically equivalent to the definition based on the Lorenz curve. The mean absolute difference is the average absolute difference of all pairs of items of the population, and the relative mean absolute difference is the mean absolute difference divided by the average, x̄, to normalize for scale. If x_i is the wealth or income of person i, and there are n persons, then the Gini coefficient G is given by:

G = ( Σ_i Σ_j |x_i − x_j| ) / ( 2 n² x̄ )
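As an illustration (not part of the original text), this discrete definition translates directly into a few lines of Python; the O(n²) pairwise form below is fine for small samples:

```python
def gini(incomes):
    """Gini coefficient as half the relative mean absolute difference.

    Uses the O(n^2) pairwise formula G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean),
    which is adequate for small samples."""
    xs = list(incomes)
    n = len(xs)
    mean = sum(xs) / n
    total_abs_diff = sum(abs(xi - xj) for xi in xs for xj in xs)
    return total_abs_diff / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))    # 0.0   (perfect equality)
print(gini([0, 0, 0, 10]))   # 0.75  (one person holds everything; equals 1 - 1/n for n = 4)
```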
When the income (or wealth) distribution is given as a continuous probability density function p(x), the Gini coefficient is again half of the relative mean absolute difference:

G = (1 / (2μ)) ∫∫ p(x) p(y) |x − y| dx dy

where μ is the mean of the distribution, and the lower limits of integration may be replaced by zero when all incomes are positive.
While the income distribution of any particular country won't always follow theoretical models in reality, these functions give a qualitative understanding of the income distribution in a nation given the Gini coefficient.
The extreme cases are the most equal society, in which every person receives the same income (G = 0), and the most unequal society, where a single person receives 100% of the total income and the remaining N − 1 people receive none (G = 1 − 1/N).
A more general simplified case also just distinguishes two levels of income, low and high. If the high income group is a proportion u of the population and earns a proportion f of all income, then the Gini coefficient is f − u. An actual, more graded distribution with these same values u and f will always have a higher Gini coefficient than f − u.
The proverbial case where the richest 20% have 80% of all income (see Pareto principle) would lead to an income Gini coefficient of at least 60%.
An often cited case that 1% of all the world's population owns 50% of all wealth, means a wealth Gini coefficient of at least 49%.
In some cases, this equation can be applied to calculate the Gini coefficient without direct reference to the Lorenz curve. For example, taking y to mean the income or wealth of a person or household, for a population with values y_i, i = 1 to n, indexed in non-decreasing order (y_i ≤ y_{i+1}):

G = (1/n) ( n + 1 − 2 ( Σ_i (n + 1 − i) y_i ) / ( Σ_i y_i ) )
Since the Gini coefficient is half the relative mean absolute difference, it can also be calculated using formulas for the relative mean absolute difference. For a random sample S consisting of values y_i, i = 1 to n, that are indexed in non-decreasing order (y_i ≤ y_{i+1}), the statistic

G(S) = (1/(n − 1)) ( n + 1 − 2 ( Σ_i (n + 1 − i) y_i ) / ( Σ_i y_i ) )

is a consistent estimator of the population Gini coefficient, but is not, in general, unbiased.
There does not exist a sample statistic that is in general an unbiased estimator of the population Gini coefficient, like the relative mean absolute difference.
For a discrete probability distribution with probability mass function f(y_i), i = 1 to n, where f(y_i) is the fraction of the population with income or wealth y_i > 0, the Gini coefficient is:

G = (1/(2μ)) Σ_i Σ_j f(y_i) f(y_j) |y_i − y_j|

where μ = Σ_i y_i f(y_i) is the mean income.
When the population is large, the income distribution may be represented by a continuous probability density function f(x), where f(x) dx is the fraction of the population with wealth or income in the interval dx about x. If F(x) is the cumulative distribution function for f(x), then the Lorenz curve L(F) may be represented as a function parametric in L(x) and F(x), and the value of B can be found by integration:

B = ∫₀¹ L(F) dF
The Gini coefficient can also be calculated directly from the cumulative distribution function of the distribution F(y). Defining μ as the mean of the distribution, and specifying that F(y) is zero for all negative values, the Gini coefficient is given by:

G = 1 − (1/μ) ∫₀^∞ (1 − F(y))² dy = (1/μ) ∫₀^∞ F(y) (1 − F(y)) dy
The latter result comes from integration by parts. (Note that this formula can be applied when there are negative values if the integration is taken from minus infinity to plus infinity.)
The Gini coefficient may also be expressed in terms of the quantile function Q(F) (the inverse of the cumulative distribution function, Q(F(x)) = x):

G = (2/μ) ∫₀¹ Q(F) (F − 1/2) dF
For some functional forms, the Gini index can be calculated explicitly. For example, if y follows a lognormal distribution with the standard deviation of logs equal to σ, then G = erf(σ/2), where erf is the error function (equivalently, G = 2Φ(σ/√2) − 1, where Φ is the cumulative distribution function of a standard normal distribution). Closed-form expressions are likewise known for other common probability density functions with support on [0, ∞). The Dirac delta distribution represents the case where everyone has the same wealth (or income); it implies that there are no variations at all between incomes.
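Assuming the lognormal result quoted above, G = erf(σ/2), a quick Monte Carlo check in Python (a sketch, not part of the source text) compares the closed form with a sample-based estimate:

```python
import math
import numpy as np

def gini_sample(xs: np.ndarray) -> float:
    """Sample Gini via the pairwise mean absolute difference (O(n^2) memory)."""
    diffs = np.abs(xs[:, None] - xs[None, :])
    return diffs.sum() / (2 * len(xs) ** 2 * xs.mean())

sigma = 0.8
closed_form = math.erf(sigma / 2)          # Gini of a lognormal with log-sd sigma

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=0.0, sigma=sigma, size=2000)

print(f"closed form : {closed_form:.3f}")
print(f"sample Gini : {gini_sample(sample):.3f}")   # should land close to the closed form
```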
Sometimes the entire Lorenz curve is not known, and only values at certain intervals are given. In that case, the Gini coefficient can be approximated by using various techniques for interpolating the missing values of the Lorenz curve. If (X_k, Y_k) are the known points on the Lorenz curve, with the X_k indexed in increasing order (X_{k−1} < X_k), so that:
- X_k is the cumulated proportion of the population variable, for k = 0, ..., n, with X_0 = 0 and X_n = 1;
- Y_k is the cumulated proportion of the income variable, for k = 0, ..., n, with Y_0 = 0, Y_n = 1, and the Y_k non-decreasing.
If the Lorenz curve is approximated on each interval as a line between consecutive points, then the area B can be approximated with trapezoids, and

G_1 = 1 − Σ_{k=1..n} (X_k − X_{k−1}) (Y_k + Y_{k−1})
is the resulting approximation for G. More accurate results can be obtained using other methods to approximate the area B, such as approximating the Lorenz curve with a quadratic function across pairs of intervals, or building an appropriately smooth approximation to the underlying distribution function that matches the known data. If the population mean and boundary values for each interval are also known, these can also often be used to improve the accuracy of the approximation.
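A minimal Python sketch of this trapezoidal approximation (the sample points below are invented for illustration):

```python
def gini_from_lorenz(points):
    """Approximate the Gini coefficient from known Lorenz-curve points.

    points: list of (X, Y) coordinates including (0, 0) and (1, 1), with X increasing.
    Uses the trapezoid rule: G = 1 - sum over k of (X_k - X_{k-1}) * (Y_k + Y_{k-1})."""
    g = 1.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        g -= (x1 - x0) * (y1 + y0)
    return g

# Perfect equality: the Lorenz curve is the 45-degree line.
print(gini_from_lorenz([(0, 0), (0.5, 0.5), (1, 1)]))       # 0.0
# The bottom 75% of the population earns 25% of the income.
print(gini_from_lorenz([(0, 0), (0.75, 0.25), (1, 1)]))     # 0.5
```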
The Gini coefficient calculated from a sample is a statistic and its standard error, or confidence intervals for the population Gini coefficient, should be reported. These can be calculated using bootstrap techniques but those proposed have been mathematically complicated and computationally onerous even in an era of fast computers. Economist Tomson Ogwang made the process more efficient by setting up a "trick regression model" in which respective income variables in the sample are ranked with the lowest income being allocated rank 1. The model then expresses the rank (dependent variable) as the sum of a constant A and a normal error term whose variance is inversely proportional to yk;
Thus, G can be expressed as a function of the weighted least squares estimate of the constant A and that this can be used to speed up the calculation of the jackknife estimate for the standard error. Economist David Giles argued that the standard error of the estimate of A can be used to derive that of the estimate of G directly without using a jackknife at all. This method only requires the use of ordinary least squares regression after ordering the sample data. The results compare favorably with the estimates from the jackknife with agreement improving with increasing sample size.
However it has since been argued that this is dependent on the model's assumptions about the error distributions and the independence of error terms, assumptions that are often not valid for real data sets. There is still ongoing debate surrounding this topic.
One alternative formulation (associated with Jasso and Deaton) writes the Gini coefficient as

G = (N + 1)/(N − 1) − (2 / (N (N − 1) μ)) Σ_i P_i X_i

where μ is the mean income of the population, P_i is the income rank P of person i, with income X_i, such that the richest person receives a rank of 1 and the poorest a rank of N. This effectively gives higher weight to poorer people in the income distribution, which allows the Gini to meet the Transfer Principle. Note that the Jasso-Deaton formula rescales the coefficient so that its value is 1 if all the X_i are zero except one. Note however Allison's reply on the need to divide by N² instead.
FAO explains another version of the formula.
The Gini coefficient and other standard inequality indices reduce to a common form. Perfect equality--the absence of inequality--exists when and only when the inequality ratio, r_j = x_j / x̄, equals 1 for all j units in some population (for example, there is perfect income equality when everyone's income x_j equals the mean income x̄, so that r_j = 1 for everyone). Measures of inequality, then, are measures of the average deviations of the r_j from 1; the greater the average deviation, the greater the inequality. Based on these observations the inequality indices have this common form:

Inequality = Σ_j p_j f(r_j)
where pj weights the units by their population share, and f(rj) is a function of the deviation of each unit's rj from 1, the point of equality. The insight of this generalised inequality index is that inequality indices differ because they employ different functions of the distance of the inequality ratios (the rj) from 1.
Gini coefficients of income are calculated on market income as well as disposable income basis. The Gini coefficient on market income--sometimes referred to as a pre-tax Gini coefficient--is calculated on income before taxes and transfers, and it measures inequality in income without considering the effect of taxes and social spending already in place in a country. The Gini coefficient on disposable income--sometimes referred to as after-tax Gini coefficient--is calculated on income after taxes and transfers, and it measures inequality in income after considering the effect of taxes and social spending already in place in a country.
For OECD countries over the 2008-2009 period, the Gini coefficient (pre-taxes and transfers) for a total population ranged between 0.34 and 0.53, with South Korea the lowest and Italy the highest. The Gini coefficient (after-taxes and transfers) for a total population ranged between 0.25 and 0.48, with Denmark the lowest and Mexico the highest. For the United States, the country with the largest population of the OECD countries, the pre-tax Gini index was 0.49, and the after-tax Gini index was 0.38, in 2008-2009. The OECD averages for total populations in OECD countries was 0.46 for the pre-tax income Gini index and 0.31 for the after-tax income Gini index. Taxes and social spending that were in place in 2008-2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries--especially Nordic and Continental welfare states--achieve lower levels of income inequality than other countries."
Using the Gini can help quantify differences in welfare and compensation policies and philosophies. However it should be borne in mind that the Gini coefficient can be misleading when used to make political comparisons between large and small countries or those with different immigration policies (see limitations section).
The Gini coefficient for the entire world has been estimated by various parties to be between 0.61 and 0.68. The graph shows the values expressed as a percentage in their historical development for a number of countries.
According to UNICEF, Latin America and the Caribbean region had the highest net income Gini index in the world at 48.3, on unweighted average basis in 2008. The remaining regional averages were: sub-Saharan Africa (44.2), Asia (40.4), Middle East and North Africa (39.2), Eastern Europe and Central Asia (35.4), and High-income Countries (30.9). Using the same method, the United States is claimed to have a Gini index of 36, while South Africa had the highest income Gini index score of 67.8.
Taking income distribution of all human beings, worldwide income inequality has been constantly increasing since the early 19th century. There was a steady increase in the global income inequality Gini score from 1820 to 2002, with a significant increase between 1980 and 2002. This trend appears to have peaked and begun a reversal with rapid economic growth in emerging economies, particularly in the large populations of BRIC countries.
Estimates of the world income Gini coefficient over the last 200 years have been calculated by Milanovic.
More detailed data from similar sources show a continuous decline since 1988. This is attributed to globalization increasing incomes for billions of poor people, mostly in countries like China and India. Developing countries like Brazil have also improved basic services like health care, education, and sanitation; others, like Chile and Mexico, have enacted more progressive tax policies.
Gini coefficient is widely used in fields as diverse as sociology, economics, health science, ecology, engineering and agriculture. For example, in social sciences and economics, in addition to income Gini coefficients, scholars have published education Gini coefficients and opportunity Gini coefficients.
The education Gini index estimates the inequality in education for a given population. It is used to discern trends in social development through educational attainment over time. A study of 85 countries by three World Bank economists, Vinod Thomas, Yan Wang, and Xibo Fan, estimated that Mali had the highest education Gini index, 0.92, in 1990 (implying very high inequality in educational attainment across the population), while the United States had the lowest, 0.14. Between 1960 and 1990, China, India, and South Korea had the fastest drops in the education Gini index. The authors also claim the education Gini index for the United States slightly increased over the 1980-1990 period.
Similar in concept to the income Gini coefficient, the opportunity Gini coefficient measures inequality of opportunity. The concept builds on Amartya Sen's suggestion that inequality coefficients of social development should be premised on the process of enlarging people's choices and enhancing their capabilities, rather than on the process of reducing income inequality. Kovacevic, in a review of the opportunity Gini coefficient, explains that the coefficient estimates how well a society enables its citizens to achieve success in life, where the success is based on a person's choices, efforts and talents, not a background defined by a set of predetermined circumstances at birth, such as gender, race, place of birth, parents' income, and circumstances beyond the control of that individual.
In 1978, Anthony Shorrocks introduced a measure based on income Gini coefficients to estimate income mobility. This measure, generalized by Maasoumi and Zandvakili, is now generally referred to as Shorrocks index, sometimes as Shorrocks mobility index or Shorrocks rigidity index. It attempts to estimate whether the income inequality Gini coefficient is permanent or temporary, and to what extent a country or region enables economic mobility to its people so that they can move from one (e.g., bottom 20%) income quantile to another (e.g., middle 20%) over time. In other words, Shorrocks index compares inequality of short-term earnings such as annual income of households, to inequality of long-term earnings such as 5-year or 10-year total income for same households.
The Shorrocks index is calculated in a number of different ways, a common approach being from the ratio of income Gini coefficients between short-term and long-term for the same region or country.
A 2010 study using social security income data for the United States since 1937 and Gini-based Shorrocks indices concludes that income mobility in the United States has had a complicated history, primarily due to mass influx of women into the American labor force after World War II. Income inequality and income mobility trends have been different for men and women workers between 1937 and the 2000s. When men and women are considered together, the Gini coefficient-based Shorrocks index trends imply long-term income inequality has been substantially reduced among all workers, in recent decades for the United States. Other scholars, using just 1990s data or other short periods have come to different conclusions. For example, Sastre and Ayala, conclude from their study of income Gini coefficient data between 1993 and 1998 for six developed economies, that France had the least income mobility, Italy the highest, and the United States and Germany intermediate levels of income mobility over those 5 years.
The Gini coefficient has features that make it useful as a measure of dispersion in a population, and inequalities in particular.
The Gini coefficient is a relative measure. It is possible for the Gini coefficient of a developing country to rise (due to increasing inequality of income) while the number of people in absolute poverty decreases. This is because the Gini coefficient measures relative, not absolute, wealth. Changing income inequality, measured by Gini coefficients, can be due to structural changes in a society such as growing population (baby booms, aging populations, increased divorce rates, extended family households splitting into nuclear families, emigration, immigration) and income mobility. Gini coefficients are simple, and this simplicity can lead to oversights and can confuse the comparison of different populations; for example, while both Bangladesh (per capita income of $1,693) and the Netherlands (per capita income of $42,183) had an income Gini coefficient of 0.31 in 2010, the quality of life, economic opportunity and absolute income in these countries are very different, i.e. countries may have identical Gini coefficients, but differ greatly in wealth. Basic necessities may be available to all in a developed economy, while in an undeveloped economy with the same Gini coefficient, basic necessities may be unavailable to most or unequally available, due to lower absolute wealth.
Even when the total income of a population is the same, in certain situations two countries with different income distributions can have the same Gini index (e.g. cases when income Lorenz Curves cross). Table A illustrates one such situation. Both countries have a Gini coefficient of 0.2, but the average income distributions for household groups are different. As another example, in a population where the lowest 50% of individuals have no income and the other 50% have equal income, the Gini coefficient is 0.5; whereas for another population where the lowest 75% of people have 25% of income and the top 25% have 75% of the income, the Gini index is also 0.5. Economies with similar incomes and Gini coefficients can have very different income distributions. Bellù and Liberati claim that to rank income inequality between two different populations based on their Gini indices is sometimes not possible, or misleading.
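To make the point concrete, the short sketch below (toy populations chosen for illustration, not taken from the cited authors) computes the Gini coefficient for the two example distributions described above; both come out at 0.5 even though the distributions are very different:

```python
def gini(xs):
    """Pairwise-difference Gini for a small list of incomes (O(n^2) sketch)."""
    n, mu = len(xs), sum(xs) / len(xs)
    return sum(abs(a - b) for a in xs for b in xs) / (2 * n * n * mu)

# Population 1: the poorest half has nothing, the richest half shares income equally.
pop1 = [0, 0, 0, 0, 100, 100, 100, 100]
# Population 2: the poorest 75% shares 25% of total income, the richest 25% gets 75%.
pop2 = [25 / 6] * 6 + [75 / 2] * 2

print(round(gini(pop1), 3), round(gini(pop2), 3))   # 0.5 0.5
```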
A Gini index does not contain information about absolute national or personal incomes. Populations can have very low income Gini indices yet simultaneously a very high wealth Gini index. By measuring inequality in income, the Gini ignores the differential efficiency of use of household income. By ignoring wealth (except as it contributes to income), the Gini can create the appearance of inequality when the people compared are at different stages in their lives. Wealthy countries such as Sweden can show a low Gini coefficient for disposable income of 0.31, thereby appearing equal, yet have a very high Gini coefficient for wealth of 0.79 to 0.86, suggesting an extremely unequal wealth distribution in their societies. These factors are not assessed in income-based Gini.
|1||20,000||1 & 2||50,000|
|3||40,000||3 & 4||90,000|
|5||60,000||5 & 6||130,000|
|7||80,000||7 & 8||170,000|
|9||120,000||9 & 10||270,000|
The Gini index has a downward bias for small populations. Counties or states or countries with small populations and less diverse economies will tend to report small Gini coefficients. For economically diverse large population groups, a much higher coefficient is expected than for each of its regions. Taking the world economy as one, and income distribution for all human beings, for example, different scholars estimate the global Gini index to range between 0.61 and 0.68. As with other inequality coefficients, the Gini coefficient is influenced by the granularity of the measurements. For example, five 20% quantiles (low granularity) will usually yield a lower Gini coefficient than twenty 5% quantiles (high granularity) for the same distribution. Philippe Monfort has shown that using inconsistent or unspecified granularity limits the usefulness of Gini coefficient measurements.
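The granularity effect is easy to reproduce in a short simulation (a sketch; the lognormal distribution and group counts below are arbitrary choices): replacing individual incomes with quantile-group means discards within-group inequality, so coarser groupings report a lower Gini.

```python
import numpy as np

def gini(xs: np.ndarray) -> float:
    """Pairwise mean-absolute-difference Gini (O(n^2) memory; fine for a few thousand values)."""
    diffs = np.abs(xs[:, None] - xs[None, :])
    return diffs.sum() / (2 * len(xs) ** 2 * xs.mean())

def gini_from_quantile_means(xs: np.ndarray, n_groups: int) -> float:
    """Split the sorted sample into equal-sized groups and compute the Gini of the group means."""
    groups = np.array_split(np.sort(xs), n_groups)
    return gini(np.array([g.mean() for g in groups]))

rng = np.random.default_rng(2)
incomes = rng.lognormal(mean=10, sigma=0.8, size=2000)

print(f"full sample  : {gini(incomes):.3f}")
print(f"20 quantiles : {gini_from_quantile_means(incomes, 20):.3f}")
print(f"5 quantiles  : {gini_from_quantile_means(incomes, 5):.3f}")   # the lowest of the three
```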
The Gini coefficient measure gives different results when applied to individuals instead of households, for the same economy and same income distributions. If household data is used, the measured value of income Gini depends on how the household is defined. When different populations are not measured with consistent definitions, comparison is not meaningful.
Deininger and Squire (1996) show that income Gini coefficient based on individual income, rather than household income, are different. For example, for the United States, they find that the individual income-based Gini index was 0.35, while for France it was 0.43. According to their individual focused method, in the 108 countries they studied, South Africa had the world's highest Gini coefficient at 0.62, Malaysia had Asia's highest Gini coefficient at 0.5, Brazil the highest at 0.57 in Latin America and Caribbean region, and Turkey the highest at 0.5 in OECD countries.
Table C. Household money income distribution for the United States (in 2010 adjusted dollars):
| Income bracket | % of population, 1979 | % of population, 2010 |
| $15,000 - $24,999 | 11.9% | 12.0% |
| $25,000 - $34,999 | 12.1% | 10.9% |
| $35,000 - $49,999 | 15.4% | 13.9% |
| $50,000 - $74,999 | 22.1% | 17.7% |
| $75,000 - $99,999 | 12.4% | 11.4% |
| $100,000 - $149,999 | 8.3% | 12.1% |
| $150,000 - $199,999 | 2.0% | 4.5% |
| $200,000 and over | 1.2% | 3.9% |
| United States' Gini on pre-tax basis | | |
Expanding on the importance of life-span measures: the Gini coefficient, as a point estimate of equality at a certain time, ignores life-span changes in income. Typically, increases in the proportion of young or old members of a society will drive apparent changes in equality, simply because people generally have lower incomes and wealth when they are young than when they are old. Because of this, factors such as age distribution within a population and mobility within income classes can create the appearance of inequality when none exists once demographic effects are taken into account. Thus a given economy may have a higher Gini coefficient at any one point in time compared to another, while the Gini coefficient calculated over individuals' lifetime incomes is actually lower than that of the apparently more equal (at a given point in time) economy. Essentially, what matters is not just inequality in any particular year, but the composition of the distribution over time.
Kwok claims the income Gini coefficient for Hong Kong has been high (0.434 in 2010) in part because of structural changes in its population. Over recent decades, Hong Kong has witnessed increasing numbers of small households, elderly households and elderly people living alone. The combined income is now split into more households. Many old people are living separately from their children in Hong Kong. These social changes have caused substantial changes in household income distribution. The income Gini coefficient, claims Kwok, does not discern these structural changes in its society. Household money income distribution for the United States, summarized in Table C of this section, confirms that this issue is not limited to Hong Kong. According to the US Census Bureau, between 1979 and 2010 the population of the United States experienced structural changes in overall households, the income for all income brackets increased in inflation-adjusted terms, household income distributions shifted into higher income brackets over time, and the income Gini coefficient increased.
Another limitation of the Gini coefficient is that it is not a proper measure of egalitarianism, as it only measures income dispersion. For example, if two equally egalitarian countries pursue different immigration policies, the country accepting a higher proportion of low-income or impoverished migrants will report a higher Gini coefficient and therefore may appear to exhibit more income inequality.
Some countries distribute benefits that are difficult to value. Subsidized housing, medical care, education and other such services are difficult to value objectively, as the value depends on the quality and extent of the benefit. In the absence of free markets, valuing these income transfers as household income is subjective. The theoretical model of the Gini coefficient is limited to accepting correct or incorrect subjective assumptions.
In subsistence-driven and informal economies, people may have significant income in forms other than money, for example through subsistence farming or bartering. These incomes tend to accrue to the segment of the population that is below the poverty line or very poor, in emerging and transitional economy countries such as those in sub-Saharan Africa, Latin America, Asia and Eastern Europe. The informal economy accounts for over half of global employment and as much as 90 per cent of employment in some of the poorer sub-Saharan countries with high official Gini inequality coefficients. Schneider et al., in their 2010 study of 162 countries, report that about 31.2%, or about $20 trillion, of the world's GDP is informal. In developing countries, the informal economy predominates for all income brackets except for the richer, urban upper-income-bracket populations. Even in developed economies, between 8% (United States) and 27% (Italy) of each nation's GDP is informal, and the resulting informal income predominates as a livelihood activity for those in the lowest income brackets. The value and distribution of incomes from the informal or underground economy is difficult to quantify, making true income Gini coefficient estimates difficult. Different assumptions and quantifications of these incomes will yield different Gini coefficients.
The Gini coefficient has some mathematical limitations as well. It is not additive: the Gini coefficients of different sets of people cannot be averaged to obtain the Gini coefficient of all the people in the sets combined.
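A minimal sketch of the non-additivity point with made-up figures: two subgroups that are each perfectly equal pool into a population with a clearly positive Gini, so subgroup coefficients cannot simply be averaged.

```python
import numpy as np

def gini(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

group_a = [10, 10, 10, 10]       # perfectly equal within the group -> Gini 0
group_b = [100, 100, 100, 100]   # also perfectly equal within the group -> Gini 0

print(gini(group_a), gini(group_b))   # 0.0 0.0
print(gini(group_a + group_b))        # > 0: pooling exposes between-group inequality
```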
Given the limitations of the Gini coefficient, other statistical methods are used in combination or as an alternative measure of population dispersion. For example, entropy measures are frequently used (e.g. the Atkinson index or the Theil Index and Mean log deviation as special cases of the generalized entropy index). These measures attempt to compare the distribution of resources by intelligent agents in the market with a maximum entropy random distribution, which would occur if these agents acted like non-interacting particles in a closed system following the laws of statistical physics.
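As a small companion sketch (hypothetical incomes, standard textbook formulas), the Theil T index and the mean log deviation mentioned above are the alpha = 1 and alpha = 0 members of the generalized entropy family:

```python
import numpy as np

def theil_t(x):
    """Theil T index: mean of (x/mu) * ln(x/mu), generalized entropy with alpha = 1."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

def mean_log_deviation(x):
    """Theil L / mean log deviation: mean of ln(mu/x), generalized entropy with alpha = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.log(x.mean() / x)))

incomes = [20_000, 30_000, 40_000, 50_000, 60_000, 80_000, 120_000]  # hypothetical
print(theil_t(incomes), mean_log_deviation(incomes))
```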
There is a summary measure of the diagnostic ability of a binary classifier system that is also called the Gini coefficient, defined as twice the area between the receiver operating characteristic (ROC) curve and its diagonal. It is related to the AUC (Area Under the ROC Curve) measure of performance, given by G1 = 2·AUC − 1, and to the Mann-Whitney U statistic. Although both Gini coefficients are defined as areas between certain curves and share certain properties, there is no direct simple relation between the Gini coefficient of statistical dispersion and the Gini coefficient of a classifier.
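A minimal sketch of the classifier-side usage, assuming scikit-learn is available; the labels and model scores below are purely hypothetical, and the classifier Gini is obtained from the ROC AUC via 2·AUC − 1.

```python
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                            # hypothetical class labels
y_score = [0.10, 0.30, 0.35, 0.80, 0.20, 0.70, 0.45, 0.90]   # hypothetical model scores

auc = roc_auc_score(y_true, y_score)
gini_classifier = 2 * auc - 1    # twice the area between the ROC curve and the diagonal
print(auc, gini_classifier)
```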
In certain fields such as ecology, the inverse Simpson's index 1/λ is used to quantify diversity, and this should not be confused with the Simpson index λ. These indicators are related to the Gini. The inverse Simpson index increases with diversity, unlike the Simpson index and Gini coefficient, which decrease with diversity. The Simpson index is in the range [0, 1], where 0 means maximum and 1 means minimum diversity (or heterogeneity). Since diversity indices typically increase with increasing heterogeneity, the Simpson index is often transformed into the inverse Simpson index, or its complement 1 - λ is used, known as the Gini-Simpson index.
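A small sketch relating the three diversity quantities named above, with hypothetical species counts: the Simpson index λ, its inverse 1/λ, and the Gini-Simpson complement 1 − λ.

```python
import numpy as np

def simpson_lambda(counts):
    """Simpson index: probability that two random draws belong to the same type."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(np.sum(p ** 2))

counts = [50, 30, 15, 5]        # hypothetical species abundances
lam = simpson_lambda(counts)
print(lam)                      # Simpson index (falls as diversity rises)
print(1 / lam)                  # inverse Simpson index (rises with diversity)
print(1 - lam)                  # Gini-Simpson index (also rises with diversity)
```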
Although the Gini coefficient is most popular in economics, it can in theory be applied in any field of science that studies a distribution. For example, in ecology the Gini coefficient has been used as a measure of biodiversity, where the cumulative proportion of species is plotted against cumulative proportion of individuals. In health, it has been used as a measure of the inequality of health related quality of life in a population. In education, it has been used as a measure of the inequality of universities. In chemistry it has been used to express the selectivity of protein kinase inhibitors against a panel of kinases. In engineering, it has been used to evaluate the fairness achieved by Internet routers in scheduling packet transmissions from different flows of traffic.
A 2005 study accessed US census data to measure home computer ownership, and used the Gini coefficient to measure inequalities amongst whites and African Americans. Results indicated that although decreasing overall, home computer ownership inequality is substantially smaller among white households.
A 2016 peer-reviewed study titled Employing the Gini coefficient to measure participation inequality in treatment-focused Digital Health Social Networks illustrated that the Gini coefficient was helpful and accurate in measuring shifts in inequality; however, as a standalone metric it failed to incorporate overall network size.
The discriminatory power refers to a credit risk model's ability to differentiate between defaulting and non-defaulting clients. The formula from the calculation section above may be used for the final model and also at the individual model factor level, to quantify the discriminatory power of individual factors. It is related to the accuracy ratio in population assessment models.
Aboriginal peoples have inhabited what is now Manitoba for thousands of years. In the late 17th century, fur traders arrived in the area when it was part of Rupert's Land and owned by the Hudson's Bay Company. In 1869, negotiations for the creation of the province of Manitoba led to an armed uprising of the Métis people against the Government of Canada, a conflict known as the Red River Rebellion. The rebellion's resolution led to the Parliament of Canada passing the Manitoba Act in 1870 that created the province.
Manitoba's capital and largest city, Winnipeg, is the eighth-largest census metropolitan area in Canada. Other census agglomerations in the province are Brandon, Steinbach, Portage la Prairie, and Thompson.
The name Manitoba is believed to be derived from the Cree, Ojibwe or Assiniboine languages. The name derives from Cree manitou-wapow or Ojibwa manidoobaa, both meaning "straits of Manitou, the Great Spirit", a place referring to what are now called The Narrows in the centre of Lake Manitoba. It may also be from the Assiniboine for "Lake of the Prairie".
The lake was known to French explorers as Lac des Prairies. Thomas Spence chose the name to refer to a new republic he proposed for the area south of the lake. Métis leader Louis Riel also chose the name, and it was accepted in Ottawa under the Manitoba Act of 1870.
Manitoba is bordered by the provinces of Ontario to the east and Saskatchewan to the west, the territories of Nunavut to the north, and Northwest Territories to the northwest, and the US states of North Dakota and Minnesota to the south. It adjoins Hudson Bay to the northeast, and is the only prairie province to have a saltwater coastline. The Port of Churchill is Canada's only Arctic deep-water port and the shortest shipping route between North America and Asia. Lake Winnipeg is the tenth-largest freshwater lake in the world. Hudson Bay is the world's second-largest bay. Manitoba is at the heart of the giant Hudson Bay watershed, once known as Rupert's Land. It was a vital area of the Hudson's Bay Company, with many rivers and lakes that provided excellent opportunities for the lucrative fur trade.
The province has a saltwater coastline bordering Hudson Bay and more than 110,000 lakes, covering approximately 15.6 percent or 101,593 square kilometres (39,225 sq mi) of its surface area. Manitoba's major lakes are Lake Manitoba, Lake Winnipegosis, and Lake Winnipeg, the tenth-largest freshwater lake in the world. Some traditional Native lands and boreal forest on Lake Winnipeg's east side are a proposed UNESCO World Heritage Site.
Manitoba is at the centre of the Hudson Bay drainage basin, with a high volume of the water draining into Lake Winnipeg and then north down the Nelson River into Hudson Bay. This basin's rivers reach far west to the mountains, far south into the United States, and east into Ontario. Major watercourses include the Red, Assiniboine, Nelson, Winnipeg, Hayes, Whiteshell and Churchill rivers. Most of Manitoba's inhabited south has developed in the prehistoric bed of Glacial Lake Agassiz. This region, particularly the Red River Valley, is flat and fertile; receding glaciers left hilly and rocky areas throughout the province.
Baldy Mountain is the province's highest point at 832 metres (2,730 ft) above sea level, and the Hudson Bay coast is the lowest at sea level. Riding Mountain, the Pembina Hills, Sandilands Provincial Forest, and the Canadian Shield are also upland regions. Much of the province's sparsely inhabited north and east lie on the irregular granite Canadian Shield, including Whiteshell, Atikaki, and Nopiming Provincial Parks.
Extensive agriculture is found only in the province's southern areas, although there is grain farming in the Carrot Valley Region (near The Pas). The most common agricultural activity is cattle husbandry (34.6%), followed by assorted grains (19.0%) and oilseed (7.9%). Around 12 percent of Canada's farmland is in Manitoba.
Manitoba has an extreme continental climate. Temperatures and precipitation generally decrease from south to north and increase from east to west. Manitoba is far from the moderating influences of mountain ranges or large bodies of water. Because of the generally flat landscape, it is exposed to cold Arctic high-pressure air masses from the northwest during January and February. In the summer, air masses sometimes come out of the Southern United States, as warm humid air is drawn northward from the Gulf of Mexico. Temperatures exceed 30 °C (86 °F) numerous times each summer, and the combination of heat and humidity can bring the humidex value to the mid-40s. Carman, Manitoba recorded the second-highest humidex ever in Canada in 2007, with 53.0. According to Environment Canada, Manitoba ranked first for clearest skies year round, and ranked second for clearest skies in the summer and for the sunniest province in the winter and spring.
Southern Manitoba (including the city of Winnipeg) falls into the humid continental climate zone (Köppen Dfb). This area is cold and windy in the winter and has frequent blizzards because of the open landscape. Summers are warm with a moderate length. This region is the most humid area in the prairie provinces, with moderate precipitation. Southwestern Manitoba, though under the same climate classification as the rest of southern Manitoba, is closer to the semi-arid interior of Palliser's Triangle. The area is drier and more prone to droughts than other parts of southern Manitoba; like them, it is cold and windy in the winter with frequent blizzards due to the openness of the prairie landscape. Summers are generally warm to hot, with low to moderate humidity.
Southern parts of the province, lying just north of Tornado Alley, experience tornadoes, with 16 confirmed touchdowns in 2016. In 2007, on 22 and 23 June, numerous tornadoes touched down, the largest an F5 tornado that devastated parts of Elie (the strongest recorded tornado in Canada).
The province's northern sections (including the city of Thompson) fall in the subarctic climate zone (Köppen climate classification Dfc). This region features long and extremely cold winters and brief, warm summers with little precipitation. Overnight temperatures as low as −40 °C (−40 °F) occur on several days each winter.
Manitoba natural communities may be grouped within five ecozones: boreal plains, prairie, taiga shield, boreal shield and Hudson plains. Three of these—taiga shield, boreal shield and Hudson plain—contain part of the Boreal forest of Canada which covers the province's eastern, southeastern, and northern reaches.
Forests make up about 263,000 square kilometres (102,000 sq mi), or 48 percent, of the province's land area. The forests consist of pines (Jack Pine, Red Pine, Eastern White Pine), spruces (White Spruce, Black Spruce), Balsam Fir, Tamarack (larch), poplars (Trembling Aspen, Balsam Poplar), birches (White Birch, Swamp Birch) and small pockets of Eastern White Cedar.
Two sections of the province are not dominated by forest. The province's northeast corner bordering Hudson Bay is above the treeline and is considered tundra. The tallgrass prairie once dominated the south central and southeastern parts, including the Red River Valley. Mixed grass prairie is found in the southwestern region. Agriculture has replaced much of the natural prairie, but prairie can still be found in parks and protected areas; some are notable for the presence of the endangered western prairie fringed orchid.
Manitoba is especially noted for its northern polar bear population; Churchill is commonly referred to as the "Polar Bear Capital". Other large animals, including moose, white-tailed deer, black bears, cougars, lynx, and wolves, are common throughout the province, especially in the provincial and national parks. There is a large population of red sided garter snakes near Narcisse; the dens there are home to the world's largest concentration of snakes.
Manitoba's bird diversity is enhanced by its position on two major migration routes, with 392 confirmed identified species; 287 of these nesting within the province. These include the great grey owl, the province's official bird, and the endangered peregrine falcon.
Manitoba's lakes host 18 species of game fish, particularly species of trout, pike, and goldeye, as well as many smaller fish.
Modern-day Manitoba was inhabited by the First Nations people shortly after the last ice age glaciers retreated in the southwest about 10,000 years ago; the first exposed land was the Turtle Mountain area. The Ojibwe, Cree, Dene, Sioux, Mandan, and Assiniboine peoples founded settlements, and other tribes entered the area to trade. In Northern Manitoba, quartz was mined to make arrowheads. The first farming in Manitoba was along the Red River, where corn and other seed crops were planted before contact with Europeans.
In 1611, Henry Hudson was one of the first Europeans to sail into what is now known as Hudson Bay, where he was abandoned by his crew. The first European to reach present-day central and southern Manitoba was Sir Thomas Button, who travelled upstream along the Nelson River to Lake Winnipeg in 1612 in an unsuccessful attempt to find and rescue Hudson. When the British ship Nonsuch sailed into Hudson Bay in 1668–1669, she became the first trading vessel to reach the area; that voyage led to the formation of the Hudson's Bay Company, to which the British government gave absolute control of the entire Hudson Bay watershed. This watershed was named Rupert's Land, after Prince Rupert, who helped to subsidize the Hudson's Bay Company. York Factory was founded in 1684 after the original fort of the Hudson's Bay Company, Fort Nelson (built in 1682), was destroyed by rival French traders.
Pierre Gaultier de Varennes, sieur de La Vérendrye, visited the Red River Valley in the 1730s to help open the area for French exploration and trade. As French explorers entered the area, a Montreal-based company, the North West Company, began trading with the Métis. Both the North West Company and the Hudson's Bay Company built fur-trading forts; the two companies competed in southern Manitoba, occasionally resulting in violence, until they merged in 1821 (the Hudson's Bay Company Archives in Winnipeg preserve the history of this era).
Great Britain secured the territory in 1763 after its victory over France in the North American theatre of the Seven Years' War (1754-1763), known in North America as the French and Indian War. The founding of the first agricultural community and settlements in 1812 by Lord Selkirk, north of the area which is now downtown Winnipeg, led to conflict between British colonists and the Métis. Twenty colonists, including the governor, and one Métis were killed in the Battle of Seven Oaks in 1816. In 1867, Thomas Spence attempted to become president of the Republic of Manitobah, a name that he and his council chose.
Rupert's Land was ceded to Canada by the Hudson's Bay Company in 1869 and incorporated into the Northwest Territories; a lack of attention to Métis concerns caused Métis leader Louis Riel to establish a local provisional government as part of the Red River Rebellion. In response, Prime Minister John A. Macdonald introduced the Manitoba Act in the Canadian House of Commons; the bill was given Royal Assent, and Manitoba was brought into Canada as a province in 1870. Louis Riel was pursued by British army officer Garnet Wolseley because of the rebellion, and Riel fled into exile. The Canadian government blocked the Métis' attempts to obtain land promised to them as part of Manitoba's entry into confederation. Facing racism from the new flood of white settlers from Ontario, large numbers of Métis moved to what would become Saskatchewan and Alberta.
Numbered Treaties were signed in the late 19th century with the chiefs of various First Nations that lived in the area. These treaties made specific promises of land for every family. As a result, a reserve system was established under the jurisdiction of the Federal Government. The prescribed amount of land promised to the native peoples was not always given; this led aboriginal groups to assert rights to the land through aboriginal land claims, many of which are still ongoing.
The original province of Manitoba was a square one-eighteenth of its current size, and was known colloquially as the "postage stamp province". Its borders were expanded in 1881, taking land from the Northwest Territories and the District of Keewatin, but Ontario claimed a large portion of the land; the disputed portion was awarded to Ontario in 1889. Manitoba grew to its current size in 1912, absorbing land from the Northwest Territories to reach 60°N, uniform with the northern reach of its western neighbours Saskatchewan, Alberta and British Columbia.
The Manitoba Schools Question showed the deep divergence of cultural values in the territory. The Catholic Franco-Manitobans had been guaranteed a state-supported separate school system in the original constitution of Manitoba, but a grassroots political movement among English Protestants from 1888 to 1890 demanded the end of French schools. In 1890, the Manitoba legislature passed a law removing funding for French Catholic schools. The French Catholic minority asked the federal government for support; however, the Orange Order and other anti-Catholic forces mobilized nationwide to oppose them.
The federal Conservatives proposed remedial legislation to override Manitoba, but they were blocked by the Liberals, led by Wilfrid Laurier, who opposed the remedial legislation because of his belief in provincial rights. Once elected Prime Minister in 1896, Laurier implemented a compromise under which Catholics in Manitoba could have their own religious instruction for 30 minutes at the end of the school day where there were enough students to warrant it, applied on a school-by-school basis.
By 1911, Winnipeg was the third largest city in Canada, and remained so until overtaken by Vancouver in the 1920s. A boomtown, it grew quickly around the start of the 20th century, with outside investors and immigrants contributing to its success. The drop in growth in the second half of the decade was a result of the opening of the Panama Canal in 1914, which reduced reliance on transcontinental railways for trade, as well as a decrease in immigration due to the outbreak of the First World War. Over 18,000 Manitoba residents enlisted in the first year of the war; by the end of the war, 14 Manitobans had received the Victoria Cross.
After the First World War ended, severe discontent among farmers (over wheat prices) and union members (over wage rates) resulted in an upsurge of radicalism, coupled with a polarization over the rise of Bolshevism in Russia. The most dramatic result was the Winnipeg general strike of 1919. It began on 15 May and collapsed on 25 June 1919; as the workers gradually returned to their jobs, the Central Strike Committee decided to end the movement.
Government efforts to violently crush the strike, including a Royal Northwest Mounted Police charge into a crowd of protesters that resulted in multiple casualties and one death, had led to the arrest of the movement's leaders. In the aftermath, eight leaders went on trial, and most were convicted on charges of seditious conspiracy, illegal combinations, and seditious libel; four were aliens who were deported under the Canadian Immigration Act.
The Great Depression (1929–c. 1939) hit especially hard in Western Canada, including Manitoba. The collapse of the world market combined with a steep drop in agricultural production due to drought led to economic diversification, moving away from a reliance on wheat production. The Manitoba Co-operative Commonwealth Federation, forerunner to the New Democratic Party of Manitoba (NDP), was founded in 1932.
Canada entered the Second World War in 1939. Winnipeg was one of the major commands for the British Commonwealth Air Training Plan to train fighter pilots, and there were air training schools throughout Manitoba. Several Manitoba-based regiments were deployed overseas, including Princess Patricia's Canadian Light Infantry. In an effort to raise money for the war effort, the Victory Loan campaign organized "If Day" in 1942. The event featured a simulated Nazi invasion and occupation of Manitoba, and eventually raised over C$65 million.
Winnipeg was inundated during the 1950 Red River Flood and had to be partially evacuated. In that year, the Red River reached its highest level since 1861 and flooded most of the Red River Valley. The damage caused by the flood led then-Premier Duff Roblin to advocate for the construction of the Red River Floodway; it was completed in 1968 after six years of excavation. Permanent dikes were erected in eight towns south of Winnipeg, and clay dikes and diversion dams were built in the Winnipeg area. In 1997, the "Flood of the Century" caused over C$400 million in damages in Manitoba, but the floodway prevented Winnipeg from flooding.
In 1990, Prime Minister Brian Mulroney attempted to pass the Meech Lake Accord, a series of constitutional amendments to persuade Quebec to endorse the Canada Act 1982. Unanimous support in the legislature was needed to bypass public consultation. Manitoba politician Elijah Harper, a Cree, opposed the Accord because he did not believe First Nations had been adequately involved in its process, and thus the Accord failed.
In 2013, Manitoba was the second province to make accessibility legislation law, protecting the rights of persons with disabilities.
At the 2011 census, Manitoba had a population of 1,208,268, more than half of which is in the Winnipeg Capital Region; Winnipeg is Canada's eighth-largest Census Metropolitan Area, with a population of 730,018 (2011 Census). Although initial colonization of the province revolved mostly around homesteading, the last century has seen a shift towards urbanization; Manitoba is the only Canadian province with over fifty-five percent of its population located in a single city.
According to the 2006 Canadian census, the largest ethnic group in Manitoba is English (22.9%), followed by German (19.1%), Scottish (18.5%), Ukrainian (14.7%), Irish (13.4%), North American Indian (10.6%), Polish (7.3%), Métis (6.4%), French (5.6%), Dutch (4.9%), and Russian (4.0%). Almost one-fifth of respondents also identified their ethnicity as "Canadian". There is a significant indigenous community: aboriginals (including Métis) are Manitoba's fastest-growing ethnic group, representing 13.6 percent of Manitoba's population as of 2001 (some reserves refused to allow census-takers to enumerate their populations). There is a significant Franco-Manitoban minority (148,370) and a growing aboriginal population (192,865, including the Métis). Gimli, Manitoba is home to the largest Icelandic community outside of Iceland.
Most Manitobans belong to a Christian denomination: on the 2001 census, 758,760 Manitobans (68.7%) reported being Christian, followed by 13,040 (1.2%) Jewish, 5,745 (0.5%) Buddhist, 5,485 (0.5%) Sikh, 5,095 (0.5%) Muslim, 3,840 (0.3%) Hindu, 3,415 (0.3%) Aboriginal spirituality and 995 (0.1%) pagan. 201,825 Manitobans (18.3%) reported no religious affiliation. The largest Christian denominations by number of adherents were the Roman Catholic Church with 292,970 (27%); the United Church of Canada with 176,820 (16%); and the Anglican Church of Canada with 85,890 (8%).
Manitoba has a moderately strong economy based largely on natural resources. Its Gross Domestic Product was C$50.834 billion in 2008. The province's economy grew 2.4 percent in 2008, the third consecutive year of growth; in 2009, it neither increased nor decreased. The average individual income in Manitoba in 2006 was C$25,100 (compared to a national average of C$26,500), ranking fifth-highest among the provinces. As of October 2009, Manitoba's unemployment rate was 5.8 percent.
Manitoba's economy relies heavily on agriculture, tourism, energy, oil, mining, and forestry. Agriculture is vital and is found mostly in the southern half of the province, although grain farming occurs as far north as The Pas. Around 12 percent of Canadian farmland is in Manitoba. The most common type of farm found in rural areas is cattle farming (34.6%), followed by assorted grains (19.0%) and oilseed (7.9%).
Manitoba is the nation's largest producer of sunflower seed and dry beans, and one of the leading sources of potatoes. Portage la Prairie is a major potato processing centre, and is home to the McCain Foods and Simplot plants, which provide French fries for McDonald's, Wendy's, and other commercial chains. Can-Oat Milling, one of the largest oat mills in the world, also has a plant in the municipality.
Manitoba's largest employers are government and government-funded institutions, including crown corporations and services like hospitals and universities. Major private-sector employers are The Great-West Life Assurance Company, Cargill Ltd., and James Richardson and Sons Ltd. Manitoba also has large manufacturing and tourism sectors. Churchill's Arctic wildlife is a major tourist attraction; the town is a world capital for polar bear and beluga whale watchers. Manitoba is the only province with an Arctic deep-water seaport, at Churchill, which lies on the shortest shipping route between North America, Europe and Asia.
Manitoba's early economy depended on mobility and living off the land. Aboriginal Nations (Cree, Ojibwa, Dene, Sioux and Assiniboine) followed herds of bison and congregated to trade among themselves at key meeting places throughout the province. After the arrival of the first European traders in the 17th century, the economy centred on the trade of beaver pelts and other furs. Diversification of the economy came when Lord Selkirk brought the first agricultural settlers in 1811, though the triumph of the Hudson's Bay Company (HBC) over its competitors ensured the primacy of the fur trade over widespread agricultural colonization.
HBC control of Rupert's Land ended in 1868; when Manitoba became a province in 1870, all land became the property of the federal government, with homesteads granted to settlers for farming. Transcontinental railways were constructed to simplify trade. Manitoba's economy depended mainly on farming, which persisted until drought and the Great Depression led to further diversification.
CFB Winnipeg is a Canadian Forces Base at the Winnipeg International Airport. The base is home to flight operations support divisions and several training schools, as well as the 1 Canadian Air Division and Canadian NORAD Region Headquarters. 17 Wing of the Canadian Forces is based at CFB Winnipeg; the Wing has three squadrons and six schools. It supports 113 units from Thunder Bay to the Saskatchewan/Alberta border, and from the 49th parallel north to the high Arctic. 17 Wing acts as a deployed operating base for CF-18 Hornet fighter–bombers assigned to the Canadian NORAD Region.
The two 17 Wing squadrons based in the city are: the 402 ("City of Winnipeg" Squadron), which flies the Canadian-designed and -produced de Havilland Canada CT-142 Dash 8 navigation trainer in support of the 1 Canadian Forces Flight Training School's Air Combat Systems Officer and Airborne Electronic Sensor Operator training programs (which train all Canadian Air Combat Systems Officers); and the 435 ("Chinthe" Transport and Rescue Squadron), which flies the Lockheed C-130 Hercules tanker/transport in airlift search and rescue roles, and is the only Air Force squadron equipped and trained to conduct air-to-air refuelling of fighter aircraft.
Canadian Forces Base Shilo (CFB Shilo) is an Operations and Training base of the Canadian Forces located 35 kilometres (22 mi) east of Brandon. During the 1990s, Canadian Forces Base Shilo was designated as an Area Support Unit, acting as a local base of operations for Southwest Manitoba in times of military and civil emergency. CFB Shilo is the home of the 1st Regiment, Royal Canadian Horse Artillery, both battalions of the 1 Canadian Mechanized Brigade Group, and the Royal Canadian Artillery. The Second Battalion of Princess Patricia's Canadian Light Infantry (2 PPCLI), which was originally stationed in Winnipeg (first at Fort Osborne, then in Kapyong Barracks), has operated out of CFB Shilo since 2004. CFB Shilo hosts a training unit, 3rd Canadian Division Training Centre. It serves as a base for support units of 3rd Canadian Division, also including 3 CDSG Signals Squadron, Shared Services Unit (West), 11 CF Health Services Centre, 1 Dental Unit, 1 Military Police Regiment, and an Integrated Personnel Support Centre. The base currently houses 1,700 soldiers.
After the control of Rupert's Land was passed from Great Britain to the Government of Canada in 1869, Manitoba attained full-fledged rights and responsibilities of self-government as the first Canadian province carved out of the Northwest Territories. The Legislative Assembly of Manitoba was established on 14 July 1870. Political parties first emerged between 1878 and 1883, with a two-party system (Liberals and Conservatives). The United Farmers of Manitoba appeared in 1922, and later merged with the Liberals in 1932. Other parties, including the Co-operative Commonwealth Federation (CCF), appeared during the Great Depression; in the 1950s, Manitoban politics became a three-party system, and the Liberals gradually declined in power. The CCF became the New Democratic Party of Manitoba (NDP), which came to power in 1969. Since then, the Conservatives and the NDP have been the dominant parties.
Like all Canadian provinces, Manitoba is governed by a unicameral legislative assembly. The executive branch is formed by the governing party; the party leader is the premier of Manitoba, the head of the executive branch. The head of state, Queen Elizabeth II, is represented by the Lieutenant Governor of Manitoba, who is appointed by the Governor General of Canada on advice of the Prime Minister. The head of state is primarily a ceremonial role, although the Lieutenant Governor has the official responsibility of ensuring that Manitoba has a duly constituted government.
The Legislative Assembly consists of the 57 members elected to represent the people of Manitoba. The premier of Manitoba is Brian Pallister of the PC Party. The PCs were elected with a majority government of 40 seats. The NDP holds 14 seats, and the Liberal Party holds three seats but does not have official party status in the Manitoba Legislature. The last provincial general election was held on 19 April 2016. The province is represented in federal politics by 14 Members of Parliament and six Senators.
Manitoba's judiciary consists of the Court of Appeal, the Court of Queen's Bench, and the Provincial Court. The Provincial Court is primarily for criminal law; 95 percent of criminal cases in Manitoba are heard here. The Court of Queen's Bench is the highest trial court in the province. It has four jurisdictions: family law (child and family services cases), civil law, criminal law (for indictable offences), and appeals. The Court of Appeal hears appeals from both benches; its decisions can only be appealed to the Supreme Court of Canada.
English and French are the official languages of the legislature and courts of Manitoba, according to §23 of the Manitoba Act, 1870 (part of the Constitution of Canada). In April 1890, the Manitoba legislature attempted to abolish the official status of French, and ceased to publish bilingual legislation. However, in 1985 the Supreme Court of Canada ruled in the Reference re Manitoba Language Rights that §23 still applied, and that legislation published only in English was invalid (unilingual legislation was declared valid for a temporary period to allow time for translation).
Although French is an official language for the purposes of the legislature, legislation, and the courts, the Manitoba Act does not require it to be an official language for the purpose of the executive branch (except when performing legislative or judicial functions). Hence, Manitoba's government is not completely bilingual. The Manitoba French Language Services Policy of 1999 is intended to provide a comparable level of provincial government services in both official languages. According to the 2006 Census, 82.8 percent of Manitoba's population spoke only English, 3.2 percent spoke only French, 15.1 percent spoke both, and 0.9 percent spoke neither.
In 2010, the provincial government of Manitoba passed the Aboriginal Languages Recognition Act, which gives official recognition to seven indigenous languages: Cree, Dakota, Dene, Inuktitut, Michif, Ojibway and Oji-Cree.
Transportation and warehousing contribute approximately C$2.2 billion to Manitoba's GDP. Total employment in the industry is estimated at 34,500, or around 5 percent of Manitoba's population. Trucks haul 95 percent of land freight in Manitoba, and trucking companies account for 80 percent of Manitoba's merchandise trade to the United States. Five of Canada's twenty-five largest employers in for-hire trucking are headquartered in Manitoba. C$1.18 billion of Manitoba's GDP comes directly or indirectly from trucking.
Greyhound Canada and Grey Goose Bus Lines offer domestic bus service from the Winnipeg Bus Terminal. The terminal was relocated from downtown Winnipeg to the airport in 2009, and is a Greyhound hub. Municipalities also operate localized transit bus systems.
Manitoba has two Class I railways: Canadian National Railway (CN) and Canadian Pacific Railway (CPR). Winnipeg is centrally located on the main lines of both carriers, and both maintain large inter-modal terminals in the city. CN and CPR operate a combined 2,439 kilometres (1,516 mi) of track in Manitoba. Via Rail offers transcontinental and Northern Manitoba passenger service from Winnipeg's Union Station. Numerous small regional and short-line railways also run trains within Manitoba: the Hudson Bay Railway, the Southern Manitoba Railway, Burlington Northern Santa Fe Manitoba, Greater Winnipeg Water District Railway, and Central Manitoba Railway. Together, these smaller lines operate approximately 1,775 kilometres (1,103 mi) of track in the province.
Winnipeg James Armstrong Richardson International Airport, Manitoba's largest airport, is one of only a few 24-hour unrestricted airports in Canada and is part of the National Airports System. A new, larger terminal opened in October 2011. The airport handles approximately 195,000 tonnes (430,000,000 lb) of cargo annually, making it the third largest cargo airport in the country.
Eleven regional passenger airlines and nine smaller and charter carriers operate out of the airport, as well as eleven air cargo carriers and seven freight forwarders. Winnipeg is a major sorting facility for both FedEx and Purolator, and receives daily trans-border service from UPS. Air Canada Cargo and Cargojet Airways use the airport as a major hub for national traffic.
The Port of Churchill, owned by OmniTRAX, is the only Arctic deep-water port in Canada. It is nautically closer to ports in Northern Europe and Russia than any other port in Canada. It has four deep-sea berths for the loading and unloading of grain, general cargo and tanker vessels. The port is served by the Hudson Bay Railway (also owned by OmniTRAX). Grain represented 90 percent of the port's traffic in the 2004 shipping season. In that year, over 600,000 tonnes (1.3×10⁹ lb) of agricultural products were shipped through the port.
The first school in Manitoba was founded in 1818 by Roman Catholic missionaries in present-day Winnipeg; the first Protestant school was established in 1820. A provincial board of education was established in 1871; it was responsible for public schools and curriculum, and represented both Catholics and Protestants. The Manitoba Schools Question led to funding for French Catholic schools largely being withdrawn in favour of the English Protestant majority. Legislation making education compulsory for children between seven and fourteen was first enacted in 1916, and the leaving age was raised to sixteen in 1962.
Public schools in Manitoba fall under the regulation of one of thirty-seven school divisions within the provincial education system (except for the Manitoba Band Operated Schools, which are administered by the federal government). Public schools follow a provincially mandated curriculum in either French or English. There are sixty-five funded independent schools in Manitoba, including three boarding schools. These schools must follow the Manitoban curriculum and meet other provincial requirements. There are forty-four non-funded independent schools, which are not required to meet those standards.
There are five universities in Manitoba, regulated by the Ministry of Advanced Education and Literacy. Four of these universities are in Winnipeg: the University of Manitoba, the largest and most comprehensive; the University of Winnipeg, a downtown liberal arts school focused primarily on undergraduate studies; Université de Saint-Boniface, the province's only French-language university; and the Canadian Mennonite University, a religious-based institution. The Université de Saint-Boniface, established in 1818 and now affiliated with the University of Manitoba, is the oldest university in Western Canada. Brandon University, formed in 1899 and located in Brandon, is the province's only university not in Winnipeg.
Manitoba has thirty-eight public libraries; of these, twelve have French-language collections and eight have significant collections in other languages. Twenty-one of these are part of the Winnipeg Public Library system. The first lending library in Manitoba was founded in 1848.
Manitoba's culture has been influenced by traditional (Aboriginal and Métis) and modern Canadian artistic values, as well as by the cultures of its immigrant populations and American neighbours. The Minister of Culture, Heritage, Tourism and Sport is responsible for promoting and, to some extent, financing Manitoban culture. Manitoba is the birthplace of the Red River Jig, a combination of aboriginal pow-wows and European reels popular among early settlers. Manitoba's traditional music has strong roots in Métis and Aboriginal culture, in particular the old-time fiddling of the Métis. Manitoba's cultural scene also incorporates classical European traditions. The Winnipeg-based Royal Winnipeg Ballet (RWB), is Canada's oldest ballet and North America's longest continuously operating ballet company; it was granted its royal title in 1953 under Queen Elizabeth II. The Winnipeg Symphony Orchestra (WSO) performs classical music and new compositions at the Centennial Concert Hall. Manitoba Opera, founded in 1969, also performs out of the Centennial Concert Hall.
Le Cercle Molière (founded 1925) is the oldest French-language theatre in Canada, and Royal Manitoba Theatre Centre (founded 1958) is Canada's oldest English-language regional theatre. Manitoba Theatre for Young People was the first English-language theatre to win the Canadian Institute of the Arts for Young Audiences Award, and offers plays for children and teenagers as well as a theatre school. The Winnipeg Art Gallery (WAG), Manitoba's largest art gallery and the sixth largest in the country, hosts an art school for children; the WAG's permanent collection comprises over twenty thousand works, with a particular emphasis on Manitoban and Canadian art.
The 1960s pop group The Guess Who was formed in Manitoba, and later became the first Canadian band to have a No. 1 hit in the United States; Guess Who guitarist Randy Bachman later created Bachman–Turner Overdrive (BTO) with fellow Winnipeg-based musician Fred Turner. Fellow rocker Neil Young lived for a time in Manitoba, played with Stephen Stills in Buffalo Springfield, and again in the supergroup Crosby, Stills, Nash & Young. Soft-rock band Crash Test Dummies formed in the late 1980s in Winnipeg and were the 1992 Juno Awards Group of the Year.
Several prominent Canadian films were produced in Manitoba, such as The Stone Angel, based on the Margaret Laurence book of the same title, The Saddest Music in the World, Foodland, For Angela, and My Winnipeg. Major films shot in Manitoba include The Assassination of Jesse James by the Coward Robert Ford and Capote, both of which received Academy Award nominations. Falcon Beach, an internationally broadcast television drama, was filmed at Winnipeg Beach, Manitoba.
Manitoba has a strong literary tradition. Manitoban writer Bertram Brooker won the first-ever Governor General's Award for Fiction in 1936. Cartoonist Lynn Johnston, author of the comic strip For Better or For Worse, was nominated for a Pulitzer Prize and inducted into the Canadian Cartoonist Hall of Fame. Margaret Laurence's The Stone Angel and A Jest of God were set in Manawaka, a fictional town representing Neepawa; the latter title won the Governor General's Award in 1966. Carol Shields won both the Governor General's Award and the Pulitzer Prize for The Stone Diaries. Gabrielle Roy, a Franco-Manitoban writer, won the Governor General's Award three times. A quote from her writings is featured on the Canadian $20 bill.
Festivals take place throughout the province, with the largest centred in Winnipeg. The inaugural Winnipeg Folk Festival was held in 1974 as a one-time celebration to mark Winnipeg's 100th anniversary. Today, the five-day festival is one of the largest folk festivals in North America with over 70 acts from around the world and an annual attendance that exceeds 80,000. The Winnipeg Folk Festival's home – Birds Hill Provincial Park – is located 34 kilometres outside of Winnipeg and for the five days of the festival, it becomes Manitoba's third largest "city." The Festival du Voyageur is an annual ten-day event held in Winnipeg's French Quarter, and is Western Canada's largest winter festival. It celebrates Canada's fur-trading past and French-Canadian heritage and culture. Folklorama, a multicultural festival run by the Folk Arts Council, receives around 400,000 pavilion visits each year, of which about thirty percent are from non-Winnipeg residents. The Winnipeg Fringe Theatre Festival is an annual alternative theatre festival, the second-largest festival of its kind in North America (after the Edmonton International Fringe Festival).
Manitoban museums document different aspects of the province's heritage. The Manitoba Museum is the largest museum in Manitoba and focuses on Manitoban history from prehistory to the 1920s. The full-size replica of the Nonsuch is the museum's showcase piece. The Manitoba Children's Museum at The Forks presents exhibits for children. There are two museums dedicated to the native flora and fauna of Manitoba: the Living Prairie Museum, a tall grass prairie preserve featuring 160 species of grasses and wildflowers, and FortWhyte Alive, a park encompassing prairie, lake, forest and wetland habitats, home to a large herd of bison. The Canadian Fossil Discovery Centre houses the largest collection of marine reptile fossils in Canada. Other museums feature the history of aviation, marine transport, and railways in the area. The Canadian Museum for Human Rights is the first Canadian national museum outside of the National Capital Region.
Winnipeg has three daily newspapers: the Winnipeg Free Press, a broadsheet with the highest circulation numbers in Manitoba, as well as the Winnipeg Sun and Metro, both smaller tabloid-style papers. There are several ethnic weekly newspapers, including the weekly French-language La Liberté, and regional and national magazines based in the city. Brandon has two newspapers: the daily Brandon Sun and the weekly Wheat City Journal. Many small towns have local newspapers.
There are five English-language television stations and one French-language station based in Winnipeg. The Global Television Network (owned by Canwest) is headquartered in the city. Winnipeg is home to twenty-one AM and FM radio stations, two of which are French-language stations. Brandon's five local radio stations are provided by Astral Media and Westman Communications Group. In addition to the Brandon and Winnipeg stations, radio service is provided in rural areas and smaller towns by Golden West Broadcasting, Corus Entertainment, and local broadcasters. CBC Radio broadcasts local and national programming throughout the province. Native Communications is devoted to Aboriginal programming and broadcasts to many of the isolated native communities as well as to larger cities.
Manitoba has four professional sports teams: the Winnipeg Blue Bombers (Canadian Football League), the Winnipeg Jets (National Hockey League), the Manitoba Moose (American Hockey League), and the Winnipeg Goldeyes (American Association). The province was previously home to another team called the Winnipeg Jets, which played in the World Hockey Association and National Hockey League from 1972 until 1996, when financial troubles prompted a sale and move of the team, renamed the Phoenix Coyotes. A second incarnation of the Winnipeg Jets returned, after True North Sports & Entertainment bought the Atlanta Thrashers and moved the team to Winnipeg in time for the 2011 hockey season. Manitoba has one major junior-level hockey team, the Western Hockey League's Brandon Wheat Kings, and one junior football team, the Winnipeg Rifles of the Canadian Junior Football League.
The province is represented in university athletics by the University of Manitoba Bisons, the University of Winnipeg Wesmen, and the Brandon University Bobcats. All three teams compete in the Canada West Universities Athletic Association (the regional division of Canadian Interuniversity Sport).
Curling is an important winter sport in the province, with Manitoba producing more men's national champions than any other province, ranking in the top three for women's national champions, and producing multiple world champions in the sport. The province also hosts the world's largest curling tournament, the MCA Bonspiel, and is a regular host of Grand Slam events, the largest cash events in the sport, such as the annual Manitoba Lotteries Women's Curling Classic, as well as other rotating events.
Though not as prominent as hockey and curling, long track speed skating also features as a notable and top winter sport in Manitoba. The province has produced some of the world's best female speed skaters, including Susan Auch and the country's top Olympic medal earners Cindy Klassen and Clara Hughes.
Opportunity cost represents the benefit that is forgone when one alternative is chosen over another. Whenever you are presented with two options, choosing one option over the other carries an opportunity cost. This concept is based on the rationale of critically analyzing all the available options before making a decision.
Simply put, the term ‘Opportunity cost’ refers to what you’d have to give up to gain something.
Opportunity Cost Examples
- Let’s suppose you have $10. You can use this money to buy a KFC Mighty Zinger or an Accounting textbook for your upcoming quiz. If you choose to buy a burger, you won’t be able to afford the Accounting textbook. The opportunity cost to enjoy a KFC Mighty Zinger, therefore, is an Accounting textbook.
Similarly, if you opt for the latter and buy the textbook instead, you will be out of money to buy yourself a burger. So, the opportunity cost to buy a textbook is a KFC Mighty Zinger. For each choice that you make, you forsake the next best alternative that makes the opportunity cost of the chosen alternative.
- Bill is a week away from his 3-month summer vacation. This year he wants to learn both horse riding and swimming. However, both courses are 3 months long, and he can schedule only one of them. If Bill chooses to learn swimming, he will have to let go of the option of horse riding.
Or, on the contrary, he would have to forgo the option of swimming to learn horse riding. In either case, the course that he drops is the opportunity cost of the course that he adopts. The opportunity cost of learning swimming is horse riding, and vice versa.
- The government has to allocate a budget of $1,000 billion for the upcoming year between defense, education, health, and infrastructure. If the government decides to spend $500 billion on defense and $500 billion on education, there would be nothing left to spend on health and infrastructure. Thus, the opportunity cost of government investment in education and defense operations is health and infrastructure projects.
Similarly, if the government plans to spend the entire $1,000 budget on health and modern infrastructure, then the same budget cannot be used for the next best alternatives i.e. education and defense. That, in a nutshell, defines how opportunity cost works.
Opportunity Cost Explained
The simplest definition of opportunity cost is ‘the price of the next best alternative that you would have opted for, had you not made your first choice’.
Let’s understand this through the following example.
Harry has won $500 in a lottery. He is faced with several options to spend the prize money.
- Buy an iPhone worth $500. (10/10)
- Buy a PS4 worth $500. (6/10)
- Buy a 7-day trip to Paris for $500. (7/10)
- Buy an Xbox worth $500. (9/10)
While he wishes to buy all the above items, he can only afford to buy one. The rating beside each item represents how much Harry would benefit from it.
What is the opportunity cost of buying an iPhone?
Is it the combination of all the other items i.e. a PS4, a 7-day trip to Paris, and an Xbox?
No, opportunity cost only represents the value of the next best alternative forgone.
The opportunity cost of buying an iPhone is thus, buying an Xbox. Had he not bought himself an iPhone, he would most likely have bought an Xbox as it tends to be the next most beneficial alternative.
By buying an iPhone, Harry has lost the benefit that he could have availed from an Xbox.
Opportunity Cost Formula
Understanding and critically analyzing the potential missed opportunities for each investment chosen over another, promotes better decision making.
The financial reports and statements of a company do not show Opportunity costs. Estimating and evaluating the opportunity cost of a decision is purely management-based. Business owners use the underlying concept of these costs to make an educated decision when faced with multiple options to choose from.
Opportunity cost is calculated by using the following formula,
$$\text{Opportunity cost} = RFO - RCO$$
- RFO = Return on the next best-forsaken option
- RCO = Return on the chosen option
Here is how this formula works:
You have $10 million and you choose to invest it in a project that yields an annual return of 5%. Exploring more options, you could have invested the same $10 million in another project that would have yielded a 10% annual return.
Return on the chosen option = 5%, Return on the next best forsaken option = 10%
RFO – RCO = 10% – 5%
Opportunity Cost = 5%
The differential 5% return is the opportunity cost of this decision (i.e., of investing in the project yielding a 5% return). The formula to calculate opportunity cost is simply the difference between the foreseen returns of each alternative.
While the decision to choose a 5% return may seem irrational, real-life decisions may be different. For example,
A company is faced with the option to invest $8 million in stocks to generate capital gains, or to reinvest the same amount within the business to launch a new product line and earn more profits.
Assuming the expected return on Option A (investment in stocks) is 7% and that on Option B (reinvestment in the business) is 9%, the opportunity cost of investing in Option A is 2% (9% - 7%). In other words, by investing in stocks, the company would lose the opportunity of launching a new product line and earning more profits.
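A minimal sketch of the formula applied to the two examples above (the percentages are the ones used in the text):

```python
def opportunity_cost(return_forgone_option, return_chosen_option):
    """Opportunity cost = return on the best forsaken option minus return on the chosen option."""
    return return_forgone_option - return_chosen_option

# First example: a 5% project is chosen while a 10% project is available
print(round(opportunity_cost(0.10, 0.05), 4))   # 0.05 -> a 5% forgone return

# Second example: stocks at 7% are chosen over reinvestment at 9%
print(round(opportunity_cost(0.09, 0.07), 4))   # 0.02 -> a 2% forgone return
```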
Opportunity Cost is Estimate-Based
However, the concept of opportunity cost is forward-looking, and everything is based on estimates. The returns of 7% and 9% (refer to the above example) are expected values; the actual rates of return are unknown.
Suppose the company reinvests in the business instead of investing in the stock market.
There is a real possibility that the new product fails: the target audience may not like it, or the targeted sales volume might not be achieved. In either case, the expected 9% rate of return can turn out to be a wrong estimate.
If the product launch fails (as assumed above), the company could end up bearing an opportunity cost of 7%, instead of enjoying the return of 9%.
Reapplying the OC formula, the return on the stock investment is 7%, whereas the Return on reinvestment in business is now 0% (assuming the product launch failed).
Therefore, the new OC is:
- Return on the next best forsaken option (RFO) = 7%
- Return on the chosen option (RCO) = 0%
- Opportunity Cost = RFO – RCO
- Opportunity Cost = 7% – 0% = 7%
Time Based Opportunity Cost
The concept of Opportunity cost is not limited to monetary decisions. It makes its way to all our daily and personal decisions. Each second that you spend doing a particular activity could have been spent doing something different. Therefore, each act that you do has a cost of something that you didn’t do at that particular time.
For instance, the time you spend learning Accounting could have been spent learning Economics. The opportunity cost of learning Accounting is, thus, learning Economics.
Speaking more like economists, opportunity cost also applies to the choice between spending your funds now and investing them to earn a return. For each penny you hold in your pocket, the opportunity cost is the interest you could have earned by investing that penny in an investment vehicle.
- Buy a car for $8,000 today or invest the same amount in stocks to earn an annual return of 10%. The opportunity cost of buying the car today is thus the potential return that you could earn in the future (see the sketch after this list).
- Pay back your loans today to save the interest expense, or use the same funds to buy assets and generate future revenue. The cost of saving the interest expense is the potential revenue you could make from the assets bought with the loaned amount.
- Sell your car for $3,000 today or use it for another 2 years. The cost of selling your car for an immediate receipt of $3,000 is the ability to use it for another 2 years.
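As a rough sketch of the first bullet above, the snippet below compares spending $8,000 on a car today with investing the same amount at a 10% annual return; the five-year horizon is an assumption made only for this illustration:

```python
def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Compound the principal once per year at the given annual rate."""
    return principal * (1 + annual_rate) ** years

price_of_car = 8_000.0
years_held = 5  # assumed investment horizon for the comparison
value_if_invested = future_value(price_of_car, 0.10, years_held)

# The opportunity cost of buying the car now is the growth you forgo, not the sticker price.
print(f"Value if invested instead: ${value_if_invested:,.2f}")                     # ~$12,884.08
print(f"Growth forgone by buying now: ${value_if_invested - price_of_car:,.2f}")   # ~$4,884.08
```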
Opportunity Cost in Businesses
When applied to a business, the idea of opportunity cost refers to the potential profit that a business could have earned by investing the same assets, capital, equipment, resources, and funds into a different project, product, or service.
Businesses consider relevant costs, incremental costs, and all implicit and explicit opportunity costs before making any business decision. Below are examples of business decisions that rest on a critical evaluation of opportunity costs and potential revenue.
- Limiting factor decisions
- Make or buy decisions
- Continuing operations or shutdown decisions
- Joint product & further processing decisions
A business considers opportunity costs in terms of several factors including labor-hours, machine-hours, mechanical output, raw material etc. However, the cost evaluation process of a business is different and includes the analysis of explicit & implicit costs.
An explicit cost is an incremental cost or direct payment made in the course of running a business. These costs are specifically incurred and are booked as expenses, resulting in actual cash outflows, e.g. wages and salaries paid to employees, rent, and the price of raw materials.
Flair Bakery is planning to introduce a new Smoked Beef Lasagna recipe. To prepare the said dish, Flair Bakery would need to hire two trained chefs. Also, it would require new Pasta cutting machines and a special set of sauces. The Finance team estimates an expense of $200 upon the launch of this new menu item.
In the above example, $200 is the explicit opportunity cost of introducing the Smoked Beef Lasagna at Flair Bakery. The same $200 could have been used to introduce another recipe, to buy other new machinery, or to fund any other business activity.
Implicit costs do not represent direct payments but rather the use of resources the business already owns. They are where the concept of opportunity cost shows most clearly: they trigger no additional payments or cash outflows, only the loss of an opportunity to earn from existing resources in a different way.
Sturdy Constructors Inc. is an established real-estate company. It has several buildings and flats around the town that are tenanted and sold. However, due to some business operations’ expansion, a building was vacated. The board of directors decided to set up the office headquarters within the vacated building. Before being used for business purposes, the building was rented out for $3500 per annum.
In the above example, Sturdy Constructors Inc. has gained an opportunity to expand its business and make more profit than before without any additional cash outflow. However, it has lost the annual rental income of $3,500. Thus, the implicit opportunity cost of the expansion borne by Sturdy Constructors Inc. is $3,500 per annum.
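To show how explicit and implicit costs enter a single decision, here is a short Python sketch; the revenue and explicit-cost figures are hypothetical, while the $3,500 implicit cost reuses the forgone rent from the Sturdy Constructors example above:

```python
def economic_profit(revenue: float, explicit_costs: float, implicit_costs: float) -> float:
    """Economic profit = revenue - explicit outlays - implicit (opportunity) costs."""
    return revenue - explicit_costs - implicit_costs

revenue = 20_000.0        # hypothetical extra annual revenue from the expansion
explicit_costs = 9_000.0  # hypothetical wages, materials and other direct outlays
implicit_costs = 3_500.0  # forgone annual rent on the owner-occupied building (see above)

# A positive result means the expansion beats the next best use of the building.
print(economic_profit(revenue, explicit_costs, implicit_costs))  # 7500.0
```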
Limitations of Opportunity Costs
The idea of opportunity cost helps you better analyze the options and opportunities available at the time of decision-making. However, the concept has some limitations:
- Opportunity cost cannot always be reliably estimated at the time of decision-making, particularly in businesses where the variability of the rate of return is high.
- Making a quantitative comparison between two alternatives is not always possible; it requires a common measuring unit such as time, money spent, or labor hours used.
- Some factors of production and resources have only one use. For such factors or resources there is no opportunity cost, which limits the applicability of the basic idea.
- Opportunity cost is the price of the next best alternative forgone, when one option is chosen over another. It is not the combination of all the available options but only the next best option.
- Opportunity cost = Return on the next best Forsaken Option – Return on the Chosen Option
- Considering opportunity costs steers you toward more profitable and successful decisions by making you evaluate the feasibility of all the available options.
- In addition to potential returns, the relative risks involved with each option must also be assessed to reach the right decisions. | https://studyfinance.com/opportunity-cost/ | 21 |
Allies of World War II
The Allies of World War II were a group of countries that together opposed the Axis powers during the Second World War (1939-1945). The Allies promoted the alliance as a means to defeat Nazi Germany, the Empire of Japan, Fascist Italy and their allies.
At the start of the war on 1 September 1939, the Allies consisted of Poland, the United Kingdom, and France as well as their dependent states, such as British India. They were joined by the independent Dominions of the British Commonwealth: Canada, Australia, New Zealand and South Africa. Between the start of the German invasion of Northern Europe and the Balkan Campaign, the Netherlands, Belgium, Greece, and Yugoslavia joined the Allies. After first having cooperated with Germany in invading Poland whilst remaining neutral in the Allied-Axis conflict, the Soviet Union perforce joined the Allies in June 1941 after being invaded by Germany. The United States provided war materiel and money to the Allies all along, and officially joined in December 1941 after the Japanese attack on Pearl Harbor. China had already been in a prolonged war with Japan since the Marco Polo Bridge Incident of 1937 and officially joined the Allies in December 1941.
The Big Three--the United Kingdom, the Soviet Union, and the United States--formed a Grand Alliance that was key to victory. They controlled Allied strategy; relations between the United Kingdom and the United States were especially close. The alliance was formalized by the Declaration by United Nations, on 1 January 1942. The Big Three together with China were referred to as a "trusteeship of the powerful", then were recognized as the "Four Powers" in the Declaration by United Nations and later as the "Four Policemen" of the United Nations.
The origins of the Allied powers stem from the Allies of World War I and cooperation of the victorious powers at the Paris Peace Conference, 1919. Germany resented signing the Treaty of Versailles. The new Weimar Republic's legitimacy became shaken. However, the 1920s were peaceful.
With the Wall Street Crash of 1929 and the ensuing Great Depression, political unrest in Europe soared including the rise in support of revanchist nationalists in Germany who blamed the severity of the economic crisis on the Treaty of Versailles. By the early 1930s, the Nazi Party led by Adolf Hitler became the dominant revanchist movement in Germany and Hitler and the Nazis gained power in 1933. The Nazi regime demanded the immediate cancellation of the Treaty of Versailles and made claims to German-populated Austria, and German-populated territories of Czechoslovakia. The likelihood of war was high, and the question was whether it could be avoided through strategies such as appeasement.
In Asia, when Japan seized Manchuria in 1931, the League of Nations condemned it for aggression against China. Japan responded by leaving the League of Nations in March 1933. After four quiet years, the Sino-Japanese War erupted in 1937 with Japanese forces invading China. The League of Nations condemned Japan's actions and initiated sanctions on Japan. The United States, in particular, was angered at Japan and sought to support China.
In March 1939, Germany took over Czechoslovakia, violating the Munich Agreement signed six months before, and demonstrating that the appeasement policy was a failure. Britain and France decided that Hitler had no intention to uphold diplomatic agreements and responded by preparing for war. On 31 March 1939, Britain formed the Anglo-Polish military alliance in an effort to avert a German attack on the country. Also, the French had a long-standing alliance with Poland since 1921. The Soviet Union sought an alliance with the western powers, but Hitler ended the risk of a war with Stalin by signing the Nazi-Soviet non-aggression pact in August 1939. The agreement secretly divided the independent states of Central and Eastern Europe between the two powers and assured adequate oil supplies for the German war machine.
On 1 September 1939, Germany invaded Poland; two days later Britain and France declared war on Germany. Then, on 17 September 1939, the Soviet Union invaded Poland from the east. Britain and France established the Anglo-French Supreme War Council to coordinate military decisions. A Polish government-in-exile was set up in London and it continued to be one of the Allies. After a quiet winter, Germany in April 1940 invaded and quickly defeated Denmark, Norway, Belgium, the Netherlands and France. Britain and its Empire stood alone against Hitler and Mussolini.
Even before the alliance was formally established, the United Kingdom and the United States had begun cooperating, notably through US armament supplies in the form of Lend-Lease.
The First Inter-Allied Meeting took place in London in early June 1941 between the United Kingdom, the four co-belligerent British Dominions (Canada, Australia, New Zealand and South Africa), the eight governments in exile (Belgium, Czechoslovakia, Greece, Luxembourg, the Netherlands, Norway, Poland, Yugoslavia) and Free France. The Declaration of St James's Palace at the meeting set out a first vision for the postwar world.
In June 1941, Hitler broke the non-aggression agreement with Stalin and Germany invaded the Soviet Union, and the Soviet Union declared war on Germany. Britain agreed to an alliance with the Soviet Union in July. The Atlantic Conference followed in August 1941 between American President Franklin Roosevelt and British Prime Minister Winston Churchill which defined a common Anglo-American vision of the postwar world. At the Second Inter-Allied Meeting in London in September 1941, the eight European governments in exile, together with the Soviet Union and representatives of the Free French Forces, unanimously adopted adherence to the common principles of policy set forth by Britain and the United States. In December, Japan attacked the US and Britain resulting in a state of war between the US and the Axis powers, with whom China also declared war. The main lines of World War II had formed. Churchill referred to the Grand Alliance of the United Kingdom, the United States, and the Soviet Union.
The alliance was one of convenience in the fight against the Axis powers. The British had reason to ask for one as Germany, Italy, and Imperial Japan threatened not only the colonies of the British Empire in North Africa and Asia but also the British mainland. The United States felt that the Japanese and German expansion should be contained, but ruled out force until the attack by the Imperial Japanese Navy on Pearl Harbor on 7 December 1941. The Soviet Union, after the breaking of the Molotov-Ribbentrop Pact by the instigation of Operation Barbarossa in 1941, greatly despised German belligerence and the unchallenged Japanese expansion in the East, particularly considering their defeat in several previous wars with Japan. They also recognized, as the US and Britain had suggested, the advantages of a two-front war.
Franklin D. Roosevelt, Winston Churchill, and Joseph Stalin were The Big Three leaders. They were in frequent contact through ambassadors, top generals, foreign ministers and special emissaries such as the American Harry Hopkins. It is also often called the "Strange Alliance", because it united the leaders of the world's greatest capitalist state (the United States), the greatest socialist state (the Soviet Union) and the greatest colonial power (the United Kingdom).
Relations between them resulted in the major decisions that shaped the war effort and planned for the postwar world. Cooperation between the United Kingdom and the United States was especially close and included forming a Combined Chiefs of Staff.
There were numerous high-level conferences; in total Churchill attended 14 meetings, Roosevelt 12, and Stalin 5. Most visible were the three summit conferences that brought together the three top leaders. The Allied policy toward Germany and Japan evolved and developed at these three conferences.
In 1942 Roosevelt proposed becoming, with China, the Four Policemen of world peace. Although the 'Four Powers' were reflected in the wording of the Declaration by United Nations, Roosevelt's proposal was not initially supported by Churchill or Stalin.
Division emerged over the length of time taken by the Western Allies to establish a second front in Europe. Stalin and the Soviets used the potential employment of the second front as an 'acid test' for their relations with the Anglo-American powers. The Soviets were forced to use as much manpower as possible in the fight against the Germans, whereas the United States had the luxury of flexing industrial power, but with the "minimum possible expenditure of American lives." Roosevelt delayed until 1944 to enforce a second front in Europe; in the meantime he had endorsed the British proposal to invade North Africa, straining Anglo-American and Soviet relations.
The essential ideological differences between the United States and the Soviet Union strained their relationship. Tensions between the two countries had existed for decades, with the Soviets remembering America's participation in the armed intervention against the Bolsheviks in the Russian Civil War as well as its long refusal to recognize the Soviet Union's existence as a state. The original terms of the Lend-Lease loan were amended towards the Soviets to bring them in line with British terms: following the start of Operation Barbarossa, the United States would expect repayment with interest from the Soviets at the end of the war, as it was not looking to support any "postwar Soviet reconstruction efforts", a stance that eventually manifested itself in the Molotov Plan. At the Tehran conference, Stalin judged Roosevelt to be a "lightweight compared to the more formidable Churchill". During the meetings from 1943 to 1945, there were disputes over the growing list of demands from the USSR.
Tensions increased further when Roosevelt died and his successor Harry Truman rejected demands put forth by Stalin. Roosevelt understood that cultural differences could doom the alliance and, as opposed to the likes of Truman and W. Averell Harriman, Roosevelt wanted to play down these tensions. Roosevelt felt he "understood Stalin's psychology" which aided him in cooperating more successfully with the Soviet Union in comparison to Truman, stating "Stalin was too anxious to prove a point... he suffered from an inferiority complex."
During December 1941, U.S. President Franklin D. Roosevelt devised the name "United Nations" for the Allies and proposed it to British Prime Minister Winston Churchill. He referred to the Big Three and China as a "trusteeship of the powerful", and then later the "Four Powers".
The alliance was formalised in the Declaration by United Nations signed on 1 January 1942.
These were the 26 signatories of the declaration: the United States, the United Kingdom, the Soviet Union, China, Australia, Belgium, Canada, Costa Rica, Cuba, Czechoslovakia, the Dominican Republic, El Salvador, Greece, Guatemala, Haiti, Honduras, India, Luxembourg, the Netherlands, New Zealand, Nicaragua, Norway, Panama, Poland, South Africa and Yugoslavia.
The United Nations began growing immediately after its formation. In 1942, Mexico, the Philippines and Ethiopia adhered to the declaration. The African state had been restored in its independence by British forces after the Italian defeat on Amba Alagi in 1941, while the Philippines, still dependent on Washington but granted international diplomatic recognition, was allowed to join on 10 June despite its occupation by Japan.
In 1943, the Declaration was signed by Iraq, Iran, Brazil, Bolivia and Colombia. A Tripartite Treaty of Alliance with Britain and the USSR formalised Iran's assistance to the Allies. In Rio de Janeiro, Brazilian dictator Getúlio Vargas was considered sympathetic to fascist ideas, but pragmatically joined the United Nations after its evident successes.
In 1944, Liberia and France signed. The French situation was very confused. Free French forces were recognized only by Britain, while the United States considered Vichy France to be the legal government of the country until Operation Overlord, while also preparing US occupation francs. Winston Churchill urged Roosevelt to restore France to its status of a major power after the liberation of Paris in August 1944; the Prime Minister feared that after the war, Britain could remain the sole great power in Europe facing the Communist threat, as it was in 1940 and 1941 against Nazism.
During the early part of 1945, Peru, Chile, Paraguay, Venezuela, Uruguay, Turkey, Egypt, Saudi Arabia, Lebanon, Syria (these latter two French colonies had been declared independent states by British occupation troops, despite protests by Pétain and later De Gaulle) and Ecuador became signatories. Ukraine and Belarus, which were not independent states but parts of the Soviet Union, were accepted as members of the United Nations as a way to provide greater influence to Stalin, who had only Yugoslavia as a communist partner in the alliance.
British Prime Minister Neville Chamberlain delivered his Ultimatum Speech on 3 September 1939, declaring war on Germany a few hours before France did. As the Statute of Westminster 1931 had not yet been ratified by the parliaments of Australia and New Zealand, the British declaration of war on Germany also applied to those dominions. The other dominions and members of the British Commonwealth declared war from 3 September 1939, all within one week of each other; these countries were Canada, India and South Africa, as well as Nepal.
During the war, Churchill attended seventeen Allied conferences at which key decisions and agreements were made. He was "the most important of the Allied leaders during the first half of World War II".
British West Africa and the British colonies in East and Southern Africa participated, mainly in the North African, East African and Middle-Eastern theatres. Two West African and one East African division served in the Burma Campaign.
Southern Rhodesia was a self-governing colony, having received responsible government in 1923. It was not a sovereign dominion. It governed itself internally and controlled its own armed forces, but had no diplomatic autonomy, and, therefore, was officially at war as soon as Britain was at war. The Southern Rhodesian colonial government issued a symbolic declaration of war nevertheless on 3 September 1939, which made no difference diplomatically but preceded the declarations of war made by all other British dominions and colonies.
These included: the British West Indies, British Honduras, British Guiana and the Falkland Islands. The Dominion of Newfoundland was directly ruled as a royal colony from 1933 to 1949, run by a governor appointed by London who made the decisions regarding Newfoundland.
Territories controlled by the Colonial Office, namely the Crown Colonies, were controlled politically by the UK and therefore also entered hostilities with Britain's declaration of war. At the outbreak of World War II, the British Indian Army numbered 205,000 men. Later during World War II, the Indian Army became the largest all-volunteer force in history, rising to over 2.5 million men in size.
Indian soldiers earned 30 Victoria Crosses during the Second World War. It suffered 87,000 military casualties (more than any Crown colony but fewer than the United Kingdom). The UK suffered 382,000 military casualties.
The Cyprus Regiment was formed by the British Government during the Second World War and made part of the British Army structure. It was mostly Greek Cypriot volunteers and Turkish-speaking Cypriot inhabitants of Cyprus but also included other Commonwealth nationalities. On a brief visit to Cyprus in 1943, Winston Churchill praised the "soldiers of the Cyprus Regiment who have served honourably on many fields from Libya to Dunkirk". About 30,000 Cypriots served in the Cyprus Regiment. The regiment was involved in action from the very start and served at Dunkirk, in the Greek Campaign (about 600 soldiers were captured in Kalamata in 1941), North Africa (Operation Compass), France, the Middle East and Italy. Many soldiers were taken prisoner especially at the beginning of the war and were interned in various PoW camps (Stalag) including Lamsdorf (Stalag VIII-B), Stalag IVC at Wistritz bei Teplitz and Stalag 4b near Most in the Czech Republic. The soldiers captured in Kalamata were transported by train to prisoner of war camps.
At the end of five months of war, one thing has become more and more clear. It is that Germany seeks to establish a domination of the world completely different from any known in world history.
The domination at which the Nazis aim is not limited to the displacement of the balance of power and the imposition of the supremacy of one nation. It seeks the systematic and total destruction of those conquered by Hitler and it does not treaty with the nations which it has subdued. He destroys them. He takes from them their whole political and economic existence and seeks even to deprive them of their history and culture. He wishes only to consider them as vital space and a vacant territory over which he has every right.
The human beings who constitute these nations are for him only cattle. He orders their massacre or migration. He compels them to make room for their conquerors. He does not even take the trouble to impose any war tribute on them. He just takes all their wealth and, to prevent any revolt, he scientifically seeks the physical and moral degradation of those whose independence he has taken away.
France experienced several major phases of action during World War II.
France's possessions in Africa included: French West Africa, French Equatorial Africa, the League of Nations mandates of French Cameroun and French Togoland, French Madagascar, French Somaliland, and the protectorates of French Tunisia and French Morocco.
In Asia and Oceania, French possessions included: French Polynesia, Wallis and Futuna, New Caledonia, the New Hebrides, French Indochina, French India, and the mandates of Greater Lebanon and French Syria. The French government in 1936 attempted to grant independence to its mandate of Syria in the Franco-Syrian Treaty of Independence of 1936, signed by France and Syria. However, opposition to the treaty grew in France and the treaty was not ratified. Syria had become an official republic in 1930 and was largely self-governing. In 1941, a British-led invasion supported by Free French forces expelled the Vichy French forces in Operation Exporter.
In the lead up to the war between the Soviet Union and Nazi Germany, relations between the two states underwent several stages. General Secretary Joseph Stalin and the government of the Soviet Union had supported so-called popular front movements of anti-fascists including communists and non-communists from 1935 to 1939. The popular front strategy was terminated from 1939 to 1941 when the Soviet Union cooperated with Germany in 1939 in the occupation and partitioning of Poland. The Soviet leadership refused to endorse either the Allies or the Axis from 1939 to 1941, as it called the Allied-Axis conflict an "imperialist war".
Stalin had studied Hitler, including reading Mein Kampf and from it knew of Hitler's motives for destroying the Soviet Union. As early as in 1933, the Soviet leadership voiced its concerns with the alleged threat of a potential German invasion of the country should Germany attempt a conquest of Lithuania, Latvia, or Estonia, and in December 1933 negotiations began for the issuing of a joint Polish-Soviet declaration guaranteeing the sovereignty of the three Baltic countries. However, Poland withdrew from the negotiations following German and Finnish objections. The Soviet Union and Germany at this time competed with each other for influence in Poland. The Soviet government also was concerned with the anti-Soviet sentiment in Poland and particularly Józef Piłsudski's proposed Polish federation that would include the territories of Poland, Lithuania, Belarus, and Ukraine within it that threatened the territorial integrity of the Soviet Union.
On 20 August 1939, forces of the Union of Soviet Socialist Republics under General Georgy Zhukov, together with the People's Republic of Mongolia, eliminated the threat of conflict in the east with a victory over Imperial Japan at the Battle of Khalkhin Gol in eastern Mongolia.
On the same day, Soviet party leader Joseph Stalin received a telegram from German Chancellor Adolf Hitler, suggesting that German Foreign Minister Joachim von Ribbentrop fly to Moscow for diplomatic talks. (After receiving a lukewarm response throughout the spring and summer, Stalin abandoned attempts for a better diplomatic relationship with France and the United Kingdom.)
On 23 August, Ribbentrop and Soviet Foreign Minister Vyacheslav Molotov signed the non-aggression pact including secret protocols dividing Eastern Europe into defined "spheres of influence" for the two regimes, and specifically concerning the partition of the Polish state in the event of its "territorial and political rearrangement".
On 15 September 1939, Stalin concluded a durable ceasefire with Japan, to take effect the following day (it would be upgraded to a non-aggression pact in April 1941). The day after that, 17 September, Soviet forces invaded Poland from the east. Although some fighting continued until 5 October, the two invading armies held at least one joint military parade on 25 September, and reinforced their non-military partnership with the German-Soviet Treaty of Friendship, Cooperation and Demarcation on 28 September. German and Soviet cooperation against Poland in 1939 has been described as co-belligerence.
On 30 November, the Soviet Union attacked Finland, for which it was expelled from the League of Nations. In the following year of 1940, while the world's attention was focused upon the German invasion of France and Norway, the USSR militarily occupied and annexed Estonia, Latvia, and Lithuania as well as parts of Romania.
German-Soviet treaties were brought to an end by the German surprise attack on the USSR on 22 June 1941. After the invasion of the Soviet Union in 1941, Stalin endorsed the Western Allies as part of a renewed popular front strategy against Germany and called for the international communist movement to make a coalition with all those who opposed the Nazis. The Soviet Union soon entered in alliance with the United Kingdom. Following the USSR, a number of other communist, pro-Soviet or Soviet-controlled forces fought against the Axis powers during the Second World War. They were as follows: the Albanian National Liberation Front, the Chinese Red Army, the Greek National Liberation Front, the Hukbalahap, the Malayan Communist Party, the People's Republic of Mongolia, the Polish People's Army, the Tuvan People's Republic (annexed by the Soviet Union in 1944), the Viet Minh and the Yugoslav Partisans.
The Soviet Union intervened against Japan and its client state in Manchuria in 1945, cooperating with the Nationalist Government of China and the Nationalist Party led by Chiang Kai-shek, though it also preferred and encouraged the Communist Party led by Mao Zedong to take effective control of Manchuria once the Japanese forces had been expelled.
The United States had indirectly supported Britain's war effort against Germany up to 1941 and declared its opposition to territorial aggrandizement. Materiel support to Britain was provided while the U.S. was officially neutral via the Lend-Lease Act starting in 1941.
President Franklin D. Roosevelt and Prime Minister Winston Churchill in August 1941 promulgated the Atlantic Charter that pledged commitment to achieving "the final destruction of Nazi tyranny". Signing the Atlantic Charter, and thereby joining the "United Nations" was the way a state joined the Allies, and also became eligible for membership in the United Nations world body that formed in 1945.
The US strongly supported the Nationalist Government in China in its war with Japan, and provided military equipment, supplies, and volunteers to the Nationalist Government of China to assist in its war effort. In December 1941 Japan opened the war with its attack on Pearl Harbor, the US declared war on Japan, and Japan's allies Germany and Italy declared war on the US, bringing the US into World War II.
The US played a central role in liaising among the Allies and especially among the Big Four. At the Arcadia Conference in December 1941, shortly after the US entered the war, the US and Britain established a Combined Chiefs of Staff, based in Washington, which deliberated the military decisions of both the US and Britain.
On 8 December 1941, following the attack on Pearl Harbor, the United States Congress declared war on Japan at the request of President Franklin D. Roosevelt. This was followed by Germany and Italy declaring war on the United States on 11 December, bringing the country into the European theatre.
The US led Allied forces in the Pacific theatre against Japanese forces from 1941 to 1945. From 1943 to 1945, the US led and coordinated the Western Allies' war effort in Europe under the leadership of General Dwight D. Eisenhower.
The surprise attack on Pearl Harbor, followed by Japan's swift attacks on Allied locations throughout the Pacific, resulted in major US losses in the first several months of the war, including the loss of the Philippines, Guam, Wake Island and several Aleutian islands, including Attu and Kiska, to Japanese forces. American naval forces attained some early successes against Japan. One was the bombing of Japanese industrial centres in the Doolittle Raid. Another was repelling a Japanese invasion of Port Moresby in New Guinea during the Battle of the Coral Sea. A major turning point in the Pacific War was the Battle of Midway, where American naval forces were outnumbered by Japanese forces that had been sent to draw out and destroy American aircraft carriers and to seize control of Midway, which would have placed Japanese forces in proximity to Hawaii. However, American forces managed to sink four of Japan's six large aircraft carriers that had initiated the attack on Pearl Harbor along with other attacks on Allied forces. Afterwards, the US began an offensive against Japanese-captured positions. The Guadalcanal Campaign from 1942 to 1943 was a major contention point where Allied and Japanese forces struggled to gain control of Guadalcanal.
In the Pacific, the United States held multiple island dependencies such as American Samoa, Guam, Hawaii, the Midway Islands, Wake Island and others. These dependencies were directly involved in the Pacific campaign of the war.
The Commonwealth of the Philippines was a sovereign protectorate referred to as an "associated state" of the United States. From late 1941 to 1944, the Philippines was occupied by Japanese forces, who established the Second Philippine Republic as a client state that had nominal control over the country.
In the 1920s the Soviet Union provided military assistance to the Kuomintang, or the Nationalists and helped reorganize their party along Leninist lines: a unification of party, state, and army. In exchange the Nationalists agreed to let members of the Chinese Communist Party join the Nationalists on an individual basis. However, following the nominal unification of China at the end of the Northern Expedition in 1928, Generalissimo Chiang Kai-shek purged leftists from his party and fought against the revolting Chinese Communist Party, former warlords, and other militarist factions. A fragmented China provided easy opportunities for Japan to gain territories piece by piece without engaging in total war. Following the 1931 Mukden Incident, the puppet state of Manchukuo was established. Throughout the early to mid-1930s, Chiang's anti-communist and anti-militarist campaigns continued while he fought small, incessant conflicts against Japan, usually followed by unfavorable settlements and concessions after military defeats.
In 1936 Chiang was forced to cease his anti-communist military campaigns after his kidnap and release by Zhang Xueliang, and reluctantly formed a nominal alliance with the Communists, while the Communists agreed to fight under the nominal command of the Nationalists against the Japanese. Following the Marco Polo Bridge Incident of 7 July 1937, China and Japan became embroiled in a full-scale war. The Soviet Union, wishing to keep China in the fight against Japan, supplied China with military assistance until 1941, when it signed a non-aggression pact with Japan. China formally declared war on Japan, as well as Germany and Italy, in December 1941, after the attack on Pearl Harbor.
Continuous clashes between the Communists and Nationalists behind enemy lines culminated in a major military conflict between these two former allies that effectively ended their cooperation against the Japanese, and China remained divided between the internationally recognized Nationalist China under the leadership of Generalissimo Chiang Kai-shek and Communist China under the leadership of Mao Zedong until the Japanese surrendered in 1945.
Prior to the alliance of Germany and Italy with Japan, the Nationalist Government held close relations with both Germany and Italy. In the early 1930s, Sino-German cooperation existed between the Nationalist Government and Germany in military and industrial matters. Nazi Germany provided the largest proportion of Chinese arms imports and technical expertise. Relations between the Nationalist Government and Italy during the 1930s varied; even after the Nationalist Government followed League of Nations sanctions against Italy for its invasion of Ethiopia, the international sanctions proved unsuccessful, and relations between the Fascist government in Italy and the Nationalist Government in China returned to normal shortly afterwards. Up until 1936, Mussolini had provided the Nationalists with Italian military air and naval missions to help the Nationalists fight against Japanese incursions and communist insurgents. Italy also held strong commercial interests and a strong commercial position in China, supported by the Italian concession in Tianjin. However, after 1936 the relationship between the Nationalist Government and Italy changed due to a Japanese diplomatic proposal to recognize the Italian Empire, which included occupied Ethiopia, in exchange for Italian recognition of Manchukuo. Italian Foreign Minister Galeazzo Ciano accepted this offer, and on 23 October 1936 Japan recognized the Italian Empire and Italy recognized Manchukuo, with the two countries also discussing increased commercial links.
The Nationalist Government held close relations with the United States. The United States opposed Japan's invasion of China in 1937, which it considered an illegal violation of China's sovereignty, and offered the Nationalist Government diplomatic, economic, and military assistance during its war against Japan. In particular, the United States sought to bring the Japanese war effort to a complete halt by imposing a full embargo on all trade between the United States and Japan. Japan was dependent on the United States for 80 per cent of its petroleum, so the embargo resulted in an economic and military crisis for Japan, which could not continue its war effort against China without access to petroleum. In November 1940, the American military aviator Claire Lee Chennault, observing the dire situation in the air war between China and Japan, set out to organize a volunteer squadron of American fighter pilots to fight alongside the Chinese against Japan, known as the Flying Tigers. US President Franklin D. Roosevelt accepted dispatching them to China in early 1941. However, they only became operational shortly after the attack on Pearl Harbor.
The Soviet Union recognised the Republic of China but urged reconciliation with the Communist Party of China and the inclusion of Communists in the government. The Soviet Union also urged military cooperation between Nationalist China and Communist China during the war.
Even though China had been fighting the longest among all the Allied powers, it only officially joined the Allies after the attack on Pearl Harbor, on 7 December 1941. China fought the Japanese Empire before joining the Allies in the Pacific War. Generalissimo Chiang Kai-shek thought Allied victory was assured with the entrance of the United States into the war, and he declared war on Germany and the other Axis states. However, Allied aid remained low because the Burma Road was closed and the Allies suffered a series of military defeats against Japan early on in the campaign. General Sun Li-jen led the R.O.C. forces to the relief of 7,000 British forces trapped by the Japanese in the Battle of Yenangyaung. He then reconquered North Burma and re-established the land route to China by the Ledo Road. But the bulk of military aid did not arrive until the spring of 1945. More than 1.5 million Japanese troops were trapped in the China Theatre, troops that otherwise could have been deployed elsewhere if China had collapsed and made a separate peace.
Communist China had been tacitly supported by the Soviet Union since the 1920s. Though the Soviet Union diplomatically recognised the Republic of China, Joseph Stalin supported cooperation between the Nationalists and the Communists, including pressuring the Nationalist Government to grant the Communists state and military positions in the government. This continued into the 1930s, in line with the Soviet Union's subversion policy of popular fronts to increase communist influence in governments. The Soviet Union urged military cooperation between Communist China and Nationalist China during China's war against Japan. Initially Mao Zedong accepted the demands of the Soviet Union and in 1938 had recognized Chiang Kai-shek as the "leader" of the "Chinese people". In turn, the Soviet Union accepted Mao's tactic of "continuous guerilla warfare" in the countryside, which involved a goal of extending the Communist bases, even if it would result in increased tensions with the Nationalists.
After the breakdown of their cooperation with the Nationalists in 1941, the Communists prospered and grew as the war against Japan dragged on, building up their sphere of influence wherever opportunities were presented, mainly through rural mass organizations, administrative, land and tax reform measures favoring poor peasants; while the Nationalists attempted to neutralize the spread of Communist influence by military blockade and fighting the Japanese at the same time.
The Communist Party's position in China was boosted further upon the Soviet invasion of Manchuria in August 1945 against the Japanese puppet state of Manchukuo and the Japanese Kwantung Army in China and Manchuria. Upon the intervention of the Soviet Union against Japan in World War II in 1945, Mao Zedong in April and May 1945 had planned to mobilize 150,000 to 250,000 soldiers from across China to work with forces of the Soviet Union in capturing Manchuria.
Albania was retroactively recognized as an "Associated Power" at the 1946 Paris conference and officially signed the treaty ending WWII between the "Allied and Associated Powers" and Italy in Paris, on 10 February 1947.
Australia was a sovereign Dominion under the Australian monarchy, as per the Statute of Westminster 1931. At the start of the war Australia followed Britain's foreign policies and accordingly declared war against Germany on 3 September 1939. Australian foreign policy became more independent after the Australian Labor Party formed government in October 1941, and Australia separately declared war against Finland, Hungary and Romania on 8 December 1941 and against Japan the next day.
Before the war, Belgium had pursued a policy of neutrality and only became an Allied member after being invaded by Germany on 10 May 1940. During the ensuing fighting, Belgian forces fought alongside French and British forces against the invaders. While the British and French were struggling against the fast German advance elsewhere on the front, the Belgian forces were pushed into a pocket to the north. Finally, on 28 May, King Leopold III surrendered himself and his military to the Germans, having decided the Allied cause was lost. The legal Belgian government was reformed as a government in exile in London. Belgian troops and pilots continued to fight on the Allied side as the Free Belgian Forces. Belgium itself was occupied, but a sizeable Resistance was formed and was loosely coordinated by the government in exile and other Allied powers.
British and Canadian troops arrived in Belgium in September 1944 and the capital, Brussels, was liberated on 6 September. Because of the Ardennes Offensive, the country was only fully liberated in early 1945.
Belgium held the colony of the Belgian Congo and the League of Nations mandate of Ruanda-Urundi. The Belgian Congo was not occupied and remained loyal to the Allies as an important economic asset while its deposits of uranium were useful to the Allied efforts to develop the atomic bomb. Troops from the Belgian Congo participated in the East African Campaign against the Italians. The colonial Force Publique also served in other theatres including Madagascar, the Middle-East, India and Burma within British units.
Initially, Brazil maintained a position of neutrality, trading with both the Allies and the Axis, while Brazilian president Getúlio Vargas's quasi-Fascist policies indicated a leaning toward the Axis powers. However, as the war progressed, trade with the Axis countries became almost impossible and the United States initiated forceful diplomatic and economic efforts to bring Brazil onto the Allied side.
At the beginning of 1942, Brazil permitted the United States to set up air bases on its territory, especially in Natal, strategically located at the easternmost corner of the South American continent, and on 28 January the country severed diplomatic relations with Germany, Japan and Italy. After that, 36 Brazilian merchant ships were sunk by the German and Italian navies, which led the Brazilian government to declare war against Germany and Italy on 22 August 1942.
Brazil then sent a 25,700 strong Expeditionary Force to Europe that fought mainly on the Italian front, from September 1944 to May 1945. Also, the Brazilian Navy and Air Force acted in the Atlantic Ocean from the middle of 1942 until the end of the war. Brazil was the only South American country to send troops to fight in the European theatre in the Second World War.
Canada was a sovereign Dominion under the Canadian monarchy, as per the Statute of Westminster 1931. In a symbolic statement of autonomous foreign policy Prime Minister William Lyon Mackenzie King delayed parliament's vote on a declaration of war for seven days after Britain had declared war. Canada was the last member of the Commonwealth to declare war on Germany on 10 September 1939.
Because of Cuba's geographical position at the entrance of the Gulf of Mexico, Havana's role as the principal trading port in the West Indies, and the country's natural resources, Cuba was an important participant in the American Theater of World War II, and subsequently one of the greatest beneficiaries of the United States' Lend-Lease program. Cuba declared war on the Axis powers in December 1941, making it one of the first Latin American countries to enter the conflict, and by the war's end in 1945 its military had developed a reputation as being the most efficient and cooperative of all the Caribbean states. On 15 May 1943, the Cuban patrol boat CS-13 sank the German submarine U-176.
In 1938, with the Munich Agreement, Czechoslovakia, the United Kingdom, and France sought to resolve German irredentist claims to the Sudetenland region. As a result, the incorporation of the Sudetenland into Germany began on 1 October 1938. Additionally, a small northeastern part of the border region known as Zaolzie was occupied by and annexed to Poland. Further, by the First Vienna Award, Hungary received southern territories of Slovakia and Carpathian Ruthenia.
A Slovak State was proclaimed on 14 March 1939, and the next day Hungary occupied and annexed the remainder of Carpathian Ruthenia, and the German Wehrmacht moved into the remainder of the Czech Lands. On 16 March 1939 the Protectorate of Bohemia and Moravia was proclaimed after negotiations with Emil Hácha, who remained technically head of state with the title of State President. After a few months, former Czechoslovak President Beneš organized a committee in exile and sought diplomatic recognition as the legitimate government of the First Czechoslovak Republic. The committee's success in obtaining intelligence and coordinating actions by the Czechoslovak resistance led first Britain and then the other Allies to recognize it in 1941. In December 1941 the Czechoslovak government-in-exile declared war on the Axis powers. Czechoslovakian military units took part in the war.
The Dominican Republic was one of the very few countries willing to accept mass Jewish immigration during World War II. At the Évian Conference, it offered to accept up to 100,000 Jewish refugees. The DORSA (Dominican Republic Settlement Association) was formed with the assistance of the JDC, and helped settle Jews in Sosúa, on the northern coast. About 700 European Jews of Ashkenazi descent reached the settlement, where each family received 33 hectares (82 acres) of land, 10 cows (plus 2 additional cows per child), a mule and a horse, and a US$10,000 loan (about 176,000 dollars at 2021 prices) at 1% interest.
The Dominican Republic officially declared war on the Axis powers on 11 December 1941, after the attack on Pearl Harbor. However, the Caribbean state had already been engaged in war actions before the formal declaration of war. Dominican sailboats and schooners had been attacked on previous occasions by German submarines, most notably the 1,993-ton merchant ship "San Rafael", which was making a trip from Tampa, Florida to Kingston, Jamaica, when, 80 miles away from its final destination, it was torpedoed by the German submarine U-125, forcing the commander to give the order to abandon ship. Although the crew of the San Rafael managed to escape, the event would be remembered by the Dominican press as a sign of the infamy of the German submarines and the danger they represented in the Caribbean.
Recently, thanks to research carried out by the Embassy of the United States of America in Santo Domingo and the Institute of Dominican Studies of the City University of New York (CUNY), Department of Defense documents were discovered confirming that around 340 men and women of Dominican origin served in the US Armed Forces during World War II. Many of them received medals and other recognition for their outstanding actions in combat.
The Ethiopian Empire was invaded by Italy on 3 October 1935. On 2 May 1936, Emperor Haile Selassie I fled into exile, just before the Italian occupation on 7 May. After the outbreak of World War II, the Ethiopian government-in-exile cooperated with the British during the British Invasion of Italian East Africa beginning in June 1940. Haile Selassie returned to his rule on 18 January 1941. Ethiopia declared war on Germany, Italy and Japan in December 1942.
Greece was invaded by Italy on 28 October 1940 and subsequently joined the Allies. The Greek Army managed to stop the Italian offensive from Italy's protectorate of Albania, and Greek forces pushed Italian forces back into Albania. However, after the German invasion of Greece in April 1941, German forces managed to occupy mainland Greece and, a month later, the island of Crete. The Greek government went into exile, while the country was placed under a puppet government and divided into occupation zones run by Italy, Germany and Bulgaria. From 1941, a strong resistance movement appeared, chiefly in the mountainous interior, where it established a "Free Greece" by mid-1943. Following the Italian capitulation in September 1943, the Italian zone was taken over by the Germans. Axis forces left mainland Greece in October 1944, although some Aegean islands, notably Crete, remained under German occupation until the end of the war.
Before the war, Luxembourg had pursued a policy of neutrality and only became an Allied member after being invaded by Germany on 10 May 1940. The government in exile fled, winding up in England. It made Luxembourgish language broadcasts to the occupied country on BBC radio. In 1944, the government in exile signed a treaty with the Belgian and Dutch governments, creating the Benelux Economic Union and also signed into the Bretton Woods system.
Mexico declared war on Germany in 1942 after German submarines attacked the Mexican oil tankers Potrero del Llano and Faja de Oro that were transporting crude oil to the United States. These attacks prompted President Manuel Ávila Camacho to declare war on the Axis powers.
Mexico formed Escuadrón 201 fighter squadron as part of the Fuerza Aérea Expedicionaria Mexicana (FAEM--"Mexican Expeditionary Air Force"). The squadron was attached to the 58th Fighter Group of the United States Army Air Forces and carried out tactical air support missions during the liberation of the main Philippine island of Luzon in the summer of 1945.
Some 300,000 Mexican citizens went to the United States to work on farms and in factories. Some 15,000 US nationals of Mexican origin and Mexican residents in the US enrolled in the US Armed Forces and fought on various fronts around the world.
The Netherlands became an Allied member after being invaded on 10 May 1940 by Germany. During the ensuing campaign, the Netherlands were defeated and occupied by Germany. The Netherlands was liberated by Canadian, British, American and other allied forces during the campaigns of 1944 and 1945. The Princess Irene Brigade, formed from escapees from the German invasion, took part in several actions in 1944 in Arromanches and in 1945 in the Netherlands. Navy vessels saw action in the British Channel, the North Sea and the Mediterranean, generally as part of Royal Navy units. Dutch airmen flying British aircraft participated in the air war over Germany.
The Dutch East Indies (modern-day Indonesia) was the principal Dutch colony in Asia, and was seized by Japan in 1942. During the Dutch East Indies Campaign, the Netherlands played a significant role in the Allied effort to halt the Japanese advance as part of the American-British-Dutch-Australian (ABDA) Command. The ABDA fleet finally encountered the Japanese surface fleet at the Battle of the Java Sea, at which the Dutch Rear Admiral Karel Doorman gave the order to engage. During the ensuing battle the ABDA fleet suffered heavy losses, and was mostly destroyed after several naval battles around Java; the ABDA Command was later dissolved. The Japanese finally occupied the Dutch East Indies in February-March 1942. Dutch troops, aircraft and escaped ships continued to fight on the Allied side and also mounted a guerrilla campaign in Timor.
New Zealand was a sovereign Dominion under the New Zealand monarchy, as per the Statute of Westminster 1931. It quickly entered World War II, officially declaring war on Germany on 3 September 1939, just hours after Britain. Unlike Australia, which had felt obligated to declare war, as it also had not ratified the Statute of Westminster, New Zealand did so as a sign of allegiance to Britain, and in recognition of Britain's abandonment of its former appeasement policy, which New Zealand had long opposed. This led to then Prime Minister Michael Joseph Savage declaring two days later:
"With gratitude for the past and confidence in the future we range ourselves without fear beside Britain. Where she goes, we go; where she stands, we stand. We are only a small and young nation, but we march with a union of hearts and souls to a common destiny."
Because of Norway's strategic location for control of the sea lanes in the North Sea and the Atlantic, both the Allies and Germany worried about the other side gaining control of the neutral country. Germany ultimately struck first with Operation Weserübung on 9 April 1940, resulting in the two-month-long Norwegian Campaign, which ended in a German victory and a war-long German occupation of Norway.
Units of the Norwegian Armed Forces evacuated from Norway or raised abroad continued participating in the war from exile.
The Norwegian merchant fleet, then the fourth largest in the world, was organized into Nortraship to support the Allied cause. Nortraship was the world's largest shipping company, and at its height operated more than 1000 ships.
Norway was neutral when Germany invaded, and it is not clear exactly when Norway became an Allied country. Great Britain, France and Polish forces in exile supported Norwegian forces against the invaders, but without a specific agreement. Norway's cabinet signed a military agreement with Britain on 28 May 1941. This agreement allowed all Norwegian forces in exile to operate under UK command. Norwegian troops in exile were primarily to be prepared for the liberation of Norway, but could also be used to defend Britain. At the end of the war, German forces in Norway surrendered to British officers on 8 May and Allied troops occupied Norway until 7 June.
The Invasion of Poland on 1 September 1939, started the war in Europe, and the United Kingdom and France declared war on Germany on 3 September. Poland fielded the third biggest army among the European Allies, after the Soviet Union and United Kingdom, but before France.
The Polish Army suffered a series of defeats in the first days of the invasion. The Soviet Union unilaterally considered the flight to Romania of President Ignacy Mościcki and Marshal Edward Rydz-Śmigły on 17 September as evidence of debellatio causing the extinction of the Polish state, and consequently declared itself allowed to invade (according to the Soviet position: "to protect") Eastern Poland starting from the same day. However, the Red Army had invaded the Second Polish Republic several hours before the Polish president fled to Romania. The Soviets invaded on 17 September at 3 a.m., while President Mościcki crossed the Polish-Romanian border at 21:45 on the same day. The Polish military continued to fight against both the Germans and the Soviets, and the last major battle of the war, the Battle of Kock, ended at 1 a.m. on 6 October 1939 with the Independent Operational Group "Polesie," a field army, surrendering due to lack of ammunition. The country never officially surrendered to the Third Reich, nor to the Soviet Union, primarily because neither of the totalitarian powers requested an official surrender, and continued the war effort under the Polish government in exile.
Polish soldiers fought under their own flag but under the command of the British military. They were major contributors to the Allies in the theatre of war west of Germany and in the theatre of war east of Germany, with the Soviet Union. The Polish armed forces in the West created after the fall of Poland played minor roles in the Battle of France, and larger ones in the Italian and North African Campaigns. The Soviet Union recognized the London-based government at first. But it broke diplomatic relations after the Katyn massacre of Polish nationals was revealed. In 1943, the Soviet Union organized the Polish People's Army under Zygmunt Berling, around which it constructed the post-war successor state People's Republic of Poland. The Polish People's Army formed in USSR took part in a number of battles of the Eastern Front, including the Battle of Berlin, the closing battle of the European theater of war.
The Home Army, loyal to the London-based government and the largest underground force in Europe, as well as other smaller resistance organizations in occupied Poland, provided intelligence to the Allies and led to the uncovering of Nazi war crimes (e.g., the death camps).
Yugoslavia entered the war on the Allied side after the invasion by the Axis powers on 6 April 1941. The Royal Yugoslav Army was thoroughly defeated in less than two weeks and the country was occupied starting on 18 April. The Italian-backed Croatian fascist leader Ante Pavelić declared the Independent State of Croatia before the invasion was over. King Peter II and much of the Yugoslavian government had left the country. In the United Kingdom, they joined numerous other governments in exile from Nazi-occupied Europe. Beginning with the uprising in Herzegovina in June 1941, there was continuous anti-Axis resistance in Yugoslavia until the end of the war.
Before the end of 1941, the anti-Axis resistance movement split between the royalist Chetniks and the communist Yugoslav Partisans of Josip Broz Tito who fought both against each other during the war and against the occupying forces. The Yugoslav Partisans managed to put up considerable resistance to the Axis occupation, forming various liberated territories during the war. In August 1943, there were over 30 Axis divisions on the territory of Yugoslavia, not including the forces of the Croatian puppet state and other quisling formations. In 1944, the leading Allied powers persuaded Tito's Yugoslav Partisans and the royalist Yugoslav government led by Prime Minister Ivan Šubašić to sign the Treaty of Vis that created the Democratic Federal Yugoslavia.
The Partisans were a major Yugoslav resistance movement against the Axis occupation and partition of Yugoslavia. Initially, the Partisans were in rivalry with the Chetniks over control of the resistance movement. However, the Partisans were recognized by both the Eastern and Western Allies as the primary resistance movement in 1943. After that, their strength increased rapidly, from 100,000 at the beginning of 1943 to over 648,000 in September 1944. In 1945 they were transformed into the Yugoslav army, organized in 4 field armies with 800,000 fighters.
The Chetniks, the short name given to the movement titled the Yugoslav Army of the Fatherland, were initially a major Allied Yugoslav resistance movement. However, due to their royalist and anti-communist views, Chetniks were considered to have begun collaborating with the Axis as a tactical move to focus on destroying their Partisan rivals. The Chetniks presented themselves as a Yugoslav movement, but were primarily a Serb movement. They reached their peak in 1943 with 93,000 fighters. Their major contribution was Operation Halyard in 1944. In collaboration with the OSS, 413 Allied airmen shot down over Yugoslavia were rescued and evacuated.
Egypt was a neutral country for most of World War II, but the Anglo-Egyptian treaty of 1936 permitted British forces in Egypt to defend the Suez Canal. The United Kingdom controlled Egypt and used it as a major base for Allied operations throughout the region, especially the battles in North Africa against Italy and Germany. Its highest priorities were control of the Eastern Mediterranean, and especially keeping the Suez Canal open for merchant ships and for military connections with India and Australia.
The Kingdom of Egypt was nominally an independent state since 1922 but effectively remained in the British sphere of influence with the British Mediterranean Fleet being stationed in Alexandria and British Army forces being stationed in the Suez Canal zone. Egypt faced an Axis campaign led by Italian and German forces during the war. British frustration over King Farouk's reign over Egypt resulted in the Abdeen Palace incident of 1942 where British Army forces surrounded the royal palace and demanded a new government be established, nearly forcing the abdication of Farouk until he submitted to British demands. The Kingdom of Egypt joined the United Nations on 24 February 1945.
At the outbreak of World War II, the British Indian Army numbered 205,000 men. Later during World War II, the Indian Army became the largest all-volunteer force in history, rising to over 2.5 million men in size. These forces included tank, artillery and airborne forces.
Indian soldiers earned 30 Victoria Crosses during the Second World War. During the war, India suffered more civilian casualties than the United Kingdom, with the Bengal famine of 1943 estimated to have killed at least 2-3 million people. In addition, India suffered 87,000 military casualties, more than any Crown colony but fewer than the United Kingdom, which suffered 382,000 military casualties.
Burma was a British colony at the start of World War II. It was later invaded by Japanese forces, and that invasion contributed to the Bengal Famine of 1943. For many native Burmese, the war offered a chance to rise against colonial rule, so some fought on the Japanese side, but most of the country's minorities fought on the Allied side. Burma also contributed resources such as rice and rubber.
After a period of neutrality, Bulgaria joined the Axis powers from 1941 to 1944. The Orthodox Church and others convinced King Boris not to allow the Bulgarian Jews to be deported to concentration camps. The king died shortly afterwards, suspected of having been poisoned after a visit to Germany. Bulgaria abandoned the Axis and joined the Allies when the Soviet Union invaded, offering no resistance to the incoming forces. Bulgarian troops then fought alongside the Soviet Army in Yugoslavia, Hungary and Austria. In the 1947 peace treaties, Bulgaria gained a small area near the Black Sea from Romania, making it the only former German ally to gain territory from WWII.
Among the Soviet forces during World War II, millions of troops were from the Soviet Central Asian Republics. They included 1,433,230 soldiers from Uzbekistan, more than 1 million from Kazakhstan, and more than 700,000 from Azerbaijan, among other Central Asian Republics.
Mongolia fought against Japan during the Battles of Khalkhin Gol in 1939 and the Soviet-Japanese War in August 1945 to protect its independence and to liberate Southern Mongolia from Japan and China. Mongolia had been a Soviet sphere of influence since the 1920s.
By 1944, Poland entered the Soviet sphere of influence with the establishment of Władysław Gomułka's communist regime. Polish forces fought alongside Soviet forces against Germany.
Romania had initially been a member of the Axis powers but switched allegiance upon facing invasion by the Soviet Union. In a radio broadcast to the Romanian people and army on the night of 23 August 1944, King Michael issued a cease-fire, proclaimed Romania's loyalty to the Allies, announced the acceptance of an armistice (to be signed on 12 September) offered by the Soviet Union, the United Kingdom and the United States, and declared war on Germany. The coup accelerated the Red Army's advance into Romania, but did not avert a rapid Soviet occupation and the capture of about 130,000 Romanian soldiers, who were transported to the Soviet Union, where many perished in prison camps.
The armistice was signed three weeks later on 12 September 1944, on terms virtually dictated by the Soviet Union. Under the terms of the armistice, Romania announced its unconditional surrender to the USSR and was placed under the occupation of the Allied forces with the Soviet Union as their representative, in control of the media, communication, post, and civil administration behind the front.
The Tuvan People's Republic was a partially recognized state founded from the former Tuvan protectorate of Imperial Russia. It was a client state of the Soviet Union and was annexed into the Soviet Union in 1944.
Italy initially had been a leading member of the Axis powers; however, after facing multiple military losses, including the loss of all of Italy's colonies to advancing Allied forces, Duce Benito Mussolini was deposed and arrested in July 1943 by order of King Victor Emmanuel III of Italy, in co-operation with members of the Grand Council of Fascism who viewed Mussolini as having led Italy to ruin by allying with Germany in the war. Victor Emmanuel III dismantled the remaining apparatus of the Fascist regime and appointed Field Marshal Pietro Badoglio as Prime Minister of Italy. On 8 September 1943, Italy signed the Armistice of Cassibile with the Allies, ending Italy's war with the Allies and ending Italy's participation with the Axis powers. Expecting immediate German retaliation, Victor Emmanuel III and the Italian government relocated to southern Italy under Allied control. Germany viewed the Italian government's actions as an act of betrayal, and German forces immediately occupied all Italian territories outside of Allied control, in some cases even massacring Italian troops.
Italy became a co-belligerent of the Allies, and the Italian Co-Belligerent Army was created to fight against the German occupation of Northern Italy, where German paratroopers rescued Mussolini from arrest and placed him in charge of a German puppet state known as the Italian Social Republic (RSI). After Mussolini's deposition and arrest, Italy descended into a civil war that lasted until the end of hostilities, with Fascists loyal to him allying with German forces and helping them fight the Italian armistice government and the partisans.
The Declaration by United Nations on 1 January 1942, signed by the Four Policemen - the United States, United Kingdom, Soviet Union and China - and 22 other nations laid the groundwork for the future of the United Nations. At the Potsdam Conference of July-August 1945, Roosevelt's successor, Harry S. Truman, proposed that the foreign ministers of China, France, the Soviet Union, the United Kingdom, and the United States "should draft the peace treaties and boundary settlements of Europe", which led to the creation of the Council of Foreign Ministers of the "Big Five", and soon thereafter the establishment of those states as the permanent members of the UNSC.
The Charter of the United Nations was agreed to during the war at the United Nations Conference on International Organization, held between April and July 1945. The Charter was signed by 50 states on 26 June (Poland had its place reserved and later became the 51st "original" signatory), and was formally ratified shortly after the war on 24 October 1945. In 1944, the United Nations was formulated and negotiated among the delegations from the Soviet Union, the United Kingdom, the United States and China at the Dumbarton Oaks Conference where the formation and the permanent seats (for the "Big Five", China, France, the UK, US, and USSR) of the United Nations Security Council were decided. The Security Council met for the first time in the immediate aftermath of war on 17 January 1946.
Despite the successful creation of the United Nations, the alliance of the Soviet Union with the United States and the western allies ultimately broke down and evolved into the Cold War, which took place over the following half-century.
The following list denotes dates on which states declared war on the Axis powers, or on which an Axis power declared war on them. The Indian Empire had a status less independent than the Dominions.

| Country | Declaration by United Nations | Declared war on the Axis | San Francisco Conference |
| --- | --- | --- | --- |
| India (UK-appointed administration, 1858–1947) | 1942 | 1939 | |
Although many factors manifestly contributed to the ultimate victory, not least the Soviet Union's joining of the coalition, the coalition partners' ability to orchestrate their efforts and coordinate the many elements of modern warfare successfully must rank high in any assessment.
In World War II, the three great Allied powers--Great Britain, the United States, and the Soviet Union--formed a Grand Alliance that was the key to victory. But the alliance partners did not share common political aims, and did not always agree on how the war should be fought.
This collection by leading British and American scholars on twentieth century international history covers the strategy, diplomacy and intelligence of the Anglo-American-Soviet alliance during the Second World War. It includes the evolution of allied war aims in both the European and Pacific theatres, the policies surrounding the development and use of the atomic bomb and the evolution of the international intelligence community.
The USA entered World War Two against Germany and Japan in 1941, creating the Grand Alliance of the USA, Britain and the USSR. This alliance brought together great powers that had fundamentally different views of the world, but they did co-operate for four years against the Germans and Japanese. The Grand Alliance would ultimately fail and break down into the Cold War.
The United States and the United Kingdom merged their chiefs of staff organizations into the Combined Chiefs of Staff (CCS) to direct their combined forces and plan global strategy. The strategic, diplomatic, security, and civil-military views of the service chiefs and their planners were based to a large extent on events that had taken place before December 7, 1941.
There were bright hopes that the cooperative spirit of the Grand Alliance would persist after WWII, but with FDR's death only two months after Yalta, the political dynamics changed dramatically.
After a long chat, Stalin went away amused by the American president's cheery, casual approach to diplomacy, but judged him a lightweight compared to the more formidable Churchill.
Groom describes how "fake news" about the Soviet Union blinded Roosevelt to Stalin's character and intentions ... Churchill [had] been on to Stalin from the beginning and he did not trust the Communists at their word. Roosevelt was more ambivalent.
The Soviet Union participated as a cobelligerent with Germany after September 17, 1939, when Soviet forces invaded eastern Poland
As a co-belligerent of Nazi Germany, the Soviet Union secretly assisted the German invasion of central and western Poland before launching its own invasion of eastern Poland on September 17
The first peace treaty concluded between the Allies and a former Axis nation was with Italy. It was signed in Paris on 10 February 1947 by representatives from Albania, Australia and other Allied nations.
Consider that you have been asked to explain financial statements to someone who knows nothing about accounting.
Discuss each of the four financial statements. Explain the different components of the statements as well as what the statements tell about a business.
Accounting is the means by which information about an enterprise is communicated and, thus, is sometimes called the language of business. Costs, prices, sales volume, profits, and return on investment are all accounting measurements.
Financial statements are designed primarily to assist investors and creditors in deciding where to place their scarce investment resources. They also help management assess the performance of the organization.
Financial statements are useful tools for evaluating both profitability and liquidity. Used separately, or in combination, the income statement and balance sheet help interested parties to measure a company's current financial performance, and to forecast its profit and cash flow potential. Accountants summarize this information in a balance sheet, income statement, statement of retained earnings and statement of cash flows.
The balance sheet reports the assets and liabilities of the business. It portrays the picture of the organization on a particular date; that is, it highlights the financial condition of a company at a single point in time. This distinction is important: the cash flow and income statements record performance over a period of time, while the balance sheet is a snapshot of a single date.
Fiat money is a currency (a medium of exchange) established as money, often by government regulation. Fiat money does not have intrinsic value and does not have use value. It has value only because a government maintains its value, or because parties engaging in exchange agree on its value. It was introduced as an alternative to commodity money (a medium which has its own intrinsic value) and representative money (money which represents something with intrinsic value). Representative money is similar to fiat money, but it represents a claim on a commodity (which can be redeemed to a greater or lesser extent).[a]
Government-issued fiat money banknotes were used first during the 11th century in China. Fiat money started to predominate during the 20th century. Since President Nixon's decision to decouple the US dollar from gold in 1971, a system of national fiat currencies has been used globally.
Fiat money can be:
- Any money declared by a government to be legal tender.
- State-issued money which is neither convertible through a central bank to anything else nor fixed in value in terms of any objective standard.
- Money used because of government decree.
- An otherwise non-valuable object that serves as a medium of exchange (also known as fiduciary money.)
Treatment in economics
In monetary economics, fiat money is an intrinsically valueless object or record that is accepted widely as a means of payment. Modern theories of money try to explain that the value of fiat money is greater than the value of its metal content. This stands in contrast with earlier monetary theories from the Middle Ages which were more similar to the coins-as-commodity valuation of the Arrow-Debreu model.
One justification for fiat money comes from a micro-founded model. In most economic models, agents are intrinsically happier when they have more money. In a model by Lagos and Wright, fiat money has no intrinsic worth, but agents obtain more of the goods they want when they trade on the assumption that fiat money is valuable. Fiat money's value is created internally by the community and, at equilibrium, makes otherwise infeasible trades possible.
Another mathematical model that explains the value of fiat money comes from Game Theory. In a game where agents produce and trade objects, there can be multiple Nash equilibria where agents settle on stable behavior. In a model by Kiyotaki and Wright, an object with no intrinsic worth can have value during trade in one (or more) of the Nash Equilibria.
China has a long history with paper money, beginning in the 7th century. During the 11th century, the government established a monopoly on its issuance, and about the end of the 12th century, convertibility was suspended. The use of such money became widespread during the subsequent Yuan and Ming dynasties.
The Song Dynasty in China was the first to issue paper money, jiaozi, about the 10th century AD. Although the notes were valued at a certain exchange rate for gold, silver, or silk, conversion was never allowed in practice. The notes were initially to be redeemed after three years' service, to be replaced by new notes for a 3% service charge, but, as more of them were printed without notes being retired, inflation became evident. The government made several attempts to maintain the value of the paper money by demanding taxes partly in currency and making other laws, but the damage had been done, and the notes became disfavored.
The succeeding Yuan Dynasty was the first dynasty of China to use paper currency as the predominant circulating medium. The founder of the Yuan Dynasty, Kublai Khan, issued paper money known as Jiaochao during his reign. The original notes during the Yuan Dynasty were restricted in area and duration as in the Song Dynasty.
All these pieces of paper are issued with as much solemnity and authority as if they were of pure gold or silver... and indeed everybody takes them readily, for wheresoever a person may go throughout the Great Kaan's dominions he shall find these pieces of paper current, and shall be able to transact all sales and purchases of goods by means of them just as well as if they were coins of pure gold.— Marco Polo, The Travels of Marco Polo
Washington Irving records an emergency use of paper money by the Spanish for a siege during the Conquest of Granada (1482–1492). In 1661, Johan Palmstruch issued the first regular paper money in the West, by royal charter from the Kingdom of Sweden, through a new institution, the Bank of Stockholm. While this private paper currency was largely a failure, the Swedish parliament eventually assumed control of the issue of paper money in the country. By 1745, its paper money was inconvertible to specie, but acceptance was mandated by the government. This fiat currency depreciated so rapidly that by 1776 it was returned to a silver standard. Fiat money also has other beginnings in 17th-century Europe, having been introduced by the Bank of Amsterdam in 1683.
New France 1685–1770
In 17th century New France, now part of Canada, the universally accepted medium of exchange was the beaver pelt. As the colony expanded, coins from France came to be used widely, but there was usually a shortage of French coins. In 1685, the colonial authorities in New France found themselves seriously short of money. A military expedition against the Iroquois had gone badly and tax revenues were down, reducing government money reserves. Typically, when short of funds, the government would simply delay paying merchants for purchases, but it was not safe to delay payment to soldiers due to the risk of mutiny.
Jacques de Meulles, the Intendant of Finance, conceived an ingenious ad hoc solution – the temporary issuance of paper money to pay the soldiers, in the form of playing cards. He confiscated all the playing cards in the colony, had them cut into pieces, wrote denominations on the pieces, signed them, and issued them to the soldiers as pay in lieu of gold and silver. Because of the chronic shortages of money of all types in the colonies, these cards were accepted readily by merchants and the public and circulated freely at face value. It was intended to be purely a temporary expedient, and it was not until years later that its role as a medium of exchange was recognized. The first issue of playing card money occurred during June 1685 and was redeemed three months later. However, the shortages of coinage reoccurred and more issues of card money were made during subsequent years. Because of their wide acceptance as money and the general shortage of money in the colony, many of the playing cards were not redeemed but continued to circulate, acting as a useful substitute for scarce gold and silver coins from France. Eventually, the Governor of New France acknowledged their useful role as a circulating medium of exchange.
As the finances of the French government deteriorated because of European wars, it reduced its financial assistance to its colonies, so the colonial authorities in Canada relied more and more on card money. By 1757, the government had discontinued all payments in coin and payments were made in paper instead. In an application of Gresham’s Law – bad money drives out good – people hoarded gold and silver, and used paper money instead. The costs of the Seven Years' War resulted in rapid inflation in New France. After the British conquest in 1760, the paper money became almost worthless, but business did not end because gold and silver that had been hoarded came back into circulation. By the Treaty of Paris (1763), the French government agreed to convert the outstanding card money into debentures, but with the French government essentially bankrupt, these bonds were defaulted and by 1771 they were worthless.
The Royal Canadian Mint still issues Playing Card Money in commemoration of its history, but now in 92.5% silver form with gold plate on the edge. It therefore has an intrinsic value which considerably exceeds its fiat value. The Bank of Canada and Canadian economists often use this early form of paper currency to illustrate the true nature of money for Canadians.
18th and 19th century
An early form of fiat currency in the American Colonies was "bills of credit." Provincial governments produced notes which were fiat currency, with the promise to allow holders to pay taxes with those notes. The notes were issued to pay current obligations and could be used for taxes levied at a later time. Since the notes were denominated in the local unit of account, they were circulated from person to person in non-tax transactions. These types of notes were issued particularly in Pennsylvania, Virginia and Massachusetts. Such money was sold at a discount to silver, which the government would then spend, and it would expire at a fixed date later.
Bills of credit have generated some controversy from their inception. Those who have wanted to emphasize the dangers of inflation have emphasized the colonies where the bills of credit depreciated most dramatically: New England and the Carolinas. Those who have wanted to defend the use of bills of credit in the colonies have emphasized the middle colonies, where inflation was practically nonexistent.
Colonial powers consciously introduced fiat currencies backed by taxes (e.g., hut taxes or poll taxes) to mobilise economic resources in their new possessions, at least as a transitional arrangement. The purpose of such taxes was later served by property taxes. The repeated cycle of deflationary hard money, followed by inflationary paper money continued through much of the 18th and 19th centuries. Often nations would have dual currencies, with paper trading at some discount to money which represented specie.
Examples include the “Continental” bills issued by the U.S. Congress before the United States Constitution; paper versus gold ducats in Napoleonic era Vienna, where paper often traded at 100:1 against gold; the South Sea Bubble, which produced bank notes not representing sufficient reserves; and the Mississippi Company scheme of John Law.
During the American Civil War, the Federal Government issued United States Notes, a form of paper fiat currency known popularly as 'greenbacks'. Their issue was limited by Congress at slightly more than $340 million. During the 1870s, withdrawal of the notes from circulation was opposed by the United States Greenback Party. It was termed 'fiat money' in an 1878 party convention.
After World War I, governments and banks generally still promised to convert notes and coins into their nominal commodity (redemption by specie, typically gold) on demand. However, the costs of the war, the required post-war reconstruction, and economic growth financed by government borrowing led governments to suspend redemption by specie. Some governments were careful to avoid sovereign default but not wary of the consequences of paying debts by handing newly printed cash, not tied to any metal standard, to their creditors, which resulted in hyperinflation – for example, the hyperinflation in the Weimar Republic.
From 1944 to 1971, the Bretton Woods agreement fixed the value of 35 United States dollars to one troy ounce of gold. Other currencies were calibrated with the U.S. dollar at fixed rates. The U.S. promised to redeem dollars with gold transferred to other national banks. Trade imbalances were corrected by gold reserve exchanges or by loans from the International Monetary Fund (IMF).
The Bretton Woods system was ended by what became known as the Nixon shock. This was a series of economic changes by United States President Richard Nixon in 1971, including unilaterally canceling the direct convertibility of the United States dollar to gold. Since then, a system of national fiat monies has been used globally, with variable exchange rates between the major currencies.
Precious metal coinage
During the 1960s, production of silver coins for circulation ceased when the face value of the coins became less than the cost of the precious metal they contained (whereas it had been greater historically). In the United States, the Coinage Act of 1965 eliminated silver from circulating dimes and quarter dollars, and most other countries did the same with their coins. The Canadian penny, which was mostly copper until 1996, was removed from circulation altogether during the autumn of 2012 due to the cost of production relative to face value.
Money creation and regulation
A central bank introduces new money into an economy by purchasing financial assets or lending money to financial institutions. Commercial banks then redeploy or repurpose this base money by credit creation through fractional reserve banking, which expands the total supply of "broad money" (cash plus demand deposits).
In modern economies, relatively little of the supply of broad money is physical currency. For example, in December 2010 in the U.S., of the $8,853.4 billion of broad money supply (M2), only $915.7 billion (about 10%) consisted of physical coins and paper money. The manufacturing of new physical money is usually the responsibility of the national bank, or sometimes, the government's treasury.
The Bank for International Settlements published a detailed review of payment system developments in the Group of Ten (G10) countries in 1985, in the first of a series that has become known as "red books". Currently the red books cover the countries participating in the Committee on Payments and Market Infrastructures (CPMI). A red book summary of the value of banknotes and coins in circulation is shown in the table below, where the local currency is converted to US dollars using end-of-year rates. The value of this physical currency as a percentage of GDP ranges from a maximum of 19.4% in Japan to a minimum of 1.7% in Sweden, with the overall average for all countries in the table being 8.9% (7.9% for the US).
| Country | Billions of dollars | Per capita |
| --- | --- | --- |
| Hong Kong SAR | $48 | $6,550 |
The most notable currency not included in this table is the Chinese yuan, for which the statistics are listed as "not available".
The adoption of fiat currency by many countries, from the 18th century onwards, made much larger variations in the supply of money possible. Since then, huge increases in the supply of paper money have occurred in a number of countries, producing hyperinflations – episodes of extreme inflation rates much greater than those observed during earlier periods of commodity money. The hyperinflation in the Weimar Republic of Germany is a notable example.
Economists generally believe that high rates of inflation and hyperinflation are caused by an excessive growth of the money supply. Presently, most economists favor a small and steady rate of inflation. Small (as opposed to zero or negative) inflation reduces the severity of economic recessions by enabling the labor market to adjust more quickly to a recession, and reduces the risk that a liquidity trap (a reluctance to lend money due to low rates of interest) prevents monetary policy from stabilizing the economy. However, money supply growth does not always cause nominal increases of price. Money supply growth may instead result in stable prices at a time in which they would otherwise be decreasing. Some economists maintain that with the conditions of a liquidity trap, large monetary injections are like "pushing on a string".
The task of keeping the rate of inflation small and stable is usually given to monetary authorities. Generally, these monetary authorities are the national banks that control monetary policy by the setting of interest rates, by open market operations, and by the setting of banking reserve requirements.
Loss of backing
A fiat-money currency greatly loses its value should the issuing government or central bank either lose the ability to, or refuse to, continue to guarantee its value. The usual consequence is hyperinflation. Some examples of this are the Zimbabwean dollar, China's money during 1945 and the Weimar Republic's mark during 1923. A more recent example is the currency instability in Venezuela that began in 2016 during the country's ongoing socioeconomic and political crisis.
But this need not necessarily occur, especially if a currency continues to be the most easily available; for example, the pre-1990 Iraqi dinar continued to retain value in the Kurdistan Regional Government even after its legal tender status was ended by the Iraqi government which issued the notes.
- Criticism of the Federal Reserve
- Fractional-reserve banking
- Hard currency
- Modern monetary theory
- Money creation
- Money supply
- Network effect
- Silver coin
- Silver standard
- See Monetary economics for further discussion.
- Goldberg, Dror (2005). "Famous Myths of "Fiat Money"". Journal of Money, Credit and Banking. 37 (5): 957–967. doi:10.1353/mcb.2005.0052. JSTOR 3839155. S2CID 54713138.
N. Gregory Mankiw (2014). Principles of Economics. p. 220. ISBN 978-1-285-16592-9.
fiat money: money without intrinsic value that is used as money because of government decree
- Walsh, Carl E. (2003). Monetary Theory and Policy. The MIT Press. ISBN 978-0-262-23231-9.
- Peter Bernholz (2003). Monetary Regimes and Inflation: History, Economic and Political Relationships. Edward Elgar Publishing. p. 53. ISBN 978-1-84376-155-6.
- Montgomery Rollins (1917). Money and Investments. George Routledge & Sons. ISBN 9781358416323. Archived from the original on December 27, 2016.
Fiat Money. Money which a government declares shall be accepted as legal tender at its face value;
- John Maynard Keynes (1965) . "1. The Classification of Money". A Treatise on Money. 1. Macmillan & Co Ltd. p. 7.
Fiat Money is Representative (or token) Money (i.e something the intrinsic value of the material substance of which is divorced from its monetary face value) – now generally made of paper except in the case of small denominations – which is created and issued by the State, but is not convertible by law into anything other than itself, and has no fixed value in terms of an objective standard.
- Blume, Lawrence E; (Firm), Palgrave Macmillan; Durlauf, Steven N (2019). The new Palgrave dictionary of economics. Palgrave Macmillan (Firm) (Living Reference Work ed.). United Kingdom. ISBN 9781349951215. OCLC 968345651.
- "The Four Different Types of Money - Quickonomics". Quickonomics. September 17, 2016. Archived from the original on February 13, 2018. Retrieved February 12, 2018.
- Fiat is the third-person singular present active subjunctive of fiō ("I become", "I am made").
- Schueffel, Patrick (2017). The Concise Fintech Compendium. Fribourg: School of Management Fribourg/Switzerland. Archived from the original on October 24, 2017. Retrieved January 8, 2018.
- Dror Goldberg (October 2005). "Famous Myths of "Fiat Money"". Journal of Money, Credit and Banking. Ohio State University Press. 37 (5): 957–967. doi:10.1353/mcb.2005.0052. JSTOR 3839155. S2CID 54713138.
- Sargent, Thomas J. (2001). The Princeton Economic History of the Western World. Princeton University Press. p. 70.
- Lagos, Ricardo & Wright, Randall (2005). "A Unified Framework for Monetary Theory and Policy Analysis". Journal of Political Economy. 113 (3): 463–84. CiteSeerX 10.1.1.563.3199. doi:10.1086/429804. S2CID 154851073..
- Kiyotaki, Nobuhiro & Wright, Randall (1989). "On Money as a Medium of Exchange". Journal of Political Economy. 97 (4): 927–54. doi:10.1086/261634. S2CID 154872512..
- Selgin, George (2003), "Adaptive Learning and the Transition to Fiat Money", The Economic Journal, 113 (484): 147–65, doi:10.1111/1468-0297.00094, S2CID 153964856.
- Von Glahn, Richard (1996), Fountain of Fortune: Money and Monetary Policy in China, 1000–1700, Berkeley: University of California Press.
- Ramsden, Dave (2004). "A Very Short History of Chinese Paper Money". James J. Puplava Financial Sense. Archived from the original on June 9, 2008.
- David Miles; Andrew Scott (January 14, 2005). Macroeconomics: Understanding the Wealth of Nations. John Wiley & Sons. p. 273. ISBN 978-0-470-01243-7.
- Marco Polo (1818). The Travels of Marco Polo, a Venetian, in the Thirteenth Century: Being a Description, by that Early Traveller, of Remarkable Places and Things, in the Eastern Parts of the World. pp. 353–55. Retrieved September 19, 2012.
- Foster, Ralph T. (2010). Fiat Paper Money – The History and Evolution of Our Currency. Berkeley, California: Foster Publishing. pp. 59–60. ISBN 978-0-9643066-1-5.
- "How Amsterdam Got Fiat Money". www.frbatlanta.org. Archived from the original on November 10, 2013. Retrieved May 8, 2018.
- Bank of Canada (2010). "New France (ca. 1600–1770)" (PDF). A History of the Canadian Dollar. Bank of Canada. Archived (PDF) from the original on October 2, 2013. Retrieved February 12, 2014.
- "Playing Card Money Set". Royal Canadian Mint. 2014. Archived from the original on August 15, 2016. Retrieved July 6, 2016.
- "Rise and fall of the Gold Standard". news24.com. May 30, 2014. Archived from the original on May 4, 2017. Retrieved May 8, 2018.
- Michener, Ron (2003). "Money in the American Colonies Archived February 21, 2015, at the Wayback Machine." EH.Net Encyclopedia, edited by Robert Whaples.
- "Fiat Money". Chicago Daily Tribune. May 24, 1878.
- ""Bretton Woods" Federal Research Division Country Studies (Austria)". Library of Congress. Archived from the original on December 2, 2010.
Jeffrey D. Sachs, Felipe Larrain (1992). Macroeconomics for Global Economies. Prentice-Hall. ISBN 978-0745006086.
The Bretton Woods arrangement collapsed in 1971 when U.S. President Richard Nixon suspended the convertibility of the dollar into gold. Since then, the world has lived in a system of national fiat monies, with flexible exchange rates between the major currencies
- Dave (August 22, 2014). "Silver as Money: A History of US Silver Coins". Silver Coins. Retrieved March 7, 2019.
- Agency, Canada Revenue (June 22, 2017). "ARCHIVED – Eliminating the penny from Canada's coinage system - Canada.ca". www.cra-arc.gc.ca. Archived from the original on May 17, 2017. Retrieved May 8, 2018.
- "Million Dollar Coin". www.mint.ca. Archived from the original on January 25, 2015. Retrieved May 8, 2018.
- "FRB: H.6 Release--Money Stock and Debt Measures--January 27, 2011". www.federalreserve.gov. Archived from the original on July 10, 2017. Retrieved May 8, 2018.
- "About the CPMI". www.bis.org. February 2, 2016. Archived from the original on October 4, 2017. Retrieved May 8, 2018.
- "CPMI - BIS - Red Book: CPMI countries". www.bis.org. Archived from the original on October 20, 2017. Retrieved May 8, 2018.
- Robert Barro and Vittorio Grilli (1994), European Macroeconomics, Ch. 8, p. 139, Fig. 8.1. Macmillan, ISBN 0-333-57764-7.
- Hummel, Jeffrey Rogers. "Death and Taxes, Including Inflation: the Public versus Economists" (January 2007). "Death and Taxes, Including Inflation: The Public versus Economists · Econ Journal Watch : Inflation, deadweight loss, deficit, money, national debt, seigniorage, taxation, velocity". Archived from the original on December 25, 2013. Retrieved March 30, 2014. p. 56
- "Escaping from a Liquidity Trap and Deflation: The Foolproof Way and Others Archived February 26, 2014, at the Wayback Machine" Lars E.O. Svensson, Journal of Economic Perspectives, Volume 17, Issue 4 Fall 2003, pp. 145–66
- John Makin (November 2010). "Bernanke Battles U.S. Deflation Threat" (PDF). AEI. Archived from the original (PDF) on December 21, 2013.
- Paul Krugman; Gauti Eggertsson. "Debt, Deleveraging, and the liquidity trap: A Fisher‐Minsky‐Koo approach" (PDF). Archived (PDF) from the original on December 17, 2013.
- Taylor, Timothy (2008). Principles of Economics. Freeload Press. ISBN 978-1-930789-05-0.
- Foote, Christopher; Block, William; Crane, Keith & Gray, Simon (2004). "Economic Policy and Prospects in Iraq" (PDF). The Journal of Economic Perspectives. 18 (3): 47–70. doi:10.1257/0895330042162395..
- Budget and Finance (2003). "Iraq Currency Exchange". The Coalition Provisional Authority. Archived from the original on May 15, 2007. | https://en.wikipedia.org/wiki/Fiat_money | 21 |
The National Institute on Deafness and Other Communication Disorders (NIDCD) describes noise-induced hearing loss (NIHL) as a unique type of hearing loss that occurs due to repeated exposure to sounds above recommended decibel levels. NIHL can also occur from one-time exposure to extremely loud noises. According to the NIDCD, exposure to loud sounds can cause hearing loss by damaging sensitive structures in the inner ear.
NIHL can happen immediately, or it can take several years to develop. It can also occur in one or both ears and be a temporary or permanent condition. Unlike other types of hearing loss related to genetic or aging, however, hearing loss from noise exposure is preventable.
What Are the Statistics on NIHL?
People can develop NIHL at any age. A study conducted by the Centers for Disease Control (CDC) a decade ago indicated that approximately six percent of the adult population in the United States has some degree of NIHL. All study participants were under the age of 70.
The study also suggested that 17 percent of people between age 12 and 19 have NIHL due to ongoing exposure to loud noise. Young adults have a higher risk of developing NIHL due to listening to loud music through headphones or earbuds and attending live concerts more than older people do.
Detecting Harmful Environments Before They Damage Hearing
Although people have varying sensitivity to noise, certain situations can cause a hearing risk to everyone. These include:
- The noise is loud enough to cause ear pain or ringing in the ears.
- People have to shout for others sitting near them to hear what they are saying due to the noise level.
- Partial or full hearing loss lasts for several hours after exposure to extremely loud noises.
Unfortunately, some people believe the common myth that repeatedly exposing themselves to loud noise will make their ears able to withstand it better. Not only is this untrue, but people who already have NIHL may not experience sounds as loudly as they did before the damage occurred. A proactive approach to preventing noise-induced hearing loss is key since few treatment options exist once it has already developed.
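To make the idea of exposure above recommended decibel levels more concrete, the short sketch below estimates an allowable daily exposure time from a steady sound level. It assumes a NIOSH-style criterion of 85 dBA for an 8-hour day with a 3 dB exchange rate; the criterion values, the example levels, and the `allowed_hours` helper are illustrative assumptions, not medical or regulatory guidance (occupational standards such as OSHA's use different values).

```python
# Sketch: estimate allowable daily exposure time from a steady sound level.
# Assumes a NIOSH-style criterion (85 dBA for 8 hours, 3 dB exchange rate);
# actual occupational and consumer guidelines vary.

REFERENCE_LEVEL_DBA = 85.0   # reference level for an 8-hour day (assumption)
REFERENCE_HOURS = 8.0
EXCHANGE_RATE_DB = 3.0       # each +3 dB halves the allowable time (assumption)

def allowed_hours(level_dba: float) -> float:
    """Allowable daily exposure in hours for a steady sound level in dBA."""
    return REFERENCE_HOURS / 2 ** ((level_dba - REFERENCE_LEVEL_DBA) / EXCHANGE_RATE_DB)

if __name__ == "__main__":
    for level in (85, 94, 100, 110):   # e.g. from busy traffic up to a loud concert
        print(f"{level} dBA -> about {allowed_hours(level):.2f} h per day")
    # Foam earplugs rated around 25 dB of attenuation shift each level down,
    # so a 110 dBA environment is treated roughly like 85 dBA at the ear.
    print(f"110 dBA with ~25 dB earplugs -> about {allowed_hours(110 - 25):.2f} h per day")
```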
How to Prevent Noise-Induced Hearing Loss from Developing
The simplest thing people can do to reduce the risk of NIHL is to avoid situations that can trigger it. Since this isn’t always possible, the next best thing is to take steps to protect hearing when loud noise exposure is unavoidable. For example, people who drive in heavy city traffic every day can invest in specialty earmuffs to wear until they get to their destination or at least to quieter streets. Other ways to prevent temporary or permanent hearing damage include:
- Use materials that absorb sound at home and work such as rubber mats, carpeting, curtains, or double-paned windows.
- Wear a pair of disposable foam earplugs in noisy environments or when working with loud machinery to reduce noise intake by as much as 25 decibels. Examples of times people should always wear earplugs include when cutting the grass or using a leaf blower, riding a motorcycle, riding a snowmobile, attending concerts, using power tools, or when traveling on noisy roads.
- Don’t turn up the radio or TV to drown out unpleasant loud noises as this will only increase the likelihood of NIHL.
- Use only one loud machine at a time.
- Musicians should wear specialized hearing devices to protect their hearing when performing.
- Schedule an annual hearing check if regularly exposed to loud noise at home or work.
How IQbuds2 MAX May Help
Many people use earbuds or headphones to tune out surrounding noise. You’ll often see this on crowded buses, throughout busy cafes, or in open office spaces. However, masking loud environmental noise with even louder digital audio only compounds the risks of hearing damage.
Headphones or earbuds with active noise cancellation (ANC) assist by reversing or cancelling out the frequencies of outside noise. This enables the listener to stream their preferred digital audio at a lower, safer volume.
IQbuds2 MAX go a step further with both ANC and their speech-in-noise-control (SINC) technology. The SINC functions of MAX empower users to selectively pass through outside noise to their earbuds at the volumes and frequencies of their choosing. This means they can more clearly hear speech and conversations, while blocking out other distracting or potentially harmful environmental noise.
Symptoms Indicating NIHL Has Already Occurred
When hearing loss occurs slowly, people don’t always recognize that they have a problem and therefore don’t take precautions to protect their hearing in time. Anyone experiencing one or more of these issues shouldn’t hesitate to schedule an appointment with an audiologist for testing.
- The person with NIHL feels like people are mumbling when they are speaking with a normal pitch and tone.
- Difficulty hearing conversation when background noise is present such as at a restaurant.
- Struggling to hear the voices of women or children.
- Not being able to follow conversations at social gatherings or work meetings, thereby missing the entire context.
- Others complain that the person with NIHL speaks too loudly or turns the TV up too high.
- Pain in the ears with loud noise exposure.
NIHL Treatment Options
The bad news about NIHL is that it isn’t possible to restore hearing after damage to the hair cells of the inner ear. However, programmable smart earbuds such as IQbuds2 MAX, tailored to the user’s hearing loss, may well help make conversations clearer for people with mild to moderate hearing challenges or who are at the early stages of their hearing health journey.
14 | Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species, either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchange of kinetic energy of particles through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when bulk flow of a fluid (gas or liquid) carries heat along with the flow of matter in the fluid. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion, as well. Another form of convection is forced convection. In this case the fluid is forced to flow by use of a pump, fan or other mechanical means.
Heat is defined in physics as the transfer of thermal energy across a well-defined boundary around a thermodynamic system. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy, work, or the amount of heat.
Heat transfer is a process function (or path function), as opposed to functions of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only the net difference between the initial and final states of the process.
Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat-flow through a surface.
In engineering contexts, the term heat is taken as synonymous to thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, and that is also common in the language of laymen and everyday life.
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate prediction of conversion from any one to the others.
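The analogy among these transport laws can be made concrete in a few lines of code: each flux is a transport coefficient multiplied by a driving gradient. This is only an illustrative sketch; the coefficient values and gradients below are assumptions, and sign conventions differ between texts.

```python
# Sketch: the shared form of the three transport laws mentioned above,
# flux = -(transport coefficient) * (driving gradient).
# All numbers are illustrative assumptions.

def flux(coefficient: float, gradient: float) -> float:
    """Generic gradient-driven flux (Fourier / Newton-for-fluids / Fick analogy)."""
    return -coefficient * gradient

k = 0.6       # thermal conductivity of water, W/(m*K) (assumed)
mu = 1e-3     # dynamic viscosity of water, Pa*s (assumed)
D = 2e-9      # diffusion coefficient of a dilute solute in water, m^2/s (assumed)

print("Heat flux      :", flux(k, -50.0), "W/m^2")        # Fourier's law, dT/dx = -50 K/m
print("Momentum flux  :", flux(mu, -100.0), "Pa")         # Newton's law for fluids, du/dy = -100 1/s
print("Diffusive flux :", flux(D, -10.0), "mol/(m^2*s)")  # Fick's law, dc/dx = -10 mol/m^4
```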
Thermal engineering concerns the generation, use, conversion, and exchange of heat. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes.
The fundamental modes of heat transfer are:
- Advection: the transport mechanism of a fluid from one location to another, dependent on the motion and momentum of that fluid.
- Conduction or diffusion: the transfer of energy between objects that are in physical contact. Thermal conductivity is the property of a material to conduct heat, evaluated primarily in terms of Fourier's law for heat conduction.
- Convection: the transfer of energy between an object and its environment, due to fluid motion. The average temperature is a reference for evaluating properties related to convective heat transfer.
- Radiation: the transfer of energy by the emission of electromagnetic radiation.
By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle and heating a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula:

q = v ρ c_p ΔT

where:
- q is heat flux (W/m²),
- ρ is density (kg/m³),
- c_p is heat capacity at constant pressure (J/(kg·K)),
- ΔT is the difference in temperature (K),
- v is velocity (m/s).
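As a minimal numerical sketch of the relation above, the following evaluates the advective heat flux for water flowing with a modest temperature difference; the property values and flow conditions are rounded, illustrative assumptions.

```python
# Sketch: advective heat flux q = v * rho * c_p * dT, using the symbols
# defined above. Property values for water are rounded assumptions.

rho = 1000.0      # density of water, kg/m^3 (assumed)
c_p = 4186.0      # specific heat at constant pressure, J/(kg*K) (assumed)
dT = 20.0         # temperature difference carried by the flow, K
v = 0.5           # flow velocity, m/s

q = v * rho * c_p * dT   # heat flux, W/m^2
print(f"Advective heat flux: {q:.3e} W/m^2")   # roughly 4.2e7 W/m^2
```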
On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact.

Conduction is heat transfer from one place to another without bulk movement of particles—for example, when placing a hand on a cold glass of water, heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction occurs, since air is a poor conductor of heat.

Steady state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant, so that after a time, the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady state conduction, the amount of heat entering a section is equal to the amount of heat coming out, since the change in temperature (a measure of heat energy) is zero. An example of steady state conduction is the heat flow through the walls of a warm house on a cold day—inside, the house is maintained at a high temperature and, outside, the temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall, and the spatial distribution of temperature in the walls remains approximately constant over time.
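The steady-state wall example lends itself to a short calculation with Fourier's law, written for a plane wall as q = k ΔT / thickness. The conductivity, wall dimensions, and temperatures below are illustrative assumptions.

```python
# Sketch: steady-state conduction through a plane wall (Fourier's law).
# q = k * (T_inside - T_outside) / thickness, per unit wall area.
# All numbers are illustrative assumptions.

k = 0.04          # effective thermal conductivity of an insulated wall, W/(m*K) (assumed)
thickness = 0.20  # wall thickness, m
area = 30.0       # wall area, m^2
T_inside = 21.0   # indoor temperature, deg C
T_outside = -5.0  # outdoor temperature, deg C

q = k * (T_inside - T_outside) / thickness   # heat flux, W/m^2
Q = q * area                                 # total heat flow through the wall, W
print(f"Heat flux through the wall: {q:.1f} W/m^2")
print(f"Total heat loss: {Q:.0f} W")
```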
Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study.
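As noted above, transient problems are usually handled numerically. The following is a minimal explicit finite-difference (FTCS) sketch of the one-dimensional heat equation ∂T/∂t = α ∂²T/∂x² for a rod with fixed end temperatures; the geometry, material properties, and boundary values are assumptions chosen only to keep the example short.

```python
# Sketch: transient 1-D conduction, dT/dt = alpha * d2T/dx2, solved with an
# explicit finite-difference (FTCS) scheme. Geometry and material values are
# illustrative assumptions.

alpha = 1e-4          # thermal diffusivity, m^2/s (assumed)
length = 0.1          # rod length, m
n = 51                # number of grid points
dx = length / (n - 1)
dt = 0.4 * dx * dx / alpha   # keep r = alpha*dt/dx^2 <= 0.5 for stability

T = [20.0] * n                 # initial temperature everywhere, deg C
T[0], T[-1] = 100.0, 100.0     # fixed hot boundaries (assumed)

t_end = 10.0                   # simulate 10 seconds
steps = int(t_end / dt)
for _ in range(steps):
    T_new = T[:]
    for i in range(1, n - 1):
        T_new[i] = T[i] + alpha * dt / dx**2 * (T[i + 1] - 2 * T[i] + T[i - 1])
    T = T_new

print(f"Midpoint temperature after {t_end:.0f} s: {T[n // 2]:.2f} deg C")
```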
Convective heat transfer, or simply, convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially the transfer of heat via mass transfer. Bulk motion of fluid enhances heat transfer in many physical situations, such as (for example) between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction) the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction.
Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current.
Convective cooling is sometimes described as Newton's law of cooling:
The rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings.
However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply.
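When the linear approximation does hold, a small body with a uniform temperature cools exponentially toward the ambient temperature. The sketch below evaluates that closed-form solution for a cooling cup of liquid; the heat transfer coefficient, surface area, and thermal mass are illustrative assumptions.

```python
# Sketch: Newton's law of cooling, dT/dt = -h*A/(m*c) * (T - T_ambient),
# whose closed-form solution is T(t) = T_amb + (T0 - T_amb) * exp(-t/tau).
# All parameter values are illustrative assumptions.

import math

h = 10.0        # convective heat transfer coefficient, W/(m^2*K) (assumed)
A = 0.03        # exposed surface area, m^2 (assumed)
m = 0.3         # mass of liquid, kg
c = 4186.0      # specific heat, J/(kg*K)
T0 = 90.0       # initial temperature, deg C
T_amb = 20.0    # ambient temperature, deg C

tau = m * c / (h * A)           # time constant, s
for minutes in (0, 10, 30, 60):
    t = minutes * 60.0
    T = T_amb + (T0 - T_amb) * math.exp(-t / tau)
    print(f"t = {minutes:3d} min: T = {T:.1f} deg C")
```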
Convection vs. conduction
In a body of fluid that is heated from underneath its container, conduction and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong.
- g is acceleration due to gravity,
- ρ is the density with being the density difference between the lower and upper ends,
- μ is the dynamic viscosity,
- α is the Thermal diffusivity,
- β is the volume thermal expansivity (sometimes denoted α elsewhere),
- T is the temperature,
- ν is the kinematic viscosity, and
- L is characteristic length.
The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system.
The buoyancy force driving the convection is roughly Δρ g L³, so the corresponding pressure is roughly Δρ g L. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals μ V / L = μ / T_conv, where V is the typical fluid velocity due to convection and T_conv the order of its timescale. The conduction timescale, on the other hand, is of the order of T_cond = L² / α.
Convection occurs when the Rayleigh number is above 1,000–2,000.
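A minimal sketch of the Rayleigh-number criterion just described; the property values below are rough figures for water near room temperature and are assumptions for illustration, not data from the text.

```python
# Rayleigh number Ra = g * beta * dT * L**3 / (nu * alpha).
# Property values are rough, assumed figures for water near room temperature.

def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    return g * beta * delta_t * length**3 / (nu * alpha)

if __name__ == "__main__":
    ra = rayleigh_number(
        g=9.81,          # gravitational acceleration, m/s^2
        beta=2.1e-4,     # volume thermal expansivity, 1/K
        delta_t=5.0,     # temperature difference across the layer, K
        length=0.02,     # characteristic length (layer depth), m
        nu=1.0e-6,       # kinematic viscosity, m^2/s
        alpha=1.4e-7,    # thermal diffusivity, m^2/s
    )
    # Convection is expected once Ra exceeds roughly 1,000-2,000.
    print(f"Ra = {ra:.3g}; convection expected: {ra > 2000}")
```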
Thermal radiation is energy emitted by matter as electromagnetic waves, due to the pool of thermal energy in all matter with a temperature above absolute zero. Thermal radiation propagates without the presence of matter through the vacuum of space.
Thermal radiation is a direct result of the random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation, which carries energy away from the surface.
The Stefan-Boltzmann equation, which describes the rate of transfer of radiant energy, is as follows for an object in a vacuum:
q = ε σ T⁴
For radiative transfer between two objects, the equation is as follows:
q = ε σ F_ab (T_a⁴ − T_b⁴)
where:
- q is the heat flux,
- ε is the emissivity (unity for a black body),
- σ is the Stefan–Boltzmann constant,
- F_ab is the view factor between two surfaces a and b, and
- T_a and T_b are the absolute temperatures (in kelvins or degrees Rankine) for the two objects.
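The two relations above can be evaluated directly in a few lines. In the sketch below the emissivity, view factor, and temperatures are assumed example values rather than figures from the text.

```python
# Radiative heat flux: q = eps * sigma * T**4 for a single surface in vacuum,
# and q = eps * sigma * F_ab * (Ta**4 - Tb**4) between two surfaces.
# Emissivity, view factor and temperatures below are assumed example values.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def emitted_flux(emissivity, temperature_k):
    return emissivity * SIGMA * temperature_k**4

def exchange_flux(emissivity, view_factor, t_a_k, t_b_k):
    return emissivity * SIGMA * view_factor * (t_a_k**4 - t_b_k**4)

if __name__ == "__main__":
    print(f"Emitted flux at 500 K: {emitted_flux(0.9, 500.0):.1f} W/m^2")
    print(f"Net exchange 500 K -> 300 K: {exchange_flux(0.9, 1.0, 500.0, 300.0):.1f} W/m^2")
```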
Radiation is typically only important for very hot objects, or for objects with a large temperature difference.
Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation – arriving within a narrow angle i.e. coming from a source much smaller than its distance – can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower and during the day it can heat water to 285 °C (545 °F).
The temperature attainable at the target is limited by the temperature of the hot source of radiation (the T⁴ law means that the reverse flow of radiation back to the source rises as the target heats up). The Sun, with a surface temperature given here as roughly 4,000 K, allows a small probe placed at the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France to reach roughly 3,000 °C (about 3,273 K).
Phase transition, or phase change, is the transformation of a thermodynamic system from one phase or state of matter to another by heat transfer. Phase change examples are the melting of ice or the boiling of water. The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation.
Phase transitions involve the four fundamental states of matter:
- Solid – Deposition, freezing and solid to solid transformation.
- Gas – Boiling / evaporation, recombination / deionization, and sublimation.
- Liquid – Condensation and melting / fusion.
- Plasma – Ionization.
The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates resulting in an abrupt change in vapor volume.
In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB).
At similar standard atmospheric pressure and high temperatures, the hydrodynamically-quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low, but rise slowly with temperature. Any contact between fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF).
The Leidenfrost effect demonstrates how a vapor layer on the heater's surface slows heat transfer. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier".
Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of the heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several types of condensation:
- Homogeneous condensation, as during a formation of fog.
- Condensation in direct contact with subcooled liquid.
- Condensation on direct contact with a cooling wall of a heat exchanger: This is the most common mode used in industry:
- Filmwise condensation is when a liquid film is formed on the subcooled surface, and usually occurs when the liquid wets the surface.
- Dropwise condensation is when liquid drops are formed on the subcooled surface, and usually occurs when the liquid does not wet the surface.
- Dropwise condensation is difficult to sustain reliably; therefore, industrial equipment is normally designed to operate in filmwise condensation mode.
Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically with heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state.
Heat transfer can be modeled in various ways.
The heat equation is an important partial differential equation that describes the distribution of heat (or variation in temperature) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.).
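As a minimal illustration of solving the heat equation numerically, the sketch below applies an explicit finite-difference scheme to the one-dimensional equation ∂T/∂t = α ∂²T/∂x². The rod length, diffusivity, and boundary temperatures are assumed values, and the time step is chosen to stay below the usual explicit stability limit; this is not any specific method from the cited literature.

```python
# Explicit finite-difference solution of the 1-D heat equation dT/dt = alpha * d2T/dx2.
# Geometry, diffusivity and boundary temperatures are assumed example values.

def solve_heat_1d(alpha, length, nx, t_left, t_right, t_init, total_time):
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / alpha               # below the explicit stability limit 0.5*dx^2/alpha
    steps = int(total_time / dt)
    temps = [t_init] * nx
    temps[0], temps[-1] = t_left, t_right  # fixed-temperature boundaries
    for _ in range(steps):
        new = temps[:]
        for i in range(1, nx - 1):
            new[i] = temps[i] + alpha * dt / dx**2 * (temps[i + 1] - 2 * temps[i] + temps[i - 1])
        temps = new
    return temps

if __name__ == "__main__":
    profile = solve_heat_1d(alpha=1e-4, length=0.1, nx=21, t_left=100.0,
                            t_right=0.0, t_init=0.0, total_time=10.0)
    print(" ".join(f"{t:5.1f}" for t in profile))
```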
Lumped system analysis
Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling.
System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may be changing in time.
In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
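A sketch of the lumped-capacitance check and the resulting exponential cooling law described above; the sphere size, material properties, and convection coefficient are assumed values chosen only to illustrate a small Biot number.

```python
# Lumped-capacitance approximation: valid when Bi = h * Lc / k is small (commonly < 0.1).
# Then T(t) = T_inf + (T0 - T_inf) * exp(-h * A * t / (rho * c * V)).
# Material properties and dimensions below are assumed example values.
import math

def biot_number(h, char_length, k_solid):
    return h * char_length / k_solid

def lumped_temperature(t, t0, t_inf, h, area, rho, c, volume):
    tau = rho * c * volume / (h * area)   # time constant of the exponential decay
    return t_inf + (t0 - t_inf) * math.exp(-t / tau)

if __name__ == "__main__":
    radius = 0.01                          # m, small metal sphere (assumed)
    volume = 4.0 / 3.0 * math.pi * radius**3
    area = 4.0 * math.pi * radius**2
    bi = biot_number(h=25.0, char_length=volume / area, k_solid=400.0)
    print(f"Bi = {bi:.4f} (lumped model {'OK' if bi < 0.1 else 'questionable'})")
    print(f"T after 600 s: {lumped_temperature(600, 90.0, 20.0, 25.0, area, 8960.0, 385.0, volume):.1f} °C")
```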
Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, and power station engineering.
Insulation, radiance and resistance
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property that measures how strongly an object or material resists heat flow; it is the ratio of the temperature difference across the object to the rate of heat flow through it.
Radiance or spectral radiance are measures of the quantity of radiation that passes through or is emitted. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator.
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity=1 - emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature.
A thermocouple is a temperature-measuring device and widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
A thermoelectric cooler is a solid state electronic device that pumps (transfers) heat from one side of the device to the other when electric current is passed through it. It is based on the Peltier effect.
A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.
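To make the flow-arrangement distinction concrete, the sketch below computes the log-mean temperature difference (LMTD) for parallel-flow and counter-flow arrangements from assumed inlet and outlet temperatures. It illustrates the standard textbook formula only; the temperatures are not data about any specific exchanger.

```python
# Log-mean temperature difference (LMTD) for parallel-flow vs. counter-flow exchangers.
# Inlet/outlet temperatures are assumed example values.
import math

def lmtd(dt1, dt2):
    if math.isclose(dt1, dt2):
        return dt1                     # limit when both end differences are equal
    return (dt1 - dt2) / math.log(dt1 / dt2)

if __name__ == "__main__":
    hot_in, hot_out = 150.0, 90.0      # hot stream temperatures, °C (assumed)
    cold_in, cold_out = 20.0, 60.0     # cold stream temperatures, °C (assumed)

    parallel = lmtd(hot_in - cold_in, hot_out - cold_out)
    counter = lmtd(hot_in - cold_out, hot_out - cold_in)
    # For the same terminal temperatures, counter flow gives the larger driving force.
    print(f"LMTD parallel flow: {parallel:.1f} K, counter flow: {counter:.1f} K")
```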
A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
Efficient energy use is the goal of reducing the amount of energy required for heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures, such as insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors.
- Smart meter is a device that records electric energy consumption in intervals.
- Thermal transmittance is the rate of transfer of heat through a structure divided by the difference in temperature across the structure. It is expressed in watts per square meter per kelvin, or W/(m2K). Well-insulated parts of a building have a low thermal transmittance, whereas poorly-insulated parts of a building have a high thermal transmittance (a simple heat-loss sketch follows this list).
- Thermostat is a device to monitor and control temperature.
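A minimal sketch of how the thermal transmittance (U-value) entry above is used in practice: the steady heat loss through a building element is approximately Q = U · A · ΔT. The U-values, area, and temperatures below are assumed example figures, not data from the text.

```python
# Steady heat loss through a building element: Q = U * A * (T_inside - T_outside).
# U-values, area and temperatures are assumed example figures.

def heat_loss_watts(u_value, area_m2, t_inside, t_outside):
    return u_value * area_m2 * (t_inside - t_outside)

if __name__ == "__main__":
    poorly_insulated = heat_loss_watts(u_value=1.5, area_m2=10.0, t_inside=20.0, t_outside=0.0)
    well_insulated = heat_loss_watts(u_value=0.25, area_m2=10.0, t_inside=20.0, t_outside=0.0)
    print(f"Poorly insulated wall: {poorly_insulated:.0f} W")
    print(f"Well insulated wall:   {well_insulated:.0f} W")
```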
Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases.
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases, and is re-radiated in all directions. Since part of this re-radiation is back towards the surface and the lower atmosphere, it results in an elevation of the average surface temperature above what it would be in the absence of the gases.
Heat transfer in the human body
The principles of heat transfer in engineering systems can be applied to the human body in order to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients which provides energy for the systems of the body. The human body must maintain a consistent internal temperature in order to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced in order to keep the internal temperature at a healthy level.
Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered.
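A rough sketch of the surface convection just described, using Q = h · A · (T_skin − T_air); the convection coefficients, skin area, and temperatures are assumed illustrative values rather than physiological data from the text.

```python
# Convective heat loss from the skin surface: Q = h * A * (T_skin - T_air).
# Coefficients, area and temperatures are assumed illustrative values.

def convective_loss(h, area_m2, t_skin, t_air):
    return h * area_m2 * (t_skin - t_air)

if __name__ == "__main__":
    still_air = convective_loss(h=5.0, area_m2=1.8, t_skin=33.0, t_air=20.0)
    moving_air = convective_loss(h=25.0, area_m2=1.8, t_skin=33.0, t_air=20.0)
    # A higher air velocity raises h and therefore the rate of heat loss.
    print(f"Still air: {still_air:.0f} W, moving air: {moving_air:.0f} W")
```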
In order to ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the viscosity (thickness) of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid viscosity can all be related with the Reynolds number, a dimensionless number used in fluid mechanics to characterize the flow of fluids.
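The Reynolds number mentioned above can be estimated as Re = ρ v D / μ. In the sketch below the blood density, viscosity, vessel diameter, and flow velocity are assumed round numbers for illustration only.

```python
# Reynolds number for flow in a vessel: Re = rho * v * D / mu.
# Density, viscosity, diameter and velocity are assumed round numbers.

def reynolds_number(rho, velocity, diameter, mu):
    return rho * velocity * diameter / mu

if __name__ == "__main__":
    re = reynolds_number(rho=1060.0,      # fluid density, kg/m^3 (assumed)
                         velocity=0.2,    # mean flow velocity, m/s (assumed)
                         diameter=0.004,  # vessel diameter, m (assumed)
                         mu=3.5e-3)       # dynamic viscosity, Pa*s (assumed)
    # Low Re indicates smooth, laminar flow; high Re indicates turbulence.
    print(f"Re = {re:.0f} ({'laminar' if re < 2300 else 'turbulent'})")
```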
Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor which removes heat from the surface of the body. The rate of evaporation heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. Therefore, the maximum of heat transfer will occur when the skin is completely wet. The body continuously loses water by evaporation but the most significant amount of heat loss occurs during periods of increased physical activity.
Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures (the dry-bulb and wet-bulb temperatures), the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water in air occurs; thus, there is no cooling effect.
In quantum physics, laser cooling is used to achieve temperatures of near absolute zero (−273.15 °C, −459.67 °F) of atomic and molecular samples to observe unique quantum effects that can only occur at this heat level.
- Doppler cooling is the most common method of laser cooling.
- Sympathetic cooling is a process in which particles of one type cool particles of another type. Typically, atomic ions that can be directly laser-cooled are used to cool nearby ions or atoms. This technique allows cooling of ions and atoms that cannot be laser cooled directly.
Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect.
Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO2) at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere.
Thermal energy storage
Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity.
- Combined forced and natural convection
- Heat capacity
- Heat transfer physics
- Stefan–Boltzmann law
- Thermal contact conductance
- Thermal physics
- Thermal resistance in electronics
- Heat transfer enhancement
- Geankoplis, Christie John (2003). Transport Processes and Separation Principles (4th ed.). Prentice Hall. ISBN 0-13-101367-X.
- "B.S. Chemical Engineering". New Jersey Institute of Technology, Chemical Engineering Departement. Archived from the original on 10 December 2010. Retrieved 9 April 2011.
- Lienhard, John H. IV; Lienhard, John H. V (2019). A Heat Transfer Textbook (5th ed.). Mineola, NY: Dover Pub. p. 3.
- Welty, James R.; Wicks, Charles E.; Wilson, Robert Elliott (1976). Fundamentals of momentum, heat, and mass transfer (2nd ed.). New York: Wiley. ISBN 978-0-471-93354-0. OCLC 2213384.
- Faghri, Amir; Zhang, Yuwen; Howell, John (2010). Advanced Heat and Mass Transfer. Columbia, MO: Global Digital Press. ISBN 978-0-9842760-0-4.
- Taylor, R. A. (2012). "Socioeconomic impacts of heat transfer research". International Communications in Heat and Mass Transfer. 39 (10): 1467–1473. doi:10.1016/j.icheatmasstransfer.2012.09.007.
- "Mass transfer". Thermal-FluidsPedia. Thermal Fluids Central.
- Abbott, J.M.; Smith, H.C.; Van Ness, M.M. (2005). Introduction to Chemical Engineering Thermodynamics (7th ed.). Boston, Montreal: McGraw-Hill. ISBN 0-07-310445-0.
- "Heat conduction". Thermal-FluidsPedia. Thermal Fluids Central.
- Çengel, Yunus (2003). Heat Transfer: A practical approach (2nd ed.). Boston: McGraw-Hill. ISBN 978-0-07-245893-0.
- "Convective heat transfer". Thermal-FluidsPedia. Thermal Fluids Central.
- "Convection — Heat Transfer". Engineers Edge. Retrieved 20 April 2009.
- Incropera, Frank P.; et al. (2012). Fundamentals of heat and mass transfer (7th ed.). Wiley. p. 603. ISBN 978-0-470-64615-1.
- "Radiation". Thermal-FluidsPedia. Thermal Fluids Central.
- Howell, John R.; Menguc, M.P.; Siegel, Robert (2015). Thermal Radiation Heat Transfer. Taylor and Francis.
- Mojiri, A (2013). "Spectral beam splitting for efficient conversion of solar energy—A review". Renewable and Sustainable Energy Reviews. 28: 654–663. doi:10.1016/j.rser.2013.08.026.
- Taylor, Robert A.; Phelan, Patrick E.; Otanicar, Todd P.; Walker, Chad A.; Nguyen, Monica; Trimble, Steven; Prasher, Ravi (March 2011). "Applicability of nanofluids in high flux solar collectors". Journal of Renewable and Sustainable Energy. 3 (2): 023104. doi:10.1063/1.3571565.
- Megan Crouse: This Gigantic Solar Furnace Can Melt Steel manufacturing.net, 28 July 2016, retrieved 14 April 2019.
- See Flashes in the Sky: Earth's Gamma-Ray Bursts Triggered by Lightning
- David.E. Goldberg (1988). 3,000 Solved Problems in Chemistry (1st ed.). McGraw-Hill. Section 17.43, page 321. ISBN 0-07-023684-4.
- Louis Theodore, R. Ryan Dupont and Kumar Ganesan (Editors) (1999). Pollution Prevention: The Waste Management Approach to the 21st Century. CRC Press. Section 27, page 15. ISBN 1-56670-495-2.
- Tro, Nivaldo (2008). Chemistry: A Molecular Approach. Upper Saddle River, New Jersey: Prentice Hall. p. 479.
When a substance condenses from a gas to a liquid, the same amount of heat is involved, but the heat is emitted rather than absorbed.
- C. Michael Hogan (2011) Sulfur, Encyclopedia of Earth, eds. A. Jorgensen and C. J. Cleveland, National Council for Science and the environment, Washington DC
- Wendl, M. C. (2012). Theoretical Foundations of Conduction and Convection Heat Transfer. Wendl Foundation.
- Peng, Z.; Doroodchi, E.; Moghtaderi, B. (2020). "Heat transfer modelling in Discrete Element Method (DEM)-based simulations of thermal processes: Theory and model development". Progress in Energy and Combustion Science. 79, 100847: 100847. doi:10.1016/j.pecs.2020.100847.
- "How to simplify for small Biot numbers". Retrieved 21 December 2016.
- Fundamentals of Classical Thermodynamics, 3rd ed. p. 159, (1985) by G. J. Van Wylen and R. E. Sonntag: "A heat engine may be defined as a device that operates in a thermodynamic cycle and does a certain amount of net positive work as a result of heat transfer from a high-temperature body and to a low-temperature body. Often the term heat engine is used in a broader sense to include all devices that produce work, either through heat transfer or combustion, even though the device does not operate in a thermodynamic cycle. The internal-combustion engine and the gas turbine are examples of such devices, and calling these heat engines is an acceptable use of the term."
- Mechanical efficiency of heat engines, p. 1 (2007) by James R. Senf: "Heat engines are made to provide mechanical energy from thermal energy."
- "What is a Heat Exchanger?". Lytron Total Thermal Solutions. Retrieved 12 December 2018.
- "EnergySavers: Tips on Saving Money & Energy at Home" (PDF). U.S. Department of Energy. Retrieved 2 March 2012.
- Hartman, Carl; Bibb, Lewis. (1913). "The Human Body and Its Enemies". World Book Co., p. 232.
- Cengel, Yunus A. and Ghajar, Afshin J. "Heat and Mass Transfer: Fundamentals and Applications", McGraw-Hill, 4th Edition, 2010.
- Tao, Xiaoming. "Smart fibres, fabrics, and clothing", Woodhead Publishing, 2001
- Wilmore, Jack H.; Costill, David L.; Kenney, Larry (2008). Physiology of Sport and Exercise (6th ed.). Human Kinetics. p. 256. ISBN 9781450477673.
- The global infrared energy budget of the thermosphere from 1947 to 2016 and implications for solar variability Martin G. Mlynczak Linda A. Hunt James M. Russell III B. Thomas Marshall Christopher J. Mertens R. Earl Thompson https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2016GL070965
- A Heat Transfer Textbook - (free download).
- Thermal-FluidsPedia - An online thermal fluids encyclopedia.
- Hyperphysics Article on Heat Transfer - Overview
- Interseasonal Heat Transfer - a practical example of how heat transfer is used to heat buildings without burning fossil fuels.
- Aspects of Heat Transfer, Cambridge University
- Thermal-Fluids Central
- Energy2D: Interactive Heat Transfer Simulations for Everyone
COVID-19 is the disease caused by the SARS-CoV-2 virus, the latest zoonotic coronavirus to cause an outbreak, in this case a global pandemic.
Other zoonotic coronaviruses are infamous for causing outbreaks of such severe illnesses as the 2002 severe acute respiratory syndrome (SARS) and the 2012 Middle East respiratory syndrome (MERS).
Zoonotic viruses are viruses that are capable of interspecies transmission and thus can infect humans as well as animals.
Coronaviruses (abbreviated CoV) are a family of viruses that infect humans as well as animals. Coronaviruses are so named because their outer proteins resemble a crown (corona means “crown” in Latin).
A human coronavirus infection usually results in mild to moderate symptoms like those of the common cold, including stuffy nose, cough, and sore throat.
The Centers for Disease Control and Prevention (CDC) reports four commonly circulating coronavirus strains: 229E, NL63, OC43, and HKU1.
These strains readily spread around the world each year; most people will get infected with one or more of those coronaviruses during their life.
Rarely, a new human coronavirus appears that causes more severe illness. This type of virus emerges when a coronavirus strain that usually infects only animals crosses the species barrier to infect humans.
Examples of new human coronaviruses that can cause more severe respiratory symptoms include SARS-CoV, MERS-CoV, and the 2019 novel coronavirus (SARS-CoV-2), which causes COVID-19.
SARS originated in southern China and first appeared in November 2002. It caused a worldwide epidemic between 2002 and 2003, resulting in 8,098 possible cases and 774 deaths (a 9.6% case fatality rate).
No SARS cases have been reported since 2004.
MERS-CoV, occasionally known as camel flu, first emerged in Jeddah, Saudi Arabia, in 2012 and caused a largely local outbreak in countries on and around the Arabian Peninsula.
The MERS epidemic then spread to other countries in the Middle East, followed by countries in Asia, North and South America, Europe, and Africa.
MERS infections are still occurring, with new cases confirmed by the WHO as recently as March 2021.
Between 2012 and March 2021, the WHO reported 2,586 cases and 939 deaths (a 34.4% case fatality rate). Two cases of MERS were reported in the United States in 2014, but none since then.
INITIAL COVID-19 CASES
Although the origin of COVID-19 has been called into question, a cluster of 27 cases of so-called viral pneumonia of unknown origin, some of which were severe and not responding to conventional treatment, was reported to China’s National Health Commission (NHC) on December 30, 2019.
Most of the people initially infected were vendors at the Huanan Seafood Wholesale Market in Wuhan, China, where they were selling live or recently killed fish, birds, and other animals.
The seafood market was closed on January 1, 2020, for cleaning and disinfection. Officials have yet to determine the animal species that initially conveyed the virus to humans.
It is possible that the novel virus first emerged as early as October 2019.
An early genomic analysis published in The Lancet on January 29, 2020, showed up to 99% sequence identity between ten SARS-CoV-2 genome sequences from nine patients in Wuhan, eight of whom had visited the Huanan Seafood Wholesale Market.
This early analysis also observed that SARS-CoV-2 is closely related to two other bat-derived SARS-like coronaviruses (88% sequence identity), suggesting that bats may be the original reservoir of the virus.
NEW VARIANTS OF COVID-19
All viruses mutate frequently, and COVID-19 is no exception.
As of April 2021, the CDC, NIH, FDA, and several other U.S. government agencies have developed a three-level classification for variants of the COVID-19 virus:
- Variants of interest (VOIs): These are variants with specific genetic markers that may affect the speed of transmission of the virus, weaken the effectiveness of vaccines or antibodies, or reduce the effectiveness of treatments for COVID-19. VOIs circulating in the United States as of April 2021 include B.1.526, B.1.526.1, B.1.525, and P.2.
- Variants of concern (VOCs): These are variants of the virus with evidence of causing more severe disease, significantly reducing the effectiveness of vaccines or treatments, or being more readily transmissible. The B.1.1.7, B.1.351, P.1, B.1.427, and B.1.429 alternatives circulating in the United States are classified as variants of concern as of April 2021.
- Variants of high consequence (VOHCs): These are variants of the COVID-19 virus with clear evidence that preventive measures or medical treatments have significantly reduced effectiveness relative to previously known variants. A VOHC would require notification to the WHO, reporting to the CDC, public announcement of strategies to prevent or contain the transmission, and recommendations for updating treatments and vaccines. No VOHCs are circulating in the United States as of April 2021.
Definitions and other information about COVID-19 variants can be found at https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/variant-surveillance/variant-info.html.
TRANSMISSION OF COVID-19
Although the initial group of people infected with COVID-19 was believed to have been infected by animals from the Huanan market, human-to-human transmission has since been verified worldwide.
Like other respiratory illnesses, COVID-19 is transmitted via respiratory droplets or aerosols produced when an infected person talks, sings, coughs, or sneezes.
A new infection occurs when respiratory droplets or aerosols exhaled by an infected person enter the mouth, nose, or eyes of other people who are in close contact with the infected person.
There is, however, no evidence as of early 2021 that COVID-19 can be transmitted by skin-to-skin contact.
As found in a new research study reported in the Annals of Internal Medicine in November 2020, roughly a third of people with COVID-19 show no symptoms.
These people can unknowingly spread the disease while having no symptoms (asymptomatic transmission) or before developing symptoms (pre-symptomatic transmission).
People may transmit the virus to others as early as two days before they develop symptoms themselves.
This finding prompted the CDC to recommend that the general public wear cloth face masks in public settings where social distancing is hard to maintain, such as grocery stores and pharmacies.
The basic reproduction number (also called R0, pronounced R-nought or R-zero) of a germ is the average number of people infected by one sick person.
The larger the number, the more infectious the virus is. In general, a number higher than one means an epidemic will continue to grow.
While the actual reproduction number is constantly changing according to the latest information, the WHO estimates a preliminary R0 of 2.0–2.5 for COVID-19.
This number means that, on average, up to 2.5 people can be infected by every one person infected with COVID-19.
For comparison, the transmission rate of SARS-CoV was estimated to be 2 to 3; that of common influenza strains is around 1.3; and of measles virus (highly contagious), between 12 and 15.
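As a purely illustrative sketch of what the basic reproduction number implies, the snippet below projects how case counts would multiply over a few generations of spread for the R0 figures quoted above. The model ignores immunity, interventions, and every other real-world effect, so it is only a toy illustration of the definition, not an epidemiological forecast.

```python
# Toy illustration of the basic reproduction number R0: with no immunity or
# interventions, each generation of spread multiplies case counts by R0.
# This ignores every real-world complication and is for illustration only.

def cases_per_generation(r0, generations, initial_cases=1):
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

if __name__ == "__main__":
    for label, r0 in [("seasonal influenza", 1.3),
                      ("COVID-19 (WHO early estimate)", 2.5),
                      ("measles", 15.0)]:
        projected = cases_per_generation(r0, generations=5)
        print(f"{label}: {[round(c, 1) for c in projected]}")
```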
The best way to slow the spread of the virus and prevent infection is to minimize social and physical contact with people you do not live with and, when you have to go out in public, wear a face mask and stay at least six feet away from other people.
Other measures to take include:
- Washing your hands regularly and thoroughly with soap and water for 20 seconds
- Using an alcohol-based hand sanitizer that is at least 60% alcohol if you are unable to wash your hands (or in addition to hand washing)
- Avoiding touching your eyes, mouth, and nose
- Cleaning and disinfecting frequently touched objects and surfaces
- Avoiding close contact with people who are sick
- Staying home if you are sick
- Getting an annual flu shot
- Receiving an FDA-approved COVID-19 vaccine
Everyone who has been exposed to someone who is known to be infected with COVID-19 should self-quarantine for two weeks, as recommended by the CDC as of April 2021.
Nevertheless, the CDC notes that some local public health authorities may permit people to end quarantine after ten days without testing, or after seven days with a negative result from a test taken on day five or later.
Due to the active community transmission of COVID-19 in the United States, initial guidelines called “15 Days to Slow the Spread” were issued on March 16, 2020.
Those guidelines were extended for an additional 30 days on March 31, 2020, and had been extended further in most states as of April 2021.
The guidelines outline the following precautions that should be followed by all Americans (and anyone in a country currently experiencing COVID-19 community spread).
If you are healthy and in a region of active transmission, you should take additional precautions, such as:
- Practicing social distancing by minimizing contact with others and staying at least 6 feet away from others
- Avoiding crowds and public places; staying home as much as possible (work and complete schoolwork at home)
- Avoiding discretionary travel, shopping, and social visits
- Avoiding eating in restaurants and bars; using the drive-thru, pickup, or delivery option instead.
- Avoiding visiting nursing homes, retirement facilities, and long-term care facilities
- Wearing a cloth face mask (but be sure to use it correctly by washing your hands before putting the mask on, keeping the mask over your mouth and nose while around other people, never touching the front of the mask, and washing the mask after every use). People must also wear masks when traveling on airplanes, buses, trains, and all other forms of public transportation in the United States.
- Receiving an FDA-approved COVID-19 vaccine
People in the following high-risk groups should take the following further precautions:
- If you are an older individual or a person with underlying health conditions (such as diabetes, heart disease, asthma, a weakened immune system, etc.), stay at home and stay away from other people.
- If you or someone in your household is sick, keep the whole household at home and talk to your medical provider; do not go to work or school.
- If you or another member of your household tests positive for COVID-19, keep the entire household at home and call your medical provider; do not go to work or school.
Animal care measures
As of April 2021, the chance of household pets such as dogs and cats spreading COVID-19 to people is thought to be low;
nevertheless, there is some evidence that humans can pass the virus to dogs and cats, with cats apparently more susceptible than dogs.
To stay on the safe side, persons ill with COVID-19 should be isolated from pets as well as from other members of the household, and pets should not be permitted to have contact with anyone outside the household.
Cats should be kept strictly indoors. Finally, people should not put masks on their pets, as doing so can harm them.
Although the FDA has not issued an EUA for an animal COVID-19 vaccine to date, the Russian government announced on March 31, 2021, that it had registered the first COVID-19 vaccine for animals.
State-level changes in masking mandates
Although the CDC still recommends the use of masks to slow the spread of COVID-19 (see https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/cloth-face-cover-guidance.html for the most up-to-date instructions), 13 states had begun to lift mask mandates for the general public as of April 2021: 10 states by order of the governor (Alabama, Indiana, Arkansas, Mississippi, Iowa, Montana, North Dakota, New Hampshire, Wyoming, and Texas); two by legislative action (Kansas and Utah); and one by court order (Wisconsin). Details on each state or territory’s mask mandate as of April 23, 2021, can be found at https://www.aarp.org/health/healthy-living/info-2020/states-mask-mandates-coronavirus.html.
As of April 26, 2021, there are 3 COVID-19 vaccines authorized by the Food and Drug Administration (FDA) for use in the United States:
- The Pfizer-BioNTech vaccine (also known as tozinameran) for persons 16 years and older.
- The Moderna vaccine (also called mRNA-1273) for persons 18 years and older.
- The Johnson & Johnson COVID-19 vaccine (also known as Ad.26.COV2.S or the Janssen COVID-19 vaccine) for people 18 years and older.
The CDC does not recommend any of the vaccines over the others; all are considered equally acceptable to protect individuals and the general public.
The first two vaccines are given as two shots in the muscle of the upper arm, spaced 21 days apart for the Pfizer-BioNTech vaccine and 28 days apart for the Moderna vaccine.
Those vaccines are mRNA vaccines (also known as genetic-code vaccines), meaning that they use a piece of messenger RNA to trigger an immune response in the recipient; they do not use the live virus that causes COVID-19.
An individual is considered fully vaccinated two weeks after receiving the second dose.
The Johnson & Johnson vaccine is called a viral vector vaccine; it uses a virus that does not cause the illness to carry genetic material from the illness-causing virus and provoke an immune response.
Unlike the two mRNA vaccines, the Johnson & Johnson vaccine requires only one dose. An additional benefit is that it does not need to be transported frozen and remains viable in a standard refrigerator for a couple of months.
Although the CDC briefly paused administration of the Johnson & Johnson vaccine on April 13, 2021, its use resumed on April 23.
An individual is considered fully vaccinated two weeks after receiving the single dose of this vaccine.
According to the CDC, large Phase 3 trials of the AstraZeneca COVID-19 vaccine (AZD1222) and the Novavax COVID-19 vaccine were ongoing in the United States as of April 2021.
The AstraZeneca vaccine is a viral vector vaccine, and the Novavax vaccine, otherwise known as NVX-CoV2373, is a protein subunit vaccine. Both vaccines require two doses for full effect.
Other COVID-19 vaccines approved for use in other countries as of late April 2021 include:
- Viral vector vaccines. These vaccines use a modified, harmless virus to deliver fragments of the disease-causing virus and stimulate an immune reaction. They include Gam-COVID-Vac, developed in Russia; AD5-nCoV (Convidecea), developed by the Chinese Academy of Military Medical Sciences; and AZD1222, developed by AstraZeneca and the University of Oxford. Administration of the AstraZeneca vaccine was paused briefly at the beginning of March 2021 in Germany, France, Spain, and Italy because of concerns that it could be linked to rare blood clots in certain recipients. The European Medicines Agency (EMA) concluded that the vaccine’s benefits outweigh the risks, and administration resumed on March 18, 2021. As noted above, this vaccine is currently in trials in the United States for possible approval by the FDA.
- Inactivated virus vaccines. These include CoronaVac and BBIBP-CorV, developed by Chinese pharmaceutical companies and government institutes; CoviVac, developed by the Russian Academy of Sciences; and BBV152 (Covaxin), developed by the Indian Council of Medical Research.
- Protein subunit vaccines. These include EpiVacCorona, developed by the Vector Institute in Russia, and RBD-Dimer (ZF2001), developed by the Chinese Academy of Sciences.
As of April 26, 2021, over 1.03 billion doses of COVID-19 vaccines had been administered globally, according to official national health agencies; as of the same date, the CDC reported that 230,768,454 doses of approved COVID-19 vaccines had been administered in the United States.
In December 2020, when the supply of approved vaccines was limited, the CDC issued priority recommendations for the United States as follows:
- Phase 1a: health care staff and residents of long-term care facilities.
- Phase 1b: essential frontline workers (firefighters, police officers, corrections officers, food and agricultural workers, public transit workers, manufacturing workers, United States Postal Service workers, grocery store workers, teachers and support staff, and daycare workers) as well as adults 75 years and older.
- Phase 1c: adults between 65 and 74 years old; people between 16 and 64 years old with underlying health conditions; and individuals whose jobs are in transportation and logistics, law, media, food service, information technology, housing construction and finance, communications, energy, public safety, and public health.
The CDC updated these priority recommendations in April 2021 to state that all Americans 16 years and older are now eligible for vaccination, even though the supply of vaccines remains limited.
People should consult their local health department for comprehensive information on vaccine administration in their region.
People may also check the VaccineFinder website
(https://vaccinefinder.org/search/) to find locations where they can make appointments to receive a particular COVID-19 vaccine; the CDC also suggests contacting one’s local pharmacy.
As of late April 2021, the federal government is working to make all FDA-approved vaccines universally available to anyone at no cost.
The vaccines authorized for use in the United States are safe for most people.
Nevertheless, some persons should not receive these vaccines, and certain others should talk to their health care provider first:
- The CDC advises that people who have had severe allergic reactions to any of the ingredients in the COVID-19 vaccines should not receive the vaccination. Individuals who have had a serious allergic reaction to other vaccines should talk to a physician before receiving a COVID-19 vaccine. As of April 2021, individuals who do not have a history of allergic reactions to vaccines are observed for 15 minutes after receiving the shot; people who have such a history are observed for 30 minutes afterward. Anyone who has a severe allergic reaction to the first COVID-19 vaccine shot is advised not to get the second shot.
- Pregnancy and lactation: Pregnant and lactating women were excluded from the COVID-19 vaccine trials, so there is little data about the vaccines’ safety in this group. The CDC, nevertheless, considers infection with COVID-19 to be a greater risk to pregnant women and their unborn children than the potential risks associated with the vaccine.
- Children and younger teenagers: Children were excluded from the initial COVID-19 vaccine trials. Trials of the vaccines’ safety and efficacy in children started in March 2021, with data expected in early summer.
- Individuals who have tested positive for COVID-19: Those who have had the illness should wait until they have met all criteria for ending isolation before getting the vaccine. Persons who received antibody treatment against the virus should wait 90 days before getting vaccinated.
- Immunocompromised persons: People with HIV infection or weakened immune systems should discuss the risks and benefits of vaccination with their physician before receiving a COVID-19 vaccine. As of April 23, 2021, the CDC has posted updated information for specific groups of people at https://www.cdc.gov/coronavirus/2019-ncov/vaccines/recommendations/specific-groups.html.
As of April 2021, the CDC still recommends that people avoid all nonessential travel and that those who are eligible be fully vaccinated before traveling.
U.S. travel advisories were first issued in late January 2020 for mainland China (where most cases were at the time) and were extended to include other countries as they began reporting increasing numbers of COVID-19 cases.
In April 2021, the CDC added a list of countries grouped by assessed risk of COVID-19 transmission, from very high to high, moderate, low, and unknown. This list can be found at https://www.cdc.gov/coronavirus/2019-ncov/travelers/map-and-travel-notices.html.
The U.S. Department of State advises that overseas U.S. citizens who wish to return to the U.S. should contact the nearest U.S. embassy or consulate for travel assistance.
The most recent CDC guidelines for travel can be found at https://www.cdc.gov/coronavirus/2019-ncov/travelers/index.html.
International travelers returning to the U.S. should follow the guidelines at https://www.cdc.gov/coronavirus/2019-ncov/travelers/after-travel-precautions.html. Even fully vaccinated travelers should take the CDC’s recommended precautions after returning to the United States.
Domestic travel: Masks are required on airplanes, trains, buses, and other forms of public transportation traveling into, within, or out of the United States, and in U.S. transportation hubs such as airports and bus or train stations.
SYMPTOMS OF COVID-19 INFECTION
Symptoms of more common coronaviruses are similar to other upper respiratory infections, including stuffy nose, runny nose, sore throat, cough, and sometimes fever.
Symptoms of COVID-19, however, range from very mild to severe upper respiratory symptoms.
The CDC says that moderate to severe symptoms can include fever, cough, runny nose, sore throat, difficulty breathing, shortness of breath, muscle pain, and tiredness.
Other symptoms of COVID-19 infection can include nausea or vomiting, diarrhea, loss of the senses of taste and smell, and headache.
About 15 to 20% of cases so far have become severe, according to the WHO.
Symptoms can appear anywhere from 2 to 14 days after exposure to the virus, with the average being around five days. Symptoms tend to be mild at first and gradually worsen over a few days.
The infection can spread to the lower respiratory tract and cause pneumonia, which is the primary cause of severe illness and death.
Pneumonia is more common in older people, infants, people with weakened immune systems, and people with underlying conditions such as heart disease.
You should seek medical attention immediately if you have any of the following severe signs of COVID-19, per the CDC:
- Trouble breathing
- Persistent pain or pressure in the chest
- Confusion (spatial disorientation; loss of normal mental clarity; inability to recognize time, place, or one’s identity)
- Difficulty waking up or staying awake
- Bluish lips or face.
The CDC advises anyone caring for a person with these symptoms to call 9-1-1 and notify the operator that care is needed for a person who has or may have COVID-19.
There are two significant categories of testing for COVID-19: diagnostic (to test whether a person is presently infected); and antibody testing (to detect antibodies to the virus as evidence of past infection).
Diagnostic testing uses two laboratory methods: molecular (reverse transcription-polymerase chain reaction or RT-PCR) testing or antigen testing.
Diagnostic testing requires collecting a nasal swab or a throat swab; sometimes a saliva sample may be collected instead.
Antibody testing requires a blood sample drawn from a vein or collected from a fingerstick. Neither type of testing requires any preparation on the patient’s part.
Diagnostic kits for SARS-CoV-2 were made available by the CDC on February 4, 2020, to ship to qualified U.S. and international laboratories.
As of April 2021, the FDA has issued EUAs for more than 80 different diagnostic tests for COVID-19.
In some cases, patients must go to their physician or an authorized testing center to be tested for the virus; however, as of April 2021, the FDA has approved several test kits for the virus’s antigen that allow a person to collect a nasal sample at home and then send it to a certified laboratory for analysis.
Some of these tests require a doctor’s prescription, but at least one does not. The user must first download the appropriate application (app) for a smartphone to use these antigen tests.
In addition, the Lab Tests Online website has added a link to HealthTestingCenters.com (https://www.healthtestingcenters.com) that allows people to order COVID-19 tests online or over the telephone.
There is currently only one FDA-approved coronavirus-specific antiviral treatment: remdesivir for COVID-19.
The treatment for most coronavirus infections is similar to treatment for the common cold: rest, fluids, and over-the-counter medications such as acetaminophen and NSAIDs for fever, sore throat, and body aches.
More seriously ill patients may require hospitalization to relieve their symptoms and ensure their organs are functioning correctly.
The CDC recommends treatment with dexamethasone (a corticosteroid medication) for patients with low oxygen levels due to an overactive immune response to the virus.
There is some evidence that vitamin D deficiency increases the risk of a severe course of COVID-19 in infected persons, particularly those of Black or Asian origin.
Several clinical trials are underway to identify any specific role for vitamin D in COVID-19 prevention and management.
COVID-19 DRUG DEVELOPMENT
The only drug currently approved by the Food and Drug Administration (FDA) to treat COVID-19 is remdesivir (Veklury), an antiviral agent approved by the FDA on October 22, 2020.
Other currently available drugs are being repurposed to see whether they are effective against SARS-CoV-2; these include nitazoxanide, ivermectin, and niclosamide.
Investigational antiviral drugs include molnupiravir and favipiravir; investigational immune modulator drugs include infliximab (Remicade), abatacept (Orencia), cenicriviroc, and tocilizumab.
On June 15, 2020, the FDA revoked the emergency use authorization (EUA) of hydroxychloroquine and chloroquine to treat COVID-19.
The agency reported that “these medicines showed no benefit for decreasing the likelihood of death or speeding recovery.”
On July 4, 2020, WHO discontinued its trials of hydroxychloroquine and lopinavir/ritonavir for the same reasons.
Other drugs made available to treat COVID-19 under the FDA’s Emergency Use Authorization (EUA) as of April 2021 include:
- Casirivimab/imdevimab: This treatment is a drug “cocktail” of two monoclonal antibodies. It was given a EUA on November 21, 2020, to treat nonhospitalized patients with confirmed COVID-19 experiencing mild to moderate symptoms at high risk of severe symptoms and hospitalization.
- Baricitinib in combination with remdesivir: Baricitinib is a kinase inhibitor used to treat rheumatoid arthritis. The FDA issued a EUA on November 19, 2020, for the use of this combination in hospitalized COVID-19 patients aged two years and older who require supplemental oxygen, invasive mechanical ventilation, or extracorporeal membrane oxygenation.
- Bamlanivimab plus etesevimab: Both these drugs are monoclonal antibodies. The FDA gave the combination of bamlanivimab and etesevimab a EUA on February 8, 2021, to treat mild to moderate COVID-19 in adults and children 12 years and older who are at high risk of progressing to severe disease. The FDA revoked an earlier EUA for the use of bamlanivimab alone on April 16, 2021.
Convalescent plasma: Convalescent plasma is plasma taken from the blood of adults who have recovered from COVID-19; it contains antibodies to the virus.
Clinical trials of convalescent plasma began in April 2020. On August 23, 2020, the FDA issued a EUA to use convalescent plasma in treating patients with COVID-19.
As of March 19, 2021, the CDC states that there is not enough evidence to recommend this treatment.
Other investigational treatments for COVID-19: The search for effective treatments for the disease is complex, with several types of drug therapies (antibodies, antivirals, cell-based therapies, RNA-based treatments, and repurposed drugs) under investigation.
As of late April 2021, at least 326 different drugs are being tested worldwide as possible COVID-19 treatments, according to the Milken Institute.
Readers can find the most recent updates on investigational drugs at the Milken Institute link listed under the website resources below.
COVID-19 VACCINE DEVELOPMENT
Noninjectable immunizations against COVID-19 are under development, along with injectable vaccines.
Although these newer types of vaccines are only in Phase 1 trials as of April 2021, candidates include two intranasal vaccines (MV-014-212 and AdCOVID); two oral vaccines (hAd5 T-cell and VXA-CoV2-1); two orally inhaled vaccines (saRNA inhaled and ChAdOx1 nCov-19); and one administered by microneedle (PittCoVacc) developed at the University of Pittsburgh.
As of April 2021, the following injectable COVID-19 vaccines have not yet been approved for administration in any country but are in late-stage development:
- Viral vector vaccines: These include IIBR-100, developed by the Israel Institute for Biological Research, and GRAd-COV2, designed by Italy’s national institute for infectious diseases.
- Virus subunit vaccines: These include the Novavax vaccine (also called NVX-CoV2373), which uses recombinant nanoparticle technology to present an antigen derived from the coronavirus spike protein encoded in the SARS-CoV-2 genetic sequence. As noted earlier, this vaccine is undergoing trials in the United States for possible authorization by the FDA.
- Genetic-code vaccines: CVnCoV (zoricimeran), developed by a German biopharmaceutical company; and Lunar-COV19/ARCT-021, developed by the American biotechnology company Arcturus.
- Plant-based virus-like particle vaccines: The only vaccine in this category to date is CoVLP, developed by a Canadian biotechnology company.
- Inactivated virus vaccines: These include VLA2001, developed by a biotechnology company headquartered in France.
Readers can find the most recent updates on COVID-19 vaccines at the New York Times link listed under website resources below.
STATISTICAL DATA ON THE COVID-19 PANDEMIC
For regularly updated COVID-19 data, please see the following sources:
- “CDC COVID Data Tracker.” Centers for Disease Control and Prevention (CDC). https://covid.cdc.gov/covid-data-tracker
- “Coronavirus Map: Tracking the Global Outbreak.” The New York Times. https://www.nytimes.com/interactive/2020/world/coronavirus-maps.html
- “Coronavirus Update (Live).” Worldometer. https://www.worldometers.info/coronavirus/
- “COVID-19 Global Map.” Johns Hopkins Coronavirus Resource Center. https://coronavirus.jhu.edu/map.html
- “Coronavirus Variant Tracker.” Axios. https://www.axios.com/coronavirus-variant-tracker-where-different-strains-are-spreading-ffd71934-1596-43e6-923d-32c4698e2f8b.html
- “Genomic epidemiology of novel coronavirus—Global subsampling.” Nextstrain.org. https://nextstrain.org/ncov/global
- “What’s New and Updated.” Centers for Disease Control and Prevention (CDC). This is a new landing page on the CDC COVID-19 website to help people stay current with significant changes in guidance and information about the pandemic. Readers can filter the content by topic, type, or intended audience. The page is updated daily: https://www.cdc.gov/coronavirus/2019-ncov/whats-new-all.html.
- “WHO Coronavirus Disease (COVID-19) Dashboard.” The World Health Organization. https://covid19.who.int
Bergman, Scott J. “Treatment of Coronavirus Disease 2019 (COVID-19): Investigational Drugs and Other Therapies.” Medscape Reference. Updated April 19, 2021. https://emedicine.medscape.com/article/2500116-overview (accessed April 26, 2021).
Cennimo, David J. “COVID-19 Vaccines.” Medscape Reference. Updated April 14, 2021. https://emedicine.medscape.com/article/2500139-overview (accessed April 26, 2021).
“Coronavirus (COVID-19) Testing.” Lab Tests Online. April 20, 2021. https://labtestsonline.org/tests/coronavirus-covid-19-testing (accessed April 26, 2021).
“Coronavirus Disease 2019 (COVID-19): Frequently Asked Questions.” Centers for Disease Control and Prevention (CDC). Updated April 2, 2021. https://www.cdc.gov/coronavirus/2019-ncov/faq.html (accessed April 26, 2021).
“Coronavirus disease (COVID-19) pandemic.” World Health Organization. Updated April 26, 2021. https://www.who.int/emergencies/diseases/novel-coronavirus-2019 (accessed April 26, 2021).
“Coronavirus Symptoms: Frequently Asked Questions.” Johns Hopkins Medicine. Updated February 24, 2021. https://www.hopkinsmedicine.org/health/conditions-and-diseases/coronavirus/coronavirus-symptoms-frequently-asked-questions (accessed April 26, 2021).
“COVID-19.” HealthyChildren.org. https://healthychildren.org/English/health-issues/conditions/COVID-19/Pages/default.aspx (accessed April 26, 2021).
“COVID-19: Coronavirus Disease 2019.” Public Health Emergency. April 16, 2021. https://www.phe.gov/emergency/events/COVID19/investigation-MCM/Pages/default.aspx (accessed April 26, 2021).
“COVID-19 Real-Time Learning Network.” Infectious Diseases Society of America. Updated daily. https://www.idsociety.org/covid-19-real-time-learning-network/ (accessed April 26, 2021).
“COVID-19 Treatment and Vaccine Tracker.” Milken Institute, FasterCures Center. Updated April 25, 2021. https://covid-19tracker.milkeninstitute.org (accessed April 26, 2021).
Ferran, Maureen. “How does the Johnson & Johnson vaccine compare to other coronavirus vaccines? 4 questions answered.” The Conversation. February 27, 2021. https://theconversation.com/how-does-the-johnson-and-johnson-vaccine-compare-to-other-coronavirus-vaccines-4-questions-answered-155944 (accessed April 26, 2021).
“Timeline of WHO’s response to COVID-19.” World Health Organization. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/interactive-timeline (accessed April 26, 2021).
“Understanding mRNA COVID-19 Vaccines.” Centers for Disease Control and Prevention. Updated March 4, 2021. https://www.cdc.gov/coronavirus/2019-ncov/vaccines/different-vaccines/mrna.html (accessed April 26, 2021).
Zimmer, Carl, Jonathan Corum, and Sui-Lee Wee. “Coronavirus Vaccine Tracker.” New York Times. Updated April 26, 2021. https://www.nytimes.com/interactive/2020/science/coronavirus-vaccine-tracker.html (accessed April 26, 2021).
Infectious Diseases Society of America (IDSA), 4040 Wilson Boulevard, Suite 300, Arlington, VA, 22203, (703) 299-0200, https://www.idsociety.org/contact-us/, https://www.idsociety.org/.
National Institute of Allergy and Infectious Diseases, National Institutes of Health, 5601 Fishers Ln, MSC 9806, Bethesda, MD, 20892-9806, (301) 496-5717, (866) 284-4107 or TDD (800) 877-8339 (for hearing impaired), Fax: (301) 402-3573, [email protected], https://www.niaid.nih.gov/ .
U.S. Centers for Disease Control and Prevention (CDC), 1600 Clifton Rd, Atlanta, GA, 30333, (800) CDC-INFO (232-4636), http://www.cdc.gov/cdc-info/requestform.html, http://www.cdc.gov.
U.S. Food and Drug Administration (FDA), 10903 New Hampshire Ave., Silver Spring, MD, 20993, (888) 463-6332, https://www.fda.gov/.
World Health Organization, Ave Appia 20, Geneva 27, Switzerland, 1211, +41 22 791-2111, Fax: +41 22 791-3111, https://www.who.int/ . | https://www.mkexpress.net/coronaviruses-all-about-covid-19-you-need-to-know/ | 21 |
15 | No two humans are genetically identical. Even monozygotic twins (who develop from one zygote) have infrequent genetic differences due to mutations occurring during development and gene copy-number variation. Differences between individuals, even closely related individuals, are the key to techniques such as genetic fingerprinting. As of 2017, there are a total of 324 million known variants from sequenced human genomes. As of 2015, the typical difference between an individual's genome and the reference genome was estimated at 20 million base pairs (or 0.6% of the total of 3.2 billion base pairs).
Alleles occur at different frequencies in different human populations. Populations that are more geographically and ancestrally remote tend to differ more. The differences between populations represent a small proportion of overall human genetic variation. Populations also differ in the quantity of variation among their members. The greatest divergence between populations is found in sub-Saharan Africa, consistent with the recent African origin of non-African populations. Populations also vary in the proportion and locus of introgressed genes they received by archaic admixture both inside and outside of Africa.
The study of human genetic variation has evolutionary significance and medical applications. It can help scientists understand ancient human population migrations as well as how human groups are biologically related to one another. For medicine, study of human genetic variation may be important because some disease-causing alleles occur more often in people from specific geographic regions. New findings show that each human has on average 60 new mutations compared to their parents.
Causes of differences between individuals include independent assortment, the exchange of genes (crossing over and recombination) during reproduction (through meiosis) and various mutational events.
There are at least three reasons why genetic variation exists between populations. Natural selection may confer an adaptive advantage to individuals in a specific environment if an allele provides a competitive advantage. Alleles under selection are likely to occur only in those geographic regions where they confer an advantage. A second important process is genetic drift, which is the effect of random changes in the gene pool, under conditions where most mutations are neutral (that is, they do not appear to have any positive or negative selective effect on the organism). Finally, small migrant populations have statistical differences - called the founder effect - from the overall populations where they originated; when these migrants settle new areas, their descendant population typically differs from their population of origin: different genes predominate and it is less genetically diverse.
In humans, the main cause is genetic drift. Serial founder effects and past small population size (increasing the likelihood of genetic drift) may have had an important influence in neutral differences between populations. The second main cause of genetic variation is due to the high degree of neutrality of most mutations. A small, but significant number of genes appear to have undergone recent natural selection, and these selective pressures are sometimes specific to one region.
Genetic variation among humans occurs on many scales, from gross alterations in the human karyotype to single nucleotide changes. Chromosome abnormalities are detected in 1 of 160 live human births. Apart from sex chromosome disorders, most cases of aneuploidy result in death of the developing fetus (miscarriage); the most common extra autosomal chromosomes among live births are 21, 18 and 13.
Nucleotide diversity is the average proportion of nucleotides that differ between two individuals. As of 2004, the human nucleotide diversity was estimated to be 0.1% to 0.4% of base pairs. In 2015, the 1000 Genomes Project, which sequenced one thousand individuals from 26 human populations, found that "a typical [individual] genome differs from the reference human genome at 4.1 million to 5.0 million sites ... affecting 20 million bases of sequence"; the latter figure corresponds to 0.6% of total number of base pairs. Nearly all (>99.9%) of these sites are small differences, either single nucleotide polymorphisms or brief insertions or deletions (indels) in the genetic sequence, but structural variations account for a greater number of base-pairs than the SNPs and indels.
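As a quick arithmetic check on the figures quoted above (the inputs below are simply the numbers cited in the text, not new data):

```python
# Quick arithmetic check of the 0.6% figure quoted above; the inputs are just
# the numbers cited in the text (about 20 million differing base pairs out of
# about 3.2 billion), not new data.
differing_base_pairs = 20_000_000
genome_base_pairs = 3_200_000_000
print(f"{differing_base_pairs / genome_base_pairs:.3%}")  # 0.625%, i.e. roughly 0.6%
```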
A single nucleotide polymorphism (SNP) is a difference in a single nucleotide between members of one species that occurs in at least 1% of the population. The 2,504 individuals characterized by the 1000 Genomes Project had 84.7 million SNPs among them. SNPs are the most common type of sequence variation, estimated in 1998 to account for 90% of all sequence variants. Other sequence variations are single base exchanges, deletions and insertions. SNPs occur on average about every 100 to 300 bases and so are the major source of heterogeneity.
A functional, or non-synonymous, SNP is one that affects some factor such as gene splicing or messenger RNA, and so causes a phenotypic difference between members of the species. About 3% to 5% of human SNPs are functional (see International HapMap Project). Neutral, or synonymous SNPs are still useful as genetic markers in genome-wide association studies, because of their sheer number and the stable inheritance over generations.
A coding SNP is one that occurs inside a gene. There are 105 Human Reference SNPs that result in premature stop codons in 103 genes. This corresponds to 0.5% of coding SNPs. They occur due to segmental duplication in the genome. These SNPs result in loss of protein, yet all these SNP alleles are common and are not purified in negative selection.
Structural variation is the variation in structure of an organism's chromosome. Structural variations, such as copy-number variation and deletions, inversions, insertions and duplications, account for much more human genetic variation than single nucleotide diversity. This was concluded in 2007 from analysis of the diploid full sequences of the genomes of two humans: Craig Venter and James D. Watson. This added to the two haploid sequences which were amalgamations of sequences from many individuals, published by the Human Genome Project and Celera Genomics respectively.
According to the 1000 Genomes Project, a typical human has 2,100 to 2,500 structural variations, which include approximately 1,000 large deletions, 160 copy-number variants, 915 Alu insertions, 128 L1 insertions, 51 SVA insertions, 4 NUMTs, and 10 inversions.
A copy-number variation (CNV) is a difference in the genome due to deleting or duplicating large regions of DNA on some chromosome. It is estimated that 0.4% of the genomes of unrelated humans differ with respect to copy number. When copy number variation is included, human-to-human genetic variation is estimated to be at least 0.5% (99.5% similarity). Copy number variations are inherited but can also arise during development.
Epigenetic variation is variation in the chemical tags that attach to DNA and affect how genes get read. The tags, "called epigenetic markings, act as switches that control how genes can be read." At some alleles, the epigenetic state of the DNA, and associated phenotype, can be inherited across generations of individuals.
Genetic variability is a measure of the tendency of individual genotypes in a population to vary (become different) from one another. Variability is different from genetic diversity, which is the amount of variation seen in a particular population. The variability of a trait is how much that trait tends to vary in response to environmental and genetic influences.
In biology, a cline is a continuum of species, populations, varieties, or forms of organisms that exhibit gradual phenotypic and/or genetic differences over a geographical area, typically as a result of environmental heterogeneity. In the scientific study of human genetic variation, a gene cline can be rigorously defined and subjected to quantitative metrics.
In the study of molecular evolution, a haplogroup is a group of similar haplotypes that share a common ancestor with a single nucleotide polymorphism (SNP) mutation. The study of haplogroups provides information about ancestral origins dating back thousands of years.
The most commonly studied human haplogroups are Y-chromosome (Y-DNA) haplogroups and mitochondrial DNA (mtDNA) haplogroups, both of which can be used to define genetic populations. Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to both daughters and sons. The Y-DNA and mtDNA may change by chance mutation at each generation.
A variable number tandem repeat (VNTR) is the variation of length of a tandem repeat. A tandem repeat is the adjacent repetition of a short nucleotide sequence. Tandem repeats exist on many chromosomes, and their length varies between individuals. Each variant acts as an inherited allele, so they are used for personal or parental identification. Their analysis is useful in genetics and biology research, forensics, and DNA fingerprinting.
The recent African origin of modern humans paradigm assumes the dispersal of non-African populations of anatomically modern humans after 70,000 years ago. Dispersal within Africa occurred significantly earlier, at least 130,000 years ago. The "out of Africa" theory originates in the 19th century, as a tentative suggestion in Charles Darwin's Descent of Man, but remained speculative until the 1980s when it was supported by the study of present-day mitochondrial DNA, combined with evidence from physical anthropology of archaic specimens.
According to a 2000 study of Y-chromosome sequence variation, human Y-chromosomes trace ancestry to Africa, and the descendants of the derived lineage left Africa and eventually replaced the archaic human Y-chromosomes in Eurasia. The study also shows that a minority of contemporary populations in East Africa and the Khoisan are the descendants of the most ancestral patrilineages of anatomically modern humans that left Africa 35,000 to 89,000 years ago. Other evidence supporting the theory is that variations in skull measurements decrease with distance from Africa at the same rate as the decrease in genetic diversity. Human genetic diversity decreases in native populations with migratory distance from Africa, and this is thought to be due to bottlenecks during human migration, which are events that temporarily reduce population size.
A 2009 genetic clustering study, which genotyped 1327 polymorphic markers in various African populations, identified six ancestral clusters. The clustering corresponded closely with ethnicity, culture and language. A 2018 whole genome sequencing study of the world's populations observed similar clusters among the populations in Africa. At K=9, distinct ancestral components defined the Afroasiatic-speaking populations inhabiting North Africa and Northeast Africa; the Nilo-Saharan-speaking populations in Northeast Africa and East Africa; the Ari populations in Northeast Africa; the Niger-Congo-speaking populations in West-Central Africa, West Africa, East Africa and Southern Africa; the Pygmy populations in Central Africa; and the Khoisan populations in Southern Africa.
Because of the common ancestry of all humans, only a small number of variants have large differences in frequency between populations. However, some rare variants in the world's human population are much more frequent in at least one population (more than 5%).
It is commonly assumed that early humans left Africa, and thus must have passed through a population bottleneck before their African-Eurasian divergence around 100,000 years ago (ca. 3,000 generations). The rapid expansion of a previously small population has two important effects on the distribution of genetic variation. First, the so-called founder effect occurs when founder populations bring only a subset of the genetic variation from their ancestral population. Second, as founders become more geographically separated, the probability that two individuals from different founder populations will mate becomes smaller. The effect of this assortative mating is to reduce gene flow between geographical groups and to increase the genetic distance between groups.
The expansion of humans from Africa affected the distribution of genetic variation in two other ways. First, smaller (founder) populations experience greater genetic drift because of increased fluctuations in neutral polymorphisms. Second, new polymorphisms that arose in one group were less likely to be transmitted to other groups as gene flow was restricted.
Populations in Africa tend to have lower amounts of linkage disequilibrium than do populations outside Africa, partly because of the larger size of human populations in Africa over the course of human history and partly because the number of modern humans who left Africa to colonize the rest of the world appears to have been relatively low. In contrast, populations that have undergone dramatic size reductions or rapid expansions in the past and populations formed by the mixture of previously separate ancestral groups can have unusually high levels of linkage disequilibrium.
The distribution of genetic variants within and among human populations is impossible to describe succinctly because of the difficulty of defining a "population," the clinal nature of variation, and heterogeneity across the genome (Long and Kittles 2003). In general, however, an average of 85% of genetic variation exists within local populations, ~7% is between local populations within the same continent, and ~8% of variation occurs between large groups living on different continents. The recent African origin theory for humans would predict that in Africa there exists a great deal more diversity than elsewhere and that diversity should decrease the further from Africa a population is sampled.
Sub-Saharan Africa has the most human genetic diversity and the same has been shown to hold true for phenotypic variation in skull form. Phenotype is connected to genotype through gene expression. Genetic diversity decreases smoothly with migratory distance from that region, which many scientists believe to be the origin of modern humans, and that decrease is mirrored by a decrease in phenotypic variation. Skull measurements are an example of a physical attribute whose within-population variation decreases with distance from Africa.
The distribution of many physical traits resembles the distribution of genetic variation within and between human populations (American Association of Physical Anthropologists 1996; Keita and Kittles 1997). For example, ~90% of the variation in human head shapes occurs within continental groups, and ~10% separates groups, with a greater variability of head shape among individuals with recent African ancestors (Relethford 2002).
A prominent exception to the common distribution of physical characteristics within and among groups is skin color. Approximately 10% of the variance in skin color occurs within groups, and ~90% occurs between groups (Relethford 2002). This distribution of skin color and its geographic patterning -- with people whose ancestors lived predominantly near the equator having darker skin than those with ancestors who lived predominantly in higher latitudes -- indicate that this attribute has been under strong selective pressure. Darker skin appears to be strongly selected for in equatorial regions to prevent sunburn, skin cancer, the photolysis of folate, and damage to sweat glands.
Understanding how genetic diversity in the human population impacts various levels of gene expression is an active area of research. While earlier studies focused on the relationship between DNA variation and RNA expression, more recent efforts are characterizing the genetic control of various aspects of gene expression including chromatin states, translation, and protein levels. A study published in 2007 found that 25% of genes showed different levels of gene expression between populations of European and Asian descent. The primary cause of this difference in gene expression was thought to be SNPs in gene regulatory regions of DNA. Another study published in 2007 found that approximately 83% of genes were expressed at different levels among individuals and about 17% between populations of European and African descent.
The population geneticist Sewall Wright developed the fixation index (often abbreviated to FST) as a way of measuring genetic differences between populations. This statistic is often used in taxonomy to compare differences between any two given populations by measuring the genetic differences among and between populations for individual genes, or for many genes simultaneously. It is often stated that the fixation index for humans is about 0.15. This translates to an estimated 85% of the variation measured in the overall human population is found within individuals of the same population, and about 15% of the variation occurs between populations. These estimates imply that any two individuals from different populations are almost as likely to be more similar to each other than either is to a member of their own group. "The shared evolutionary history of living humans has resulted in a high relatedness among all living people, as indicated for example by the very low fixation index (FST) among living human populations." Richard Lewontin, who affirmed these ratios, thus concluded neither "race" nor "subspecies" were appropriate or useful ways to describe human populations.
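As a rough illustration of how this statistic is computed (not part of the original article), the sketch below applies Wright's formula FST = (HT - HS)/HT to a single biallelic locus in two equally sized populations; the allele frequencies are invented for illustration only.

```python
# Rough sketch of Wright's fixation index, F_ST = (H_T - H_S) / H_T, for a
# single biallelic locus in two equally sized populations. The allele
# frequencies are invented for illustration and are not from any real dataset.
def fst(subpop_allele_freqs):
    # H_S: mean expected heterozygosity within subpopulations
    h_s = sum(2 * p * (1 - p) for p in subpop_allele_freqs) / len(subpop_allele_freqs)
    # H_T: expected heterozygosity of the pooled (total) population
    p_bar = sum(subpop_allele_freqs) / len(subpop_allele_freqs)
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t

print(round(fst([0.3, 0.7]), 2))  # 0.16 -- on the order of the ~0.15 figure cited above
```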
Wright himself believed that values >0.25 represent very great genetic variation and that an FST of 0.15-0.25 represents great variation. However, about 5% of human variation occurs between populations within continents; therefore, FST values between continental groups of humans (or races) as low as 0.1 (or possibly lower) have been found in some studies, suggesting more moderate levels of genetic variation. Graves (1996) has countered that FST should not be used as a marker of subspecies status, as the statistic is used to measure the degree of differentiation between populations, although see also Wright (1978).
Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations in their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They argue the underlying statistical model incorrectly assumes equal and independent histories of variation for each large human population. A more realistic approach is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. Under this scenario, human populations do not have equal amounts of local variability, but rather diminished amounts of diversity the further from Africa any population lives. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
There is a hypothesis that anatomically modern humans interbred with Neanderthals during the Middle Paleolithic. In May 2010, the Neanderthal Genome Project presented genetic evidence that interbreeding did likely take place and that a small but significant portion, around 2-4%, of Neanderthal admixture is present in the DNA of modern Eurasians and Oceanians, and nearly absent in sub-Saharan African populations.
Between 4% and 6% of the genome of Melanesians (represented by the Papua New Guinean and Bougainville Islander) are thought to derive from Denisova hominins - a previously unknown species which shares a common origin with Neanderthals. It was possibly introduced during the early migration of the ancestors of Melanesians into Southeast Asia. This history of interaction suggests that Denisovans once ranged widely over eastern Asia.
Thus, Melanesians emerge as the most archaic-admixed population, having Denisovan/Neanderthal-related admixture of ~8%.
In a study published in 2013, Jeffrey Wall from University of California studied whole sequence-genome data and found higher rates of introgression in Asians compared to Europeans. Hammer et al. tested the hypothesis that contemporary African genomes have signatures of gene flow with archaic human ancestors and found evidence of archaic admixture in the genomes of some African groups, suggesting that modest amounts of gene flow were widespread throughout time and space during the evolution of anatomically modern humans.
New data on human genetic variation has reignited the debate about a possible biological basis for categorization of humans into races. Most of the controversy surrounds the question of how to interpret the genetic data and whether conclusions based on it are sound. Some researchers argue that self-identified race can be used as an indicator of geographic ancestry for certain health risks and medications.
Although the genetic differences among human groups are relatively small, these differences in certain genes such as duffy, ABCC11, SLC24A5, called ancestry-informative markers (AIMs) nevertheless can be used to reliably situate many individuals within broad, geographically based groupings. For example, computer analyses of hundreds of polymorphic loci sampled in globally distributed populations have revealed the existence of genetic clustering that roughly is associated with groups that historically have occupied large continental and subcontinental regions (Rosenberg et al. 2002; Bamshad et al. 2003).
Some commentators have argued that these patterns of variation provide a biological justification for the use of traditional racial categories. They argue that the continental clusterings correspond roughly with the division of human beings into sub-Saharan Africans; Europeans, Western Asians, Central Asians, Southern Asians and Northern Africans; Eastern Asians, Southeast Asians, Polynesians and Native Americans; and other inhabitants of Oceania (Melanesians, Micronesians & Australian Aborigines) (Risch et al. 2002). Other observers disagree, saying that the same data undercut traditional notions of racial groups (King and Motulsky 2002; Calafell 2003; Tishkoff and Kidd 2004). They point out, for example, that major populations considered races or subgroups within races do not necessarily form their own clusters.
Furthermore, because human genetic variation is clinal, many individuals affiliate with two or more continental groups. Thus, the genetically based "biogeographical ancestry" assigned to any given person generally will be broadly distributed and will be accompanied by sizable uncertainties (Pfaff et al. 2004).
In many parts of the world, groups have mixed in such a way that many individuals have relatively recent ancestors from widely separated regions. Although genetic analyses of large numbers of loci can produce estimates of the percentage of a person's ancestors coming from various continental populations (Shriver et al. 2003; Bamshad et al. 2004), these estimates may assume a false distinctiveness of the parental populations, since human groups have exchanged mates from local to continental scales throughout history (Cavalli-Sforza et al. 1994; Hoerder 2002). Even with large numbers of markers, information for estimating admixture proportions of individuals or groups is limited, and estimates typically will have wide confidence intervals (Pfaff et al. 2004).
Genetic data can be used to infer population structure and assign individuals to groups that often correspond with their self-identified geographical ancestry. Jorde and Wooding (2004) argued that "Analysis of many loci now yields reasonably accurate estimates of genetic similarity among individuals, rather than populations. Clustering of individuals is correlated with geographic origin or ancestry." However, identification by geographic origin may quickly break down when considering historical ancestry shared between individuals back in time.
An analysis of autosomal SNP data from the International HapMap Project (Phase II) and CEPH Human Genome Diversity Panel samples was published in 2009. The study of 53 populations taken from the HapMap and CEPH data (1138 unrelated individuals) suggested that natural selection may shape the human genome much more slowly than previously thought, with factors such as migration within and among continents more heavily influencing the distribution of genetic variations. A similar study published in 2010 found strong genome-wide evidence for selection due to changes in ecoregion, diet, and subsistence particularly in connection with polar ecoregions, with foraging, and with a diet rich in roots and tubers. In a 2016 study, principal component analysis of genome-wide data was capable of recovering previously-known targets for positive selection (without prior definition of populations) as well as a number of new candidate genes.
Forensic anthropologists can assess the ancestry of skeletal remains by analyzing skeletal morphology as well as using genetic and chemical markers, when possible. While these assessments are never certain, the accuracy of skeletal morphology analyses in determining true ancestry has been estimated at about 90%.
Gene flow between two populations reduces the average genetic distance between them. Only totally isolated human populations experience no gene flow; most populations have continuous gene flow with neighboring populations, which creates the clinal distribution observed for most genetic variation. When gene flow takes place between well-differentiated genetic populations, the result is referred to as "genetic admixture".
Admixture mapping is a technique used to study how genetic variants cause differences in disease rates between populations. Recently admixed populations that trace their ancestry to multiple continents are well suited for identifying genes for traits and diseases that differ in prevalence between parental populations. African-American populations have been the focus of numerous population genetic and admixture mapping studies, including studies of complex genetic traits such as white cell count, body-mass index, prostate cancer and renal disease.
An analysis of phenotypic and genetic variation, including skin color and socioeconomic status, was carried out in the population of Cape Verde, which has a well-documented history of contact between Europeans and Africans. The studies showed that the pattern of admixture in this population has been sex-biased and that there is a significant interaction between socioeconomic status and skin color, independent of ancestry. Another study shows an increased risk of graft-versus-host disease complications after transplantation due to genetic variants in human leukocyte antigen (HLA) and non-HLA proteins.
Differences in allele frequencies contribute to group differences in the incidence of some monogenic diseases, and they may contribute to differences in the incidence of some common diseases. For the monogenic diseases, the frequency of causative alleles usually correlates best with ancestry, whether familial (for example, Ellis-van Creveld syndrome among the Pennsylvania Amish), ethnic (Tay-Sachs disease among Ashkenazi Jewish populations), or geographical (hemoglobinopathies among people with ancestors who lived in malarial regions). To the extent that ancestry corresponds with racial or ethnic groups or subgroups, the incidence of monogenic diseases can differ between groups categorized by race or ethnicity, and health-care professionals typically take these patterns into account in making diagnoses.
Even with common diseases involving numerous genetic variants and environmental factors, investigators point to evidence suggesting the involvement of differentially distributed alleles with small to moderate effects. Frequently cited examples include hypertension (Douglas et al. 1996), diabetes (Gower et al. 2003), obesity (Fernandez et al. 2003), and prostate cancer (Platz et al. 2000). However, in none of these cases has allelic variation in a susceptibility gene been shown to account for a significant fraction of the difference in disease prevalence among groups, and the role of genetic factors in generating these differences remains uncertain (Mountain and Risch 2004).
Some other variations, on the other hand, are beneficial to humans, as they prevent certain diseases and increase the ability to adapt to the environment. For example, a mutation in the CCR5 gene protects against AIDS. Because of the mutation, the CCR5 receptor is absent from the cell surface, so there is nothing for HIV to grab onto and bind to. The mutation in CCR5 therefore decreases an individual's risk of AIDS. The mutation is also quite common in certain areas, with more than 14% of the population carrying it in Europe and about 6-10% in Asia and North Africa.
Apart from mutations, many genes that may have aided humans in ancient times plague humans today. For example, it is suspected that genes that allow humans to more efficiently process food are those that make people susceptible to obesity and diabetes today.
Neil Risch of Stanford University has proposed that self-identified race/ethnic group could be a valid means of categorization in the US for public health and policy considerations. A 2002 paper by Noah Rosenberg's group makes a similar claim: "The structure of human populations is relevant in various epidemiological contexts. As a result of variation in frequencies of both genetic and nongenetic risk factors, rates of disease and of such phenotypes as adverse drug response vary across populations. Further, information about a patient's population of origin might provide health care practitioners with information about risk when direct causes of disease are unknown." However, in 2018 Noah Rosenberg released a study, "Interpreting polygenic scores, polygenic adaptation, and human phenotypic differences," arguing against genetically essentialist ideas of health disparities between populations and stating that environmental variants are a more likely cause.
By these criteria, 1.6% of Perlegen SNPs were found to exhibit the genetic architecture of selection.
In each great region of the world the living mammals are closely related to the extinct species of the same region. It is, therefore, probable that Africa was formerly inhabited by extinct apes closely allied to the gorilla and chimpanzee; and as these two species are now man's nearest allies, it is somewhat more probable that our early progenitors lived on the African continent than elsewhere. But it is useless to speculate on this subject, for an ape nearly as large as a man, namely the Dryopithecus of Lartet, which was closely allied to the anthropomorphous Hylobates, existed in Europe during the Upper Miocene period; and since so remote a period the earth has certainly undergone many great revolutions, and there has been ample time for migration on the largest scale.
We incorporated geographic data into a Bayesian clustering analysis, assuming no admixture (TESS software) (25) and distinguished six clusters within continental Africa (Fig. 5A). The most geographically widespread cluster (orange) extends from far Western Africa (the Mandinka) through central Africa to the Bantu speakers of South Africa (the Venda and Xhosa) and corresponds to the distribution of the Niger-Kordofanian language family, possibly reflecting the spread of Bantu-speaking populations from near the Nigerian/Cameroon highlands across eastern and southern Africa within the past 5000 to 3000 years (26,27). Another inferred cluster includes the Pygmy and SAK populations (green), with a noncontiguous geographic distribution in central and southeastern Africa, consistent with the STRUCTURE (Fig. 3) and phylogenetic analyses (Fig. 1). Another geographically contiguous cluster extends across northern Africa (blue) into Mali (the Dogon), Ethiopia, and northern Kenya. With the exception of the Dogon, these populations speak an Afroasiatic language. Chadic-speaking and Nilo-Saharan-speaking populations from Nigeria, Cameroon, and central Chad, as well as several Nilo-Saharan-speaking populations from southern Sudan, constitute another cluster (red). Nilo-Saharan and Cushitic speakers from the Sudan, Kenya, and Tanzania, as well as some of the Bantu speakers from Kenya, Tanzania, and Rwanda (Hutu/Tutsi), constitute another cluster (purple), reflecting linguistic evidence for gene flow among these populations over the past ~5000 years (28,29). Finally, the Hadza are the sole constituents of a sixth cluster (yellow), consistent with their distinctive genetic structure identified by PCA and STRUCTURE. | https://popflock.com/learn?s=Human_genetic_variation | 21 |
20 | 1 The Circular Flow The simple circular flow model of the economy is designed to have us understand the basic operations of the economy.
1 The Circular Flow The simple circular flow model of the economy is designed to have us understand the basic operations of the economy
2 [Circular-flow diagram: flows 1 through 8 linking Households and Businesses via the markets for factors of production and the markets for goods and services]
3 The simple circular flow In the simple circular flow model we have two players of the economic game: Households and Businesses. Households are: sellers of all inputs, or factors of production, and buyers of all output of goods and services. Businesses are: buyers of all inputs and sellers of all output. On the next slide I jump into the circular flow in a somewhat arbitrary place because the system is operating in all places, but we have to start our discussion somewhere.
4 Starting at the box with households, let's follow flows 1 through 4 in a counterclockwise fashion. Flow 1 – Households sell their land, labor and capital in the market for factors of production. Flow 2 – Businesses buy these factors of production and use them to make goods and services. Flow 3 – Businesses sell the goods and services made. Flow 4 – Households buy the goods and services. So, when we start at the households and go counterclockwise from 1 to 4 we will follow the flows of what are called "real" things – the resources and the goods and services made. These are what are really important in the economy because these are the items used to create our standard of living.
5 Next we look at flows 5 through 8 and these are financial flows and we see a connection between spending, revenues, and income. Flow 5 – The households' payment after selling resources in the factor markets is called income. Flow 6 – When the households buy stuff they pay for it and the term used in the national economy sense to represent this buying is spending or consumption expenditure. The households buy from businesses in the markets for output of goods and services. Flow 7 – When the businesses sell goods and services to households the businesses bring home revenue. (So, if we ignore government for now, expenditure = revenue).
6 Flow 8 – When businesses take in revenue from sales then they use the money to pay for the resources they have purchased in the markets for factors of production. Here we talk about costs of business. So the flows 5 through 8 are the financial flows that correspond to our "real" flows. The simple circular flow model is a simple model of the day-to-day operations of the economy. Much of the rest of the course will be filling out more realistic parts of the story.
7 Flows 1 through 4 are flows of inputs (resources) and output (goods and services). Flows 5 through 8 are flows of money. The flow of money is one way we account for the flow of resources and goods and services. Analogy – A grocery store We look at the revenue of a grocery store to get a feel for the output amount – but we know the output is made up of items like milk, cheese, steak, etc… We look at expenses to get a feel for amount of inputs used – but we know the inputs are hours of labor, watts of electricity used, and so on.
8 The last idea I would have you think about here is that while resources are turned into output 1) The output, or production, has a dollar value, 2) The resources used get paid income, and 3) The dollar value of production = income of resources. In other words, someone must earn an income when production occurs. The two values are equal in dollar amount.
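To make the identity concrete, here is a small illustrative calculation (not part of the original slides); the dollar figures are invented:

```python
# Toy illustration of "dollar value of production = income of resources" in the
# simple circular flow (no government, no saving). All figures are invented.
wages = 60              # payment to labor
rent = 25               # payment to land
interest_profit = 15    # payment to capital and entrepreneurship

income = wages + rent + interest_profit   # flow 5: household income
spending = income                         # flow 6: households spend it all
revenue = spending                        # flow 7: business revenue
value_of_production = revenue             # dollar value of the output sold

print(income, value_of_production)        # 100 100 -- equal by construction
```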
9 Final thought Our economy is large and complex. Each individual business has a pretty decent grip on what resources are being used and can probably make a list of what those resources are on a sheet of paper – you know, labor, cash registers, and on and on. Each individual household knows what goods and services are being bought and can probably make a list of those items on a sheet of paper – you know, cookies, milk, and on and on. In our large complex economy it would be difficult to get these lists from businesses and households. But we have come up with ways to get at the money flows. Often our focus will be on money flows when we really want to talk about the lists. | https://slideplayer.com/slide/4989087/ | 21 |
53 | What are “Measures of Inflation”?
Inflation can be defined as a sustained increase in the overall general level of prices of goods and services in an economy over time. Inflation in an economy is measured as an annual percentage (%) increase (i.e. % increase on a year earlier) in some price index – such as the Consumer Price Index (CPI), GDP deflator etc. – which adequately reflects the overall inflation in an economy.
Key Learning Points
- The consumer price index (CPI) is a weighted average of the price of a ‘fixed’ basket of goods and services included in this index and consumed by households.
- The Producer Price Index (PPI) is a price index that measures the average change in selling prices or wholesale prices received by domestic producers for their output.
- The Wholesale Price Index (WPI) measures the change in the price of a representative basket of wholesale goods in an economy. It covers only goods and not services.
- The GDP Deflator is defined as the ratio of nominal GDP to real GDP, it reflects the changes in prices of all goods and services included in the GDP of a country.
- Core Inflation is calculated by excluding temporary volatile factors, such as food and energy prices, from a relevant price index, such as the CPI.
Measures of Inflation
To compute the CPI, each item (goods and services) is usually weighted in proportion to its importance in the 'fixed' representative basket, and these weights are usually based on surveys of family expenditure in an economy.
The percentage change in the CPI provides a sound measure of the rate of consumer price inflation in an economy i.e. how rapidly the cost of living is rising for the average household or consumer in an economy.
The CPI measure of inflation is used for inflation targeting purposes and for judging the effectiveness of monetary policy in keeping inflation in check. A marked or unexpected rise in this measure of inflation in an economy usually leads to rising interest rates, a drop in private investment, and falling bond and stock prices.
The Producer Price Index (PPI) measures inflation from the point of view of producers in the economy. It captures the price movements of goods and services at the wholesale level and measures the average change in selling prices received by domestic producers for their output.
The Wholesale Price Index (WPI) measures the change in the price of a representative basket of wholesale goods in an economy. It covers only goods and not services, therefore, prices of services are excluded.
The WPI focuses on the price movements of goods that are bought by corporations/businesses, rather than consumers, and reflects the inflation faced by the industrial sector of an economy.
The GDP Deflator, defined as the ratio of nominal GDP to real GDP, is an indicator of the overall inflation in an economy, as it reflects changes in prices of all goods and services included in the GDP of a country (i.e. changes in prices of domestically produced output). It does not reflect import prices unlike the CPI. Basically, the GDP deflator is a broader measure of inflation than CPI.
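As a hedged illustration of the deflator calculation (the GDP figures below are hypothetical, chosen only to show the arithmetic):

```python
# Minimal sketch of the GDP deflator; the nominal and real GDP figures are
# hypothetical and chosen only to show the calculation.
nominal_gdp = 1_050.0   # GDP at current prices
real_gdp = 1_000.0      # GDP at constant (base-year) prices

deflator = nominal_gdp / real_gdp * 100
print(deflator)  # 105.0 -> prices of domestic output are about 5% above the base year
```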
Core Inflation is calculated by excluding temporary volatile factors, such as food and energy prices, from a relevant price index, such as the CPI. This measure of inflation is computed because food and energy prices are subject to extreme volatility due to changes in supply and demand conditions in these commodity markets. It is a sound indicator of the underlying weakness or strength of domestic demand in an economy.
Consumer Price Index and Inflation
Given below is a workout of the CPI and inflation (on the CPI measure) in Country A for Year 2. Weights are assigned to each category (for example food, alcohol and housing among others) of spending, per their relative importance or share in consumption of households. A CPI is calculated for each year using these weights.
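The original table is not reproduced here, so the sketch below uses assumed category weights and Year 2 price indices, chosen only so that the result matches the 3% inflation figure quoted next; it is an illustration, not the source's actual data.

```python
# Illustrative CPI workout; the weights and Year 2 price indices below are
# assumed (the source's own table is not reproduced here) and were chosen so
# that the result matches the 3% inflation figure quoted in the text.
weights     = {"food": 0.30, "alcohol": 0.10, "housing": 0.60}
year1_index = {"food": 100.0, "alcohol": 100.0, "housing": 100.0}  # base year
year2_index = {"food": 104.0, "alcohol": 103.0, "housing": 102.5}

cpi_y1 = sum(weights[c] * year1_index[c] for c in weights)  # 100.0
cpi_y2 = sum(weights[c] * year2_index[c] for c in weights)  # 103.0

inflation = (cpi_y2 - cpi_y1) / cpi_y1 * 100
print(f"Annual CPI inflation: {inflation:.1f}%")  # 3.0%
```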
The annual rate of inflation in the above example is 3%, which is the change in the CPI from Year 1 to Year 2. | https://www.fe.training/free-resources/portfolio-management/measures-of-inflation/ | 21 |
339 | In economics, a recession is a business cycle contraction when there is a general decline in economic activity. Recessions generally occur when there is a widespread drop in spending (an adverse demand shock). This may be triggered by various events, such as a financial crisis, an external trade shock, an adverse supply shock, the bursting of an economic bubble, or a large-scale anthropogenic or natural disaster (e.g. a pandemic). In the United States, it is defined as "a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales". In the United Kingdom, it is defined as negative economic growth for two consecutive quarters.
Governments usually respond to recessions by adopting expansionary macroeconomic policies, such as increasing money supply or increasing government spending and decreasing taxation.
Put simply, a recession is a decline in economic activity: the public stops buying products for a while, which can cause GDP to fall after a period of economic expansion (a time when products sell well and business incomes are large). Expansion tends to push prices up (inflation); in a recession, the rate of inflation slows down, stops, or becomes negative.
In a 1974 New York Times article, Commissioner of the Bureau of Labor Statistics Julius Shiskin suggested several rules of thumb for defining a recession, one of which was two consecutive quarters of negative GDP growth. In time, the other rules of thumb were forgotten. Some economists prefer a definition of a 1.5-2 percentage-point rise in unemployment within 12 months.
In the United States, the Business Cycle Dating Committee of the National Bureau of Economic Research (NBER) is generally seen as the authority for dating US recessions. The NBER, a private economic research organization, defines an economic recession as: "a significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales". Almost universally, academics, economists, policy makers, and businesses refer to the determination by the NBER for the precise dating of a recession's onset and end.
In the United Kingdom, recessions are generally defined as two consecutive quarters of negative economic growth, as measured by the seasonal adjusted quarter-on-quarter figures for real GDP. The same definition is used by member states of the European Union.
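As an illustration of the two-consecutive-quarters rule of thumb (the quarterly real-GDP series below is invented for illustration):

```python
# Sketch of the "two consecutive quarters of negative growth" rule of thumb.
# The quarterly real-GDP series below is invented for illustration.
def technical_recession(real_gdp_by_quarter):
    growth = [later / earlier - 1
              for earlier, later in zip(real_gdp_by_quarter, real_gdp_by_quarter[1:])]
    return any(g1 < 0 and g2 < 0 for g1, g2 in zip(growth, growth[1:]))

print(technical_recession([100.0, 101.0, 100.2, 99.5, 99.0]))  # True
```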
A recession has many attributes that can occur simultaneously and includes declines in component measures of economic activity (GDP) such as consumption, investment, government spending, and net export activity. These summary measures reflect underlying drivers such as employment levels and skills, household savings rates, corporate investment decisions, interest rates, demographics, and government policies.
Economist Richard C. Koo wrote that under ideal conditions, a country's economy should have the household sector as net savers and the corporate sector as net borrowers, with the government budget nearly balanced and net exports near zero. When these relationships become imbalanced, recession can develop within the country or create pressure for recession in another country. Policy responses are often designed to drive the economy back towards this ideal state of balance.
A severe (GDP down by 10%) or prolonged (three or four years) recession is referred to as an economic depression, although some argue that their causes and cures can be different. As an informal shorthand, economists sometimes refer to different recession shapes, such as V-shaped, U-shaped, L-shaped and W-shaped recessions.
Type of recession or shape
The type and shape of recessions are distinctive. In the US, v-shaped, or short-and-sharp contractions followed by rapid and sustained recovery, occurred in 1954 and 1990–91; U-shaped (prolonged slump) in 1974–75, and W-shaped, or double-dip recessions in 1949 and 1980–82. Japan's 1993–94 recession was U-shaped and its 8-out-of-9 quarters of contraction in 1997–99 can be described as L-shaped. Korea, Hong Kong and South-east Asia experienced U-shaped recessions in 1997–98, although Thailand’s eight consecutive quarters of decline should be termed L-shaped.
Recessions have psychological and confidence aspects. For example, if companies expect economic activity to slow, they may reduce employment levels and save money rather than invest. Such expectations can create a self-reinforcing downward cycle, bringing about or worsening a recession. Consumer confidence is one measure used to evaluate economic sentiment. The term animal spirits has been used to describe the psychological factors underlying economic activity. Economist Robert J. Shiller wrote that the term "...refers also to the sense of trust we have in each other, our sense of fairness in economic dealings, and our sense of the extent of corruption and bad faith. When animal spirits are on ebb, consumers do not want to spend and businesses do not want to make capital expenditures or hire people." Behavioral economics has also explained some of the psychological biases that may trigger a recession, including the availability heuristic, money illusion, and non-regressive prediction.
Balance sheet recession
High levels of indebtedness or the bursting of a real estate or financial asset price bubble can cause what is called a "balance sheet recession". This is when large numbers of consumers or corporations pay down debt (i.e., save) rather than spend or invest, which slows the economy. The term balance sheet derives from an accounting identity that holds that assets must always equal the sum of liabilities plus equity. If asset prices fall below the value of the debt incurred to purchase them, then the equity must be negative, meaning the consumer or corporation is insolvent. Economist Paul Krugman wrote in 2014 that "the best working hypothesis seems to be that the financial crisis was only one manifestation of a broader problem of excessive debt—that it was a so-called "balance sheet recession". In Krugman's view, such crises require debt reduction strategies combined with higher government spending to offset declines from the private sector as it pays down its debt.
For example, economist Richard Koo wrote that Japan's "Great Recession" that began in 1990 was a "balance sheet recession". It was triggered by a collapse in land and stock prices, which caused Japanese firms to have negative equity, meaning their assets were worth less than their liabilities. Despite zero interest rates and expansion of the money supply to encourage borrowing, Japanese corporations in aggregate opted to pay down their debts from their own business earnings rather than borrow to invest as firms typically do. Corporate investment, a key demand component of GDP, fell enormously (22% of GDP) between 1990 and its peak decline in 2003. Japanese firms overall became net savers after 1998, as opposed to borrowers. Koo argues that it was massive fiscal stimulus (borrowing and spending by the government) that offset this decline and enabled Japan to maintain its level of GDP. In his view, this avoided a U.S. type Great Depression, in which U.S. GDP fell by 46%. He argued that monetary policy was ineffective because there was limited demand for funds while firms paid down their liabilities. In a balance sheet recession, GDP declines by the amount of debt repayment and un-borrowed individual savings, leaving government stimulus spending as the primary remedy.
Krugman discussed the balance sheet recession concept during 2010, agreeing with Koo's situation assessment and view that sustained deficit spending when faced with a balance sheet recession would be appropriate. However, Krugman argued that monetary policy could also affect savings behavior, as inflation or credible promises of future inflation (generating negative real interest rates) would encourage less savings. In other words, people would tend to spend more rather than save if they believe inflation is on the horizon. In more technical terms, Krugman argues that the private sector savings curve is elastic even during a balance sheet recession (responsive to changes in real interest rates) disagreeing with Koo's view that it is inelastic (non-responsive to changes in real interest rates).
A July 2012 survey of balance sheet recession research reported that consumer demand and employment are affected by household leverage levels. Both durable and non-durable goods consumption declined as households moved from low to high leverage with the decline in property values experienced during the subprime mortgage crisis. Further, reduced consumption due to higher household leverage can account for a significant decline in employment levels. Policies that help reduce mortgage debt or household leverage could therefore have stimulative effects.
A liquidity trap is a Keynesian theory that a situation can develop in which interest rates reach near zero (zero interest-rate policy) yet do not effectively stimulate the economy. In theory, near-zero interest rates should encourage firms and consumers to borrow and spend. However, if too many individuals or corporations focus on saving or paying down debt rather than spending, lower interest rates have less effect on investment and consumption behavior; the lower interest rates are like "pushing on a string". Economist Paul Krugman described the U.S. 2009 recession and Japan's lost decade as liquidity traps. One remedy to a liquidity trap is expanding the money supply via quantitative easing or other techniques in which money is effectively printed to purchase assets, thereby creating inflationary expectations that cause savers to begin spending again. Government stimulus spending and mercantilist policies to stimulate exports and reduce imports are other techniques to stimulate demand. He estimated in March 2010 that developed countries representing 70% of the world's GDP were caught in a liquidity trap.
Paradoxes of thrift and deleveraging
Behavior that may be optimal for an individual (e.g., saving more during adverse economic conditions) can be detrimental if too many individuals pursue the same behavior, as ultimately one person's consumption is another person's income. Too many consumers attempting to save (or pay down debt) simultaneously is called the paradox of thrift and can cause or deepen a recession. Economist Hyman Minsky also described a "paradox of deleveraging" as financial institutions that have too much leverage (debt relative to equity) cannot all de-leverage simultaneously without significant declines in the value of their assets.
During April 2009, Janet Yellen, then President of the Federal Reserve Bank of San Francisco, discussed these paradoxes: "Once this massive credit crunch hit, it didn’t take long before we were in a recession. The recession, in turn, deepened the credit crunch as demand and employment fell, and credit losses of financial institutions surged. Indeed, we have been in the grips of precisely this adverse feedback loop for more than a year. A process of balance sheet deleveraging has spread to nearly every corner of the economy. Consumers are pulling back on purchases, especially on durable goods, to build their savings. Businesses are cancelling planned investments and laying off workers to preserve cash. And, financial institutions are shrinking assets to bolster capital and improve their chances of weathering the current storm. Once again, Minsky understood this dynamic. He spoke of the paradox of deleveraging, in which precautions that may be smart for individuals and firms—and indeed essential to return the economy to a normal state—nevertheless magnify the distress of the economy as a whole."
When the CFNAI Diffusion Index drops below the value of -0.35, there is an increased probability of the beginning of a recession. The signal usually occurs within the first three months of the recession. The CFNAI Diffusion Index signal tends to happen about one month before a related signal from the CFNAI-MA3 (3-month moving average) dropping below the -0.7 level. The CFNAI-MA3 correctly identified the 7 recessions between March 1967 and August 2019, while triggering only 2 false alarms.
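A minimal sketch of how such a signal could be computed from a monthly series of index values is shown below; the values are invented placeholders, with only the -0.7 threshold taken from the description above.

```python
# Minimal sketch: flag a CFNAI-MA3 recession signal when the 3-month
# moving average of the index drops below -0.7 (threshold from the text).
# The index values here are invented placeholders, not real CFNAI data.

from statistics import mean

cfnai = [0.10, -0.20, -0.45, -0.60, -0.85, -0.95, -1.10]  # hypothetical months

THRESHOLD = -0.7

for month in range(2, len(cfnai)):
    ma3 = mean(cfnai[month - 2 : month + 1])  # 3-month moving average
    if ma3 < THRESHOLD:
        print(f"Month {month}: CFNAI-MA3 = {ma3:.2f} -> recession signal")
    else:
        print(f"Month {month}: CFNAI-MA3 = {ma3:.2f}")
```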
- The Federal Reserve Bank of Chicago posts updates of the Brave-Butters-Kelley Indexes (BBKI).
- The Federal Reserve Bank of St. Louis posts the Weekly Economic Index (Lewis-Mertens-Stock) (WEI).
- The Federal Reserve Bank of St. Louis posts the Smoothed U.S. Recession Probabilities (RECPROUSM156N).
- Inverted yield curve: the model developed by economist Jonathan H. Wright uses yields on 10-year and three-month Treasury securities as well as the Fed's overnight funds rate, while another model developed by Federal Reserve Bank of New York economists uses only the 10-year/three-month spread (a spread calculation is sketched after this list).
- The three-month change in the unemployment rate and initial jobless claims; a U.S. unemployment index defined as the difference between the 3-month average of the unemployment rate and the 12-month minimum of the unemployment rate; and unemployment momentum and acceleration modeled with a Hidden Markov model (see the sketch after this list).
- Index of Leading (Economic) Indicators (includes some of the above indicators).
- Lowering of asset prices, such as homes and financial assets, or high personal and corporate debt levels.
- Commodity prices may increase before recessions, which usually hinders consumer spending by making necessities like transportation and housing costlier. This will tend to constrict spending for non-essential goods and services. Once the recession occurs, commodity prices will usually reset to a lower level.
- Increased income inequality.
- Decreasing recreational vehicle shipments.
- Declining trucking volumes.
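Two of the indicators above, the 10-year/three-month Treasury spread and the unemployment-rate gap, can be sketched directly from public data series. The figures below are invented placeholders, and the 0.50-percentage-point trigger for the unemployment gap is a conventional assumption (associated with the Sahm rule) rather than a threshold stated in this article.

```python
# Sketch of two recession indicators named in the list above.
# All numbers below are invented placeholders, not real data.

from statistics import mean

# --- 1. Inverted yield curve: 10-year minus 3-month Treasury spread ---
ten_year_yield = 1.60     # percent, hypothetical
three_month_yield = 1.95  # percent, hypothetical

spread = ten_year_yield - three_month_yield
if spread < 0:
    print(f"Yield curve inverted (spread {spread:+.2f} pp): elevated recession risk")

# --- 2. Unemployment gap: 3-month average minus 12-month minimum ---
# A commonly used trigger for this gap is 0.50 percentage points
# (an assumption here; the article itself does not give a threshold).
unemployment = [3.6, 3.5, 3.6, 3.5, 3.6, 3.7, 3.6, 3.8, 3.9, 4.0, 4.2, 4.4]

gap = mean(unemployment[-3:]) - min(unemployment)
if gap >= 0.50:
    print(f"Unemployment gap {gap:.2f} pp: recession signal")
```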
Analysis by Prakash Loungani of the International Monetary Fund found that only two of the sixty recessions around the world during the 1990s had been predicted by a consensus of economists one year earlier, and that none of the 49 recessions during 2009 had been predicted by a consensus a year in advance.
The S&P 500 and the BBB bond spread are also used to gauge the probability of a recession.
Most mainstream economists believe that recessions are caused by inadequate aggregate demand in the economy, and favor the use of expansionary macroeconomic policy during recessions. Strategies favored for moving an economy out of a recession vary depending on which economic school the policymakers follow. Monetarists would favor the use of expansionary monetary policy, while Keynesian economists may advocate increased government spending to spark economic growth. Supply-side economists may suggest tax cuts to promote business capital investment. When interest rates reach the zero lower bound (zero interest-rate policy), conventional monetary policy can no longer be used and the government must turn to other measures to stimulate recovery. Keynesians argue that fiscal policy, whether tax cuts or increased government spending, works when monetary policy fails; spending is considered more effective because of its larger multiplier, but tax cuts take effect faster.
For example, Paul Krugman wrote in December 2010 that significant, sustained government spending was necessary because indebted households were paying down debts and unable to carry the U.S. economy as they had previously: "The root of our current troubles lies in the debt American families ran up during the Bush-era housing bubble...highly indebted Americans not only can’t spend the way they used to, they’re having to pay down the debts they ran up in the bubble years. This would be fine if someone else were taking up the slack. But what’s actually happening is that some people are spending much less while nobody is spending more — and this translates into a depressed economy and high unemployment. What the government should be doing in this situation is spending more while the private sector is spending less, supporting employment while those debts are paid down. And this government spending needs to be sustained..."
Keynes on Government Response
John Maynard Keynes believed that government institutions could stimulate aggregate demand in a crisis. “Keynes showed that if somehow the level of aggregate demand could be triggered, possibly by the government printing currency notes to employ people to dig holes and fill them up, the wages that would be paid out would resuscitate the economy by generating successive rounds of demand through the multiplier process”
Some recessions have been anticipated by stock market declines. In Stocks for the Long Run, Siegel mentions that since 1948, ten recessions were preceded by a stock market decline, with a lead time of 0 to 13 months (average 5.7 months), while ten stock market declines of greater than 10% in the Dow Jones Industrial Average were not followed by a recession.
Since the business cycle is very hard to predict, Siegel argues that it is not possible to take advantage of economic cycles for timing investments. Even the National Bureau of Economic Research (NBER) takes a few months to determine if a peak or trough has occurred in the US.
During an economic decline, high-yield stocks such as fast-moving consumer goods, pharmaceuticals, and tobacco tend to hold up better. However, when the economy starts to recover and the bottom of the market has passed, growth stocks tend to recover faster. There is significant disagreement about how health care and utilities tend to recover. Diversifying one's portfolio into international stocks may provide some safety; however, economies that are closely correlated with that of the U.S. may also be affected by a recession in the U.S.
There is a view termed the halfway rule according to which investors start discounting an economic recovery about halfway through a recession. In the 16 U.S. recessions since 1919, the average length has been 13 months, although the recent recessions have been shorter. Thus, if the 2008 recession had followed the average, the downturn in the stock market would have bottomed around November 2008. The actual US stock market bottom of the 2008 recession was in March 2009.
Generally, an administration gets credit or blame for the state of the economy during its time. This has caused disagreements about how particular downturns actually started. In an economic cycle, a downturn can be considered a consequence of an expansion reaching an unsustainable state, and it is corrected by a brief decline. Thus it is not easy to isolate the causes of specific phases of the cycle.
The 1981 recession is thought to have been caused by the tight-money policy adopted by Paul Volcker, chairman of the Federal Reserve Board, before Ronald Reagan took office. Reagan supported that policy. Economist Walter Heller, chairman of the Council of Economic Advisers in the 1960s, said that "I call it a Reagan-Volcker-Carter recession." The resulting taming of inflation did, however, set the stage for a robust growth period during Reagan's presidency.
Economists usually teach that to some degree recession is unavoidable, and its causes are not well understood.
Unemployment is particularly high during a recession. Many economists working within the neoclassical paradigm argue that there is a natural rate of unemployment which, when subtracted from the actual rate of unemployment, can be used to calculate the negative GDP gap during a recession. In other words, unemployment never reaches 0 percent, and thus is not a negative indicator of the health of an economy unless above the "natural rate", in which case it corresponds directly to a loss in the gross domestic product, or GDP.
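The gap calculation described above is often operationalized through an Okun's-law relationship; the sketch below assumes a coefficient of about 2, a common textbook value rather than a figure given in this article.

```python
# Sketch of an Okun's-law style GDP gap estimate.
# The Okun coefficient of 2.0 and the rates below are illustrative
# assumptions, not figures taken from this article.

OKUN_COEFFICIENT = 2.0   # percent of GDP lost per point of excess unemployment

def output_gap(actual_unemployment: float, natural_unemployment: float) -> float:
    """Negative value = output below potential (a recessionary gap)."""
    return -OKUN_COEFFICIENT * (actual_unemployment - natural_unemployment)

print(output_gap(actual_unemployment=8.5, natural_unemployment=5.0))  # -> -7.0
```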
The full impact of a recession on employment may not be felt for several quarters. Research in Britain shows that low-skilled, low-educated workers and the young are most vulnerable to unemployment in a downturn. After recessions in Britain in the 1980s and 1990s, it took five years for unemployment to fall back to its original levels. Many companies often expect employment discrimination claims to rise during a recession.
Productivity tends to fall in the early stages of a recession, then rises again as weaker firms close. The variation in profitability between firms rises sharply. The fall in productivity could also be attributed to several macroeconomic factors, such as the loss in productivity observed across the UK due to Brexit, which may create a mini-recession in the region. Global epidemics, such as COVID-19, are another example, since they disrupt global supply chains and prevent the movement of goods, services and people.
Recessions have also provided opportunities for anti-competitive mergers, with a negative impact on the wider economy: the suspension of competition policy in the United States in the 1930s may have extended the Great Depression.
The living standards of people dependent on wages and salaries are more affected by recessions than those of people who rely on fixed incomes or welfare benefits. The loss of a job is known to have a negative impact on the stability of families and on individuals' health and well-being. Fixed income benefits may also receive small cuts, which make it tougher to survive.
According to the International Monetary Fund (IMF), "Global recessions seem to occur over a cycle lasting between eight and 10 years." The IMF takes many factors into account when defining a global recession. Until April 2009, the IMF several times communicated to the press that it regarded global annual real GDP growth of 3.0 percent or less as "equivalent to a global recession". By this measure, six periods since 1970 qualify: 1974–1975, 1980–1983, 1990–1993, 1998, 2001–2002, and 2008–2009. During what the IMF in April 2002 termed the past three global recessions of the last three decades, global per capita output growth was zero or negative, and the IMF argued at that time that, because per capita growth in 2001 was positive, that year by itself did not qualify as a global recession.
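Under that older informal threshold, flagging candidate global recession years from an annual growth series is a simple filter; the growth figures below are invented placeholders rather than IMF data.

```python
# Sketch: flag years whose global real GDP growth falls at or below the
# 3.0 percent threshold the IMF informally used before April 2009.
# Growth figures here are invented placeholders, not IMF data.

global_growth = {1998: 2.5, 1999: 3.6, 2000: 4.8, 2001: 2.5, 2002: 3.0, 2003: 4.3}

recession_years = [year for year, growth in global_growth.items() if growth <= 3.0]
print(recession_years)  # -> [1998, 2001, 2002]
```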
In April 2009, the IMF changed its definition of a global recession to:
- A decline in annual per‑capita real World GDP (purchasing power parity weighted), backed up by a decline or worsening for one or more of the seven other global macroeconomic indicators: Industrial production, trade, capital flows, oil consumption, unemployment rate, per‑capita investment, and per‑capita consumption.
By this new definition, a total of four global recessions have taken place since World War II: 1975, 1982, 1991 and 2009. All of them lasted only one year, although the third would have lasted three years (1991–93) had the IMF used exchange-rate-weighted rather than purchasing-power-parity-weighted per-capita real world GDP as its criterion.
The worst recession Australia has ever suffered began in the early 1930s. As a result of profit problems in agriculture and cutbacks in the late 1920s, 1931–1932 saw Australia's biggest recession in its entire history. Australia fared better than other nations that underwent depressions, but the poor economic state of the countries it depended on for exports and foreign investment affected it as well. The nation also benefited from greater productivity in manufacturing, facilitated by trade protection, which helped it feel the effects less.
Due to a credit squeeze, the economy went into a brief recession in 1961. Australia then faced rising inflation in 1973, caused partly by the oil crisis of that same year, which pushed inflation up by 13%. Economic recession hit by the middle of 1974, with no change in policy enacted by the government to counter the country's economic situation. Consequently, the unemployment level rose and the trade deficit increased significantly.
Another recession, the most recent before the 2020 downturn, came at the beginning of the 1990s. It was the result of a major stock market collapse in October 1987, referred to now as Black Monday. Although the collapse was larger than the one in 1929, the global economy recovered quickly, but North America still suffered a decline in lumbering and in savings and loans, which led to a crisis. The recession was not limited to America; it also affected partner nations such as Australia. The unemployment level increased to 10.8%, employment declined by 3.4% and GDP decreased by as much as 1.7%. Inflation, however, was successfully reduced. Australia faced recession again in 2020 due to the impact of the bushfires and COVID-19 on tourism and other important parts of the economy.
The most recent recession to affect the United Kingdom was the 2020 recession attributed to the COVID‑19 global pandemic, the first recession since the late-2000s recession.
According to economists, since 1854, the U.S. has encountered 32 cycles of expansions and contractions, with an average of 17 months of contraction and 38 months of expansion. However, since 1980 there have been only eight periods of negative economic growth over one fiscal quarter or more, and the following periods have been considered recessions:
- July 1981 – November 1982: 16 months
- July 1990 – March 1991: 8 months
- March 2001 – November 2001: 8 months
- December 2007 – June 2009: 18 months
- 20 February 2020–present: 15 months (Ongoing recession)
For the past three recessions, the NBER decision has approximately conformed with the definition involving two consecutive quarters of decline. While the 2001 recession did not involve two consecutive quarters of decline, it was preceded by two quarters of alternating decline and weak growth.
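A minimal sketch of the two-consecutive-quarters rule applied to a quarterly real GDP growth series is shown below; the growth figures are invented placeholders.

```python
# Sketch: detect a "technical recession" as two consecutive quarters of
# negative real GDP growth. Growth figures are invented placeholders.

quarterly_growth = [0.6, 0.3, -0.2, -0.5, 0.1, 0.4]  # percent, quarter over quarter

def technical_recession_quarters(growth):
    """Yield the index of every quarter that completes two straight declines."""
    for i in range(1, len(growth)):
        if growth[i] < 0 and growth[i - 1] < 0:
            yield i

print(list(technical_recession_quarters(quarterly_growth)))  # -> [3]
```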
Official economic data shows that a substantial number of nations were in recession as of early 2009. The US entered a recession at the end of 2007, and 2008 saw many other nations follow suit. The US recession of 2007 ended in June 2009 as the nation entered the current economic recovery. The timeline of the Great Recession details the many elements of this period.
The 2007–2009 recession saw private consumption fall for the first time in nearly 20 years. This indicated the depth and severity of the recession. With consumer confidence so low, economic recovery took a long time. Consumers in the U.S. were hit hard by the Great Recession, with the value of their houses dropping and their pension savings decimated on the stock market.
U.S. employers shed 63,000 jobs in February 2008, the most in five years. Former Federal Reserve chairman Alan Greenspan said on 6 April 2008 that "There is more than a 50 percent chance the United States could go into recession." On 1 October, the Bureau of Labor Statistics reported that an additional 156,000 jobs had been lost in September. On 29 April 2008, Moody's declared that nine US states were in a recession. In November 2008, employers eliminated 533,000 jobs, the largest single-month loss in 34 years. In 2008, an estimated 2.6 million U.S. jobs were eliminated.
The unemployment rate in the U.S. grew to 8.5 percent in March 2009, and there were 5.1 million job losses by March 2009 since the recession began in December 2007. That was about five million more people unemployed compared to just a year prior, which was the largest annual jump in the number of unemployed persons since the 1940s.
Although the US economy grew in the first quarter by 1%, by June 2008 some analysts stated that due to a protracted credit crisis and "...rampant inflation in commodities such as oil, food, and steel", the country was nonetheless in a recession. The third quarter of 2008 brought a GDP retraction of 0.5%, the biggest decline since 2001. The 6.4% decline in spending during Q3 on non-durable goods, like clothing and food, was the largest since 1950.
A 17 November 2008 report from the Federal Reserve Bank of Philadelphia, based on a survey of 51 forecasters, suggested that the recession had started in April 2008 and would last 14 months. The forecasters projected real GDP declining at an annual rate of 2.9% in the fourth quarter and 1.1% in the first quarter of 2009, significant downward revisions from the forecasts made three months earlier.
A 1 December 2008 report from the National Bureau of Economic Research stated that the U.S. had been in a recession since December 2007 (when economic activity peaked), based on a number of measures including job losses, declines in personal income, and declines in real GDP. By July 2009 a growing number of economists believed that the recession may have ended. The National Bureau of Economic Research announced on 20 September 2010 that the 2008/2009 recession ended in June 2009, making it the longest recession since World War II. Prior to the start of the recession, it appears that no known formal theoretical or empirical model was able to accurately predict the advance of this recession, except for minor signals in the sudden rise of forecasted probabilities, which were still well under 50%.
- "Recession". Merriam-Webster Online Dictionary. Retrieved 19 November 2008.
- "Recession definition". Encarta World English Dictionary [North American Edition]. Microsoft Corporation. 2007. Archived from the original on 28 March 2009. Retrieved 19 November 2008.
- "The NBER's Recession Dating Procedure". www.nber.org.
- "Q&A: What is a recession?". BBC News. 8 July 2008.
- "Glossary of Treasury terms". HM Treasury. Archived from the original on 2 November 2012. Retrieved 25 October 2012.
- Shiskin, Julius (1 December 1974). "The Changing Business Cycle". The New York Times. Retrieved 12 March 2020.
- "What is the difference between a recession and a depression?" Saul Eslake Nov 2008
- "Business Cycle Expansions and Contractions". National Bureau of Economic Research. Archived from the original on 5 March 2020. Retrieved 20 March 2020.
- Koo, Richard (2009). The Holy Grail of Macroeconomics-Lessons from Japan's Great Recession. John Wiley & Sons (Asia) Pte. Ltd. ISBN 978-0-470-82494-8.
- Koo, Richard. "The world in balance sheet recession: causes, cure, and politics" (PDF). real-world economics review, issue no. 58, 12 December 2011, pp. 19–37. Retrieved 15 April 2012.
- "Key Indicators 2001: Growth and Change in Asia and the Pacific". ADB.org. Archived from the original on 17 March 2010. Retrieved 31 July 2010.
- Samuelson, Robert J. (14 June 2010). "Our economy's crisis of confidence". The Washington Post. Retrieved 29 January 2011.
- "The Conference Board – Consumer Confidence Survey Press Release – May 2010". Conference-board.org. 25 March 2010. Retrieved 29 January 2011.
- Shiller, Robert J. (27 January 2009). "WSJ – Robert Shiller – Animal Spirits Depend on Trust". The Wall Street Journal. Retrieved 29 January 2011.
- Krugman, Paul. "Does He Pass the Test?". Retrieved 26 November 2018.
- Gregory White (14 April 2010). "Presentation by Richard Koo – The Age of Balance Sheet Recessions". Businessinsider.com. Retrieved 29 January 2011.
- "Richard Koo – The World In Balance Sheet Recession – Real World Economics Review – December 2011" (PDF). Retrieved 26 November 2018.
- "Notes On Koo (Wonkish)". Retrieved 26 November 2018.
- Krugman, Paul (18 November 2010). "Debt, deleveraging, and the liquidity trap". Retrieved 26 November 2018.
- "Grim Natural Experiments". Retrieved 26 November 2018.
- "New Report: A Literature Summary on New Balance-Sheet Recession Research". Next New Deal. Archived from the original on October 16, 2015.
- Krugman, Paul (2009). The Return of Depression Economics and the Crisis of 2008. W.W. Norton Company Limited. ISBN 978-0-393-07101-6.
- "How Much of the World is in a Liquidity Trap?". Krugman.blogs.nytimes.com. 17 March 2010. Retrieved 29 January 2011.
- "A Minsky Meltdown: Lessons for Central Bankers". Federal Reserve Bank of San Francisco.
- Bram, Jason; Rich, Robert; Abel, Joshua. "Consumer Confidence: A Useful Indicator of . . . the Labor Market?". Federal Reserve Bank of New York; discusses the Conference Board's Present Situation Index. This article incorporates text from this source, which is in the public domain.
- "Wall Street starts 2017 with tailwind | By Juergen Buettner | January 4, 2017 | Chart 1: Consumer Confidence Index and Historically Shocks". Archived from the original on April 28, 2020. Retrieved February 14, 2020.
- Consumer Confidence Drops -- Why Does It Matter? Forbes. Jun 27, 2019. Brad McMillan.
- Yahoo News | Gundlach: We don't see a recession on the horizon | February 13, 2019
- "Take Me To Your Leader: Analyzing The Latest Leading Indicators". Seeking Alpha. 24 September 2019.
- Background on the Chicago Fed National Activity Index | Federal Reserve Bank of Chicago | September 19, 2019
- A Estrella, FS Mishkin (1995). "Predicting U.S. Recessions: Financial Variables as Leading Indicators" (PDF). Review of Economics and Statistics. MIT Press. 80: 45–61. doi:10.1162/003465398557320. S2CID 11641969.
- A “Big Data” View of the U.S. Economy: Introducing the Brave-Butters-Kelley Indexes | By Scott A. Brave , Ross Cole , David Kelley
- Weekly Economic Index (Lewis-Mertens-Stock)
- The Federal Reserve Bank of St. Louis | Smoothed U.S. Recession Probabilities
- Investor Jeffrey Ulatan Indicates 2020 Recession Signals
- Grading Bonds on Inverted Curve By Michael Hudson
- Wright, Jonathan H., The Yield Curve and Predicting Recessions (March 2006). FEDs Working Paper No. 2006-7.
- "The Yield Curve as a Leading Indicator". www.newyorkfed.org. FEDERAL RESERVE BANK of NEW YORK. 2020.
- Using the U.S. Treasury Yield Curve to predict S&P 500 returns and U.S. recessions | Theodore Gregory Hanks | Pennsylvania State University, Schreyer Honors College Department of Finance | Spring 2012
- "Labor Model Predicts Lower Recession Odds". The Wall Street Journal. 28 January 2008. Retrieved 29 January 2011.
- Sahm, Claudia (2019-05-06). "Direct Stimulus Payments to Individuals" (PDF). Board of Governors of the Federal Reserve System.
- Lihn, Stephen H. T. (2019-08-10). "Real-time Recession Probability with Hidden Markov Model and Unemployment Momentum". Rochester, NY. doi:10.2139/ssrn.3435667. S2CID 214619854. SSRN 3435667.
- Leading Economic Indicators Suggest U.S. In Recession 21 January 2008
- "Income and wealth inequality make recessions worse, research reveals". phys.org. 2016.
- Neves, Pedro Cunha; Afonso, Óscar; Silva, Sandra Tavares (February 2016). "A Meta-Analytic Reassessment of the Effects of Inequality on Growth". World Development. 78: 386–400. doi:10.1016/j.worlddev.2015.10.038.
- Raice, Shayndi. "An Economic Warning Sign: RV Shipments Are Slipping". WSJ.
- "The 'bloodbath' in America's trucking industry has officially spilled over to the rest of the economy".
- Cass Freight Index Report, August 2019
- "Grim Stock Signals Piling Up as Wall Street Mulls Recession Odds". www.bloomberg.com. Retrieved 26 November 2018.
- JPMorgan | The US Economic Outlook | Feb. 2020 | Page 22
- Graystone Consulting, Morgan Stanley | 2nd Quarter 2020 Investment Outlook | Page 34 | As of March 31, 2020
- Krugman, Paul. "Opinion – Block Those Economic Metaphors". Retrieved 26 November 2018.
- Anatomy of the Financial Crisis: Between Keynes and Schumpeter. Economic and Political Weekly, 44
- Siegel, Jeremy J. (2002). Stocks for the Long Run: The Definitive Guide to Financial Market Returns and Long-Term Investment Strategies, 3rd, New York: McGraw-Hill, 388. ISBN 978-0-07-137048-6
- "From the subprime to the terrigenous: Recession begins at home". Land Values Research Group. 2 June 2009.
A downturn in the property market, especially in turnover (sales) of properties, is a leading indicator of recession, with a lead time of up to 9 quarters...
- Robert J. Shiller (6 June 2009). "Why Home Prices May Keep Falling". The New York Times. Retrieved 10 April 2010.
- Allan Sloan (11 December 2007). "Recession Predictions and Investment Decisions".
- Shawn Tully (6 February 2008). "Recession? Where to put your money now".
- "Which investments are the best during a recession". Currency.com. March 13, 2020.
- Rethinking Recession-Proof Stocks Joshua Lipton 28 January 2008
- Douglas Cohen (18 January 2008). "Recession Stock Picks".
- Gaffen, David (11 November 2008). "Recession Puts Halfway Rule to the Test". The Wall Street Journal. Retrieved 29 January 2011.
- "Economy puts Republicans at risk". BBC. 29 January 2008.
- The Bush Recession Archived 2011-02-04 at the Wayback Machine Prepared by: Democratic staff, Senate Budget Committee, 31 July 2003
- George J. Church (23 November 1981). "Ready for a Real Downer". Time.
- Unemployment Rate p. 1. The Saylor Foundation. Accessed 20 June 2012.
- US in Recession Rising Unemployment Market Oracle. John Mauldin Feb 2009
- Vaitilingam, Romesh (17 September 2009). "Recession Britain: New ESRC report on the impact of recession on people's jobs, businesses and daily lives". Economic and Social Research Council. Archived from the original on 2 January 2010. Retrieved 22 January 2010.
- Rampell, Catherine (11 January 2011). "More Workers Complain of Bias on the Job, a Trend Linked to Widespread Layoffs". The New York Times.
- The Recession that Almost Was. Kenneth Rogoff, International Monetary Fund, Financial Times, 5 April 2002
- "The world economy Bad, or worse". Economist.com. 2008-10-09. Retrieved 2009-04-15.
- Lall, Subir. International Monetary Fund, April 9, 2008. IMF Predicts Slower World Growth Amid Serious Market Crisis
- Global Economic Slump Challenges Policies IMF. January 2009.
- "Global Recession Risk Grows as U.S. 'Damage' Spreads. Jan 2008". Bloomberg.com. 2008-01-28. Archived from the original on March 21, 2010. Retrieved 2009-04-15.
- "World Economic Outlook (WEO) April 2013: Statistical appendix – Table A1 – Summary of World Output" (PDF). IMF. 16 April 2013. Retrieved 16 April 2013.
- Davis, Bob (22 April 2009). "What's a Global Recession?". The Wall Street Journal. Retrieved 17 September 2013.
- "World Economic Outlook – April 2009: Crisis and Recovery" (PDF). Box 1.1 (pp. 11–14). IMF. 24 April 2009. Retrieved 17 September 2013.
- Australian Economic Indicators, Australian Bureau of Statistics, 27 February 1998
- Reasons for 1990s Recession, Melbourne: The Age, 2 December 2006
- Australian Recession, Australian Broadcasting Corporation - Michael Janda, 3 June 2020
- "Percent change from preceding period". U.S. Bureau of Economic Analysis (BEA). Archived from the original on 14 August 2018. Retrieved 26 November 2018.
- Isidore, Chris (1 December 2008). "It's official: Recession since Dec. '07". CNN. Retrieved 29 January 2011.
- "BBC News – Business – US economy out of recession". BBC. 29 October 2009. Retrieved 6 February 2010.
- "Determination of the December 2007 Peak in Economic Activity" (PDF). NBER Business Cycle Dating Committee. 11 December 2008. Retrieved 26 April 2009.
- Izzo, Phil (20 September 2010). "Recession Over in June 2009". The Wall Street Journal.
- Economic Crisis: When will it End? IBISWorld Recession Briefing. Archived 2011-05-14 at the Wayback Machine. Dr. Richard J. Buczynski and Michael Bright, IBISWorld, January 2009
- Andrews, Edmund L. (7 March 2008). "Employment Falls for Second Month". The New York Times. Retrieved 10 March 2020.
- Recession unlikely if US economy gets through next two crucial months Archived August 12, 2011, at the Wayback Machine
- Uchitelle, Louis; Andrews, Edmund L.; Labaton, Stephen (6 December 2008). "U.S. Loses 533,000 Jobs in Biggest Drop Since 1974". The New York Times. Retrieved 10 April 2010.
- Uchitelle, Louis (9 January 2009). "U.S. lost 2.6 million jobs in 2008". The New York Times. Retrieved 10 March 2020.
- Unemployment rate in March 2009 6 April 2009. U.S. Bureau of Labor Statistics. Retrieved 10 March 2020.
- 2 million jobs lost so far in '09 CNN/Money. 3 April 2009. Retrieved 10 March 2020.
- "Employment Situation Summary". Bls.gov. 2 July 2010. Retrieved 31 July 2010.
- Goldman, David (9 January 2009). "Worst year for jobs since '45". CNN. Retrieved 10 April 2010.
- Brent Meyer (16 October 2008). "Real GDP First-Quarter 2008 Preliminary Estimate :: Brent Meyer :: Economic Trends :: 06.03.08 :: Federal Reserve Bank of Cleveland". Clevelandfed.org. Archived from the original on 5 October 2008. Retrieved 29 January 2011.
- "Fragile economy improves but not out of woods yet". finance.yahoo.com. Archived from the original on July 7, 2008.
- Why it's worse than you think, 16 June 2008, Newsweek.
- "Gross Domestic Product: Third quarter 2008". Bea.gov. Retrieved 29 January 2011.
- Chandra, Shobhana (30 October 2008). "U.S. Economy Contracts Most Since the 2001 Recession". Bloomberg. Retrieved 29 January 2011.
- "Fourth quarter 2008 Survey of Professional Forecasters". Philadelphiafed.org. 17 November 2008. Retrieved 29 January 2011.
- "Text of the NBER's statement on the recession". USA Today. 1 December 2008. Retrieved 29 January 2011.
- Daniel Gross, The Recession Is... Over?, Newsweek, 14 July 2009.
- V.I. Keilis-Borok et al., Pattern of Macroeconomic Indicators Preceding the End of an American Economic Recession. Journal of Pattern Recognition Research, JPRR Vol.3 (1) 2008.
- "Business Cycle Dating Committee, National Bureau of Economic Research". www.nber.org. Retrieved 26 November 2018.
Supply refers to the quantity of a product that producers, sellers or firms are both willing and able to offer in the market at a particular price over a period of time (Mabry & Ulbrich, 1989).
The Law of Supply
The Law of Supply states that the quantity supplied of a good or commodity has a positive relationship with its price: as the price of a commodity rises, producers will increase their supply of goods to the market, ceteris paribus (Blinder & Baulmol, 2000). Ceteris paribus is a Latin term meaning that everything else is unchanged, equal or constant (Tancred Lidderdale, 2003).
A higher market price is necessary to entice a seller to sell more of a product, since the marginal opportunity cost of supplying the good increases as more of the good is produced.
Illustration of supply
Supply can be illustrated using a supply schedule or a supply curve (Tancred Lidderdale, 2003). A supply schedule is a tabular representation, while a supply curve is a graphical representation of supply. They show how the quantity supplied of a product changes as the price of the product changes (Blinder & Baulmol, 2000).
Table 1: A supply schedule, with columns for the price of the good and the quantity supplied, showing the positive relationship between price and quantity supplied of a good.
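A supply schedule can also be represented directly as a mapping from price to quantity supplied; the sketch below uses invented numbers purely to illustrate the positive relationship.

```python
# Sketch of a supply schedule as a price -> quantity-supplied mapping.
# The numbers are invented for illustration only.

supply_schedule = {
    1.00: 5,    # at a price of 1.00, sellers offer 5 units
    2.00: 10,
    3.00: 18,
    4.00: 25,
}

# The Law of Supply implies quantity supplied rises with price.
prices = sorted(supply_schedule)
quantities = [supply_schedule[p] for p in prices]
assert quantities == sorted(quantities), "quantity supplied should rise with price"

for price, quantity in zip(prices, quantities):
    print(f"Price {price:.2f} -> quantity supplied {quantity}")
```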
Why Does a Supply Curve Slope Upwards?
Supply curves slope upward from left to right because market price and quantity supplied share a positive relationship: when price increases, quantity supplied increases, and when price decreases, quantity supplied decreases (Blinder & Baulmol, 2000).
Determinants of Supply & how they affect the supply curve
Other factors, independent of price, that affect quantities supplied are called “Determinants of Supply” (Mabry & Ulbrich, 1989). A change in any of the determinants of supply will result in a shift of the supply curve. Determinants of supply include:
The number of sellers in the market or size of the industry – Market supply is the sum of the supply schedules of individual producers. When additional firms enter the market for a product the supply of a product increases. This increase in supply of the product causes the supply curve to shift to the right. Conversely, when firms exit the market for a product, supply of that product decreases. This results in a leftward shift of the supply curve (Blinder & Baulmol, 2000).
Prices of resources – that is the price of inputs such as land, labour, capital, and raw materials that is used to produce goods and services (Tancred Lidderdale, 2003). For instance, a reduction in price of flour may cause a corresponding increase of pastry being supplied on the market, since producers would be inclined to invest in the production of pastry. This can be expressed by a shift in the supply curve to the right. On the other hand, an increase in price of flour may cause a decrease in quantity of pastry being produced. This will cause a shift of the supply curve to the left.
Technology – Advancements in techniques of production may lower or raise production costs (Mabry & Ulbrich, 1989). One example of how it can lower production cost is by replacing typewriters with computers. This saves time and money and results in less wastage. Computers offer a print preview option whereby a document can be corrected before printed on hard copy, while typewriter mistakes can only be undone on the paper using a correction tape. The correction tape cost must be borne by the business. Further, several documents can be prepared on a computer at once as opposed to a typewriter which can only produce one document at a time. This will effect a rightward shift of the supply curve. On the other hand, production costs can be increased, for example, a Plantain chip producer upgrading from sealing with matches to a sealing machine. The producer must bear the cost of the machine, electrical costs and perhaps even the cost of a special kind of bag for the machine. This will cause a shift of the supply curve to the left.
Government taxes and subsidies – increase in taxation on a business may result in its unwillingness to produce a product altogether or as much of a product as before, while tax reductions may cause an increase in supply of a product (Miller, 1999). Increase in taxation will cause a leftward shift of the supply curve while reduction of taxation will cause a rightward shift of the supply curve. Government subsidies and financial support, may be an incentive for new firms to enter the market for a product and will result in a shift of the supply curve to the right (Miller, 1999).
Producers' or sellers' expectations for future prices – Businesses' expectations that the market price of a product will rise or fall affect market supply (Miller, 1999). When sellers expect the price of a product to fall in the future, they tend to increase the quantity currently supplied, resulting in a supply curve shift to the right. On the other hand, when firms expect the price of a product to increase in the future, they are inclined to store their inventory of the product, reducing supply in the current period. Their rationale is that they will be able to increase supply when the price rises, maximizing profit. The decrease in current supply causes the supply curve to shift to the left (Mabry & Ulbrich, 1989).
Price of related goods in production – Related goods are those goods that can be produced with the same factors of production (Mabry & Ulbrich, 1989). An example of related goods in production is tennis rolls and bread, which are substitutes in production. When a bakery learns that the production of tennis rolls is more profitable, it will use its ingredients to produce more tennis rolls. Similarly, if the market price for bread increases, firms will decrease their supply of tennis rolls, because bakeries will use their ingredients in the production of bread. This decrease in the supply of tennis rolls causes a leftward shift of the supply curve, while an increase in the supply of tennis rolls causes a rightward shift of the supply curve for the product. The same principle applies to the supply of bread and how it shifts the supply curve.
A change in the price of a complement good in production will make a firm sell more or less of both products (AmosWEB LLC, 2012). This means that an increase in the price of a complement motivates sellers to sell more of this good as they sell more of the complement good, while a decrease in the price of a complement will cause a firm to sell less of a good in conjunction with its complement (AmosWEB LLC, 2012). This is the case with hot dog bread and sausages. When the price of hot dog bread increases, firms may sell more sausages. This will cause a rightward shift in the supply curve. However, as the price of hot dog bread decreases, firms may respond by selling fewer sausages. The supply curve would shift to the left in this instance.
Graph 1 – A supply curve illustrating a change in supply (a shift of the supply curve).
A rightward shift of the supply curve denotes an increase in supply, while a leftward shift denotes a decrease in supply.
Graph 2 – A supply curve illustrating a change in quantity supplied (a movement along the supply curve).
Difference between a change in supply and a change in quantity supplied
A change in supply is a change in the general supply relation in all price and quantity pairs which is consequent of a change in one of the determinants of supply and causes the supply curve to shift. A change in quantity supplied is the change in the specific amount of a good that sellers are willing and able to supply, which is consequent of a change in price and causes a movement along the supply curve (AmosWEB LLC, 2012).
The Aztec Empire was the last of the great Mesoamerican cultures. Between A.D. 1345 and 1521, the Aztecs forged an empire over much of the central Mexican highlands.
At its height, the Aztecs ruled over 80,000 square miles throughout central Mexico, from the Gulf Coast to the Pacific Ocean, and south to what is now Guatemala. Millions of people in 38 provinces paid tribute to the Aztec ruler, Montezuma II, prior to the Spanish Conquest in 1521.
Aztec Empire Overview
The Aztecs didn’t start out as a powerful people. The Nahuatl speaking peoples began as poor hunter-gatherers in northern Mexico, in a place known to them as Aztlan. Sometime around A.D. 1111, they left Aztlan, told by their war god Huitzilopochtli that they would have to find a new home. The god would send them a sign when they reached their new homeland.
Scholars believe the Aztecs wandered for generations, heading ever southward. Because they were poor and regarded as backward, other more settled peoples did not want the Aztecs to settle near them and drove them on. Finally, around A.D. 1325, they saw the god's sign: an eagle perched on a cactus, eating a serpent, on an island in Lake Texcoco, or so the legend has it. The city established by the Aztecs, Tenochtitlan, grew to become the capital of their empire.
Fortunately, the site was a strong, strategic area with good sources of food and clean water. The Aztecs began to build the canals and dikes necessary for their form of agriculture and to control water levels. They built causeways linking the island to the shore. Because of the island location, commerce with other cities around the lakes could easily be carried out via canoes and boats.
Through marriage alliances with ruling families in other city states, the Aztecs began to build their political base. They became fierce warriors and skillful diplomats. Throughout the late 1300s and early 1400s, the Aztecs began to grow in political power. In 1428, the Aztec ruler Itzcoatl formed alliances with the nearby cities of Tlacopan and Texcoco, creating the Triple Alliance that ruled until the coming of the Spanish in 1519.
The last half of the 15th century saw the Aztec Triple Alliance dominating the surrounding areas, reaping a rich bounty in tribute. Eventually, the Aztecs controlled much of central and southern Mexico. Thirty-eight provinces sent tribute regularly in the form of rich textiles, warrior costumes, cacao beans, maize, cotton, honey, salt and slaves for human sacrifice. Gems, gold and jewelry came to Tenochtitlan as tribute for the emperor. Wars for tribute and captives became a way of life as the empire grew in power and strength. While the Aztecs successfully conquered many, some city states resisted. Tlaxcalla, Cholula and Huexotzinco all refused Aztec dominance and were never fully conquered.
The Aztec Empire was powerful, wealthy and rich in culture, architecture and the arts. The Spanish entered the scene in 1519 when Hernan Cortes landed an exploratory vessel on the coast. Cortes was first welcomed by Montezuma II, but Cortes soon took the emperor and his advisors hostage. Though the Aztecs managed to throw the conquistadors out of Tenochtitlan, the Spanish regrouped and made alliances with the Aztec’s greatest enemy, the Tlaxcalans. They returned in 1521 and conquered Tenochtitlan, razing the city to the ground and destroying the Aztec empire in the process.
Governance of the Aztec Empire
The Aztec Empire had a hierarchical government with power and responsibility running from the top down. The empire’s rule was indirect over its provinces. That is, as long as the province or territory paid the tribute it owed the empire in full and on time, the empire left the local leaders alone.
The foundation of the empire’s hierarchical structure was the family. A group of interrelated families then formed a calpulli, a sort of neighborhood or guild. The calpullis organized local schools and shrines and took care of the group as a whole. Each calpulli elected a headman to oversee the calpulli’s responsibilities. Most Aztec cities contained many calpulli.
The headman of each calpulli was a member of the city council. The city councils had a good deal of power; they made sure the city ran smoothly. Each council had an executive council of four members. These four members were nobles and usually members of a military society.
One of the four executive council members would be elected the leader of the city, the tlatoani, who oversaw not only the city but the surrounding countryside as well. These city councils and leaders formed the provincial network of the empire.
At the center of the empire were the main Aztec altepetls, or city states, of Texcoco, Tlacopan and Tenochtitlan. Of the three, Tenochtitlan gradually muscled its way to dominance over the others.
The pinnacle of power centered on the Huey Tlatoani, the Revered Speaker or emperor. The emperor had absolute power and was worshipped as a god. By the emperor's side was his Snake Woman or Cihuacoatl, who functioned as a grand vizier or prime minister. Although Snake Woman was the title of this position, it was always held by a man, usually the emperor's brother or cousin. While the Huey Tlatoani dealt with issues of diplomacy, tribute, war and expansion of the empire, the Snake Woman's responsibility was Tenochtitlan itself.
Directly under the emperor were his advisors, the Council of Four. These advisors were generals from the military societies. If something were to happen to the emperor, one of these four men would be the next Huey Tlatoani. The council advised the emperor in his decisions.
The empire required a host of other government offices, which were filled by a city’s noble families. Each city had a court system with Special Courts, Appellate Courts and a Supreme Court. The city’s merchant class, the pochteca, had their own court to consider matters of trade.
Managing the constant incoming tribute goods from far-flung provinces required another power structure, both central and provincial. Government officials also oversaw the markets, from the central markets of the cities to the smaller markets of town and country.
All of the priesthood and government officials reported to the emperor and his Council of Four. All supported the emperor. Although the Aztec Empire’s grip on its provinces was light, the tribute flowed into the central coffers.
Weapons of the Aztec Empire
As Aztec warriors showed their courage and craftiness in battle and skill at capturing enemy soldiers for sacrifice, they gained in military rank. The Aztec emperors honored the higher ranks with weapons and distinctive garb that reflected their status in the military.
Aztec warriors carried projectile weapons such as bows and arrows to attack the enemy from afar. They also carried weapons for the melee when armies came together. The lowest ranks of warriors carried a club and shield. Higher ranks were awarded finer weapons. Each rank in the army wore special clothing that denoted the honors they had won.
Projectile Weapons of Aztec Warriors
The atlatl was a spear thrower, which produced greater force from a greater distance. Only the highest ranks were allowed these weapons as they were in the front lines of the battle. Each warrior carrying the atlatl also carried many tlacochtli, 5.9 foot long spears tipped with obsidian.
War Bow and Arrows
The tlahhuitolli was a five foot long war bow strung with animal sinew. Warriors carried their arrows, barbed with obsidian, flint or chert and fletched with turkey feathers in a micomitl or quiver. Quivers could hold about 20 arrows.
Aztec warriors and hunters carried slings made of maguey cactus fiber. The warriors collected rocks as they marched. They also made clay balls spiked with obsidian and full of obsidian flakes. Even well armored enemies could be wounded by these.
Blowguns and poisoned darts were more often used in hunting, but Aztec warriors trained in ambush would bring along their tlacalhuazcuahuitl and darts tipped with poisonous tree frog secretions.
Aztec warriors carried different types of clubs. The macuahuitl club was edged with obsidian blades. While the obsidian shattered easily, it was razor sharp. A macuahuitl could easily decapitate a man. A macuauitzoctli was a long club made of hardwood with a knob on each side. A huitzauhqui was a baseball bat type club, although some of these were studded with obsidian or flint. A cuahuitl was a club shaped like a baton, made of oak. A cuauololli was basically a mace, a club topped with a rock or copper sphere.
Tepoztopilli were spears with obsidian points.
Itztopilli were axes shaped like a tomahawk with a head of either copper or stone. One edge was sharpened, the other blunt.
Tecaptl were daggers with handles seven to nine inches long. They had a double sided blade made of flint. Aztec warriors drew their tecaptl for hand-to-hand combat.
Aztec warriors carried a round wooden shield called a chimalli, which was either plain or decorated with their military insignia. The higher-ranking warriors had special chimalli with a mosaic of feathers denoting their society or rank.
Basic Aztec armor was quilted cotton of two to three thicknesses. The cotton was soaked in salt brine then hung to dry. The salt crystallized in the material, which gave it the ability to resist obsidian blades and spears. An extra layer of armor, a tunic, was worn by noble Aztec warriors. Warrior societies also wore a helmet made of hardwood, carved to represent their society or different animals like birds or coyotes.
Tlahuiztli were special suits awarded to various ranks of the military. Each rank wore different colored and decorated tlahuiztli to make them easily distinguished on the battlefield. Each rank also wore pamitl or military emblems.
Warriors of the Aztec Empire
The Aztec warrior was highly honored in society if he was successful. Success depended on bravery in battle, tactical skill, heroic deeds and most of all, in capturing enemy warriors. Since every boy and man received military training, all were called for battle when war was in the offing. Both commoners and nobles who captured enemy warriors moved up in military rank or became members of military orders. Many nobles joined the army professionally and functioned as the command core of the army.
While the Aztec economy depended on trade, tribute and agriculture, the real business of the empire was war. Through war, the Aztec Empire gained tribute from conquered enemies. People captured during war became slaves or sacrifices in the Aztec’s religious ceremonies. Expanding the empire through further conquests strengthened the empire and brought more riches in tribute. For this reason, the emperor rewarded successful warriors of both classes with honors, the right to wear certain garments in distinctive colors, nobility for the commoners and higher status for nobles and land. Every Aztec warrior could, if he captured enemy warriors, advance far in society.
Aztec Warrior Societies
Rank in the military required bravery and skill on the battlefield and capture of enemy soldiers. With each rank, came special clothing and weapons from the emperor, which conveyed high honor. Warrior clothing, costumes and weaponry was instantly recognizable in Aztec society.
- Tlamani: One captive warrior. Received an undecorated obsidian-edged club and shield, two distinctive capes and a bright red loincloth.
- Cuextecatl: Two captive warriors. This rank enabled the warrior to wear the distinguishing black and red suit called a tlahuiztli, sandals and a conical hat.
- Papalotl: Three captive warriors. A papalotl (butterfly) was awarded a butterfly banner to wear on his back, conferring special honor.
- Cuauhocelotl: Four or more captive warriors. These Aztec warriors reached the high rank of Eagle and Jaguar knights.
Eagle and Jaguar Knights
Eagle and Jaguar warriors were the two main military societies, the highest rank open to commoners. In battle they carried atlatls, bows, spears and daggers. They received special battle costumes, representing eagles and jaguars with feathers and jaguar pelts. They became full-time warriors and commanders in the army. Great physical strength, battlefield bravery and captured enemy soldiers were necessary to obtain this rank.
Commoners who reached the vaunted Eagle or Jaguar rank were awarded the rank of noble along with certain privileges: they were given land, could drink alcohol (pulque), wear expensive jewelry denied to commoners, were asked to dine at the palace and could keep concubines. They also wore their hair tied with a red cord with green and blue feathers. Eagle and jaguar knights traveled with the pochteca, protecting them, and guarded their city. While these two ranks were equal, the Eagle knights worshipped Huitzilopochtli, the war god and the Jaguars worshipped Tezcatlipocha.
Otomies and the Shorn Ones
The two highest military societies were the Otomies and the Shorn Ones. The Otomies took their name from a fierce tribe of fighters. The Shorn Ones was the most prestigious rank. They shaved their heads except for a long braid of hair on the left side and wore yellow tlahuiztli. These two ranks were the shock troops of the empire, the special forces of the Aztec army, and were open only to the nobility. These warriors were greatly feared and went first into battle.
Religion of the Aztec Empire
While many other Aztec art works were destroyed, either by the Spanish or by the degradations of time, Aztec stone carvings remain to give us a glimpse into the worldview of this supreme Mesoamerican culture. These masterpieces were discovered in Mexico City in the buried ruins of the former Aztec capital of Tenochtitlan and its grand pyramid, Templo Mayor.
Statue of Coatlicue
Coatlicue was the Aztec’s earth mother goddess, although a fearsome one. Goddess of the earth, childbirth, fertility and agriculture, she represented the feminine power of both creation and destruction. A massive stone statue of Coatlicue was discovered in Mexico City in 1790. Almost 12 feet tall and 5 feet broad, the statue shows the goddess as much a goddess of death as of birth. With two facing serpents as her head, claws on her hands and feet, a skirt of serpents and a necklace of skulls, hands and hearts, she reveals the Aztec’s terrifying view of their gods.
The myth of Coatlicue tells of the birth of Huitzilopochtli, the Aztec god of war and the sun. A priestess sweeping the sacred temple on Mount Coatepec was impregnated by a ball of feathers. Her son Huitzilopochtli was born full grown when Coatlicue was attacked by her daughter, the moon goddess. The newborn warrior killed his sister and cut her into pieces, symbolizing the victory of the sun over the moon. The statue was so horrifying that each time it was dug up, it was reburied. The statue now resides at the National Museum of Anthropology in Mexico City.
Stone of Tizoc
The Stone of Tizoc is a carved disk showing the victory of the emperor Tizoc over the Matlatzinca tribe. The emperor had it carved to celebrate his victory and reveal the martial power of the Aztecs. The large, circular disk has an eight-pointed sun carved on the top, which was used for sacrificial battles. A warrior captured in battle was tied to the stone, and armed with a feather lined club. Aztec warriors, armed with obsidian lined clubs, fought the tied warrior and naturally defeated him. The side of the eight-foot diameter disk depicts Tizoc’s victory. The Matlatzincas are shown as despised barbarians, while Tizoc and his warriors are represented as noble Toltec warriors. The Stone of Tizoc artfully mixes sun worship, mythology and Aztec power. Today this masterful carved stone is at the National Museum of Anthropology in Mexico City.
The carvings on another massive stone disk, the Sun Stone, also known as the Calendar Stone, show the four consecutive worlds of the Aztecs, each one created by the gods only to end in destruction. This basalt stone, 12 feet in diameter and three feet thick, was discovered near the cathedral in Mexico City in the 18th century. At the center is the sun god Tonatiuh. Around Tonatiuh are the four other suns which met destruction as the gods Quetzalcoatl and Tezcatlipoca fought for control. After the destruction of a sun and the epoch it represents, the gods had to recreate the world and humans until finally the fifth sun held. At either side of the center, jaguar heads and paws hold hearts, representing earth. Fire serpents are at the bottom of the stone, and their bodies snake around the edge. The Sun Stone carving is probably the most recognized artwork of the Aztec world.
The Aztecs created a rich variety of art works from massive stone sculptures to miniature, exquisitely carved gemstone insects. They made stylized hand crafted pottery, fine gold and silver jewelry and breathtaking feather work garments. The Aztecs were as intimately involved with art as they were with their religion and the two were tightly interwoven. Our knowledge of the Aztec culture mostly comes from their pictogram codices and their art.
Aztec craftsmen worked images of their gods into much of their artwork. The great stone carvings described above, the Stone of Tizoc, the massive statue of Coatlicue and the Sun or Calendar Stone, are masterpieces of Aztec art. Much of the gold and silver jewelry was lost to the conquering Spanish, who melted it down for currency. Feather works, unfortunately, do not last for ages, although some samples remain. Textiles, too, are destroyed by time, and pottery is fragile. Enduring stone carvings, however, remain to show us the great artistry of the Aztecs.
While much of the Aztec population worked in agriculture to keep the empire fed and others were involved in the great trading networks, many others devoted themselves to producing the artworks that noble Aztecs loved. Thus, samples of artistic creativity in precious metal jewelry, decorated with jade, obsidian, turquoise, greenstone and coral still exist, mainly in smaller pieces such as earrings or labrets for lips. Pottery from Tenochtitlan and surrounding areas still reveal the fine abstract symbolism of the Aztecs. Feather workers made colorful tilmas for the emperor and nobles, and produced ceremonial costumes for the highest warrior castes, creating intricately decorated shields and headdresses.
Many Aztec families and even villages were devoted to providing artwork for Aztec nobles. Every art had its own calpulli or guild. The nobles in the calpulli provided the raw materials and the artists created the finished works—the magnificent stone carvings, jewelry, elaborate ritual costumes for the great religious ceremonies and feather shirts, cloaks and headdresses. The Aztec emperors received art works as tribute or the artists sold them in the great marketplace at Tlatelolco.
The walls of the great Tenochtitlan Templo Mayor are covered with carvings of Aztec symbolism. Stone carvers created sculptures of the Aztec gods to be used in the monthly religious ceremonies. Very common was the chacmool, a reclining figure which received the extracted hearts and blood of sacrificial victims. Aztecs in the rural regions carved the agricultural gods in both stone and wood, especially Xipe Totec, the god of spring and vegetation. Other carvers worked in miniature, creating tiny shells, insects and plants out of jade, pearl, onyx and obsidian. Artists created mosaic masks used in religious ceremonies with pieces of turquoise, shell and coral. These masks are highly representative of the Aztec devotion to their gods.
Although much Aztec art was destroyed during the Spanish conquest, many fine samples of each distinct art form remain to outline for viewers the great talent and technique of Aztec artists. Check the Aztec Resource Page on Aztec art for links to further information.
Aztec symbols were a component of material culture in which the ancient society expressed understanding of the corporeal and immaterial world. The members of that culture absorb the symbols and their meanings as they grow up. They see the symbols all around them, on the walls of their temples, in jewelry, in weaving and in their language and religion. The Aztecs also used symbols to express perceptions and experiences of reality.
The Aztecs, like the other Mesoamerican cultures surrounding them, loved symbols of their gods, animals and common items around them. Each day in the ritual 260-day calendar, for example, is represented by a number and a symbol. The tonalpohualli, or sacred calendar, consists of two interlocking cycles, one of 13 days, represented by a number called a coefficient, and one of 20 days, represented by a day glyph or symbol. The day symbols include animals such as crocodiles, dogs or jaguars; abstract subjects such as death and motion; and natural things that the Aztecs saw around them every day like houses, reeds, water and rain. See the Ancient Scripts section on Aztecs to see good, colorful examples of the day glyphs.
All Mesoamerican cultures used body paint, especially warriors going into battle. Different ranks of warriors wore specific colors and used those same colors in painting their bodies. The most prestigious warrior society, the Shorn Ones, shaved their heads and painted half their head blue and half yellow. Other warriors striped their faces with black and other colors. Aztecs also decorated their bodies permanently in the form of piercing and tattoos, although there is not as much evidence for Aztec tattooing as for the cultures around them.
The Aztecs centered their lives on their religion. For that reason, many statues and carvings exist of the Aztec gods, as hideous as they may be to modern eyes. Symbols of the sun, the eagle, the feathered serpent and cactus were used in the Aztec writing system, in dates and time and in titles and names. The magnificent Sun or Calendar Stone contains both the 365 day solar calendar and the sacred 260-day tonalpohualli, all of which are represented by the rich symbolism of the Aztec culture.
Most Aztec symbols had layers of meaning. A butterfly symbol, for instance, represented transformation, while frogs symbolized joy. When symbols were combined, as in Aztec pictograms, entire stories could be told through the multiple layers of an Aztec symbol's meaning. The day signs and coefficients corresponded to one of the Aztec gods, which means the 260-day calendar could be used for divination. One order of the Aztec priesthood was made up of diviners. When a child was born, they were called to find a name for the baby based on the day of the birth and the god corresponding to that day. From these symbols, it was believed, these priests could tell the baby's fortune and fate.
Today, because of the growing interest in body art, more people are learning about Aztec symbols and designs.
Codex painting was an honored and necessary profession in the Aztec world. Codex painters were highly trained in the calmecacs, the advanced schools of the noble class. Some calmecacs invited commoner children to train as scribes if they were highly talented, but most scribes were nobles. After the Spanish conquest, codex painters worked with the priests recording the details of Aztec life. These codices are the richest source of information we have about the Aztecs.
The Aztec Empire, as with many empires, required a great deal of paperwork: keeping track of taxes and tribute paid, recording the events of the year both great and small, genealogies of the ruling class, divinations and prophecies, temple business, lawsuits and court proceedings and property lists with maps, ownership, borders, rivers and fields noted. Merchants needed scribes to keep accounts of all their trades and profits. All of this official work required the scribes of the Aztecs—the codex painters.
The Aztecs didn't have a writing system as we know it; instead, they used pictograms, little pictures that convey meaning to the reader. Pictography combines pictograms and ideograms—graphic symbols or pictures that represent an idea, much like cuneiform, hieroglyphic, Japanese or Chinese characters.
To understand pictography, one must either understand the cultural conventions or the graphic symbol must resemble a physical object. For instance, the idea of death in Aztec pictography was conveyed by a drawing of a corpse wrapped in a bundle for burial; night was conveyed by a black sky and a closed eye, and the idea of walking by a footprint trail.
The codices were made of Aztec paper, deer skin or maguey cloth. Strips of these materials, up to 13 yards long and 7 inches high, were cut, and the ends were pasted onto thin pieces of wood as the cover. The strip was folded like a concertina or a map. Writing in the form of pictograms covered both sides of the strip.
Only 15 pre-Columbian Mesoamerican codices survive today—none of them Aztec, but from other cultures of about the same time. However, hundreds of colonial-era codices survive—those that carry the art of the tlacuilo (codex painters) but with Nahuatl and Spanish written commentary or description.
The Aztec number system was vigesimal or based on twenty. Numbers up to twenty were represented by dots. A flag represented twenty, which could be repeated as often as needed. One hundred, for instance, was five flags. Four hundred was depicted by the symbol of a feather or fir tree. The next number was eight thousand, shown as a bag of copal incense. With these simple symbols, the Aztecs counted all their tribute and trade. For example, one tribute page might show 15 dots and a feather, followed by a pictogram of a shield, which meant that the province sent 415 shields to the emperor.
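To make the tally arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the function name and output format are my own, not anything from an Aztec source) that decomposes a count into the symbols described above.

```python
# Decompose a count into Aztec tally symbols: incense bags (8,000),
# feathers (400), flags (20) and dots (1). Illustrative sketch only.
def aztec_symbols(count: int) -> dict:
    symbols = {}
    for name, value in (("incense bag", 8000), ("feather", 400), ("flag", 20), ("dot", 1)):
        symbols[name], count = divmod(count, value)
    return symbols

print(aztec_symbols(100))  # {'incense bag': 0, 'feather': 0, 'flag': 5, 'dot': 0} -> five flags
print(aztec_symbols(415))  # {'incense bag': 0, 'feather': 1, 'flag': 0, 'dot': 15} -> a feather and 15 dots
```

The two sample counts reproduce the examples in the paragraph above: one hundred as five flags, and 415 shields as one feather and 15 dots.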
Religion in the Aztec Empire
To understand the Aztecs, it is necessary to understand, as best we can, their religious beliefs and how those beliefs manifested in their culture. To that end, we will look at their religion in general, the gods, sacred calendar and temples here. Other articles will cover religious ceremonies and rituals and the practice of human sacrifice.
Religion Ruled All of Life
Aztecs were a devoutly religious people, to the extent that no Aztec made a decision about any aspect of his or her life without considering its religious significance. The timing of any event large or small required consulting the religious calendar. No child was named before a special priest, a diviner, could consider what name might best fit the child’s tonali or fate. Religion permeated every aspect of Aztec life, no matter what one’s station, from the highest born emperor to the lowliest slave. The Aztecs worshipped hundreds of deities and honored them all in a variety of rituals and ceremonies, some featuring human sacrifice. In the Aztec creation myths, all the gods had sacrificed themselves repeatedly to bring the world and humans into being. Thus, human sacrifice and blood offerings were necessary to pay the gods their due and to keep the natural world in balance.
The main Aztec gods can be classified in this way:
- Primordial Creators and Celestial Gods
  - Ometecuhtli (Two Lord) and Omecihuatl (Two Lady)—the divine male/female creative force permeating everything on earth
  - Xiuhtecuhtli (Turquoise Lord)
  - Tezcatlipoca (Smoking Mirror—Fate and Destiny)
  - Quetzalcoatl (Feathered Serpent—Creator, Wind and Storm)
- Gods of Agriculture, Fertility and Sacred Elements
  - Tlaloc (Rain)
  - Centeotl (Maize, Corn)
  - Xipe Totec (Our Flayed Lord—vegetation god)
  - Huehueteotl (Old, Old Deity–fire)
  - Chalchiutlicue (She of the Jade Skirt—deity of rivers, lakes, springs and the sea)
  - Mayahuel (Maguey cactus goddess)
- Gods of Sacrifice and War
  - Huitzilopochtli (War and Warrior god)
  - Tonatiuh (Sun god)
  - Tlaltecuhtli (Earth god)
The Sacred Calendar
The Aztecs used two systems for counting time. The Xiuhpohualli was the natural solar 365-day calendar used to count the years; it followed the agricultural seasons. The year was separated into 18 months of 20 days each. The 5 extra days at the end of the year were set aside as a period of mourning and waiting. The second system was the ritual calendar, a 260-day cycle used for divination. Every 52 years the two calendars would align, giving occasion for the great New Fire Ceremony before a new cycle started.
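The 52-year alignment follows from the lengths of the two counts: the calendars return to the same combination of days only after a whole number of both cycles has elapsed. A small illustrative calculation (Python, with variable names of my own choosing):

```python
# The two cycles realign after the least common multiple of their lengths.
from math import gcd

solar, ritual = 365, 260
realign_days = solar * ritual // gcd(solar, ritual)

print(realign_days)            # 18980 days
print(realign_days // solar)   # 52 solar years
print(realign_days // ritual)  # 73 ritual cycles
```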
The Aztecs built temples at the top of sacred mountains as well as in the center of their cities. The temple we know most about is the Templo Mayor in the heart of what was Tenochtitlan, now Mexico City. At the top of this 197-foot-tall pyramid stood two shrines, one to Tlaloc, the god of rain, and one to Huitzilopochtli, the god of war. The Templo Mayor stood in the center of a great plaza, one of 75 or 80 buildings which constituted the religious center of the city. Sacrificial victims walked up the numerous steps to the top of the pyramid. After their hearts were extracted and given to the gods, their bodies were thrown down into the plaza.
Human sacrifice was a part of Aztec religious ceremony; the Aztecs believed it properly appeased their gods and spared the people from suffering. The number of people sacrificed by the Aztecs is a mystery today and will probably remain a mystery, unless more archeological evidence is uncovered. Whether only a few thousand victims were sacrificed each year, or 250,000 as some scholars say, few human remains such as bones have been found at Templo Mayor or other Aztec temples. A couple of dozen skeletons and a few thousand loose bones and skulls do not add up to 250,000 or 20,000 or whatever number is cited.
Evidence of human sacrifice comes both from the Aztecs themselves, in their art and the codices containing their writings, and from the Spanish conquerors. However, it is safe to say that the Spanish could easily have exaggerated the numbers killed to make the Aztecs seem more savage and brutal than they actually were.
In 1487, the great Templo Mayor was dedicated in the main Aztec city of Tenochtitlan with a four-day celebration. How many were sacrificed during that time is a subject of scholarly speculation: some put the figure as low as 10,000 or 20,000, while others put it as high as 80,400 people sacrificed during those four days. Scholars think the Aztec priests used four sacrificial altars for the dedication ceremonies. However, if that's the case and 80,400 people were killed, then the priests would have had to sacrifice roughly 14 people every minute, around the clock, for four days, which is a physical impossibility.
Spanish missionaries sent to convert the Aztecs to Christianity learned the Nahuatl language spoken by the Aztecs. These priests and friars spoke to old Aztecs to learn their history. These Aztecs put the number of sacrificial victims at the time of the temple’s dedication at 4,000, a much lower total than 80,400.
With scant archeological evidence, it is hard to know how many Aztecs died under the sacrificial knife. Many reputable scholars today put the number between 20,000 and 250,000 per year for the whole Aztec Empire. All Aztec cities contained temples dedicated to their gods, and all of them saw human sacrifices. Whatever the total was, we know from both the Aztecs and the Spanish that many human beings lost their lives to human sacrifice. We will probably never know exactly how many.
The first thing to understand about the Mesoamerican cultures and the Aztecs’ use of human sacrifice is that they were not horrified by it. Instead, it was a natural part of life to them, necessary to keep the world balanced and going forward. Blood and sacrifice helped the sun to rise and move across the sky. Without it, their world would end.
That’s not to say that all Aztecs and other Mesoamericans went to the sacrifice willingly. No doubt many did not want to be sacrificed or to die. Others, however, agreed to give of themselves for the greater good. When we picture victims being led to sacrifice, we see them as weeping, moaning and fighting to get free. For the most part, that simply didn’t happen.
To die as a sacrifice was the most honorable death the Aztecs knew. When an Aztec warrior died in battle or an Aztec woman in childbirth, those were also good, honorable deaths. People who died as a sacrifice, as a warrior or in childbirth went to a paradise to be with the gods after death. In contrast, a person who died of disease went to the lowest level of the underworld, Mictlan.
Many scholars have devised theories to explain this "darkness" of the Aztecs, their love of human sacrifice. Some posited that the Aztecs were savages and amoral, less than human. Others have said the Aztec leaders used human sacrifice to terrorize their population and the nearby cultures. Some stated that an essential protein was missing from the Aztec diet and that they needed the "meat" from human sacrifices to feed themselves, using cannibalism to do so. None of these theories, however, has held up.
From their earliest inception, Mesoamerican cultures featured human sacrifice, so it was plainly not "invented" by Aztec rulers to terrorize the people, nor was it a betrayal by the priesthood of Aztec spirituality. Studies of the Aztecs' mainly vegetarian diet, flavored with occasional turkey or dog, revealed all the ingredients necessary to sustain life. The Aztecs had laws against murder and injury, just as we do, so it wasn't that they were depraved savages.
Rather, it was a central part of their religion and spirituality, to give up their blood and lives in devotion and dedication to the gods who had sacrificed themselves to create the world and keep it going. Most religions contain an element of sacrifice—giving up meat in Lent, for example—and giving your life for a friend is a great act of love. The Aztecs accepted this as a necessary part of life. By dying as a sacrifice, they honored the gods. Still, we can’t help but think that many didn’t wish to die, but accepted it as inevitable.
After the Spanish Conquest, many Spanish priests and friars learned enough of the Aztecs' language to talk with Aztec survivors of the battles and diseases. From them, the Spanish learned that many of the sacrificial victims were friends of the Royal House, or high-ranking nobility and priests. Every class of Aztec was occasionally sacrificed, and all ages as well. Children were sacrificed to the god of rain. Often enough, however, it was nobles and captured warriors whose hearts fed the gods. Remember, however, that being sacrificed was the most prestigious way to die. While this shocks us today, we must nevertheless give the Aztecs their due—they found human sacrifice not only acceptable, but necessary and honorable.
Trade in the Aztec Empire
The Aztec economy was based on three things: agricultural goods, tribute, and trade. Aztec trade was crucially important to the empire; there could be no empire without it as many goods used by the Aztecs were not produced locally. Prized white cotton could not grow at the altitude of the Valley of Mexico and had to be imported from conquered semi-tropical regions further south, as were cacao beans, from which chocolate is made.
Two types of trading were important to the Aztecs: the local, regional markets where the goods that sustained daily life were traded, and the long-distance luxury trades. Each was vital to the empire, but they served different purposes in the larger scheme of Aztec trade.
Aztec Trade and Regional Markets
Every Aztec city and village had its own market located near the city center. Tlatelolco, sister city to Tenochtitlan, had the grandest market, drawing 60,000 people to it daily. As with most regional markets, all kinds of utilitarian goods were sold, such as cloth, garden produce, food animals, obsidian knives and tools, medicines, wood, leather, furs and animal skins, precious metals, gems and pottery. If an Aztec housewife needed some tomatoes, bone needles and a headache remedy, she’d go to the market for them. While there, she could buy something to eat and drink if she had a cacao bean or two to trade. Many Aztec people went to the market not only to shop, but to socialize, another important aspect of the teeming regional markets. There Aztecs from every walk of life could meet and swap news and gossip.
The regional markets were overseen by government trade officials who made sure the goods and the prices asked for them were fair. Four levels of regional markets existed: the grand, daily Tlatelolco market, the markets at Xochimilco and Texcoco, the every-five-day markets at many other Aztec cities and the small village markets. Officials collected tribute and taxes for the emperor from each of these interlocking markets. Some of the regional markets also contained specialized goods, such as fine ceramics, turkeys for food or feathers from tropical birds.
Pochteca, Long-Distance Traders
Pochteca were professional merchants, traveling long distances to obtain the luxury goods desired by the nobility: feathers from tropical birds, rare gems or jewelry and pottery created by other Mesoamerican cultures. The pochteca obtained anything rare and special, as well as the white cotton and cacao beans, earning them a special place in Aztec society. They had their own calpulli, laws and section of the city, even their own god, who watched over traders.
Besides being simple traders, the pochteca often had dual or even triple roles in the empire. They often communicated crucial information from one area of the empire to another, and some served as spies for the emperor, going disguised as something other than traders. This last group, the naualoztomeca, traded in rare, easily carried goods such as gems, rare feathers or secrets. Some pochteca were importers, others dealt in wholesale goods and still others were retailers.
Aztec Agriculture: Floating Farms Fed the People
Agriculture, along with trade and tribute, formed the basis of the Aztec Empire. As such, growing enough food to feed the urban populations of the Aztec cities was of major importance. Many inhabitants of all of the Aztec cities were involved in planting, cultivating and harvesting the empire’s food.
Three crops formed the staples of the Aztec diet: maize (corn), beans and squash. Each of these three plants assists the others when they are grown together. For example, corn takes nitrogen from the soil, which beans then replace. Bean plants need firm support on which to grow; corn stalks provide that support. Luxuriant squash leaves shade the soil, which keeps moisture in and keeps weeds out. These three plants are called the Three Sisters and, planted together, provide a rich harvest of all three.
Besides maize, beans and squash, the Aztecs farmed a host of other vegetables: tomatoes, avocados, chili peppers, limes, onions, amaranth, peanuts, sweet potatoes and jicamas. While most cacti grew wild, the Aztecs also cultivated those they found most useful, including the remarkable maguey cactus, also known as the Mexican aloe, which provided the Aztecs with paper, thatching for roofs, cloth, rope, needles, food from the roots of the plant, and a popular alcoholic beverage fermented from its sap.
To grow all this food, the Aztecs used two main farming methods: the chinampas and terracing. Chinampas were essentially man-made islands, raised-bed gardens on the surface of Lake Texcoco's shallow waters. The Aztecs centered their empire in the Valley of Mexico, with its central basin leading up into the mountains surrounding the valley. To use the hilly land for farming, the Aztecs terraced the hills by cutting into them. They then built a restraining wall to form a step in the hillside so that the land on the step could be used for crops.
The chinampas farms were man-made plots of land built up from the sedimentation from the bottom of the lake. The Aztecs created large reed mats, which they floated in the shallows, the edges of which were built of woven twigs and branches attached to posts anchored in the lakebed. On the mats, they put soil from the lake bottom, rotting vegetation and dirt from nearby areas. Aztec farmers built up the soil until it was above the surface of the lake. They planted fast-growing willow trees at the corners of the plots to attach the chinampa to the bottom of the lake by the trees’ roots. At the height of the Aztec Empire, thousands of these fertile and productive chinampas surrounded Tenochtitlan and other Aztec cities.
Terraced, irrigated fields added another layer of farmland for the hungry Aztecs. To bring water to these fields, Aztec farmers dug irrigation canals in the soil. The terraces also grew the Aztecs' major crops, providing an extra layer of protection for the vital agricultural production on which the empire depended.
Around the chinampas, the Aztecs could also catch fish, frogs, turtles and waterfowl such as ducks and geese. Lake Texcoco also produced one other favorite Aztec crop—algae from the lake, which we know today as spirulina.
Education in the Aztec Empire
Aztec education was quite sophisticated compared to contemporary empires in the Eastern and Western Hemispheres. The Aztec Empire is one of the few older civilizations that featured mandatory education at home and in schools. Every child was educated, no matter his or her social status, whether noble, commoner or slave. Two different schools taught the young—one for the noble class and one for commoners, although bright, talented commoners might be chosen for advanced learning at the noble school. Children’s Aztec education, however, started at home with their parents. From age four or five, boys learned and worked with their fathers at a trade or craft, farming, hunting and fishing. Girls learned from their mothers all the tasks they would need in running a household.
All children were taught a large collection of sayings called the huehuetlatolli, which incorporated Aztec ideas and teachings. The Aztec culture expected well-behaved people so children were taught to be humble, obedient and hardworking. The huehuetlatolli included many sayings on all aspects of life, from welcoming newborn infants to the family to what to say at the death of a relative. Every few years, the children were called to the temple and tested on how much they had learned of this inherited cultural knowledge.
For the first 14 years of life, boys and girls were taught at home by their parents. After that, the boys attended either the noble school, called a calmecac, or the commoners’ school, the telpochcalli. Girls went to a separate school, where they learned household skills, religious rituals, singing and dancing or craftwork. Some talented girls were chosen to be midwives and received the full training of a healer. Other athletically talented girls might be sent to the house of dancing and singing for special training.
Much of Aztec society was divided into calpullis, a group of interrelated families, somewhat like a neighborhood or clan. Each calpulli had its own schools, both calmecac and telpochcalli. Boys and girls attended the schools run by their calpulli.
Aztec Education: Calmecac
Calmecacs were schools for the sons of nobles, where they learned to be leaders, priests, scholars or teachers, healers or codex painters. They learned literacy, history, religious rituals, calendrics, geometry, songs and the military arts. These advanced studies in astronomy, theology and statesmanship prepared the nobles' sons for work in the government and temples.
Aztec Education: Telpochcalli
Telpochcalli taught boys history and religion, agricultural skills, military fighting techniques and a craft or trade, preparing them for a life as a farmer, metal worker, feather worker, potter or soldier. Athletically talented boys might then be sent on to the army for further military training. The other students would, after graduation, be sent back to their families to begin their working life.
Housing in the Aztec Empire
Aztec homes ranged from one-room huts to large, spacious palaces. As with their clothing and diet, the size and style of Aztec homes depended on the family's social status. Wealthy nobles lived in elaborate, many-roomed houses, usually built around an inner courtyard. Poorer Aztecs and commoners usually lived in one-room homes, built of adobe brick with thatched roofs. Nobles could lavishly decorate their homes, as commoners were not allowed to do. Many Aztecs whitewashed their homes with lime so the houses would reflect light and stay cool.
Many, or perhaps most, Macehualtin or commoners were engaged in agriculture, taking care of Tenochtitlan's chinampas, the garden beds raised on the shallow shores of Lake Texcoco outside the city. They built simple, one-room houses, usually with a few other smaller buildings and a garden in the lot. The family lived, slept, worked, ate and prayed in the big room, which had a small family shrine built into one wall. Most Aztec homes also had a separate building for a steam bath, as the Aztecs were very clean people. The kitchen area might also be in a smaller room built onto the house.
Most simple Aztec homes were built of adobe bricks, which are made using mud, sand, water and straw, then dried in the sun. There were generally no windows and one open door. Wood for door jambs and support beams could be found outside the cities. Furniture was also simple: comfortable reed mats for sleeping, wood or leather chests for storing clothes and low tables were in most homes, as well as clay pots and bowls, stone metates for grinding corn, a griddle, water jugs and buckets.
Most work took place outside the home during the day. Men went off to tend the fields, taking the older boys with them. Women ground corn, cooked, spun yarn, wove cloth and watched the younger children, teaching their daughters what they would need to know when they married. Commoners’ homes were often built outside the city, nearer to the fields and chinampas where the men worked.
Often, an interrelated group of families lived together in a unit called a calpulli. They would build their houses in a square with a common, central courtyard. The calpulli, which included both nobles and commoners, provided mutual aid for its members, functioning as a sort of clan. The nobles owned the arable land, which the commoners worked. The nobles provided the occupations, often craftwork, and the commoners paid tribute to the nobles.
Nobles, or pipiltin as they were known, lived in larger, finer homes, often built of stone, although some were also built of adobe. Noble homes were often built around a central courtyard, where flower and vegetable gardens and a fountain would be found. These homes were often made of carved stone and contained finer furniture than a commoner would have.
Noble homes could have a peaked roof, or the roof could be flat and even terraced with a garden. As nobles were often involved in making laws and government, they tended to live nearer the city centers, around the central plaza and marketplace. At the top of society, the emperor lived in a luxurious palace, complete with botanical gardens and a zoo.
In the 1600s, the British took an interest in India. By 1707 the Mughal Empire was collapsing, which meant the British had a chance to take over, and by 1857 Britain had taken full, direct control of India. Although the British developed a very strong army, they restricted the freedom of Indians; they created national parks but abused natural resources; and they killed almost 60 million people, even as they brought modern medicine. When the British took over India, they took over pretty much the entire government and created laws that restricted the rights of the Indians.
Mughal rule was the government at the time, but it was easily conquered by the British in the 1700s because it was so weak and corrupt (Todhunter, Katherine). The Mughal emperor was captured and the British East India Company functioned as the government. Following its rise to power, the British
The British in the 1700s controlled a massive empire all around the world and they knew how to deal with a rebellion, but they had never had a rebellion where former British residents were the rebels. The colonists had a very extreme reaction to a handful of simple taxes the British put in place that were only supposed to help finance the previous wars in North America, most notably the French and Indian War. The British reacted very reasonably against the colonial tax resistance, and the colonists only worsened the situation as they were overreacting about very small taxes. After the British attempted to pass taxes to help finance the recent wars with France, the colonists began their rampage against any kind of British tax on the goods they bought. The first tax that Britain passed was the Sugar Act of 1764; this tax was on sugar goods, and after a lot of unrest Parliament finally lowered the tax and the colonists were satisfied.
The British Empire profited from slavery in the eighteenth century, but fought to abolish slavery in the nineteenth century. For many people, the British Empire meant loss of lands, discrimination and prejudice. Such a big empire had lots of everlasting impacts, a lot of them positive. The British Empire took science and technology across many parts of the world. They built railways, bridges and canals that helped improve communications in other territories.
By 1707 the Mughal Empire was collapsing, and small states were breaking away from Mughal power. In 1757 the East India Company took over the Mughals' territory through the Battle of Plassey. After this the East India Company was the biggest power in India, and the area it controlled grew over time. This imperialism by the British wasn't all bad for India, though. From India's political and economic standpoint, imperialism helped improve its government, travel, and trade.
Mohandas Karamchand Gandhi, or as more know him, Mahatma Gandhi, fought and died for the independence of India. Even with all the cruelty, people say that British rule helped shape modern India; but did the British really help shape modern India? While many people would agree that the impact the British had was negative, Dr. Lavani says otherwise. Lavani says that the British helped India with their efficient government administration of 500 million people (Political) (Doc 6); they also built tons of mines, canals, sewers, and roads (Economic) (Doc 10); and they protected wildlife and ancient buildings and also built universities and museums (Social) (Doc 11 & 17). Politically, Dr. Lavani's side of the argument is that the British helped build or set in stone the creation of modern India; some positives the British brought politically were things like really well-trained armies and great administration (Doc 13 & 6). But that doesn't mean the British didn't do anything wrong: the British had only 60 Indians in government (Doc 2), and the British used armed forces on
From the time of King Charles II, the British monarchy had accepted the policy of mercantilism, the economic belief that a nation can only gain wealth at the expense of another; it was Britain's motivation for founding colonies. The American colonies were a wealth of resources for their mother country. For about one hundred years, 1650-1750, the British government did not strictly enforce mercantilism in the colonies; however, after the French and Indian War Britain changed its colonial policies. From the declaration of the Proclamation Line, the official end to the French and Indian War, in 1763 to the signing of the Declaration of Independence in 1776, the colonies produced several violent demonstrations showing their support for Enlightenment
This caused the Americans to protest violently, as they said you cannot be taxed for everything without a reason; hence their slogan of "no tax without representation" - representation meaning a reason. The Tea Act's main objective was to reduce the massive amount of tea held by the British East India Company, which had financial difficulties (like the rest of Britain). The act allowed the company the right to ship directly to North America and the right to duty-free export from Britain. The British colonists had never accepted the duty on tea, and thus the Tea Act just reinforced their opposition and hatred of it.
What has colonization meant for the countries involved? And does the old British Empire still have any effect on Britain and the world today? Well, hold your chair tight, because we are going to take a ride into the rise and fall of the British Empire and discuss the positive and negative consequences it has had on the countries involved. In my conclusion I will also give a short sketch of the present-day situation. In the sixteenth century British ships set out to conquer the world.
The British Raj controlled India between 1858 and 1947. The British Raj was also referred to as the period of domination. They decided to remove the caste system, which gave the people equal rights. Along with government, India's technology and education were also affected by imperialism. Britain brought over modern technology and
During the nineteenth and twentieth centuries, many powerful countries were looking to colonize and imperialize countries which were less powerful than their own in order to gain even more power. The picture of the British Indian Army shows how the British used Indians and their resources to their full potential by enlisting them in their own army and having them support the British in both World Wars. On the other hand, the picture of the Filipino girls in class was taken by an American photographer and writer. He discovered that some of these embroidered artworks were also sent back to the United States for Americans to enjoy as well. While the United States helped the lesser developed countries produce goods that could be traded later on, they also greatly benefited through their own motives of
"It 's a ridiculous act. Britain is going to tax us for every piece of paper. We will be forced to pay a tax to obtain a stamp, which will be required on all legal documents and printed materials.” This preposterous act was going to hurt the hard working families here in the colonies. I tried to look at it from the King 's point of view. He probably thought we were a bunch of lazy people living luxuriously without any taxes.
Based on Effects of British Imperialism on India, Indian products were the best in the world when the British ruled there. After the industrial revolution, the British passed a law that Indians could not sell their goods around the world, or even in India; they were to buy only British goods (Effects, 1). That was a big loss and a long-term impact for Indians because they lost their industrial jobs. They were forced to work on British farms to grow cotton, tea, jute and other materials.
During the 1760s, Britain needed to find a way to pay off its debt. This led to a reform that in part launched a plan designed by George Grenville (Schulz, 2013). Grenville's plan was to implement acts that would help to pay off the nation's debt. New acts, such as the Sugar, Quartering, and Stamp Acts, had colonists far and wide upset with Parliament. While each of these acts was disliked by colonists, none was as damaging as the Stamp Act.
A war had just ended between the French and the British. Although they won, Britain was suppressed. The King used the colonies to regain money, supplies, and numbers. Not only were soldiers allowed to take colonists' houses and food, but the colonies were forced to pay tax on all paper goods. That extra tax, called the Stamp Act, started a rebellion in the colonies.
The Modern-Day EU

Throughout the 1990s, the "single market" idea allowed easier trade, more citizen interaction on issues such as the environment and security, and easier travel through the different countries. Even though the countries of Europe had various treaties in place prior to the early 1990s, this time is generally recognized as the period when the modern-day European Union arose, due to the Treaty of Maastricht on European Union, which was signed on February 7, 1992 and put into action on November 1, 1993.

The Treaty of Maastricht identified five goals designed to unify Europe in more ways than just economically. The goals are:

1. To strengthen the democratic governing of participating nations.
2. To improve the efficiency of the nations.
3. To establish an economic and financial unification.
4. To develop the "Community social dimension."
5. To establish a security policy for involved nations.

In order to reach these goals, the Treaty of Maastricht has various policies dealing with issues such as industry, education, and youth. In addition, the Treaty put a single European currency, the euro, in the works to establish fiscal unification in 1999. In 2004 and 2007, the EU expanded, bringing the total number of member states as of 2008 to 27.

In December 2007, all of the member nations signed the Treaty of Lisbon in hopes of making the EU more democratic and efficient to deal with climate change, national security, and sustainable development.
The precursor to the European Union was established after World War II in the late 1940s in an effort to unite the countries of Europe and end the period of wars between neighbouring countries. These nations began to officially unite in 1949 with the Council of Europe. In 1950 the creation of the European Coal and Steel Community expanded the cooperation. The six nations involved in this initial treaty were Belgium, France, Germany, Italy, Luxembourg, and the Netherlands. Today these countries are referred to as the "founding members."
During the 1950s, the Cold War, protests, and divisions between Eastern and Western Europe showed the need for further European unification. In order to do this, the Treaty of Rome was signed on March 25, 1957, thus creating the European Economic Community and allowing people and products to move throughout Europe. Throughout the decades additional countries joined the community.
In order to further unify Europe, the Single European Act was signed in 1986, with the aim of eventually creating a "single market" for trade; it came into force in 1987. Europe was further unified in 1989 with the elimination of the boundary between Eastern and Western Europe – the Berlin Wall.
As they meet today to thrash out next year's budget! Better be quick as time is running out, boys, to book your place on the gravy train!
Eurozone finance ministers met in Luxembourg to discuss reform of the banking sector and a request by Cyprus to revise the terms of its ten billion euro bailout.
Austria’s Finance Minister Maria Fekter criticised Nicosia’s demands, saying “I can’t imagine that there’s a better alternative to what we have painfully agreed on all together. To question a contract we have made, and which has passed all national parliaments – including the Cypriot parliament – is a quite bold announcement!”
This is the case of failed Olympic contractor G4S, who never got off the ground with their security duties during the Olympics! So what better way to reward them than to grant a £150 million gas contract through none other than their favoured company "British Gas". I was also told by my contact that in the very near future we will all have no choice but to have smart meters fitted by none other than, you guessed it, G4S. http://www.utilityweek.co.uk/news/news_story.asp?id=198009&title=G4S+awarded+%26%23163%3B150+million+meter+reading+contract+by+British+Gas
Technical disruptions at China’s largest state-owned lender caused temporary panic among customers at the weekend, with some expressing fears of a hacking or deliberately engineered credit squeeze.
Various banking services at ICBC – including internet, mobile and phone banking as well as automated teller machine services – were “paralysed” on Sunday morning for nearly one hour.
Unable to withdraw cash from ATMs or get through to the customer help hotline, some customers believed the outage was longer, but state broadcaster CCTV reported it was 45 minutes.
Cities said to be affected by the problems included Shanghai, Beijing, Wuhan, Chengdu and Xiamen, the Shanghai Daily newspaper reported.
The bank issued a statement via Sina Weibo on Sunday, reassuring customers that electronic channels had been undergoing "system upgrades" since 10.38am and that certain services would be affected. The bank said it had restored all systems by 11.23am.
The glitch at the bank – one of China’s “big four” state lenders and largest in the world in terms of profit and market value – sparked concerns of a national credit crunch as it came just days after interbank lending rates had hit new record highs.
Others fanned worries that China’s financial system had been compromised by cyberattacks or hacking. The outage came just weeks after fugitive intelligence-leaker Edward Snowden told the South China Morning Post that the US National Security Agency had been hacking mainland Chinese and Hong Kong networks for years.
Online rumours circulating among financial insiders on forums such as Zhihu, China’s version of Quora, suggested that it was a deliberate show of force by the bank in response to Premier Li Keqiang’s bid to encourage private capital via “non-government affiliated banks” and a general overhaul of the financial system.
Courtesy of SCMP – More at: http://www.scmp.com/news/china/article/1267756/mysterious-icbc-banking-glitch-sparks-panic-frustration-among-customers
Understanding third-party contracts is the key to knowing where the taxpayers' money is going! Once you understand that these companies put in tenders and get the contract (witness G4S, and others such as Capita), then they can use their agreed grant in whichever way they like! This then allows them to divide up the money, with the greater portion given to themselves, utilising the balance; the split is usually 60/40 in their favour, but can be as high as 70/30 in some cases! The benefit to any government, as it was to the bankers, is that they carry no blame and can pass it on to the third-party contractor, as they did with the G4S debacle on the Olympics. Thus they look squeaky clean, and if the third party screws up they can get another to take over the contract, making them look like they care, when in fact they do not give a damn. Also, as contract pricing is done by their departments, they can save money and hide where they put it!
I have been studying these contracts for many years and this is the latest way governments, companies and countries pass the buck!
Keeping their promises does not come into the agenda; maximising profit as a private company is what this is all about!
Germany’s top economic policymakers have clashed in court, setting out very divergent views on the legality of measures to tackle the eurozone crisis.
At Germany’s Constitutional Court, the Bundesbank’s chief opposed the European Central Bank’s buying of bonds to ease the pressure on eurozone countries.
But Germany’s finance minister and a German ECB board member strongly defended the policy.
Thanks to BBC News at http://www.bbc.co.uk/news/world-europe-22852929#sa-ns_mchannel=rss&ns_source=PublicRSS20-sa
This is the same in the United Kingdom with the "Bank Guarantee Scheme", but as incomes have grown so have savings, and the fear is that one day we will not be covered. So do not "put all your eggs in one basket", or you may suffer the consequences!
At the height of the first Great Depression, President Roosevelt signed the Banking Act of 1933, which established the Federal Deposit Insurance Corp. This was meant to insure account holders and protect them from losing everything in the event of another crash. While the majority of Americans conformed to the new banking system, a smaller percentage did not and instead rely on a cash-based economy – a group that came to be known in the financial industry as the “underbanked.”
Fast forward to 2013 and America’s underbanked population has swelled to some 68 million people. Research from the Federal Reserve Board shows, surprisingly, that the underbanked have adopted mobile and smartphones at a higher rate than the average American. And not only are they more likely to own a cellphone, but because these low-to-moderate income consumers are less likely to have in-home internet access, they rely more on their phones for…
Local authorities should block access to payday loan websites from council computers in a bid to protect vulnerable residents, according to a Willaston and Rope Ward councillor. Brian Silvester is calling for access to payday loan sites to be blocked. Yesterday, Citizens Advice issued a warning over payday lenders after finding three out of four people struggled to repay the loan. It is urging the OFT to immediately ban these lenders, saying they are causing real harm to borrowers. http://www.localgov.co.uk/index.cfm?method=news.detail&id=109944
The prehistory of Ireland has been pieced together from archaeological evidence, which has grown at an increasing rate over the last decades. It begins with the first evidence of permanent human residence in Ireland around 10,500 BC, though an earlier date of 31,000 BC, for perhaps temporary hunting incursions, has been suggested following a 21st-century reexamination of a butchered reindeer bone discovered in 1905 at Castlepook Cave, County Cork, and finishes with the start of the historical record around 400 AD. Both the beginning and end dates of the period are later than for much of Europe and all of the Near East. The prehistoric period covers the Palaeolithic, Mesolithic, Neolithic, Bronze Age and Iron Age societies of Ireland. For much of Europe, the historical record begins when the Romans invaded; as Ireland was not invaded by the Romans, its historical record starts later, with the coming of Christianity.
The two periods that have left the most spectacular groups of remains are the Neolithic, with its megalithic tombs, and the gold jewellery of the Bronze Age, when Ireland was a major centre of gold mining.
Ireland has many areas of bogland, and a great number of archaeological finds have been recovered from these. The anaerobic conditions sometimes preserve organic materials exceptionally well, as with a number of bog bodies, a Mesolithic wicker fish-trap, and a Bronze Age textile with delicate tassels of horse hair.
Glaciation and the Palaeolithic
During the most recent Quaternary glaciation, ice sheets more than 3,000 m (9,800 ft) thick scoured the landscape of Ireland, pulverising rock and bone, and eradicating any possible evidence of early human settlements during the Glenavian warm period; human remains pre-dating the last glaciation have been uncovered in the extreme south of Britain, which largely escaped the advancing ice sheets.
During the Last Glacial Maximum (ca. 26,000–19,000 years ago), Ireland was an arctic wasteland, or tundra. This period's effects on Ireland are referred to as Midland General Glaciation, or Midlandian glaciation. It was previously believed that during this period ice covered two thirds of Ireland. Subsequent evidence from the past 50 years has shown this to be untrue and recent publications suggest that the ice sheet extended beyond the southern coast of Ireland.
During the period between 17,500 and 12,000 years ago, a warmer period referred to as the Bølling-Allerød allowed for the rehabitation of northern areas of Europe by roaming hunter-gatherers. Genetic evidence suggests this reoccupation began in southwestern Europe, and faunal remains suggest the existence of a refugium in Iberia that extended up into southern France. Species originally attracted to the north during the pre-boreal period would have included reindeer and aurochs. Some sites as far north as Sweden, inhabited earlier than 10,000 years ago, suggest that humans might have used glacial termini as places from which they hunted migratory game.
These factors and ecological changes brought humans to the edge of the northernmost ice-free zones of continental Europe by the onset of the Holocene and this included regions close to Ireland. However, during the early part of the Holocene Ireland itself had a climate that was inhospitable to most European animals and plants. Human occupation was unlikely, although fishing was possible.
Britain and Ireland may have been joined by a land bridge, but because this hypothetical link would have been cut by rising sea levels so early into the warm period, probably by 16,000 BC, few temperate terrestrial flora or fauna would have crossed into Ireland. Snakes and most other reptiles could not repopulate Ireland because any land bridge disappeared before temperatures became warm enough for them. The lowered sea level also joined Britain to continental Europe; this persisted much longer, probably until around 5600 BC.
The earliest known modern humans in Ireland date back to the late Palaeolithic Age. This date was pushed back some 2,500 years by a radiocarbon dating performed in 2016 on a bear bone excavated in 1903 in the "Alice and Gwendoline Cave", County Clare. The bone has cut marks showing it was butchered when fresh and gave a date of around 10,500 BC, showing humans were in Ireland at that time. In contrast, a flint worked by a human found in 1968 at Mell, Drogheda, that is much older, probably well pre-dating 70,000 BC, is normally regarded as having been carried to Ireland on an ice sheet, probably from what is now the bottom of the Irish Sea.
A British site on the eastern coast of the Irish Sea, dated to 11,000 BC, indicated people were in the area eating a marine diet including shellfish. These modern humans may have also colonised Ireland after crossing a southern, now ice-free, land bridge that linked south-east Ireland and Cornwall, if it existed, or more likely, by boat. In the south, the Irish Sea facing South Wales was at the least a good deal narrower than today until 12,000 BC; in the north, the sea-crossing to Kintyre in Scotland, though much too deep to have ever been a land bridge, is even today only twelve miles at its shortest point and would then have been less. These people may have found few resources outside of coastal shellfishing and acorns, and so may not have continually occupied the region. The early coastline of Ireland is now almost entirely under the sea, so evidence of coastal populations is lost, though ways of investigating undersea sites are being explored.
The return of freezing conditions in the Younger Dryas, which lasted from 10,900 BC to 9700 BC, may have depopulated Ireland. During the Younger Dryas, sea levels continued to rise and no ice-free land bridge between Great Britain and Ireland ever returned.
Mesolithic (8000–4000 BC)
The last ice age fully came to an end in Ireland about 8000 BC. Until the single 2016 Palaeolithic dating described above, the earliest evidence of human occupation after the retreat of the ice was dated to the Mesolithic, around 7000 BC. Although sea levels were still lower than they are today, Ireland was very probably already an island by the time the first settlers arrived by boat, very likely from Britain. The earliest inhabitants of the island were seafarers who depended for much of their livelihood upon the sea, and later inland settlements or camps were usually close to water. Although archaeologists believe Mesolithic people heavily relied on riverine and coastal environments, ancient DNA indicates they had probably ceased contact with Mesolithic societies on the island of Britain and further afield.
Evidence for Mesolithic hunter-gatherers has been found throughout the island: a number of the key early Mesolithic excavations are the settlement site at Mount Sandel in County Londonderry (Coleraine); the cremations at Hermitage, County Limerick on the bank of the River Shannon; and the campsite at Lough Boora in County Offaly. As well as these, early Mesolithic lithic scatters have been noted around the island, from the north in County Donegal to the south in County Cork. The population has been tentatively estimated at around 8,000.
The hunter-gatherers of the Mesolithic era lived on a varied diet of seafood, birds, wild boar and hazelnuts. There is no evidence for deer in the Irish Mesolithic and it is likely that the first red deer were introduced in the early stages of the Neolithic. The human population hunted with spears, arrows and harpoons tipped with small stone blades called microliths, while supplementing their diet with gathered nuts, fruit and berries. They lived in seasonal shelters, which they constructed by stretching animal skins or thatch over wooden frames. They had outdoor hearths for cooking their food. During the Mesolithic the population of Ireland was probably never more than a few thousand. Surviving artefacts include small microlith blades and points, and later larger stone tools and weapons, in particular the versatile Bann flake.
Neolithic (4000–2500 BC)
Many areas of Europe entered the Neolithic with a 'package' of cereal cultivars, pastoral animals (domesticated oxen/cattle, sheep, goats), pottery, weaving, housing and burial cultures, which arrive simultaneously, a process that begins in central Europe as LBK (Linear Pottery culture) about 6000 BC. Within several hundred years this culture is observed in northern France. An alternative Neolithic culture, La Hoguette culture, that arrived in France's northwestern region appears to be a derivative of the Ibero Italian-Eastern Adriatic Impressed Cardial Ware culture (Cardium pottery). The La Hoguette culture, like the western Cardial culture, raised sheep and goats more intensely. By 5100 BC there is evidence of dairy practices in southern England, and modern English cattle appear to be derived from "T1 Taurids" that were domesticated in the Aegean region shortly after the onset of the Holocene. These animals were probably derived from the LBK cattle. Around 4300 BC cattle arrived in northern Ireland during the late Mesolithic period. The red deer was introduced from Britain about this time.
From around 4500 BC a Neolithic package that included cereal cultivars, housing culture (similar to those of the same period in Scotland) and stone monuments arrived in Ireland. Sheep, goats, cattle and cereals were imported from southwestern continental Europe, after which the population rose significantly. The earliest clear proof of farmers in Ireland or Great Britain is from Ferriter's Cove on the Dingle Peninsula, where a flint knife, cattle bones and a sheep's tooth were found and dated to c. 4350 BC. At the Céide Fields in County Mayo, an extensive Neolithic field system (arguably the oldest known in the world) has been preserved beneath a blanket of peat. Consisting of small fields separated from one another by dry-stone walls, the Céide Fields were farmed for several centuries between 3500 and 3000 BC. Wheat and barley were the principal crops cultivated. Pottery made its appearance around the same time as agriculture. Ware similar to that found in northern Great Britain has been excavated in Ulster (Lyle's Hill pottery) and in Limerick. Typical of this ware are wide-mouthed, round-bottomed bowls.
This follows a pattern of gradual Neolithic onset similar to that seen elsewhere in western Europe, as in the La Hoguette culture of France and Iberia's Impressed Cardial Ware culture. The advance of cereal cultivation slowed markedly north of France; certain cereal strains such as wheat were difficult to grow in cold climates, although barley and German rye were suitable replacements. It has been speculated that the DQ2.5 component of the AH8.1 haplotype may have been involved in slowing the spread of cereal culture into Ireland, Scotland and Scandinavia, since this haplotype confers susceptibility to a disease induced by Triticeae proteins, as well as to Type I diabetes and other autoimmune diseases that may have arisen as an indirect result of Neolithisation.
Some regions of Ireland showed patterns of pastoralism indicating that some Neolithic peoples continued to move, and that pastoral activities dominated agrarian activities in many regions, or that there was a division of labour between the pastoral and agrarian aspects of the Neolithic. At the height of the Neolithic the population of the island was probably in excess of 100,000, and perhaps as high as 200,000. But there appears to have been an economic collapse around 2500 BC, and the population declined for a while.
The most striking characteristic of the Neolithic in Ireland was the sudden appearance and dramatic proliferation of megalithic monuments. The largest of these tombs were clearly places of religious and ceremonial importance to the Neolithic population, and were probably communal graves used over a long period. In most of the tombs that have been excavated, human remains—usually, but not always, cremated—have been found. Grave goods—pottery, arrowheads, beads, pendants, axes, etc.—have also been uncovered. These megalithic tombs, more than 1,200 of which are now known, can be divided for the most part into four broad groups, all of which would originally have been covered with earth that has in many cases since eroded away, leaving the impressive stone frameworks:
- Court cairns – These are characterised by the presence of an entrance courtyard. They are found almost exclusively in the north of the island and are thought to include the oldest specimens. North Mayo has many examples of this type of megalith – Faulagh, Kilcommon, Erris.
- Passage tombs – These constitute the smallest group in terms of numbers, but they are the most impressive in terms of size and importance. They are also found in much of Europe, and in Ireland are distributed mainly through the north and east, the biggest and most impressive of them being found in the four great Neolithic "cemeteries" of the Boyne (Brú na Bóinne, a World Heritage Site), Loughcrew (both in County Meath), Carrowkeel and Carrowmore (both in County Sligo). The most famous of them is Newgrange, one of the oldest astronomically aligned monuments in the world. It was built around 3200 BC. At the winter solstice the first rays of the rising sun still shine through a light-box above the entrance to the tomb and illuminate the burial chamber at the centre of the monument. Another of the Boyne megaliths, Knowth, has been claimed to contain the world's earliest map of the Moon carved into stone.
- Portal tombs – These tombs include the well known dolmens. They consist of three or more upright stones supporting a large flat horizontal capstone (table). They were originally covered with earth to form a tumulus, but often their covering has now eroded to leave the impressive main stone structure. Most of them are to be found in two main concentrations, one in the southeast of the island and one in the north. The Knockeen and Gaulstown Dolmens in County Waterford are exceptional examples.
- Wedge tombs – The largest and most widespread of the four groups, the wedge tombs are particularly common in the west and southwest. County Clare is exceptionally rich in them. They are the latest of the four types and belong to the end of the Neolithic. They are so called from their wedge-shaped burial chambers.
The theory that these four groups of monuments were associated with four separate waves of invading colonists still has its adherents today, but the growth in population that made them possible need not have been the result of colonisation: it may simply have been the natural consequence of the introduction of agriculture.
The stone axe was the primary and essential tool for farming, carefully made in a variety of styles, and often polished. The products of axe factories next to sources of porcellanite, an especially good stone, were traded across Ireland; the main ones were Tievebulliagh and Rathlin Island, both in County Antrim. There were also imports from Britain, including products of the Langdale axe industry of the English Lake District.
There was a much rarer class of imported prestige axe head made from jadeite from north Italy; these may have been slowly traded across Europe to reach Ireland over a period reaching into centuries, and show no signs of use. Miniature axes, too small to be useful, were made, and a "tiny porcellanite axe" has been found in a passage tomb; another example has a hole for a cord, and may have been worn as jewellery or an amulet. Other stone shapes made were chisels, adzes, maces and spearheads. Only one decorated macehead has been found, in one of the tombs at Knowth, but it is extremely fine. Some finds may also be miniature maceheads.
Pierced beads and pendants are found, and two necklaces of shells (from Phoenix Park in Dublin) are very carefully made, with graded periwinkle shells; these were on the remains of two males. As an example of the exceptional preservation sometimes possible in items found in anaerobic bogs, part of a finely woven bag with circular handles has survived; it used reedy plant material wound round thin strips of wood. Decorated pottery, apparently made for funerary rather than domestic use, appears to imitate basketry patterns.
Copper and Bronze Ages (2500–500 BC)
Metallurgy arrived in Ireland with new people, generally known as the Bell Beaker People from their characteristic pottery, in the shape of an inverted bell. This was quite different from the finely made, round-bottomed pottery of the Neolithic. It is found, for example, at Ross Island, and associated with copper mining there, which had begun by at least 2,400 BC. There is some disagreement about when speakers of a Celtic language first arrived in Ireland. It is thought by some scholars to be associated with the Beaker People of the Bronze Age, but the more mainstream view is, or at least used to be, that "Celts" arrived much later at the beginning of the Iron Age.
The Bronze Age began once copper was alloyed with tin to produce true bronze artefacts, and this took place around 2000 BC, when some "Ballybeg-type" flat axes and associated metalwork were produced. The tin needed to be imported, normally from Cornwall. The period preceding this, in which Lough Ravel and most Ballybeg axes were produced, and which is known as the Copper Age or Chalcolithic, commenced about 2500 BC.
Bronze was used for the manufacture of both weapons and tools. Swords, axes, daggers, hatchets, halberds, awls, drinking utensils and horn-shaped trumpets are just some of the items that have been unearthed at Bronze Age sites. Irish craftsmen became particularly noted for the horn-shaped trumpet, which was made by the cire perdue, or lost wax, process.
Copper used in the manufacture of bronze was mined in Ireland, chiefly in the southwest of the island, while the tin was imported from Cornwall in Britain. The earliest known copper mine in these islands was located at Ross Island, at the Lakes of Killarney in County Kerry; mining and metalworking took place there between 2400 and 1800 BC. Another of Europe's best-preserved copper mines has been discovered at Mount Gabriel in County Cork, which was worked for several centuries in the middle of the second millennium. Mines in Cork and Kerry are believed to have produced as much as 370 tonnes of copper during the Bronze Age.
Ireland was also rich in native gold, and the Bronze Age saw the first extensive working of this precious metal by Irish craftsmen. More Bronze Age gold hoards have been discovered in Ireland than anywhere else in Europe. Irish gold ornaments have been found as far afield as Germany and Scandinavia, and gold-related trade was very possibly a major factor in the Bronze Age Irish economy.
In the early stages of the Bronze Age the gold ornaments included simple but finely decorated gold lunulae, a distinctively Irish type of object later made in Britain and continental Europe, and disks of thin gold sheet. Many of these seem to have been long in use before they were deposited. Later the thin twisted torc made its appearance; this was a collar consisting of a bar or ribbon of metal, twisted into a spiral. Other types of gold jewellery made in Ireland during the Bronze Age, most shared with Britain, include earrings, sun disks, bracelets, clothes fasteners, and in the Late Bronze Age, the distinctively Irish large "gorgets", and bullae amulets. After the Bronze Age goldwork almost ceased to be produced in Ireland; the Irish deposits may well have been essentially exhausted.
Construction of wedge tombs tailed off from about 2,200 BC, and while the previous tradition of large scale monument building was much reduced, existing earlier megalithic monuments continued in use in the form of secondary insertions of funerary and ritual artefacts. Towards the end of the Bronze Age the single-grave cist made its appearance. This consisted of a small rectangular stone chest, covered with a stone slab and buried a short distance below the surface. The body might be cremated, or not. Decorated pots often accompanied the remains, and later cremated remains were placed inside the urn, which was turned upside-down, and might also have grave goods of various sorts. Numerous stone circles were also erected at this time, chiefly in Ulster and Munster.
Crannogs are timber homes built in shallow lakes for security, often with a narrow walkway to the shore. Some use or extend natural islets, and the largest probably housed a number of families, and animals. It is thought that most of the 1,200-odd crannogs in Ireland were begun in the Bronze Age, although many sites seem to have been used, continuously or intermittently, over very long periods, even into medieval times.
The large Dowris Hoard, originally of over 200 items, mostly in bronze, has given its name to the Dowris Phase or period, as a term for the final phase of the Irish Bronze Age, about 900-600 BC. With 48 examples, the hoard contained all but two of the known examples of the distinctive "crotals", bronze rattles in the shape of a bull's testicle, as well as 26 horns or trumpets, weapons, and vessels. The rather earlier Dunaverney flesh-hook (perhaps 1050–900 BC) is suggestive of a culture where elite feasting was important, and reflects influence from continental Europe; very large riveted bronze cauldrons were also made. Large numbers of bronze weapons were produced, and typical sword shapes changed from shorter ones for stabbing and thrusting on foot, to longer ones, perhaps for a mounted warrior to slash with. This is one example of a Dowris Phase design type originating in the Hallstatt culture of continental Europe, probably transmitted via southern Britain; scabbard chapes are another.
During the Bronze Age, the climate of Ireland deteriorated and extensive deforestation took place. The population of Ireland at the end of the Bronze Age was probably in excess of 100,000, and may have been as high as 200,000. It is possible that it was not much greater than it had been at the height of the Neolithic. In Ireland the Bronze Age lasted until c. 500 BC, later than continental Europe and also Britain.
- End of the Dunaverney flesh-hook
- Part of the Dowris Hoard
- Tongue-shaped bronze dagger, Hunt Museum
- Late Bronze Age bracelet from Castlederg, c. 950-800 BC
- Gold Dress Fastener, Clones, County Monaghan, 800-700 BC
Iron Age (500 BC – AD 400)
The Irish Iron Age has long been thought to begin around 500 BC and then continue until the Christian era in Ireland, which brought some written records and therefore the end of prehistoric Ireland. This view has been somewhat upset by the recent carbon-dating of the wood shaft of a very elegant iron spearhead found in the River Inny, which gave a date of between 811 and 673 BC. This may further erode the belief, still held by some, that the arrival of iron-working marked the beginning of the arrival of the Celts (i.e. speakers of the Proto-Celtic language) and thus Indo-European speakers, to the island.
Alternatively, many hold the view that this happened with the bearers of the Bell Beaker culture, probably Indo-European speaking, reaching Ireland during the earlier stage of the Bronze Age. The Celtic languages of Britain and Ireland, also known as Insular Celtic, can be divided into two groups, Goidelic and Brittonic. When primary written records of Celtic first appear in about the fifth century, Gaelic or Goidelic, in the form of Primitive Irish, is found in Ireland, while Brittonic, in the form of Common Brittonic, is found in Britain.
The Iron Age includes the period in which the Romans ruled most of the neighbouring island of Britain. Roman interest in the area led to some of the earliest written evidence about Ireland. The names of its tribes were recorded by the geographer Ptolemy in the 2nd century AD.
The recorded tribes of Ireland included at least three with names identical or similar to British or Gaulish tribes: the Brigantes (also the name of the largest tribe in northern and midland Britain), the Manapii (possibly the same people as the Menapii, a Belgic tribe of northern Gaul) and the Coriondi (a name similar to that of Corinion, later Cirencester and the Corionototae of northern Britain).
Up to about 150 BC there are many finds that show stylistic influence from continental Europe (as in the preceding Dowris Phase), and some direct imports. After that date relationships with British styles predominate, perhaps reflecting some movement of people. The Keshcarrigan Bowl, possibly made in Britain, is an example of this. Another cup found in Fore, County Westmeath does seem to be an import.
Examples from Iron Age Ireland of La Tène style, the term for Iron Age Celtic art, are very few, to a "puzzling" extent, although some of these are of very high quality, such as a number of scabbards from Ulster and the Petrie Crown, apparently dating to the 2nd century AD. This was well after Celtic art elsewhere had been subsumed into Gallo-Roman art and its British equivalent. Despite this it was in Ireland that the style seemed to revive in the early Christian period, to form the Insular art of the Book of Kells and other well-known masterpieces, perhaps under influence from Late Roman and post-Roman Romano-British styles. The 1st century BC Broighter Gold hoard, from Ulster, includes a small model boat, a spectacular torc with relief decoration influenced by classical style, and other gold jewellery probably imported from the Roman world, perhaps as far away as Alexandria.
The headland of Drumanagh, near Dublin and not yet fully excavated, may have represented a centre for trade with Roman Britain. Drumanagh is an example of the coastal promontory fort, using cliff headlands with a narrow neck to reduce the extent of fortification necessary. In Ireland these seem to be mainly a feature of the Iron Age, with some perhaps dating to the Bronze Age, and also continuing to be used into the Early Medieval period. Although today seen as mostly dating from the early historic period, some of the perhaps 60,000 ringforts or raths in Ireland date back to the Late Iron Age. These vary greatly in size and function, with smaller ones a single-family farmstead (with slaves), or merely an enclosure for animals, and larger ones clearly having a wider political and military significance.
There are several ringforts in the complex topping the Hill of Tara, which seems to have its origins in the late Iron Age, although the site also includes a Neolithic passage grave and other earlier tombs. This is one of a number of major sites connected in later literature and mythology with kingship, and probably had a ritual and religious significance, though it is now impossible to be clear as to what this was. Navan Fort (Emain Macha), another major hilltop site, had a very large circular building constructed on it about 100 BC. It was forty metres across, with 275 tree-posts in rings. The largest was the central post, a tree felled about 95 BC. Within the following century the whole building was destroyed, apparently in a ritual fashion.
Other large-scale constructions, requiring a good degree of social organization, include linear earthworks such as the Black Pig's Dyke and Cliadh Dubh, probably representing boundaries, and acting as hindrances to cattle-raids, and "toghers" or wooden trackways across boggy areas, of which the best-known is the Corlea Trackway, a corduroy road dated to 148-147 BC, and about a kilometre long and some three metres wide.
The late Iron Age saw sizeable changes in human activity. Thomas Charles-Edwards coined the phrase "Irish Dark Age" to refer to a period of apparent economic and cultural stagnation in late prehistoric Ireland, lasting from c. 100 BC to c. AD 300. Pollen data extracted from Irish bogs indicate that "the impact of human activity upon the flora around the bogs from which the pollen came was less between c. 200 BC and c. AD 300 than either before or after." The third and fourth centuries saw a rapid recovery.
The reasons for the decline and recovery are uncertain, but it has been suggested that recovery may be linked to the "Golden Age" of Roman Britain in the third and fourth centuries. The archaeological evidence for trade with, or raids on, Roman Britain is strongest in northern Leinster, centred on modern County Dublin, followed by the coast of County Antrim, with lesser concentrations in the Rosses on the north coast of County Donegal and around Carlingford Lough. As Roman Britain collapsed politically, there was even settlement by Irish people, and leaders, in Wales and western Britain. Inhumation burials may also have spread from Roman Britain, and had become common in Ireland by the fourth and fifth centuries.
It was also during this time that some protohistoric records begin to appear. Early Irish literature was not written down until much later, in the Early Medieval period, but many scholars are ready to accept that the saga cycles preserve in some form elements from much earlier, that give some insights into the world of the last elites of prehistoric Ireland.
The large areas of bog in Ireland have produced over a dozen ancient bog bodies, mostly from the Iron Age. Some were found and reburied before archaeological and scientific investigation was possible. Some survive as skeletons only, but the best-preserved have retained their flesh, hair, and clothing. The oldest appears to be the Neolithic Stoneyisland Man, perhaps the victim of a canoeing accident around 3320–3220 BC.
Cashel Man died violently about 2500-2000 BC in the early Bronze Age, and is one of the possible ritual killings; it is now thought these were deposed kings sacrificed after being seen to fail in their rule, perhaps after crop failures. Two Iron Age examples of apparent elite victims of ritual killing are Old Croghan Man and Clonycavan Man, both from approximately 400 to 175 BC. Other bodies appear to have been normal burials.
Almost all prehistoric Irish finds remain in the British Isles. Some are in local museums, but much the most significant collections are in Dublin, Belfast and London. The first "national" collection for Irish antiquities was the British Museum in London, where many finds from before and after it was established in 1753 have ended up. However, from the foundation of the Dublin Royal Irish Academy in 1785 there was a local rival, which became the main destination of objects that were newly-found, or appeared on the market. The Dublin Society also formed a collection, though this was less important for antiquities. The society was founded in 1731, and by 1733 had opened a museum. Both these collections were transferred to the new "Museum of Science and Art", now the National Museum of Ireland, by 1890.
A legal dispute in which the Crown challenged the British Museum's purchase of the Broighter Hoard was won by the Crown in 1903, and marked the acceptance on all sides of the Dublin museum as the Irish national collection. This was a hoard found in what became Northern Ireland after Irish independence. Northern Ireland had seen its many important finds of antiquities passing first to London and then to Dublin, and the Ulster Museum was only recognized as a national museum for antiquities in 1961. That museum had developed out of the collections of the Belfast Natural History Society, later renamed the Belfast Municipal Museum and Art Gallery, and was renamed again in 1961. Despite this, the pace of new finds has meant that it has an important collection.
- "New Discovery Pushes Back Date of Human Existence in Ireland by 2500 years", Irish Archaeology
- Irish Examiner; "Reindeer bone rewrites Irish human history", Irish Archaeology
- 100 objects, "Mesolithic fish trap"
- Wallace and O'Floinn, 92, 3:4
- Herity and Eogan, start of Ch. 2
- Clark, Peter U.; Dyke, Arthur S.; Shakun, Jeremy D.; Carlson, Anders E.; Clark, Jorie; Wohlfarth, Barbara; Mitrovica, Jerry X.; Hostetler, Steven W.; McCabe, A. Marshall (2009). "The Last Glacial Maximum". Science. 325 (5941): 710–714. Bibcode:2009Sci...325..710C. doi:10.1126/science.1172873. PMID 19661421. S2CID 1324559.
- Alcibiades (27 February 2019). "Prehistoric Ireland – Formation of an Island". About History. Retrieved 2 July 2019.
- Farmer, G. Thomas; Cook, John (2013). Climate Change Science: A Modern Synthesis: Volume 1 - The Physical Climate. Springer Science & Business Media. p. 409. ISBN 9789400757578. Retrieved 2 July 2019.
- Stephens, Nicholas; Herries Davies, G. L. (1978). Ireland: The geomorphology of the British Isles. London: Methuen. ISBN 978-0-416-84640-9.
- Greenwood, Sarah L., Clark, Chris D., "Reconstructing the last Irish Ice Sheet 2: a geomorphologically-driven model of ice sheet growth, retreat and dynamics", Quaternary Science Reviews, Volume 28, Issues 27–28, December 2009, Pages 3101-3123, online
- Edwards, R.J., Brooks, A.J. (2008) "The Island of Ireland: Drowning the Myth of an Irish Land-bridge?" In: Davenport, J.J., Sleeman, D.P., Woodman, P.C. (eds.), Mind the Gap: Postglacial Colonisation of Ireland. Special Supplement to The Irish Naturalists’ Journal, pp 19-34, "The Island of Ireland: Drowning the Myth of an Irish Land-bridge?" Accessed 21 December 2018.
- An older view was that a land bridge to the English West Country existed until about 14,000 BC. See: K. Lambeck, P. Johnston, C. Smither, K. Fleming and Y. Yokoyama, Late Pleistocene and Holocene sea-level change, Annual Report of the Research School of Earth Sciences, ANU College of Physical & Mathematical Sciences, Canberra, 1995.
- Owen, James (13 March 2008). "Snakeless in Ireland: Blame the Ice Age, Not St. Patrick". National Geographic. Retrieved 27 April 2016.
- Cunliffe, 2012, p. 56.
- "New Discovery Pushes Back Date of Human Existence in Ireland by 2500 years", Irish Archaeology; BBC (21 March 2016). "Earliest evidence of humans in Ireland". Retrieved 27 April 2016.
- Wallace and O'Floinn, 45, 2:3 - they date it to 400,000-300,000 BC; picture and different account
- Tanabe, Susumu; Nekanishi, Toshimichi; Yasui, Satoshi (14 October 2010). "Relative sea-level change in and around the Younger Dryas inferred from late Quaternary incised valley fills along the Japan sea". Quaternary Science Reviews. 29 (27–28): 3956–3971. Bibcode:2010QSRv...29.3956T. doi:10.1016/j.quascirev.2010.09.018.
- Herity and Eogan, Start of Ch. 2
- Wallace and O'Floinn, 45
- Herity and Eogan, Chapter 2
- Sheridan, Alison (June 2020). "Incest uncovered at the elite prehistoric Newgrange monument in Ireland". Nature. 582 (7812): 347–349. doi:10.1038/d41586-020-01655-4. PMID 32555481. S2CID 219730055.
- Driscoll, K. The Early Prehistory in the West of Ireland: Investigations into the Social Archaeology of the Mesolithic, West of the Shannon, Ireland (2006).
- BBC, A History of the World in 100 Objects
- Woodman, Peter (1985). Excavations at Mount Sandel, 1973-77, County Londonderry. HM Stationery Office.
- Perri, Angela R.; Power, Robert C.; Stuijts, Ingelise; Heinrich, Susann; Talamo, Sahra; Hamilton-Dyer, Sheila; Roberts, Charlotte (1 September 2018). "Detecting hidden diets and disease: Zoonotic parasites and fish consumption in Mesolithic Ireland". Journal of Archaeological Science. 97: 137–146. doi:10.1016/j.jas.2018.07.010. ISSN 0305-4403.
- "Kerry red deer ancestry traced to population introduced to Ireland by ancient peoples over 5,000 years ago". Retrieved 6 November 2012.
- Wallace and O'Floinn, 45-46, 2:1
- "Prehistoric Genocide in Ireland?" (PDF). Ireland's DNA. Retrieved 27 June 2015.
- 100 objects, "Neolithic Bowl"
- Wallace and O'Floinn, 46-47, 2:4
- 100 objects, "Ceremonial Axehead", Wallace and O'Floinn, 46, 2:5
- Wallace and O'Floinn, 46-47, 2:1-9; 100 objects, "Flint Macehead"
- Wallace and O'Floinn, 48, 2:10-14; 100 objects, "Neolithic Bag"
- Michael Herity and George Eogan, Ireland in Prehistory (1996), p.114; M.J. O'Kelly, Bronze-age Ireland, in A New History of Ireland, vol 1: Prehistoric and early Ireland, edited by Dáibhí Ó Cróinín (Royal Irish Academy 2005).
- Wallace and O'Floinn, 49
- J.X.W.P. Corcoram, "The origin of the Celts", in Nora Chadwick, The Celts (1970); David W. Anthony, The Horse, the Wheel and Language: How Bronze-Age riders from the Eurasian steppes shaped the modern world (2007).
- Michael Herity and George Eogan, Ireland in Prehistory (1996), pp.115–6.
- "A History of Ireland in 100 Objects" website
- 100 objects, "Basket Earrings"
- 100 objects, "Pair of Gold Discs"
- 100 objects, "Gleninsheen Gold Gorget"
- Wallace and O'Floinn, 49-50, 2:15-21, 2:28-29 (Early to Middle) and 87-90, 3:6-24 (Late); 100 objects, "Coggalbeg Gold Hoard"
- Wallace and O'Floinn, 93
- Wallace and O'Floinn, 51
- 100 objects, "Bronze Age Funerary Pots"
- Wallace and O'Floinn, 86, 91, 125-126
- Wallace and O'Floinn, 90-91, 125; 100 objects, "Castlederg Bronze Cauldron"
- Wallace and O'Floinn, 53, 90
- Wallace and O'Floinn, 125
- Ó Cróinín, p. lx.
- Harbison, Peter. (1970). Two prehistoric bronze weapons from Ireland in the Hunt collection. [Royal Society of Antiquaries of Ireland]. OCLC 1000908777.
- After Duffy (ed.), Atlas of Irish History, p. 15.
- 100 objects, "Iron Spearhead"
- A New History of Ireland, Volume I: Prehistoric and Early Ireland. Dáibhí Ó Cróinín (Editor).
- Wallace and O'Floinn, 171
- Wallace and O'Floinn, 127, 4:8; 100 objects, "Keshcarrigan Bowl"
- Ó Cróinín, p. lx, "puzzling"
- Wallace and O'Floinn, 126-127, 130-131, 4:5, 4:18; 100 objects, "Petrie 'Crown'"
- Ó Cróinín, 137–152; Wallace and O'Floinn, 128-129, 4:10-14; 100 objects, "Broighter Boat"
- Wallace and O'Floinn, 126
- Charles-Edwards, 151; Wallace and O'Floinn, 126
- Charles-Edwards, 146-147
- Megalithic Ireland.com, Corlea Trackway
- Charles-Edwards, p. 145.
- Charles-Edwards, p. 148.
- Charles-Edwards, 155-162, map 8; 100 objects, "Cunorix Gravestone"
- Charles-Edwards, pp. 175–176.
- Kingship and Sacrifice, NMI; "Laois 'bog body' said to be world's oldest", The Irish Times. 2 August 2013.
- 100 objects, "Armlet, Old Croghan Man"
- Wallace and O'Floinn, 4-9
- Wallace and O'Floinn, 8
- "100 objects", A History of Ireland in 100 Objects, An Post, The Irish Times, National Museum of Ireland and the Royal Irish Academy, 2017
- Thomas Charles-Edwards, Early Christian Ireland, Cambridge, 2000
- Driscoll, K. The Early Prehistory in the West of Ireland: Investigations into the Social Archaeology of the Mesolithic, West of the Shannon, Ireland (2006).
- Herity, M. and G. Eogan. Ireland in Prehistory. (1996) Routledge. ISBN 0-415-04889-3
- Ó Cróinín, Dáibhí, ed. (2008). A New History of Ireland: Prehistoric and early Ireland, (Volume 1 series), Oxford University Press, ISBN 0-19-922665-2.
- Wallace, Patrick F., O'Floinn, Raghnall eds. Treasures of the National Museum of Ireland: Irish Antiquities, 2002, Gill & Macmillan, Dublin, ISBN 0717128296 The book is chapters of text, followed by commentary on the illustrations and then a section with the illustrations, cross-referred by chapter and picture numbers, e.g. "3:13"
- Dardis GF (1986). "Late Pleistocene glacial lakes in South-central Ulster, Northern Ireland". Ir. J. Earth Sci. 7: 133–144.
- Barry, T. (ed.) A History of Settlement in Ireland. (2000) Routledge. ISBN 0-415-18208-5.
- Bradley, R. The Prehistory of Britain and Ireland. (2007) Cambridge University Press. ISBN 0-521-84811-3.
- Coffey, G. Bronze Age in Ireland (1913)
- Driscoll, K. The Early Prehistory in the West of Ireland: Investigations into the Social Archaeology of the Mesolithic, West of the Shannon, Ireland (2006).
- Flanagan L. Ancient Ireland. Life before the Celts. (1998). ISBN 0-312-21881-8
- Thompson, T. Ireland’s Pre-Celtic Archaeological and Anthropological Heritage. (2006) Edwin Mellen Press. ISBN 0-7734-5880-8.
- Waddell, J., The Celticization of the West: an Irish Perspective, in C. Chevillot and A. Coffyn (eds), L' Age du Bronze Atlantique. Actes du 1er Colloque de Beynac, Beynac (1991), 349–366.
- Waddell, J.,The Question of the Celticization of Ireland, Emania No. 9 (1991), 5–16.
- Waddell, J., 'Celts, Celticisation and the Irish Bronze Age', in J. Waddell and E. Shee Twohig (eds.), Ireland in the Bronze Age. Proceedings of the Dublin Conference, April 1995, 158–169.
- Arias, J. World Prehistory 13 (1999):403–464.The Origins of the Neolithic Along the Atlantic Coast of Continental Europe: A Survey.
- Bamforth and Woodman, Oxford J. Arch. 23 (2004): 21–44. Tool hoards and Neolithic use of the landscape in north-eastern Ireland.
- Clark (1970) Beaker Pottery of Great Britain and Ireland of the Gulbenkain Archaeological Series, Cambridge University Press.
- Waddell, John (1998). The prehistoric archaeology of Ireland. Galway: Galway University Press. hdl:10379/1357.
- McEvoy; et al. (2004). "The Longue Durée of Genetic Ancestry: Multiple Genetic Marker Systems and Celtic Origins on the Atlantic Facade of Europe". Am. J. Hum. Genet. 75 (4): 693–704. doi:10.1086/424697. PMC 1182057. PMID 15309688.
- Finch; et al. (1997). "Distribution of HLA-A, B and DR genes and haplotypes in the Irish population". Exp. Clin. Immunogenet. 14 (4): 250–263. PMID 9523161.
- Williams; et al. (2004). "High resolution HLA-DRB1 identification of a caucasian population". Human Immunology. 65 (1): 66–77. doi:10.1016/j.humimm.2003.10.004. PMID 14700598. | https://worddisk.com/wiki/Prehistoric_Ireland/ | 21 |
107 | North American Free Trade Agreement
The North American Free Trade Agreement (NAFTA; Spanish: Tratado de Libre Comercio de América del Norte, TLCAN; French: Accord de libre-échange nord-américain, ALÉNA) was an agreement signed by Canada, Mexico, and the United States that created a trilateral trade bloc in North America. The agreement came into force on January 1, 1994, and superseded the 1988 Canada–United States Free Trade Agreement between the United States and Canada. The NAFTA trade bloc formed one of the largest trade blocs in the world by gross domestic product.
North American Free Trade Agreement
Logo of the NAFTA Secretariat
|Type||Free trade area|
|In force||January 1, 1994|
|USMCA in force||July 1, 2020|
|Area||21,578,137 km2 (8,331,365 sq mi)|
|Population density (2018 estimate)||22.3/km2 (57.8/sq mi)|
The impetus for a North American free trade zone began with U.S. president Ronald Reagan, who made the idea part of his 1980 presidential campaign. After the signing of the Canada–United States Free Trade Agreement in 1988, the administrations of U.S. president George H. W. Bush, Mexican President Carlos Salinas de Gortari, and Canadian prime minister Brian Mulroney agreed to negotiate what became NAFTA. Each submitted the agreement for ratification in their respective capitals in December 1992, but NAFTA faced significant opposition in both the United States and Canada. All three countries ratified NAFTA in 1993 after the addition of two side agreements, the North American Agreement on Labor Cooperation (NAALC) and the North American Agreement on Environmental Cooperation (NAAEC).
Passage of NAFTA resulted in the elimination or reduction of barriers to trade and investment between the U.S., Canada, and Mexico. The effects of the agreement regarding issues such as employment, the environment, and economic growth have been the subject of political disputes. Most economic analyses indicated that NAFTA was beneficial to the North American economies and the average citizen, but harmed a small minority of workers in industries exposed to trade competition. Economists held that withdrawing from NAFTA or renegotiating NAFTA in a way that reestablished trade barriers would have adversely affected the U.S. economy and cost jobs. However, Mexico would have been much more severely affected by job loss and reduction of economic growth in both the short term and long term.
After U.S. President Donald Trump took office in January 2017, he sought to replace NAFTA with a new agreement, beginning negotiations with Canada and Mexico. In September 2018, the United States, Mexico, and Canada reached an agreement to replace NAFTA with the United States–Mexico–Canada Agreement (USMCA), and all three countries had ratified it by March 2020. NAFTA remained in force until USMCA was implemented. In April 2020, Canada and Mexico notified the U.S. that they were ready to implement the agreement. The USMCA took effect on July 1, 2020, replacing NAFTA. The new law involved only small changes.
Negotiation, signing, ratification, and revision (1988–94)
The impetus for a North American free trade zone began with U.S. president Ronald Reagan, who made the idea part of his campaign when he announced his candidacy for the presidency in November 1979. Canada and the United States signed the Canada–United States Free Trade Agreement (FTA) in 1988, and shortly afterward Mexican President Carlos Salinas de Gortari decided to approach U.S. president George H. W. Bush to propose a similar agreement in an effort to bring in foreign investment following the Latin American debt crisis. As the two leaders began negotiating, the Canadian government under Prime Minister Brian Mulroney feared that the advantages Canada had gained through the Canada–US FTA would be undermined by a US–Mexican bilateral agreement, and asked to become a party to the US–Mexican talks.
Following diplomatic negotiations dating back to 1990, the leaders of the three nations signed the agreement in their respective capitals on December 17, 1992. The signed agreement then needed to be ratified by each nation's legislative or parliamentary branch.
The earlier Canada–United States Free Trade Agreement had been controversial and divisive in Canada, and featured as an issue in the 1988 Canadian election. In that election, more Canadians voted for anti-free trade parties (the Liberals and the New Democrats), but the split of the votes between the two parties meant that the pro-free trade Progressive Conservatives (PCs) came out of the election with the most seats and so took power. Mulroney and the PCs had a parliamentary majority and easily passed the Canada–US FTA and NAFTA bills. However, Mulroney was replaced as Conservative leader and prime minister by Kim Campbell. Campbell led the PC party into the 1993 election, where they were decimated by the Liberal Party under Jean Chrétien, who campaigned on a promise to renegotiate or abrogate NAFTA. Chrétien subsequently negotiated two supplemental agreements with the new U.S. president, Bill Clinton. Bush, who had subverted the Labor Advisory Committee (LAC) process and worked to "fast track" the signing prior to the end of his term, had run out of time and had to pass the required ratification and signing of the implementation law to the incoming president.
Before sending it to the United States Senate, Clinton added two side agreements, the North American Agreement on Labor Cooperation (NAALC) and the North American Agreement on Environmental Cooperation (NAAEC), to protect workers and the environment, and to also allay the concerns of many House members. The U.S. required its partners to adhere to environmental practices and regulations similar to its own. After much consideration and emotional discussion, the U.S. House of Representatives passed the North American Free Trade Agreement Implementation Act on November 17, 1993, 234–200. The agreement's supporters included 132 Republicans and 102 Democrats. The bill passed the Senate on November 20, 1993, 61–38. Senate supporters were 34 Republicans and 27 Democrats. Republican Representative David Dreier of California, a strong proponent of NAFTA since the Reagan Administration, played a leading role in mobilizing support for the agreement among Republicans in Congress and across the country.
Clinton signed it into law on December 8, 1993; the agreement went into effect on January 1, 1994. At the signing ceremony, Clinton recognized four individuals for their efforts in accomplishing the historic trade deal: Vice President Al Gore, Chairwoman of the Council of Economic Advisers Laura Tyson, Director of the National Economic Council Robert Rubin, and Republican Congressman David Dreier. Clinton also stated that "NAFTA means jobs. American jobs, and good-paying American jobs. If I didn't believe that, I wouldn't support this agreement." NAFTA replaced the previous Canada-US FTA.
The goal of NAFTA was to eliminate barriers to trade and investment between the U.S., Canada and Mexico. The implementation of NAFTA on January 1, 1994, brought the immediate elimination of tariffs on more than one-half of Mexico's exports to the U.S. and more than one-third of U.S. exports to Mexico. Within 10 years of the implementation of the agreement, all U.S.–Mexico tariffs were to be eliminated except for some U.S. agricultural exports to Mexico, to be phased out within 15 years. Most U.S.–Canada trade was already duty-free. NAFTA also sought to eliminate non-tariff trade barriers and to protect the intellectual property rights on traded products.
Chapter 20 provided a procedure for the international resolution of disputes over the application and interpretation of NAFTA. It was modeled after the general dispute-settlement provisions of the Canada–United States Free Trade Agreement.
The North American Free Trade Agreement Implementation Act made some changes to the copyright law of the United States, foreshadowing the Uruguay Round Agreements Act of 1994 by restoring copyright (within the NAFTA nations) on certain motion pictures which had entered the public domain.
The Clinton administration negotiated a side agreement on the environment with Canada and Mexico, the North American Agreement on Environmental Cooperation (NAAEC), which led to the creation of the Commission for Environmental Cooperation (CEC) in 1994. To alleviate concerns that NAFTA, the first regional trade agreement between a developing country and two developed countries, would have negative environmental impacts, the commission was mandated to conduct ongoing ex post environmental assessment. It created one of the first ex post frameworks for environmental assessment of trade liberalization, designed to produce a body of evidence with respect to the initial hypotheses about NAFTA and the environment, such as the concern that NAFTA would create a "race to the bottom" in environmental regulation among the three countries, or that NAFTA would pressure governments to increase their environmental protections. The CEC has held four symposia to evaluate the environmental impacts of NAFTA and commissioned 47 papers on the subject from leading independent experts.
Proponents of NAFTA in the United States emphasized that the pact was a free-trade, not an economic-community, agreement. The freedom of movement it established for goods, services and capital did not extend to labor. In proposing what no other comparable agreement had attempted, opening industrialized countries to "a major Third World country", NAFTA eschewed the creation of common social and employment policies. The regulation of the labor market and the workplace remained the exclusive preserve of the national governments.
A "side agreement" on enforcement of existing domestic labor law, concluded in August 1993, the North American Agreement on Labour Cooperation (NAALC), was highly circumscribed. Focused on health and safety standards and on child labor law, it excluded issues of collective bargaining, and its "so-called [enforcement] teeth" were accessible only at the end of "a long and tortuous" disputes process". Commitments to enforce existing labor law also raised issues of democratic practice. The Canadian anti-NAFTA coalition, Pro-Canada Network, suggested that guarantees of minimum standards would be "meaningless" without "broad democratic reforms in the [Mexican] courts, the unions, and the government". Later assessment, however, did suggest that NAALC's principles and complaint mechanisms did "create new space for advocates to build coalitions and take concrete action to articulate challenges to the status quo and advance workers’ interests".
From the earliest negotiation, agriculture was a controversial topic within NAFTA, as it has been with almost all free trade agreements signed within the WTO framework. Agriculture was the only section that was not negotiated trilaterally; instead, three separate agreements were signed between each pair of parties. The Canada–U.S. agreement contained significant restrictions and tariff quotas on agricultural products (mainly sugar, dairy, and poultry products), whereas the Mexico–U.S. pact allowed for a wider liberalization within a framework of phase-out periods (it was the first North–South FTA on agriculture to be signed).
NAFTA established the CANAMEX Corridor for road transport between Canada and Mexico, also proposed for use by rail, pipeline, and fiber optic telecommunications infrastructure. This became a High Priority Corridor under the U.S. Intermodal Surface Transportation Efficiency Act of 1991.
Chapter 11 – investor-state dispute settlement procedures
Another contentious issue was the investor-state dispute settlement obligations contained in Chapter 11 of NAFTA. Chapter 11 allowed corporations or individuals to sue Mexico, Canada or the United States for compensation when actions taken by those governments (or by those for whom they are responsible at international law, such as provincial, state, or municipal governments) violated international law.
This chapter has been criticized by groups in the United States, Mexico, and Canada for a variety of reasons, including not taking into account important social and environmental considerations. In Canada, several groups, including the Council of Canadians, challenged the constitutionality of Chapter 11. They lost at the trial level and the subsequent appeal.
Methanex Corporation, a Canadian corporation, filed a US$970 million suit against the United States. Methanex claimed that a California ban on methyl tert-butyl ether (MTBE), a substance that had found its way into many wells in the state, was hurtful to the corporation's sales of methanol. The claim was rejected, and the company was ordered to pay US$3 million to the U.S. government in costs, based on the following reasoning: "But as a matter of general international law, a non-discriminatory regulation for a public purpose, which is enacted in accordance with due process and, which affects, inter alios, a foreign investor or investment is not deemed expropriatory and compensable unless specific commitments had been given by the regulating government to the then putative foreign investor contemplating investment that the government would refrain from such regulation."
In another case, Metalclad, an American corporation, was awarded US$15.6 million from Mexico after a Mexican municipality refused a construction permit for the hazardous waste landfill it intended to construct in Guadalcázar, San Luis Potosí. The construction had already been approved by the federal government with various environmental requirements imposed (see paragraph 48 of the tribunal decision). The NAFTA panel found that the municipality did not have the authority to ban construction on the basis of its environmental concerns.
In Eli Lilly and Company v. Government of Canada, the plaintiff presented a US$500 million claim over the way Canada requires usefulness to be demonstrated under its drug patent legislation. Apotex sued the U.S. for US$520 million because of an opportunity it said it lost in an FDA generic drug decision.
In Lone Pine Resources Inc. v. Government of Canada, the company filed a US$250 million claim against Canada, accusing it of "arbitrary, capricious and illegal" behaviour, after Quebec moved to prevent fracking exploration under the St. Lawrence Seaway.
Lone Pine Resources is incorporated in Delaware but headquartered in Calgary, and had an initial public offering on the NYSE on May 25, 2011, of 15 million shares at $13 each, which raised US$195 million.
Barutciski acknowledged "that NAFTA and other investor-protection treaties create an anomaly in that Canadian companies that have also seen their permits rescinded by the very same Quebec legislation, which expressly forbids the paying of compensation, do not have the right (to) pursue a NAFTA claim", and that winning "compensation in Canadian courts for domestic companies in this case would be more difficult since the Constitution puts property rights in provincial hands".
A treaty with China would extend similar rights to Chinese investors, including state-owned enterprises (SOEs).
Chapter 19 – countervailing duty
NAFTA's Chapter 19 was a trade dispute mechanism which subjected antidumping and countervailing duty (AD/CVD) determinations to binational panel review instead of, or in addition to, conventional judicial review. For example, in the United States, reviews of agency decisions imposing antidumping and countervailing duties are normally heard before the U.S. Court of International Trade, an Article III court. NAFTA parties, however, had the option of appealing the decisions to binational panels composed of five citizens from the two relevant NAFTA countries. The panelists were generally lawyers experienced in international trade law. Since NAFTA did not include substantive provisions concerning AD/CVD, the panel was charged with determining whether final agency determinations involving AD/CVD conformed with the country's domestic law. Chapter 19 was an anomaly in international dispute settlement since it did not apply international law, but required a panel composed of individuals from many countries to re-examine the application of one country's domestic law.
A Chapter 19 panel was expected to examine whether the agency's determination was supported by "substantial evidence". This standard assumed significant deference to the domestic agency. Some of the most controversial trade disputes in recent years, such as the U.S.–Canada softwood lumber dispute, have been litigated before Chapter 19 panels.
Decisions by Chapter 19 panels could be challenged before a NAFTA extraordinary challenge committee. However, an extraordinary challenge committee did not function as an ordinary appeal. Under NAFTA, it only vacated or remanded a decision if the decision involved a significant and material error that threatened the integrity of the NAFTA dispute settlement system. Since January 2006, no NAFTA party had successfully challenged a Chapter 19 panel's decision before an extraordinary challenge committee.
The roster of NAFTA adjudicators included many retired judges, such as Alice Desjardins, John Maxwell Evans, Constance Hunt, John Richard, Arlin Adams, Susan Getzendanner, George C. Pratt, Charles B. Renfrew and Sandra Day O'Connor.
Impact on Canada

In 2008, Canadian exports to the United States and Mexico were at $381.3 billion, with imports at $245.1 billion. According to a 2004 article by University of Toronto economist Daniel Trefler, NAFTA produced a significant net benefit to Canada in 2003, with long-term productivity increasing by up to 15 percent in industries that experienced the deepest tariff cuts. While the contraction of low-productivity plants reduced employment (up to 12 percent of existing positions), these job losses lasted less than a decade; overall, unemployment in Canada has fallen since the passage of the act. Commenting on this trade-off, Trefler said that the critical question in trade policy is to understand "how freer trade can be implemented in an industrialized economy in a way that recognizes both the long-run gains and the short-term adjustment costs borne by workers and others".
According to a 2012 study, after NAFTA's tariff reductions Canada's trade with the United States and Mexico increased by only a modest 11%, compared with an increase of 41% for the U.S. and 118% for Mexico. Moreover, the U.S. and Mexico benefited more from the tariff reductions component, with welfare increases of 0.08% and 1.31%, respectively, while Canada experienced a decrease of 0.06%.
According to a 2017 report by the Council on Foreign Relations (CFR), a New York City-based public policy think tank, bilateral trade in agricultural products tripled in size from 1994 to 2017 and is considered one of the largest economic effects of NAFTA on U.S.-Canada trade, with Canada becoming the leading importer of U.S. agricultural products. Canadian fears of losing manufacturing jobs to the United States did not materialize, with manufacturing employment holding "steady". However, with Canada's labour productivity levels at 72% of U.S. levels, the hopes of closing the "productivity gap" between the two countries were also not realized.
According to a 2018 report by Gordon Laxer published by the Council of Canadians, NAFTA's Article 605, the energy proportionality rule, ensured that Americans had "virtually unlimited first access to most of Canada's oil and natural gas", and that Canada could not reduce oil, natural gas and electricity exports (74% of its oil and 52% of its natural gas) to the U.S., even if Canada was experiencing shortages. The report argued that these provisions, which seemed logical when NAFTA was signed in 1993, were no longer appropriate. The Council of Canadians promoted environmental protection and was against NAFTA's role in encouraging development of the tar sands and fracking.
US President Donald Trump, angered by Canada's dairy tariffs of "almost 300%", threatened to leave Canada out of NAFTA. Since 1972, Canada has operated a "supply management" system, which the United States attempted to pressure it to abandon, focusing in particular on the dairy industry. This has not happened, as Quebec, which holds approximately half the country's dairy farms, still supports supply management.
Impact on Mexico

Maquiladoras (Mexican assembly plants that take in imported components and produce goods for export) became a landmark of trade in Mexico. They moved to Mexico from the United States, hence the debate over the loss of American jobs. Income in the maquiladora sector had increased 15.5% since the implementation of NAFTA in 1994. Other sectors also benefited from the free trade agreement, and the share of exports to the U.S. from non-border states increased in the last five years while the share of exports from border states decreased. This allowed for rapid growth in non-border metropolitan areas such as Toluca, León, and Puebla, which were all larger in population than Tijuana, Ciudad Juárez, and Reynosa.
The overall effect of the Mexico–U.S. agricultural agreement is disputed. Mexico did not invest in the infrastructure necessary for competition, such as efficient railroads and highways. This resulted in more difficult living conditions for the country's poor. Mexico's agricultural exports increased 9.4 percent annually between 1994 and 2001, while imports increased by only 6.9 percent a year during the same period.
One of the most affected agricultural sectors was the meat industry. Mexico went from a small player in the pre-1994 U.S. export market to the second largest importer of U.S. agricultural products in 2004, and NAFTA may have been a major catalyst for this change. Free trade removed the hurdles that impeded business between the two countries, so Mexico provided a growing market for meat for the U.S., and increased sales and profits for the U.S. meat industry. A coinciding noticeable increase in the Mexican per capita GDP greatly changed meat consumption patterns as per capita meat consumption grew.
Production of corn in Mexico has increased since NAFTA. However, internal demand for corn increased beyond Mexico's supply, to the point where imports became necessary, far beyond the quotas Mexico originally negotiated. Zahniser & Coyle pointed out that corn prices in Mexico, adjusted for international prices, have drastically decreased, but that through a program of subsidies expanded by former president Vicente Fox, production has remained stable since 2000. Reducing agricultural subsidies, especially corn subsidies, was suggested as a way to reduce harm to Mexican farmers.
A 2001 Journal of Economic Perspectives review of the existing literature found that NAFTA was a net benefit to Mexico. By 2003, 80% of Mexico's commerce was conducted with the U.S. alone. This trade surplus with the U.S., combined with Mexico's deficit with the rest of the world, made Mexico's exports heavily dependent on the U.S. market. These effects were evident in the 2001 recession, which resulted in low or negative growth in Mexico's exports.
A 2015 study found that Mexico's welfare increased by 1.31% as a result of the NAFTA tariff reductions and that Mexico's intra-bloc trade increased by 118%. Inequality and poverty fell in the most globalization-affected regions of Mexico. 2013 and 2015 studies showed that Mexican small farmers benefited more from NAFTA than large-scale farmers.
NAFTA had also been credited with the rise of the Mexican middle class. A Tufts University study found that NAFTA lowered the average cost of basic necessities in Mexico by up to 50%. This price reduction increased cash-on-hand for many Mexican families, allowing Mexico to graduate more engineers than Germany each year.
Growth in new sales orders indicated an increase in demand for manufactured products, which resulted in expansion of production and a higher employment rate to satisfy the increase in demand. Growth in the maquiladora and manufacturing industries was 4.7% in August 2016. Three-quarters of Mexico's imports and exports are with the U.S.
Tufts University political scientist Daniel W. Drezner argued that NAFTA made it easier for Mexico to transform to a real democracy and become a country that views itself as North American. This has boosted cooperation between the United States and Mexico.
Impact on the United States

Economists generally agreed that the United States economy benefited overall from NAFTA as it increased trade. In a 2012 survey of the Initiative on Global Markets' Economic Experts Panel, 95% of the participants said that, on average, U.S. citizens benefited from NAFTA while none said that NAFTA hurt US citizens, on average. A 2001 Journal of Economic Perspectives review found that NAFTA was a net benefit to the United States. A 2015 study found that US welfare increased by 0.08% as a result of NAFTA tariff reductions, and that US intra-bloc trade increased by 41%.
A 2014 study on the effects of NAFTA on US trade, jobs, and investment found that between 1993 and 2013, the US trade deficit with Mexico and Canada increased from $17.0 billion to $177.2 billion, displacing 851,700 US jobs.
In 2015, the Congressional Research Service concluded that the "net overall effect of NAFTA on the US economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of US GDP. However, there were worker and firm adjustment costs as the three countries adjusted to more open trade and investment among their economies." The report also estimated that NAFTA added $80 billion to the US economy since its implementation, equivalent to a 0.5% increase in US GDP.
The US Chamber of Commerce credited NAFTA with increasing U.S. trade in goods and services with Canada and Mexico from $337 billion in 1993 to $1.2 trillion in 2011, while the AFL–CIO blamed the agreement for sending 700,000 American manufacturing jobs to Mexico over that time.
University of California, San Diego economics professor Gordon Hanson said that NAFTA helped the US compete against China and therefore saved US jobs. While some jobs were lost to Mexico as a result of NAFTA, considerably more would have been lost to China if not for NAFTA.
The US had a trade surplus with NAFTA countries of $28.3 billion for services in 2009 and a trade deficit of $94.6 billion (a 36.4% annual increase) for goods in 2010. This trade deficit accounted for 26.8% of the total US goods trade deficit. A 2018 study of global trade published by the Center for International Relations identified irregularities in the trade patterns of the NAFTA ecosystem using network-theory analytical techniques. The study showed that the US trade balance was influenced by tax avoidance opportunities provided in Ireland.
A study published in the August 2008 issue of the American Journal of Agricultural Economics found that NAFTA increased US agricultural exports to Mexico and Canada, even though most of the increase occurred a decade after its ratification. The study focused on the effects that gradual "phase-in" periods in regional trade agreements, including NAFTA, have on trade flows. Most of the increase in members' agricultural trade, which was only recently brought under the purview of the World Trade Organization, was due to very high trade barriers before NAFTA or other regional trade agreements.
The U.S. foreign direct investment (FDI) in NAFTA countries (stock) was $327.5 billion in 2009 (latest data available), up 8.8% from 2008. The US direct investment in NAFTA countries was in non-bank holding companies and the manufacturing, finance/insurance, and mining sectors. The foreign direct investment of Canada and Mexico in the United States (stock) was $237.2 billion in 2009 (the latest data available), up 16.5% from 2008.
Economy and jobs
In their May 24, 2017 report, the Congressional Research Service (CRS) wrote that the economic impacts of NAFTA on the U.S. economy were modest. In a 2015 report, the Congressional Research Service summarized multiple studies as follows: "In reality, NAFTA did not cause the huge job losses feared by the critics or the large economic gains predicted by supporters. The net overall effect of NAFTA on the U.S. economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of U.S. GDP. However, there were worker and firm adjustment costs as the three countries adjusted to more open trade and investment among their economies."
Many American small businesses depended on exporting their products to Canada or Mexico under NAFTA. According to the U.S. Trade Representative, this trade supported over 140,000 small- and medium-sized businesses in the US.
According to University of California, Berkeley professor of economics Brad DeLong, NAFTA had an insignificant impact on US manufacturing. The adverse impact on manufacturing was exaggerated in US political discourse according to DeLong and Harvard economist Dani Rodrik.
According to a 2013 article by Jeff Faux published by the Economic Policy Institute, California, Texas, Michigan and other states with high concentrations of manufacturing jobs were most affected by job loss due to NAFTA. According to a 2011 article by EPI economist Robert Scott, about 682,900 U.S. jobs were "lost or displaced" as a result of the trade agreement. More recent studies agreed with reports by the Congressional Research Service that NAFTA had only a modest impact on manufacturing employment, and that automation explained 87% of the losses in manufacturing jobs.
According to a study in the Journal of International Economics, NAFTA reduced pollution emitted by the US manufacturing sector: "On average, nearly two-thirds of the reductions in coarse particulate matter (PM10) and sulfur dioxide (SO2) emissions from the U.S. manufacturing sector between 1994 and 1998 can be attributed to trade liberalization following NAFTA."
According to the Sierra Club, NAFTA contributed to large-scale, export-oriented farming, which led to the increased use of fossil fuels, pesticides and GMO. NAFTA also contributed to environmentally destructive mining practices in Mexico. It prevented Canada from effectively regulating its tar sands industry, and created new legal avenues for transnational corporations to fight environmental legislation. In some cases, environmental policy was neglected in the wake of trade liberalization; in other cases, NAFTA's measures for investment protection, such as Chapter 11, and measures against non-tariff trade barriers threatened to discourage more vigorous environmental policy. The most serious overall increases in pollution due to NAFTA were found in the base metals sector, the Mexican petroleum sector, and the transportation equipment sector in the United States and Mexico, but not in Canada.
Mobility of persons
According to the Department of Homeland Security Yearbook of Immigration Statistics, during fiscal year 2006 (October 2005 – September 2006), 73,880 foreign professionals (64,633 Canadians and 9,247 Mexicans) were admitted into the United States for temporary employment under NAFTA (i.e., in the TN status). Additionally, 17,321 of their family members (13,136 Canadians, 2,904 Mexicans, as well as a number of third-country nationals married to Canadians and Mexicans) entered the U.S. in the treaty national's dependent (TD) status. Because DHS counts the number of new I-94 arrival records filed at the border, and the TN-1 admission is valid for three years, the number of non-immigrants in TN status present in the U.S. at the end of the fiscal year is approximately equal to the number of admissions during the year. (A discrepancy may be caused by some TN entrants leaving the country or changing status before their three-year admission period has expired, while other immigrants admitted earlier may change their status to TN or TD, or extend TN status granted earlier.)
According to the International Organization for Migration, deaths of migrants have been on the rise worldwide, with 5,604 deaths in 2016. The increase in the number of undocumented farmworkers in California may be partly due to the initial passage of NAFTA.
Canadian authorities estimated that on December 1, 2006, 24,830 U.S. citizens and 15,219 Mexican citizens were in Canada as "foreign workers". These numbers include both entrants under NAFTA and those who entered under other provisions of Canadian immigration law. New entries of foreign workers in 2006 totalled 16,841 U.S. citizens and 13,933 Mexicans.
Disputes and controversies
1992 U.S. presidential candidate Ross Perot
In the second 1992 presidential debate, Ross Perot argued:
We have got to stop sending jobs overseas. It's pretty simple: If you're paying $12, $13, $14 an hour for factory workers and you can move your factory south of the border, pay a dollar an hour for labor, ... have no health care—that's the most expensive single element in making a car—have no environmental controls, no pollution controls and no retirement, and you don't care about anything but making money, there will be a giant sucking sound going south. ... when [Mexico's] jobs come up from a dollar an hour to six dollars an hour, and ours go down to six dollars an hour, and then it's leveled again. But in the meantime, you've wrecked the country with these kinds of deals.
Perot ultimately lost the election, and the winner, Bill Clinton, supported NAFTA, which went into effect on January 1, 1994.
In 1996, the Canadian federal government banned imports of the gasoline additive MMT, which the American company Ethyl Corporation had brought to Canada. Ethyl brought a claim under NAFTA Chapter 11 seeking US$201 million from the Canadian federal government, as well as from the Canadian provinces under the Agreement on Internal Trade (AIT). It argued that the additive had not been conclusively linked to any health dangers, and that the prohibition was damaging to the company. Following a finding that the ban was a violation of the AIT, the Canadian federal government repealed the ban and settled with Ethyl for US$13 million. Studies by Health and Welfare Canada (now Health Canada) on the health effects of MMT in fuel found no significant health effects associated with exposure to the resulting exhaust emissions. Other Canadian researchers and the U.S. Environmental Protection Agency disagreed, citing studies that suggested possible nerve damage.
The United States and Canada argued for years over the United States' 27% duty on Canadian softwood lumber imports. Canada filed many motions to have the duty eliminated and the collected duties returned to Canada. After the United States lost an appeal before a NAFTA panel, spokesperson for U.S. Trade Representative Rob Portman responded by saying: "we are, of course, disappointed with the [NAFTA panel's] decision, but it will have no impact on the anti-dumping and countervailing duty orders." On July 21, 2006, the United States Court of International Trade found that imposition of the duties was contrary to U.S. law.
Change in income trust taxation not expropriation
On October 30, 2007, American citizens Marvin and Elaine Gottlieb filed a Notice of Intent to Submit a Claim to Arbitration under NAFTA, claiming thousands of U.S. investors lost a total of $5 billion in the fall-out from the Conservative Government's decision the previous year to change the tax rate on income trusts in the energy sector. On April 29, 2009, a determination was made that this change in tax law was not expropriation.
Impact on Mexican farmers
Several studies rejected the claim that NAFTA was responsible for depressing the incomes of poor corn farmers; the downward trend had begun more than a decade before NAFTA came into force. Maize production also increased after 1994, and subsidized corn from the United States had no measurable impact on the price of Mexican corn. The studies agreed, however, that the abolition of U.S. agricultural subsidies would benefit Mexican farmers.
Zapatista Uprising in Chiapas, Mexico
Preparations for NAFTA included cancellation of Article 27 of Mexico's constitution, the cornerstone of Emiliano Zapata's revolution in 1910–1919. Under the historic Article 27, indigenous communal landholdings were protected from sale or privatization. However, this barrier to investment was incompatible with NAFTA. Indigenous farmers feared the loss of their remaining land and cheap imports (substitutes) from the US. The Zapatistas labelled NAFTA a "death sentence" to indigenous communities all over Mexico and later declared war on the Mexican state on January 1, 1994, the day NAFTA came into force.
Criticism from 2016 U.S. presidential candidates
In a 60 Minutes interview in September 2015, 2016 presidential candidate Donald Trump called NAFTA "the single worst trade deal ever approved in [the United States]", and said that if elected, he would "either renegotiate it, or we will break it". Juan Pablo Castañón, president of the trade group Consejo Coordinador Empresarial, expressed concern about renegotiation and the willingness to focus on the car industry. A range of trade experts said that pulling out of NAFTA would have a range of unintended consequences for the United States, including reduced access to its biggest export markets, a reduction in economic growth, and higher prices for gasoline, cars, fruits, and vegetables. Members of Mexico's private sector noted that eliminating NAFTA would require many laws to be adapted by the U.S. Congress. The move would also eventually result in legal complaints before the World Trade Organization. The Washington Post noted that a Congressional Research Service review of academic literature concluded that the "net overall effect of NAFTA on the U.S. economy appears to have been relatively modest, primarily because trade with Canada and Mexico accounts for a small percentage of U.S. GDP".
Democratic candidate Bernie Sanders, opposing the Trans-Pacific Partnership trade agreement, called it "a continuation of other disastrous trade agreements, like NAFTA, CAFTA, and permanent normal trade relations with China". He believes that free trade agreements have caused a loss of American jobs and depressed American wages. Sanders said that America needs to rebuild its manufacturing base using American factories for well-paying jobs for American labor rather than outsourcing to China and elsewhere.
Policy of the Trump administration
Shortly after his election, U.S. President Donald Trump said he would begin renegotiating the terms of NAFTA, to resolve trade issues he had campaigned on. The leaders of Canada and Mexico have indicated their willingness to work with the Trump administration. Although vague on the exact terms he seeks in a renegotiated NAFTA, Trump threatened to withdraw from it if negotiations fail.
In July 2017, the Trump administration provided a detailed list of changes that it would like to see to NAFTA. The top priority was a reduction in the United States' trade deficit. The administration also called for the elimination of provisions that allowed Canada and Mexico to appeal duties imposed by the United States and that limited the ability of the United States to impose import restrictions on Canada and Mexico. The list also cited concerns about subsidized state-owned enterprises and currency manipulation.
According to Chad Bown of the Peterson Institute for International Economics, the Trump administration's list "is very consistent with the president's stance on liking trade barriers, liking protectionism. This makes NAFTA in many respects less of a free-trade agreement." The concerns expressed by the US Trade Representative over subsidized state-owned enterprises and currency manipulation are not thought to apply to Canada and Mexico, but rather to be designed to send a message to countries beyond North America. Jeffrey Schott of the Peterson Institute for International Economics noted that it would not be possible to conclude renegotiations quickly while also addressing all the concerns on the list. He also said that it would be difficult to do anything about trade deficits.
An October 2017 op-ed in Toronto's The Globe and Mail questioned whether the United States wanted to re-negotiate the agreement or planned to walk away from it no matter what, noting that newly appointed American ambassador Kelly Knight Craft is married to the owner of Alliance Resource Partners, a big US coal operation. Canada is implementing a carbon plan, and there is also the matter of a sale of Bombardier jets. "The Americans inserted so many poison pills into last week's talks in Washington that they should have been charged with murder", wrote the columnist, John Ibbitson.
"A number of the proposals that the United States has put on the table have little or no support from the U.S. business and agriculture community. It isn't clear who they're intended to benefit", said John Murphy, vice-president of the U.S. Chamber of Commerce. Pat Roberts, the senior US senator from Kansas, called for an outcry against Trump anti-NAFTA moves, saying the "issues affect real jobs, real lives and real people". Kansas is a major agricultural exporter, and farm groups warn that just threatening to leave NAFTA might cause buyers to minimize uncertainty by seeking out non-US sources.
A fourth round of talks included a U.S. demand for a sunset clause that would end the agreement in five years, unless the three countries agreed to keep it in place, a provision U.S. Commerce Secretary Wilbur Ross has said would allow the countries to kill the deal if it was not working. Canadian Prime Minister Justin Trudeau met with the House Ways and Means Committee, since Congress would have to pass legislation rolling back the treaty's provisions if Trump tries to withdraw from the pact.
From June to late August 2018, Canada was sidelined as the United States and Mexico held bilateral talks. On 27 August 2018, Mexico and the United States announced they had reached a bilateral understanding on a revamped NAFTA trade deal. It included provisions to boost automobile production in the U.S.; a 10-year data protection period against generic drug production on an expanded list of products, which benefits pharmaceutical companies, particularly US producers of high-cost biologic drugs; a sunset clause setting a 16-year expiration date with regular 6-year reviews that could renew the agreement for additional 16-year terms; and an increased de minimis threshold, with Mexico raising the de minimis value for online duty- and tax-free purchases from $50 to $100. According to an August 30 article in The Economist, Mexico agreed to raise the rules-of-origin threshold, meaning that 75%, as opposed to the previous 62.5%, of a vehicle's components must be made in North America to avoid tariffs. Since car makers currently import less expensive components from Asia, consumers would pay more for vehicles under the revised agreement. In addition, approximately 40 to 45 percent of vehicle components must be made by workers earning a minimum of US$16 per hour, in contrast to the US$2.30 an hour that a worker earned on average in a Mexican car manufacturing plant. The Economist described this as placing "Mexican carmaking into a straitjacket".
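The rules-of-origin and labor-value thresholds described above amount to a share-of-value test on a vehicle's components. The following Python sketch illustrates, with entirely hypothetical part values and function names (none of which come from the agreement text, whose actual rules on tracing, core parts and phase-in schedules are far more detailed), how the 75% regional-content and roughly 40% high-wage-content requirements might be checked for a single vehicle.

```python
# Illustrative sketch of the revised NAFTA/USMCA vehicle-content tests.
# All figures and names below are hypothetical simplifications.

REGIONAL_CONTENT_THRESHOLD = 0.75   # share of value made in North America
HIGH_WAGE_THRESHOLD = 0.40          # share made by workers earning >= US$16/hour

def qualifies_for_tariff_free_treatment(parts):
    """parts: list of dicts with 'value', 'north_american', 'high_wage' keys."""
    total_value = sum(p["value"] for p in parts)
    regional_value = sum(p["value"] for p in parts if p["north_american"])
    high_wage_value = sum(p["value"] for p in parts if p["high_wage"])

    regional_share = regional_value / total_value
    high_wage_share = high_wage_value / total_value
    return (regional_share >= REGIONAL_CONTENT_THRESHOLD
            and high_wage_share >= HIGH_WAGE_THRESHOLD)

# Hypothetical vehicle with $20,000 of components.
example_parts = [
    {"value": 9000, "north_american": True,  "high_wage": True},   # US/Canadian content
    {"value": 7000, "north_american": True,  "high_wage": False},  # Mexican-made parts
    {"value": 4000, "north_american": False, "high_wage": False},  # Asian-sourced parts
]

print(qualifies_for_tariff_free_treatment(example_parts))
# Regional share = 16000/20000 = 80% (passes 75%); high-wage share
# = 9000/20000 = 45% (passes 40%), so this hypothetical vehicle qualifies.
```

In this sketch, shifting a few thousand dollars of content from North American to Asian suppliers, or below the wage floor, would push the vehicle under one of the thresholds and make it subject to tariffs, which is the mechanism critics argued would raise consumer prices.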
Trudeau and Canadian Foreign Minister Chrystia Freeland announced that they were willing to join the agreement if it was in Canada's interests. Freeland returned early from her European diplomatic tour, cancelling a planned visit to Ukraine, to participate in NAFTA negotiations in Washington, D.C. in late August. According to an August 31 Canadian Press article published in the Ottawa Citizen, key issues under debate included supply management, Chapter 19, pharmaceuticals, cultural exemption, the sunset clause, and de minimis thresholds.
Although President Donald Trump warned Canada on September 1 that he would exclude it from a new trade agreement unless Canada submitted to his demands, it was not clear that the Trump administration had the authority to do so without the approval of Congress. According to Congressional Research Service (CRS) reports, one published in 2017 and another on July 26, 2018, it is likely that congressional approval to make substantive changes to NAFTA would have to be secured by President Trump before the changes could be implemented.
On September 30, 2018, the day of the deadline for the Canada–U.S. negotiations, a preliminary deal between the two countries was reached, preserving the trilateral pact that the Trump administration would submit to Congress. The renamed agreement, the "United States–Mexico–Canada Agreement" (USMCA), came into effect on July 1, 2020.
Impact of withdrawing from NAFTA
Following Donald Trump's election to the presidency, a range of trade experts said that pulling out of NAFTA as Trump proposed would have a range of unintended consequences for the U.S., including reduced access to the U.S.'s biggest export markets, a reduction in economic growth, and increased prices for gasoline, cars, fruits, and vegetables. The worst affected sectors would be textiles, agriculture and automobiles.
According to Tufts University political scientist Daniel W. Drezner, the Trump administration's desire to return relations with Mexico to the pre-NAFTA era is misguided. Drezner argued that NAFTA made it easier for Mexico to transform into a real democracy and become a country that views itself as North American. If Trump acted on many of the threats that he made against Mexico, it is not inconceivable that Mexicans would turn to left-wing populist strongmen, as several South American countries have. At the very least, US–Mexico relations would worsen, with adverse implications for cooperation on border security, counterterrorism, drug-war operations, deportations and managing Central American migration.
According to Chad P. Bown (senior fellow at the Peterson Institute for International Economics), "a renegotiated NAFTA that would reestablish trade barriers is unlikely to help workers who lost their jobs—regardless of the cause—take advantage of new employment opportunities".
According to Harvard economist Marc Melitz, "recent research estimates that the repeal of NAFTA would not increase car production in the United States". Melitz noted that this would cost manufacturing jobs.
If the original Trans-Pacific Partnership (TPP) had come into effect, existing agreements such as NAFTA would have been reduced to those provisions that did not conflict with the TPP, or that required greater trade liberalization than the TPP. However, after U.S. President Donald Trump withdrew the United States from the agreement in January 2017, only Canada and Mexico retained the prospect of TPP membership. In May 2017, the 11 remaining members of the TPP, including Canada and Mexico, agreed to proceed with a revised version of the trade deal without U.S. participation.
American public opinion on NAFTA
The American public was largely divided on its view of the North American Free Trade Agreement (NAFTA), with a wide partisan gap in beliefs. In a February 2018 Gallup Poll, 48% of Americans said NAFTA was good for the U.S., while 46% said it was bad.
According to a journal from the Law and Business Review of the Americas (LBRA), U.S. public opinion of NAFTA centers around three issues: NAFTA's impact on the creation or destruction of American jobs, NAFTA's impact on the environment, and NAFTA's impact on immigrants entering the U.S.
After President Trump's election in 2016, support for NAFTA became very polarized between Republicans and Democrats. Donald Trump expressed negative views of NAFTA, calling it "the single worst trade deal ever approved in this country". Republican support for NAFTA decreased from 43% support in 2008 to 34% in 2017. Meanwhile, Democratic support for NAFTA increased from 41% support in 2008 to 71% in 2017.
The political gap was especially large with respect to views on free trade with Mexico. In contrast to the favorable view of free trade with Canada, which 79% of Americans described as a fair trade partner, only 47% of Americans believed Mexico practices fair trade. The gap widened between Democrats and Republicans: 60% of Democrats believed Mexico is practicing fair trade, while only 28% of Republicans did. This was the highest level from Democrats and the lowest level from Republicans ever recorded by the Chicago Council Survey. Republicans had more negative views of Canada as a fair trade partner than Democrats as well.
NAFTA had strong support from young Americans. In a February 2017 Gallup poll, 73% of Americans aged 18–29 said NAFTA was good for the U.S., showing higher support than any other U.S. age group. It also had slightly stronger support from unemployed Americans than from employed Americans.
- United States–Mexico–Canada Agreement (USMCA)
- North American integration
- North American Leaders' Summit (NALS)
- Canada's Global Markets Action Plan
- The Fight for Canada
- Comprehensive Economic and Trade Agreement (CETA)
- North American Transportation Statistics Interchange
- Pacific Alliance
- Trans-Pacific Partnership (TPP)
- Free trade debate
- US public opinion on the North American Free Trade Agreement
- Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP)
- "Report for Selected Countries and Subjects". Archived from the original on February 14, 2021. Retrieved September 5, 2017.
- NAFTA Secretariat Archived April 12, 2013, at the Wayback Machine. Nafta-sec-alena.org (June 9, 2010). Retrieved on July 12, 2013.
- "NAFTA's Economic Impact". Council on Foreign Relations. Archived from the original on 2017-07-21. Retrieved 2017-07-18.
- "Poll Results | IGM Forum". www.igmchicago.org. 13 March 2012. Archived from the original on 22 June 2016. Retrieved 2016-01-01.
- Burfisher, Mary E; Robinson, Sherman; Thierfelder, Karen (2001-02-01). "The Impact of NAFTA on the United States". Journal of Economic Perspectives. 15 (1): 125–44. CiteSeerX 10.1.1.516.6543. doi:10.1257/jep.15.1.125. ISSN 0895-3309.
- Hiltzik, Michael (January 30, 2017). "NAFTA doesn't count for much economically, but it's still a huge political football. Here's why". Los Angeles Times. ISSN 0458-3035. Archived from the original on August 29, 2017. Retrieved July 18, 2017.
- Rodrik, Dani (June 2017). "Populism and the Economics of Globalization". NBER Working Paper No. 23559. doi:10.3386/w23559.
- "Driving Home the Importance of NAFTA | Econofact". Econofact. Retrieved 2017-02-15.
- Eric Martin, Trump Killing Nafta Could Mean Big Unintended Consequences for the U.S., Bloomberg Business (October 1, 2015).
- "Which American producers would suffer from ending NAFTA?". The Economist. Retrieved 2017-02-19.
- "Nafta withdrawal would hit US GDP without helping trade deficit – report". Financial Times.
- "United States-Mexico-Canada Agreement". USTR. Retrieved October 1, 2018.
- CBC News, "Mexico joins Canada, notifies U.S. it's ready to implement new NAFTA" 2020/04/04 Archived 2020-11-26 at the Wayback Machine accessed 06 April 2020
- Eugene Beaulieu and Dylan Klemen. "You Say USMCA or T-MEC and I Say CUSMA: The New NAFTA-Let's Call the Whole Thing On." The School of Public Policy Publications (2020) online.
- "North American Free Trade Agreement (NAFTA)". The Canadian Encyclopedia. Historica Canada. Retrieved 19 November 2017.
- Foreign Affairs and International trade Canada: Canada and the World: A History – 1984–1993: "Leap of Faith Archived October 27, 2007, at the Wayback Machine
- NAFTA: Final Text, Summary, Legislative History & Implementation Directory. New York: Oceana Publications. 1994. pp. 1–3. ISBN 978-0-379-00835-7.
- Labor Advisory Committee for Trade Negotiations and Trade Policy; established under the Trade Act of 1974.
- Preliminary Report of the Labor Advisory Committee for Trade Negotiations and Trade Policy on the North American Free Trade Agreement, dated Sept. 16, 1992 (Washington, D.C.: Executive Office of the President, Office of the U.S. Trade Representative, 1992), i, 1.
- For an overview of the process, see Noam Chomsky, "'Mandate for Change', or Business as Usual", Z Magazine 6, no. 2 (February 1993), 41.
- "H.R.3450 – North American Free Trade Agreement Implementation Act". Retrieved December 29, 2014.
- "Trump says many trade agreements are bad for Americans. The architects of NAFTA say he's wrong". Los Angeles Times. 2016-10-28. Retrieved 2020-07-10.
- "Remembering Those Who Left Us In 2011". NPR.org. Retrieved 2020-07-10.
- "Clinton Signs NAFTA – December 8, 1993". Miller Center. University of Virginia. Archived from the original on October 10, 2010. Retrieved January 27, 2011.
- "NAFTA Timeline". Fina-nafi. Archived from the original on January 14, 2011. Retrieved July 4, 2011.
- "YouTube". www.youtube.com. Retrieved 2020-07-10.
- "Signing NaFTA". History Central. Retrieved February 20, 2011. Cite journal requires
- "Decreto de promulgación del Tratado de Libre Comercio de América del Norte" [Decree of promulgation of the North American Free Trade Agreement]. Decree of December 20, 1993 (PDF) (in Spanish). Senate of the Republic (Mexico).
- Floudas, Demetrius Andreas & Rojas, Luis Fernando; "Some Thoughts on NAFTA and Trade Integration in the American Continent" Archived 2017-10-21 at the Wayback Machine, 52 (2000) International Problems 371
- Gantz, DA (1999). "Dispute Settlement Under the NAFTA and the WTO:Choice of Forum Opportunities and Risks for the NAFTA Parties". American University International Law Review. 14 (4): 1025–106.
- "Pest Management Regulatory Agency". Health Canada. Branches and Agencies. nd. Retrieved September 3, 2018.
- GPO, P.L. 103-182, Section 334
- ML-497 (March 1995), Docket No. RM 93-13C, Library of Congress Copyright Office
- Carpentier, Chantal Line (December 1, 2006). "IngentaConnect NAFTA Commission for Environmental Cooperation: ongoing assessment". Impact Assessment and Project Appraisal. 24 (4): 259–272. doi:10.3152/147154606781765048.
- Analytic Framework for Assessing the Environmental Effects of the North American Free Trade Agreement. Commission for Environmental Cooperation (1999)
- "Trade and Environment in the Americas". Cec.org. Archived from the original on December 7, 2014. Retrieved November 9, 2008.
- McDowell, Manfred (1995). "NAFTA and the EC Social Dimension". Labour Studies Journal. 20 (1). Retrieved 11 September 2020.
- Schliefer, Jonathan (December 1992). "What price economic growth?". The Atlantic Monthly: 114.
- Bureau of International Labor Affairs, U.S. National Administrative Office. "North American Agreement on Labor Cooperation: A Guide". dol.gov/agencies/ilab. U.S. Department of Labor. Retrieved 12 September 2020.
- "Accords fail to redraw battle lines over pact". New York Times. 14 August 1993.
- Witt, Matt (April 1990). "Don't trade on me: Mexican, U.S. and Canadian workers confront free trade". Dollars and Sense.
- Compa, Lance. "NAFTA's Labour Side Agreement and International Labour Solidarity" (PDF). core.ac.uk. Cornell University ILR School. Retrieved 12 September 2020.
- "NAFTA, Chapter 11". Sice.oas.org. Retrieved July 4, 2011.
- Government of Canada, Global Affairs Canada (July 31, 2002). "The North American Free Trade Agreement (NAFTA) – Chapter 11 – Investment". Retrieved January 20, 2017.
- "'North American Free Trade Agreement (NAFTA)', Public Citizen". Citizen.org. January 1, 1994. Retrieved July 4, 2011.
- Red Mexicana de Accion Frente al Libre Comercio. "NAFTA and the Mexican Environment". Archived from the original on December 16, 2000.
- "The Council of Canadians". Canadians.org. Archived from the original on May 25, 2019. Retrieved July 4, 2011.
- Commission for Environmental Cooperation. "The NAFTA environmental agreement: The Intersection of Trade and the Environment". Cec.org. Archived from the original on June 11, 2007. Retrieved July 4, 2011.
- PEJ News. "Judge Rebuffs Challenge to NAFTA'S Chapter 11 Investor Claims Process". Pej.org. Archived from the original on July 26, 2011. Retrieved July 4, 2011.
- Ontario Court of Appeal. "Council of Canadians v. Canada (Attorney General), 2006 CanLII 40222 (ON CA)". CanLII. Retrieved December 11, 2019.
- "Arbitration award between Methanex Corporation and United States of America" (PDF). Archived from the original (PDF) on June 16, 2007. (1.45 MB)
- "Arbitration award between Metalclad Corporation and The United Mexican States" (PDF). Archived from the original (PDF) on June 16, 2007. (120 KB)
- Government of Canada, Foreign Affairs Trade and Development Canada. "Eli Lilly and Company v. Government of Canada". Retrieved January 20, 2017.
- "Canada must learn from NAFTA legal battles". Retrieved January 20, 2017.
- Government of Canada, Foreign Affairs Trade and Development Canada. "Lone Pine Resources Inc. v. Government of Canada". Retrieved January 20, 2017.
- "Quebec's St. Lawrence fracking ban challenged under NAFTA". Retrieved January 20, 2017.
- "Stock:Lone Pine Resources". Retrieved January 20, 2017.
- Millán, Juan. "North American Free Trade Agreement; Invitation for Applications for Inclusion on the Chapter 19 Roster" (PDF). Federal Register. Office of the United States Trade Representative. Retrieved 19 March 2016.
- "NAFTA – Fast Facts: North American Free Trade Agreement". NAFTANow.org. April 4, 2012. Archived from the original on October 30, 2013. Retrieved October 26, 2013.
- Trefler, Daniel (Sep 2004). "The Long and Short of the Canada-U.S. Free Trade Agreement" (PDF). American Economic Review. 94 (4): 870–895. doi:10.1257/0002828042002633.
- Bernstein, William J. (16 May 2009). A Splendid Exchange: How Trade Shaped the World. Grove Press.
- Romalis, John (2007-07-12). "NAFTA's and CUSFTA's Impact on International Trade" (PDF). Review of Economics and Statistics. 89 (3): 416–35. doi:10.1162/rest.89.3.416. ISSN 0034-6535. S2CID 57562094.
- Caliendo, Lorenzo; Parro, Fernando (2015-01-01). "Estimates of the Trade and Welfare Effects of NAFTA". The Review of Economic Studies. 82 (1): 1–44. CiteSeerX 10.1.1.189.1365. doi:10.1093/restud/rdu035. ISSN 0034-6527. S2CID 20591348.
- McBride, James; Sergie, Mohammed Aly (2017) [February 14, 2014]. "NAFTA's Economic Impact". Council on Foreign Relations (CFR) think tank. Archived from the original on May 16, 2017. Retrieved September 3, 2018.
- "NAFTA and Climate Report 2018" (PDF). Sierra Club.
- Laxer, Gordon. "Escaping Mandatory Oil Exports: Why Canada needs to dump NAFTA's energy proportionality rule" (PDF). p. 28.
- "The coddling of the Canadian cow farmer". The Economist. Retrieved 2018-09-12.
- Hufbauer, GC; Schott, JJ (2005). "NAFTA Revisited". Washington, DC: Institute for International Economics.
- Greening the Americas, Carolyn L. Deere (editor). MIT Press, Cambridge, Massachusetts.
- "Clark, Georgia Rae. 2006. Analysis of Mexican demand for Meat: A Post-NAFTA Demand Systems Approach. MS Thesis, Texas Tech University" (PDF). Archived from the original (PDF) on August 15, 2011. Retrieved July 4, 2011.
- "NAFTA, Corn, and Mexico's Agricultural Trade Liberalization" (PDF). Archived from the original (PDF) on January 9, 2007. (152 KB) p. 4
- Steven S. Zahniser & William T. Coyle, U.S.-Mexico Corn Trade During the NAFTA Era: New Twists to an Old Story Archived 2021-02-14 at the Wayback Machine, Outlook Report No. FDS04D01 (Economic Research Service/USDA, May 2004), 22 pp.
- Becker, Elizabeth (August 27, 2003). "U.S. Corn Subsidies Said to Damage Mexico". Retrieved July 21, 2019 – via NYTimes.com.
- Ruiz Nápoles, Pablo. "El TLCAN y el balance comercial en México". Economía Informa. UNAM. 2003
- Hanson, Gordon H. (2007-03-09). "Globalization, Labor Income, and Poverty in Mexico".
- Prina, Silvia (2013). "Maintenance page : Wiley Online Library". Review of Development Economics. 17 (3): 594–608. doi:10.1111/rode.12053. S2CID 154627747.
- Prina, Silvia (2015). "Maintenance page : Wiley Online Library". Journal of International Development. 27: 112–132. doi:10.1002/jid.2814.
- O'Neil, Shannon (March 2013). "Mexico Makes It". Foreign Affairs. 92 (2). Retrieved 19 March 2016.
- Taylor, Guy (14 May 2012). "NAFTA key to economic, social growth in Mexico". www.washingtontimes.com. The Washington Times. Retrieved 19 March 2016.
- "Economic Report of the exportations in the manufacturer industry" Consejo Nacional de Industria Maquiladora Manufacturera A.C. 2016
- "The missing dimension in the NAFTA debate". Washington Post. Retrieved 2017-02-12.
- "Trump administration formally launches NAFTA renegotiation". Washington Post. Retrieved 2017-07-18.
- Frankel, Jeffrey (2017-04-24). "How to Renegotiate NAFTA". Project Syndicate. Retrieved 2017-07-18.
- Scott, Robert E. (July 21, 2014). "The effects of NAFTA on US trade, jobs, and investment, 1993–2013". Review of Keynesian Economics. 2 (4): 429–441. doi:10.4337/roke.2014.04.02. Retrieved July 21, 2019 – via ideas.repec.org.
- "The North American Free Trade Agreement (NAFTA)" (PDF).
- "Contentious Nafta pact continues to generate a sparky debate". Archived from the original on May 7, 2015. Retrieved January 20, 2017.
- "NAFTA's Economic Impact". Council on Foreign Relations. Archived from the original on 2017-02-04. Retrieved 2017-02-07.
- Porter, Eduardo (2016-03-29). "Nafta May Have Saved Many Autoworkers' Jobs". The New York Times. ISSN 0362-4331. Retrieved 2017-02-07.
- "North American Free Trade Agreement (NAFTA)". Office of the United States Trade Representative. Archived from the original on March 17, 2013. Retrieved December 3, 2014.
- Lavassani, Kayvan (June 2018). "Data Science Reveals NAFTA's Problem" (PDF). International Affairs Forum (June 2018). Center for International Relations. Archived from the original (PDF) on 7 July 2018. Retrieved 7 July 2018.
- "Free Trade Agreement Helped U.S. Farmers". Archived 2009-01-29 at the Wayback Machine Newswise. Retrieved on June 12, 2008.
- "Archived copy". Archived from the original on November 25, 2011. Retrieved November 28, 2011.CS1 maint: archived copy as title (link)
- Villarreal, M. Angeles; Fergusson, Ian F. (May 24, 2017). The North American Free Trade Agreement (PDF). Congressional Research Service (CRS) (Report). Retrieved September 2, 2018.
- "North American Free Trade Agreement (NAFTA) | United States Trade Representative". ustr.gov. Retrieved 2016-10-12.
- DeLong, J. Bradford. "NAFTA and other trade deals have not gutted American manufacturing – period". Vox. Retrieved 2017-02-07.
- "What did NAFTA really do?". Dani Rodrik's weblog. Retrieved 2017-02-07.
- Faux, Jeff (December 9, 2013). "NAFTA's Impact on U.S. Workers". Economic Policy Institute. Retrieved 2016-10-12.
- "U.S. Economy Lost Nearly 700,000 Jobs Because Of NAFTA, EPI Says". The Huffington Post. July 12, 2011.
- Long, Heather (February 16, 2017). "U.S. auto workers hate NAFTA ... but love robots". CNNMoney. Archived from the original on 2021-02-14. Retrieved 2017-02-21.
The problem, they argue, is that machines took over. One study by Ball State University says 87% of American manufacturing jobs have been lost to robots. Only 13% have disappeared because of trade ... But workers in Michigan think the experts have it wrong.
- Cherniwchan, Jevan (2017). "Trade Liberalization and the Environment: Evidence from NAFTA and U.S. Manufacturing". Journal of International Economics. 105: 130–49. doi:10.1016/j.jinteco.2017.01.005.
- "Environmental Damages Underscore Risks of Unfair Trade". Sierraclub.org. Retrieved March 4, 2014.
- "IngentaConnect NAFTA Commission for Environmental Cooperation: ongoing assessment of trade liberalization in North America". Ingentaconnect.com. Archived from the original on June 6, 2011. Retrieved November 9, 2008.
- Kenneth A. Reinert and David W. Roland-Holst The Industrial Pollution Impacts of NAFTA: Some Preliminary Results. Commission for Environmental Cooperation (November 2000)
- "DHS Yearbook 2006. Supplemental Table 1: Nonimmigrant Admissions (I-94 Only) by Class of Admission and Country of Citizenship: Fiscal Year 2006". Archived from the original on February 28, 2011. Retrieved July 21, 2019.
- Jones, Reese. Borders & Walls: Do Barriers Deter Unauthorized Migration. Migration Policy Institute. web page October 5, 2016.
- Bacon, David. "Globalization and NAFTA Caused Migration from Mexico | Political Research Associates". Retrieved 2017-04-03.
- Facts and Figures 2006 Immigration Overview: Temporary Residents Archived February 23, 2008, at the Wayback Machine (Citizenship and Immigration Canada)
- "Facts and Figures 2006 – Immigration Overview: Permanent and Temporary Residents". Cic.gc.ca. June 29, 2007. Archived from the original on August 22, 2008. Retrieved November 9, 2008.
- "THE 1992 CAMPAIGN; Transcript of 2d TV Debate Between Bush, Clinton and Perot". The New York Times. 16 October 1992. Archived from the original on 29 December 2018. Retrieved 16 May 2016.
- "Notice of Arbitration" (PDF). Archived from the original (PDF) on June 16, 2007. (1.71 MB), 'Ethyl Corporation vs. Government of Canada'
- "Agreement on Internal Trade" (PDF). Archived from the original (PDF) on 2006-08-22. (118 KB)
- "Dispute Settlement". Dfait-maeci.gc.ca. October 15, 2010. Archived from the original on January 15, 2008. Retrieved July 4, 2011.
- "MMT: the controversy over this fuel additive continues". canadiandriver.com. Retrieved July 4, 2011.
- softwood Lumber Archived June 16, 2008, at the Wayback Machine
- "Statement from USTR Spokesperson Neena Moorjani Regarding the NAFTA Extraordinary Challenge Committee decision in Softwood Lumber". Archived from the original on May 9, 2008. Retrieved July 21, 2019.
- "Tembec, Inc vs. United States" (PDF). Archived from the original (PDF) on September 23, 2006. (193 KB)
- "Statement by USTR Spokesman Stephen Norton Regarding CIT Lumber Ruling". Archived from the original on May 9, 2008. Retrieved July 21, 2019.
- Canada, Global Affairs; Canada, Affaires mondiales (June 26, 2013). "Global Affairs Canada". Archived from the original on December 27, 2007. Retrieved January 20, 2017.
- Fiess, Norbert; Daniel Lederman (November 24, 2004). "Mexican Corn: The Effects of NAFTA" (PDF). Trade Note. The World Bank Group. 18. Archived from the original (PDF) on June 16, 2007. Retrieved March 12, 2007.
- Subcomandante Marcos, Ziga Voa! 10 Years of the Zapatista Uprising. AK Press 2004
- Politico Staff. "Full transcript: First 2016 presidential debate". Politico. Retrieved 27 September 2016.
- Jill Colvin, Trump: NAFTA trade deal a 'disaster,' says he'd 'break' it Archived 2016-03-14 at the Wayback Machine, Associated Press (September 25, 2015).
- Mark Thoma, Is Donald Trump right to call NAFTA a "disaster"?, CBS News (October 5, 2015).
- Gonzales, Lilia (November 14, 2016). "El Economista".
- Eric Martin, Trump Killing NAFTA Could Mean Big Unintended Consequences for the U.S., Bloomberg Business (October 1, 2015).
- "Sen. Bernie Sanders on taxes, trade agreements and Islamic State". PBS. May 18, 2015. Retrieved May 20, 2015. (transcript of interview with Judy Woodruff)
- Sanders, Bernie (May 21, 2015). "The TPP Must Be Defeated". The Huffington Post. Retrieved May 22, 2015.
- Will Cabaniss for Punditfact. September 2, 2015 How Bernie Sanders, Hillary Clinton differ on the Trans-Pacific Partnership
- "Canada, Mexico talked before making NAFTA overture to Trump". Retrieved April 3, 2018.
- "What Is Nafta, and How Might Trump Change It?". The New York Times. Archived from the original on April 28, 2017. Retrieved April 5, 2017.
- Rappeport, Alan (2017-07-17). "U.S. Calls for 'Much Better Deal' in Nafta Overhaul Plan". The New York Times. ISSN 0362-4331. Retrieved 2017-07-18.
- "U.S. makes lower trade deficit top priority in NAFTA talks". Reuters. July 18, 2017. Retrieved 2017-07-18.
- "US calls for smaller deficits in new NAFTA talks". BBC News. 2017-07-18. Retrieved 2017-07-18.
- John Ibbitson (October 23, 2017). "US ambassador to Canada must mend an old friendship". The Globe and Mail. p. A5.
- Alexander Panetta (November 1, 2017). "U.S. pro-NAFTA campaign ramps up to defend deal: Concerns in Congress heighten over 'potential catastrophe' from withdrawal". Vancouver Sun. Canadian Press.
- Laura Stone; Robert Fife (October 13, 2017). "Canada, Mexico vow to remain at NAFTA negotiating table". The Globe and Mail. p. A1.
- Gollom, Mark (August 30, 2018). "Canada had little choice but to play it cool in NAFTA talks, trade experts say". CBC News. Retrieved September 2, 2018.
Charm offensive hadn't worked with U.S., so there was not much Trudeau could have done to save NAFTA, say some
- Lee, Don (August 27, 2018). "U.S. and Mexico strike preliminary accord on NAFTA; Canada expected to return to bargaining table". Los Angeles Times. Retrieved August 27, 2018.
- "Trump Reaches Revised Trade Deal With Mexico, Threatening to Leave Out Canada". The New York Times. August 27, 2018. Retrieved September 30, 2018.
- "NAFTA's sticking points: Key hurdles to clear on the way to a deal". The Ottawa Citizen via Canadian Press. Ottawa, Ontario. August 30, 2018. Retrieved September 2, 2018.
- "America's deal with Mexico will make NAFTA worse". The Economist. Going south. August 30, 2018. Retrieved September 2, 2018.
Its costly new regulations result from flawed economic logic
- Mcleod, James (August 30, 2018). "Trump's Mexico deal is a roadmap to higher car prices, industry analysts say". Financial Post. Toronto, Ontario. Retrieved September 2, 2018.
Rules of origin and labour requirements will pass costs on to consumers
- "Trump announces 'incredible' trade deal with Mexico". BBC News. 27 August 2018. Retrieved September 2, 2018.
- Ukrainian Independent Information Agency, Canada's Foreign Minister postpones visit to Ukraine over urgent talks in U.S., 29 August 2018
- Villarreal, M. Angeles; Fergusson, Ian F. (July 26, 2018). NAFTA Renegotiation and Modernization (PDF). Congressional Research Service (CRS) (Report). p. 47. Retrieved September 2, 2018.
- "Trump: Canada 'will be out' of trade deal unless it's 'fair'". BBC. September 2, 2018. Archived from the original on November 21, 2018. Retrieved September 2, 2018.
- Aleem, Zeeshan (October 26, 2017). "We asked 6 experts if Congress could stop Trump from eliminating NAFTA". VOX. Retrieved September 2, 2018.
- "Donald Trump threatens to cancel NAFTA entirely if Congress interferes with his plans". Edmonton Journal. September 2, 2018. Retrieved September 2, 2018.
- "Renegotiated NAFTA Likely to Require Congressional Approval, CRS Says". Sandler, Travis & Rosenberg Trade Report. February 1, 2017. Retrieved September 2, 2018.
- "US and Canada reach deal on NAFTA". CNN. September 30, 2018. Retrieved September 30, 2018.
- "U.S. and Canada Reach Deal to Salvage Nafta". The New York Times. September 30, 2018. Retrieved September 30, 2018.
- Swanson, Ana (July 1, 2020). "As New NAFTA Takes Effect, Much Remains Undone". The New York Times. Retrieved August 12, 2020.
- Wernau, Julie (2017-02-12). "Denim Dilemma". Wall Street Journal. ISSN 0099-9660. Retrieved 2017-02-12.
- "What is NAFTA, and what would happen to U.S. trade without it?". Washington Post. Retrieved 2017-02-15.
- Isfeld, Gordon (12 October 2015). "Forget NAFTA, the TPP is the new 'gold standard' of global trade". Financial Post. National Post. Retrieved 31 December 2015.
- Jegarajah, Sri; Dale, Craig; Shaffer, Leslie (2017-05-21). "TPP nations agree to pursue trade deal without US". CNBC. Archived from the original on 2019-07-03. Retrieved July 4, 2017.
- Gallup, Inc. "Americans Split on Whether NAFTA Is Good or Bad for U.S." Gallup.com. Retrieved 2018-04-30.
- "Redirecting ..." heinonline.org. Retrieved 2018-04-30.
- "Transcript of the First Debate". The New York Times. 2016-09-27. ISSN 0362-4331. Retrieved 2018-04-30.
- Chicago Council on Global Affairs. "Pro-Trade Views on the Rise, Partisan Divisions on NAFTA Widen". www.thechicagocouncil.org. Retrieved 2018-04-30.
- Gallup, Inc. "Opinion Briefing: North American Free Trade Agreement". Gallup.com. Retrieved 2018-04-30.
- Beaulieu, Eugene, and Dylan Klemen. "You Say USMCA or T-MEC and I Say CUSMA: The New NAFTA-Let's Call the Whole Thing On." The School of Public Policy Publications (2020) online.
- Cameron, Maxwell A. , Brian W. Tomlin (2002) The making of NAFTA: how the deal was done. Cornell University Press. ISBN 0-8014-8781-1.
- Chambers, Edward J. and Peter H. Smith (2002) NAFTA in the new millennium. University of California, San Diego. Center for U.S.-Mexican Studies ISBN 0-88864-386-1
- Hufbauer. Gary Clyde, and Jeffrey J. Schott (2005) NAFTA Revisited: Achievements and Challenges Washington, D.C.: Institute for International Economics ISBN 0-88132-334-9
- Poynter. 2018. Everything you should know about North American trade, in 8 fact checks.
- Rosenberg, Jerry M. ed. Encyclopedia of the North American Free Trade Agreement, the New American Community, and Latin-American Trade (1995) online
- Skonieczny, Amy. "Constructing NAFTA: Myth, representation, and the discursive construction of US foreign policy." International Studies Quarterly 45.3 (2001): 433-454 online
- Villareal, M., and Ian F. Fergusson. (2017) "The North American Free Trade Agreement (NAFTA)." (CRS Report R42965). Washington: Congressional Research Service online free; a U.S. government document
Wikimedia Commons has media related to North American Free Trade Agreement.
- Text of the agreement, on the official website of the NAFTA Secretariat.
- NaftaNow.org, jointly developed by the Governments of Canada, Mexico and the United States of America.
- North American Free Trade Agreement (NAFTA) page on the Rules of Origin Facilitator, with member countries' status and access to legal documents.
- Abbott, Frederick M. North American Free Trade Agreement, Case Law (Max Planck Encyclopedia of Public International Law).
- Office of the U.S. Trade Representative – NAFTA statistics page
- U.S. Department of Agriculture NAFTA links page
- North American Free Trade Agreement, 1992 Oct. 7 at Project Gutenberg
- NAFTA document in World Bank's World Integrated Trade Solution
- GPTAD database library
21 | The Hungarian nobility consisted of a privileged group of individuals, most of whom owned landed property, in the Kingdom of Hungary. Initially, a diverse body of people were described as noblemen, but from the late 12th century only high-ranking royal officials were regarded as noble. Most aristocrats claimed a late 9th century Magyar leader for their ancestor. Others were descended from foreign knights, and local Slavic chiefs were also integrated in the nobility. Less illustrious individuals, known as castle warriors, also held landed property and served in the royal army. From the 1170s, most privileged laymen called themselves royal servants to emphasize their direct connection to the monarchs. The Golden Bull of 1222 enacted their liberties, especially their tax-exemption and the limitation of their military obligations. From the 1220s, royal servants were associated with the nobility and the highest-ranking officials were known as barons of the realm. Only those who owned allods – lands free of obligations – were regarded as true noblemen, but other privileged groups of landowners, known as conditional nobles, also existed.
In the 1280s, Simon of Kéza was the first to claim noblemen held real authority in the kingdom. The counties developed into institutions of noble autonomy, and the nobles' delegates attended the Diets (or parliaments). The wealthiest barons built stone castles enabling them to control vast territories, but royal authority was restored in the early 14th century. Louis I of Hungary introduced an entail system and enacted the principle of "one and the selfsame liberty" of all noblemen. Actually, legal distinctions between true noblemen and conditional nobles prevailed, and the most powerful nobles employed lesser noblemen as their familiares (retainers). According to customary law, only males inherited noble estates, but the kings could promote a daughter to a son, authorizing her to inherit her father's lands. Noblewomen who had married a commoner could also claim their inheritance – the daughters' quarter (that is one-quarter of their possessions) – in land.
The monarchs granted hereditary titles and the poorest nobles lost their tax-exemption from the middle of the 15th century, but the Tripartitum – a frequently cited compilation of customary law – maintained the notion of all noblemen's equality. In the early modern period, Hungary was divided into three parts – Royal Hungary, Transylvania and Ottoman Hungary – because of the expansion of the Ottoman Empire in the 1570s. The princes of Transylvania supported the noblemen's fight against the Habsburg kings in Royal Hungary, but they prevented the Transylvanian noblemen from challenging their authority. Ennoblement of whole groups of people was not unusual in the 17th century. After the Diet was divided into two chambers in Royal Hungary in 1608, noblemen with a hereditary title had a seat in the Upper House, other nobles sent delegates to the Lower House.
Most parts of medieval Hungary were integrated into the Habsburg Monarchy in the 1690s. Monarchs confirmed the nobles' privileges several times, but their attempts to strengthen royal authority regularly brought them into conflicts with the nobility, who made up about four-and-a-half percent of society. Reformist noblemen demanded the abolition of noble privileges from the 1790s, but their program was enacted only during the Hungarian Revolution of 1848. Most noblemen lost their estates after the emancipation of their serfs, but the aristocrats preserved their distinguished social status. State administration employed thousands of impoverished noblemen in Austria-Hungary. Prominent (mainly Jewish) bankers and industrialists were awarded with nobility, but their social status remained inferior to traditional aristocrats. Noble titles were abolished only in 1947, months after Hungary was proclaimed a republic.
The Magyars (or Hungarians) dwelled in the Pontic steppes when they first appeared in written sources in the mid 9th century. Muslim merchants described them as wealthy nomadic warriors, but they also noticed the Magyars had extensive arable lands. Masses of Magyars crossed the Carpathian Mountains after the Pechenegs invaded their lands in 894 or 895. They settled in the lowlands along the Middle Danube, annihilated Moravia and defeated the Bavarians in the 900s. Slovak historians write that at least three Hungarian noble kindreds were descended from Moravian aristocrats. Historians who say that the Vlachs (or Romanians) were already present in the Carpathian Basin in the late 9th century propose that the Vlach knezes (or chieftains) also survived the Hungarian Conquest. Neither of the two continuity theories is universally accepted.
Around 950, Constantine Porphyrogenitus recorded the Hungarians were organized into tribes, and each had its own "prince". The tribal leaders most probably bore the title úr, as it is suggested by Hungarian terms – ország (now "realm") and uralkodni ("to rule") – deriving from this noun. Porphyrogenitus noted the Magyars spoke both Hungarian and "the tongue of the Chazars", showing that at least their leaders were bilingual.
Archaeological research revealed that most settlements comprised small pit-houses and log cabins in the 10th century, but literary sources mention tents were still in use in the 12th century. No archeological finds evidence fortresses in the Carpathian Basin in the 10th century, but fortresses were also rare in Western Europe during the same period. A larger log cabin – measuring five by five metres (16 ft × 16 ft) – which was built on a foundation of stones in Borsod was tentatively identified as the local leader's abode.
More than 1,000 graves yielding sabres, arrow-heads and bones of horses show mounted warriors formed a significant group in the 10th century. The highest-ranking Hungarians were buried either in large cemeteries (where hundreds of graves of men buried without weapons surrounded their burial places), or in small cemeteries with 25–30 graves. The wealthy warriors' burial sites yielded richly decorated horse harness, and sabretaches ornamented with precious metal plaques. Rich women's graves contained their braid ornaments and rings made of silver or gold and decorated with precious stones. The most widespread decorative motifs which can be regarded as tribal totems – the griffin, wolf and hind – were rarely applied in Hungarian heraldry in the following centuries. Defeats during the Hungarian invasions of Europe and clashes with the paramount rulers from the Árpád dynasty had decimated the leading families by the end of the 10th century. The Gesta Hungarorum, which was written around 1200, claimed that dozens of noble kindreds flourishing in the late 12th century had been descended from tribal leaders, but most modern scholars do not regard this list as a reliable source.
Stephen I, who was crowned the first king of Hungary in 1000 or 1001, defeated the last resisting tribal chieftains. Earthen forts were built throughout the kingdom and most of them developed into centers of royal administration. About 30 administrative units, known as counties, were established before 1040; more than 40 new counties were organized during the next centuries. A royal official, the ispán, whose office was not hereditary, headed each county. The royal court provided further career opportunities. Actually, as Martyn Rady noted, the "royal household was the greatest provider of largesse in the kingdom" where the royal family owned more than two-thirds of all lands. The palatine – the head of the royal household – was the highest-ranking royal official.
The kings appointed their officials from among the members of about 110 aristocratic kindreds. These aristocrats were descended either from native (that is, Magyar, Kabar, Pecheneg or Slavic) chiefs, or from foreign knights who had migrated to the country in the 11th and 12th centuries. The foreign knights had been trained in the Western European art of war, which contributed to the development of heavy cavalry. Their descendants were labelled as newcomers for centuries, but intermarriage between natives and newcomers was not rare, which enabled their integration. The monarchs pursued an expansionist policy from the late 11th century. Ladislaus I seized Slavonia – the plains between the river Drava and the Dinaric Alps – in the 1090s. His successor, Coloman, was crowned king of Croatia in 1102. Both realms retained their own customs, and Hungarians rarely received land grants in Croatia. According to customary law, Croatians could not be obliged to cross the river Drava to fight in the royal army at their own expense.
The earliest laws authorized landowners to dispose freely of their private estates, but customary law prescribed that inherited lands could only be transferred with the consent of the owner's kinsmen who could inherit them. From the early 12th century, only family lands traceable back to a grant made by Stephen I could be inherited by the deceased owner's distant relatives; other estates escheated to the Crown if their owner did not have offspring and brothers. Aristocratic families held their inherited domains in common for generations before the 13th century. Thereafter the division of inherited property became the standard practice. Even families descended from wealthy kindreds could become impoverished through the regular divisions of their estates.
Medieval documents mention the basic unit of estate organization as praedium or allodium. A praedium was a piece of land (either a whole village or part of it) with well-marked borders. Most wealthy landowners' domains consisted of scattered praedia, in several villages. Due to the scarcity of documentary evidence, the size of the private estates cannot be determined. The descendants of Otto Győr remained wealthy landowners even after he donated 360 households to the newly established Zselicszentjakab Abbey in 1061. The establishment of monasteries by wealthy individuals was common. Such proprietary monasteries served as burial places for their founders and the founders' descendants, who were regarded as the co-owners, or from the 13th century, co-patrons, of the monastery. Wolf identifies the small motte forts that appeared in the 12th century, built on artificial mounds and protected by a ditch and a palisade, as the centers of private estates. Unfree peasants cultivated part of the praedium, but other plots were hired out in return for in-kind taxes.
The term "noble" was rarely used and poorly defined before the 13th century: it could refer to a courtier, a landowner with judicial powers, or even to a common warrior. The existence of a diverse group of warriors, who were subjected to the monarch, royal officials or prelates is well documented. The castle warriors, who were exempt from taxation, held hereditary landed property around the royal castles. Light-armored horsemen, known as lövős (or archers), and armed castle folk, mentioned as őrs (or guards), defended the gyepűs (or borderlands).
Official documents from the end of the 12th century only mentioned court dignitaries and ispáns as noblemen. Aristocrats had adopted most elements of chivalric culture. They regularly named their children after Paris of Troy, Hector, Tristan, Lancelot and other heroes of Western European chivalric romances. The first tournaments were held around the same time.
The regular alienation of royal estates is well-documented from the 1170s. The monarchs granted immunities, exempting the grantee's estates from the jurisdiction of the ispáns, or even renouncing royal revenues that had been collected there. Béla III was the first Hungarian monarch to give away a whole county to a nobleman: he granted Modrus in Croatia to Bartholomew of Krk in 1193, stipulating that he was to equip warriors for the royal army. Béla's son, Andrew II, decided to "alter the conditions" of his realm and "distribute castles, counties, lands and other revenues" to his officials, as he narrated in a document in 1217. Instead of granting the estates in fief, with an obligation to render future services, he gave them as allods, in reward for the grantee's previous acts. The great officers who were the principal beneficiaries of his grants were mentioned as barons of the realm from the late 1210s.
Donations on such a large scale accelerated the development of a wealthy group of landowners, most descending from high-ranking kindreds. Some wealthy landowners could afford to build stone castles in the 1220s. Closely related aristocrats were distinguished from other lineages through a reference to their (actual or presumed) common ancestor with the words de genere ("from the kindred"). Families descending from the same kindred adopted similar insignia. The author of the Gesta Hungarorum fabricated genealogies for them and emphasized that they could never be excluded from "the honor of the realm", that is, from state administration.
The new owners of the transferred royal estates wanted to subjugate the freemen, castle warriors and other privileged groups of people living in or around their domains. The threatened groups wanted to achieve confirmation of their status as royal servants, emphasizing that they were only to serve the king. Béla III issued the first extant royal charter about the grant of this rank to a castle warrior. Andrew II's Golden Bull of 1222 enacted royal servants' privileges. They were exempt from taxation; they were to fight in the royal army without proper compensation only if enemy forces invaded the kingdom; only the monarch or the palatine could judge their cases; and their arrest without a verdict was prohibited. According to the Golden Bull, only royal servants who died without a son could freely will their estates, but even in this case, their daughters were entitled to the daughters' quarter. The final article of the Golden Bull authorized the bishops, barons and other nobles to resist the monarch if he ignored its provisions. Most provisions of the Golden Bull were first confirmed in 1231.
The clear definition of the royal servants' liberties distinguished them from all other privileged groups whose military obligations remained theoretically unlimited. From the 1220s, the royal servants were regularly called noblemen and started to develop their own corporate institutions at the county level. In 1232, the royal servants of Zala County asked Andrew II to authorize them "to judge and do justice", stating that the county had slipped into anarchy. The king granted their request and Bartholomew, Bishop of Veszprém, sued one Ban Oguz for properties before their community. The "community of the royal servants of Zala" was regarded as a juridical person with its own seal.
The first Mongol invasion of Hungary in 1241 and 1242 proved the importance of well-fortified locations and heavily armored cavalry. During the following decades, Béla IV of Hungary gave away large parcels of the royal demesne (domain), expecting that the new owners would build stone castles there. Béla's burdensome castle-building program was unpopular, but he achieved his aim: almost 70 castles were built or reconstructed during his reign. More than half of the new or reconstructed castles were in noblemen's domains. Most new castles were erected on rocky peaks, mainly along the western and northern borderlands. The spread of stone castles profoundly changed the structure of landholding because castles could not be maintained without proper income. Lands and villages were legally attached to each castle, and castles were thereafter always transferred and inherited along with these "appurtenances".
The royal servants were legally identified as nobles in 1267. That year, "the nobles of all Hungary, called royal servants" persuaded Béla IV and his son, Stephen, to hold an assembly and confirm their collective privileges. Other groups of land-holding warriors could also be called nobles, but they were always distinguished from the true noblemen. The Vlach noble knezes who had landed property in the Banate of Severin were obliged to fight in the army of the ban (or royal governor). Most warriors known as the noble sons of servants were descended from freemen or liberated serfs who received estates from Béla IV in Upper Hungary on the condition that they were to equip jointly a fixed number of knights. The nobles of the Church formed the armed retinue of the wealthiest prelates. The nobles of Turopolje in Slavonia were required to provide food and fodder to high-ranking royal officials. The Székelys and Saxons firmly protected their communal liberties, which prevented their leaders from exercising noble privileges in the Székely and Saxon territories in Transylvania. Székelys and Saxons could only enjoy the liberties of noblemen if they held estates outside the lands of the two privileged communities.
Most noble families failed to adopt a strategy to avoid the division of their inherited estates into dwarf-holdings through generations. Daughters could only demand the cash equivalent of the quarter of their father's estates, but younger sons rarely remained unmarried. Impoverished noblemen had little chance to receive land grants from the kings, because they were unable to participate in the monarchs' military campaigns, but commoners who bravely fought in the royal army were regularly ennobled.
Self-government and oligarchs
Historian Erik Fügedi noted that "castle bred castle" in the second half of the 13th century: if a landowner erected a fortress, his neighbors would also build one to defend their own estates. Between 1271 and 1320, noblemen or prelates built at least 155 new fortresses, and only about a dozen castles were erected on royal domains. Most castles consisted of a tower, surrounded by a fortified courtyard, but the tower could also be built into the walls. Noblemen who could not erect fortresses were occasionally forced to abandon their inherited estates or seek the protection of more powerful lords, even through renouncing their liberties.
The lords of the castles had to hire a professional staff for the defence of the castle and the management of its appurtenances. They primarily employed nobles who held nearby estates, which gave rise to the development of a new institution, known as familiaritas. A familiaris was a nobleman who entered into the service of a wealthier landowner in exchange for a fixed salary or a portion of revenue, or rarely for the ownership or usufruct (right to enjoyment) of a piece of land. Unlike a conditional noble, in theory a familiaris remained an independent landholder, only subject to the monarch.
Monarchs took an oath at their coronation, which included a promise to respect the noblemen's liberties from the 1270s. The counties gradually transformed into an institution of the noblemen's local autonomy. Noblemen regularly discussed local matters at the counties' general assemblies. The sedria (the counties' law courts) became important elements in the administration of justice. They were headed by the ispáns or their deputies, but they consisted of four (in Slavonia and Transylvania, two) elected local noblemen, known as judges of the nobles.
Hungary fell into a state of anarchy because of the minority of Ladislaus IV in the early 1270s. To restore public order, the prelates convoked the barons and the delegates of the noblemen and Cumans to a general assembly near Pest in 1277. This first Diet (or parliament) declared the monarch to be of age. In the early 1280s, Simon of Kéza associated the Hungarian nation with the nobility in his Deeds of the Hungarians, emphasizing that the community of noblemen held real authority.
The barons took advantage of the weakening of royal authority and seized large contiguous territories. The monarchs could not appoint and dismiss their officials at will any more. The most powerful barons – known as oligarchs in modern historiography – appropriated royal prerogatives, combining private lordship with their administrative powers. When Andrew III, the last male member of the Árpád dynasty, died in 1301, about a dozen lords held sway over most parts of the kingdom.
Age of the Angevins
Ladislaus IV's great-nephew, Charles I, who was a scion of the Capetian House of Anjou, restored royal power in the 1310s and 1320s. He captured the oligarchs' castles, which again secured the preponderance of the royal demesne. He refused to confirm the Golden Bull in 1318 and claimed that noblemen had to fight in his army at their own expense. He ignored customary law and regularly "promoted a daughter to a son", granting her the right to inherit her father's estates. The King reorganized the royal household, appointing pages and knights to form his permanent retinue. He established the Order of Saint George, which was the first secular chivalric order in Europe. Charles I was the first Hungarian monarch to grant coats of arms (or rather crests) to his subjects. He based royal administration on honors (or office fiefs), distributing most counties and royal castles among his highest-ranking officials. These "baronies", as Matteo Villani recorded it around 1350, were "neither hereditary nor lifelong", but Charles rarely dismissed his most trusted barons. Each baron was required to hold his own banderium (or armed retinue), distinguished by his own banner.
In 1351, Charles's son and successor, Louis I, confirmed all provisions of the Golden Bull, save the one that authorized childless noblemen to freely will their estates. Instead, he introduced an entail system, prescribing that childless noblemen's landed property "should descend to their brothers, cousins and kinsmen". The concept of aviticitas also protected the Crown's interests: only kin within the third degree could inherit a nobleman's property and noblemen who had only more distant relatives could not dispose of their property without the king's consent. Louis I emphasized that all noblemen enjoyed "one and the selfsame liberty" in his realms and secured all privileges that nobles owned in Hungary proper to their Slavonian and Transylvanian peers. He rewarded dozens of Vlach knezes and voivodes with true nobility for military merits. The vast majority of the noble sons of servants achieved the status of true noblemen without a formal royal act, because the memory of their conditional landholding fell into oblivion. Most of them preferred Slavic names even in the 14th century, showing that they spoke the local Slavic vernacular. Other groups of conditional nobles remained distinguished from true noblemen. They developed their own institutions of self-government, known as seats or districts. Louis decreed that only Catholic noblemen and knezes could hold landed property in the district of Karánsebes (now Caransebeș in Romania) in 1366, but Orthodox landowners were not forced to convert to Catholicism in other territories of the kingdom. Even the Catholic bishop of Várad (now Oradea in Romania) authorized his Vlach voivodes to employ Orthodox priests. The king granted the district of Fogaras (around present-day Făgăraș in Romania) to Vladislav I of Wallachia in fief in 1366. In his new duchy, Vladislav I donated estates to Wallachian boyars; their legal status was similar to the position of the knezes in other regions of Hungary.
Royal charters customarily identified noblemen and landowners from the second half of the 14th century. A man who lived in his own house on his own estates was described as living "in the way of nobles", in contrast with those who did not own landed property and lived "in the way of peasants". A verdict of 1346 declared that a noble woman who was given in marriage to a commoner should receive her inheritance "in the form of an estate in order to preserve the nobility of the descendants born of the ignoble marriage". Her husband was also regarded as a nobleman – a noble by his wife – according to the local customs of certain counties.
The peasants' legal position had been standardized in almost the entire kingdom by the 1350s. The iobagiones (or free peasant tenants) were to pay seigneurial taxes, but were rarely obliged to provide labour service. In 1351, the king ordered that the ninth – a tax payable to the landowners – was to be collected from all iobagiones, thus preventing landowners from offering lower taxes to persuade tenants to move from other lords' lands to their estates. In 1328, all landowners were authorized to administer justice on their estates "in all cases except cases of theft, robbery, assault or arson". The kings started to grant noblemen the right to execute or mutilate criminals who were captured on their estates. The most influential noblemen's estates were also exempted from the jurisdiction of the sedria.
Royal power quickly declined after Louis I died in 1382. His son-in-law, Sigismund of Luxembourg, entered into a formal league with the aristocrats who had elected him king in early 1387. He had to give away more than half of the 150 royal castles to his supporters before he could strengthen his authority in the early 15th century. His favorites were foreigners, but old Hungarian families also took advantage of his magnanimity. The wealthiest noblemen, known as magnates, built comfortable castles in the countryside which became important centers of social life. These fortified manor houses always contained a hall for representative purposes and a private chapel. Sigismund regularly invited the magnates to the royal council, even if they did not hold higher offices. He founded a new chivalric order, the Order of the Dragon, in 1408 to award his most loyal supporters.
The expansion of the Ottoman Empire reached the southern frontiers in the 1390s. A large anti-Ottoman crusade ended with a catastrophic defeat near Nicopolis in 1396. The next year, Sigismund held a Diet in Temesvár (now Timișoara in Romania) to strengthen the defence system. He confirmed the Golden Bull, but without the two provisions that limited the noblemen's military obligations and established their right to resist the monarchs. The Diet obliged all landowners to equip one archer for every 20 peasant plots on their domains to serve in the royal army. Sigismund granted large estates to neighboring Orthodox rulers in Hungary to secure their alliance. They established Basilite monasteries on their estates.
Sigismund's son-in-law, Albert of Habsburg, was elected king in early 1438, but only after he promised always to make important decisions with the consent of the royal council. After he died in 1439, a civil war broke out between the partisans of his son, Ladislaus the Posthumous and the supporters of Vladislaus III of Poland. Ladislaus the Posthumous was crowned with the Holy Crown of Hungary, but the Diet proclaimed the coronation invalid. Vladislaus died fighting the Ottomans during the Crusade of Varna in 1444 and the Diet elected seven captains in chief to administer the kingdom. The talented military commander, John Hunyadi, was elected the sole regent in 1446.
The Diet developed from a consultative body into an important institution of law making in the 1440s. The magnates were always invited to attend it in person. Lesser noblemen were also entitled to attend the Diet, but in most cases they were represented by delegates. The noble delegates were almost always the familiares of the magnates.
Birth of titled nobility and the Tripartitum
Hunyadi was the first noble to receive a hereditary title from a Hungarian monarch. Ladislaus the Posthumous granted him the Saxon district of Bistritz (now Bistrița in Romania) with the title of perpetual count in 1453. Hunyadi's son, Matthias Corvinus, who was elected king in 1458, rewarded further noblemen with the same title. Fügedi states that 16 December 1487 was the "birthday of the estate of magnates in Hungary", because an armistice signed on this day listed 23 Hungarian "natural barons", contrasting them with the high officers of state, who were mentioned as "barons of office". Corvinus' successor, Vladislaus II, and Vladislaus' son, Louis II, formally began to reward important persons of their government with the hereditary title of baron.
Differences in the nobles' wealth increased in the second half of the 15th century. About 30 families owned more than a quarter of the territory of the kingdom when Corvinus died in 1490. Average magnates held about 50 villages, but the regular division of inherited landed property could cause the impoverishment of aristocratic families. Strategies applied to avoid this – family planning and celibacy – led to the extinction of most aristocratic families after a few generations. A tenth of all lands in the kingdom was in the possession of about 55 wealthy noble families. Other nobles held almost one third of the lands, but this group included 12–13,000 peasant-nobles who owned a single plot (or a part of it) and had no tenants. The Diets regularly compelled the peasant-nobles to pay tax on their plots.
The Diet ordered the compilation of customary law in 1498. István Werbőczy completed the task, presenting a law-book at the Diet in 1514. His Tripartitum – The Customary Law of the Renowned Kingdom of Hungary in Three Parts – was never enacted, but it was consulted at the law courts for centuries. It summarized the noblemen's fundamental privileges in four points: noblemen were only subject to the monarch's authority and could only be arrested in a due legal process; furthermore, they were exempt from all taxes and were entitled to resist the king if he attempted to interfere with their privileges. Werbőczy also implied that Hungary was actually a republic of nobles headed by a monarch, stating that all noblemen "are members of the Holy Crown" of Hungary. Quite anachronistically, he emphasized the idea of all noblemen's legal equality, but he had to admit that the high officers of the realm, whom he mentioned as "true barons", were legally distinguished from other nobles. He also mentioned the existence of a distinct group, who were barons "in name only", but without specifying their peculiar status.
The Tripartitum regarded the kindred as the basic unit of nobility. A noble father exercised almost autocratic authority over his sons, because he could imprison them or offer them as a hostage for himself. His authority ended only if he divided his estates with his sons, but the division could rarely be enforced. The "betrayal of fraternal blood" (that is, a kinsman's "deceitful, sly, and fraudulent ... disinheritance") was a serious crime, which was punished by loss of honor and the confiscation of all property. Although the Tripartitum did not explicitly mention it, a nobleman's wife was also subject to his authority. She received her dower from her husband at the consummation of their marriage. If her husband died, she inherited his best coach-horses and clothes.
Demand for foodstuffs grew rapidly in Western Europe in the 1490s. The landowners wanted to take advantage of the growing prices. They demanded labour service from their peasant tenants and started to collect the seigneurial taxes in kind. The Diets passed decrees that restricted the peasants' right to free movement and increased their burdens. The peasants' grievances unexpectedly culminated in a rebellion in May 1514. The rebels captured manor houses and murdered dozens of noblemen, especially on the Great Hungarian Plain. The voivode of Transylvania, John Zápolya, annihilated their main army at Temesvár on 15 July. György Dózsa and other leaders of the peasant war were tortured and executed, but most rebels received a pardon. The Diet punished the peasantry as a group, condemning them to perpetual servitude and depriving them of the right of free movement. The Diet also enacted the serfs' obligation to provide one day's labour service for their lords each week.
Early modern and modern times
The Ottomans annihilated the royal army at the Battle of Mohács in 1526. Louis II died fleeing from the battlefield and two claimants, John Zápolya and Ferdinand of Habsburg, were elected kings. Ferdinand tried to reunite Hungary after Zápolya died in 1540, but the Ottoman Sultan, Suleiman the Magnificent, intervened and captured Buda in 1541. The sultan allowed Zápolya's widow, Isabella Jagiellon, to rule the lands east of the river Tisza on behalf of her infant son, John Sigismund, in return for a yearly tribute. His decision divided Hungary into three parts: the Ottomans occupied the central territories; John Sigismund's eastern Hungarian Kingdom developed into the autonomous Principality of Transylvania; and the Habsburg monarchs preserved the northern and western territories (or Royal Hungary).
Most noblemen fled from the central regions to the unoccupied territories. Peasants who lived along the borders paid taxes both to the Ottomans and their former lords. Commoners were regularly recruited to serve in the royal army or in the magnates' retinues to replace the noblemen who had perished in the fighting. The irregular hajdú foot-soldiers – mainly runaway serfs and dispossessed noblemen – became important elements of the defence forces. Stephen Bocskai, Prince of Transylvania, settled 10,000 hajdús in seven villages and exempted them from taxation in 1605, which was the "largest collective ennoblement" in the history of Hungary.
The noblemen formed one of the three nations (or Estates of the realm) in Transylvania, but they could rarely challenge the princes' authority. In Royal Hungary, the magnates successfully protected the noble privileges, because their vast domains were almost completely exempt from royal officials' authority. Their manors were fortified in the "Hungarian manner" (with walls made of earth and timber) in the 1540s. The Hungarian noblemen could also count on the support of the Transylvanian princes against the Habsburg monarchs. Intermarriages among Austrian, Czech and Hungarian aristocrats gave rise to the development of a "supranational aristocracy" in the Habsburg Monarchy. Foreign aristocrats regularly received Hungarian citizenship, and Hungarian noblemen were often naturalized in the Habsburgs' other realms. The Habsburg kings rewarded the most powerful magnates with hereditary titles from the 1530s.
The aristocrats supported the spread of the Reformation. Most noblemen adhered to Lutheranism in the western regions of Royal Hungary, but Calvinism was the dominant religion in Transylvania and other regions. John Sigismund even promoted anti-Trinitarian views, but most Unitarian noblemen perished in battles in the early 1600s. The Habsburgs remained staunch supporters of Counter-Reformation and the most prominent aristocratic families converted to Catholicism in Royal Hungary in the 1630s. The Calvinist princes of Transylvania supported their co-religionists. Gabriel Bethlen granted nobility to all Calvinist pastors.
Both the kings and the Transylvanian princes regularly ennobled commoners without granting landed property to them. Jurisprudence, however, maintained that only those who owned land cultivated by serfs could be regarded as fully fledged noblemen. Armalists – noblemen who held a charter of ennoblement, but not a single plot of land – and peasant-nobles continued to pay taxes, for which they were collectively known as taxed nobility. Nobility could be purchased from the kings, who were always in need of funds. Landowners also benefitted from the ennoblement of their serfs, because they could demand a fee for their consent.
The Diet was officially divided into two chambers in Royal Hungary in 1608. All adult male members of the titled noble families had a seat in the Upper House. The lesser noblemen elected two or three delegates at the general assemblies of the counties to represent them in the Lower House. The Croatian and Slavonian magnates also had a seat at the Upper House, and the sabor (or Diet) of Croatia and Slavonia sent delegates to the Lower House.
Liberation and war of independence
Relief forces from the Holy Roman Empire and the Polish–Lithuanian Commonwealth inflicted a crushing defeat on the Ottomans at Vienna in 1683. The Ottomans were expelled from Buda in 1686. Michael I Apafi, the prince of Transylvania, acknowledged the suzerainty of Emperor Leopold I (who was also king of Hungary) in 1687. Grateful for the liberation of Buda, the Diet abolished the noblemen's right to resist the monarch for the defense of their liberties. Leopold confirmed the privileges of the Transylvanian Estates in 1690.
In 1688, the Diet authorized the aristocrats to establish a special trust, known as fideicommissum, with royal consent to prevent the distribution of their landed wealth among their descendants. In accordance with the traditional concept of aviticitas, inherited estates could not be subject to the trust. At any one time, a single member of the family administered the estates held in fideicommissum, but he was responsible for the proper maintenance of his relatives.
The Ottomans acknowledged the loss of central Hungary in 1699. Leopold set up a special committee to distribute the lands in the reconquered territories. The descendants of the noblemen who had held estates there before the Ottoman conquest were required to provide documentary evidence to substantiate their claims to the ancestral lands. Even if they could present documents, they were to pay a fee – a tenth of the value of the claimed property – as compensation for the costs of the liberation war. Few noblemen could meet these criteria, and more than half of the recovered lands were distributed among foreigners. They were naturalized, but most of them never visited Hungary.
The Habsburg administration doubled the amount of the taxes to be collected in Hungary and demanded almost one third of the taxes (1.25 million florins) from the clergy and the nobility. The palatine, Prince Paul Esterházy, convinced the monarch to reduce the noblemen's tax-burden to 0.25 million florins, but the difference was to be paid by the peasantry. Leopold did not trust the Hungarians, because a group of magnates had conspired against him in the 1670s. Mercenaries replaced the Hungarian garrisons, and they frequently plundered the countryside. The monarch also supported Cardinal Leopold Karl von Kollonitsch's attempts to restrict the Protestants' rights. Tens of thousands of Catholic Germans and Orthodox Serbs were settled in the reconquered territories.
The outbreak of the War of the Spanish Succession provided an opportunity for the discontented Hungarians to rise against Leopold. They regarded one of the wealthiest aristocrats, Prince Francis II Rákóczi, as their leader. Rákóczi's War of Independence lasted from 1703 to 1711. Although the rebels were forced to yield, the Treaty of Szatmár granted a general amnesty for them and the new Habsburg monarch, Charles III, promised to respect the privileges of the Estates of the realm.
Cooperation, absolutism and reforms
Charles III again confirmed the privileges of the Estates of the "Kingdom of Hungary, and the Parts, Kingdoms and Provinces thereto annexed" in 1723 in return for the enactment of the Pragmatic Sanction which established his daughters' right to succeed him. Montesquieu, who visited Hungary in 1728, regarded the relationship between the king and the Diet as a good example of the separation of powers. The magnates almost monopolized the highest offices, but both the Hungarian Court Chancellery – the supreme body of royal administration – and the Lieutenancy Council – the most important administrative office – also employed lesser noblemen. In practice, Protestants were excluded from public offices after a royal decree, the Carolina Resolutio, obliged all candidates to take an oath on the Virgin Mary.
The Peace of Szatmár and the Pragmatic Sanction maintained that the Hungarian nation consisted of the privileged groups, independent of their ethnicity, but the first debates along ethnic lines occurred in the early 18th century. The jurist Mihály Bencsik claimed that the burghers of Trencsén (now Trenčín in Slovakia) should not send delegates to the Diet because their ancestors had been forced to yield to the conquering Magyars in the 890s. A priest, Ján B. Magin, wrote a response, arguing that ethnic Slovaks and Hungarians enjoyed the same rights. In Transylvania, a bishop of the Romanian Greek Catholic Church, Baron Inocențiu Micu-Klein, demanded the recognition of the Romanians as the fourth Nation.
Maria Theresa succeeded Charles III in 1740, which gave rise to the War of the Austrian Succession. The noble delegates offered their "lives and blood" for their new "king" and the declaration of the general levy of the nobility was crucial at the beginning of the war. Grateful for their support, Maria Theresa strengthened the links between the Hungarian nobility and the monarch. She established the Theresian Academy and the Royal Hungarian Bodyguard for young Hungarian noblemen. Both institutions enabled the spread of the ideas of the Age of Enlightenment. Freemasonry also became popular, especially among the magnates.
Cultural differences between the magnates and lesser noblemen grew. The magnates adopted the lifestyle of the imperial aristocracy, moving between their summer palaces in Vienna and their newly built splendid residences in Hungary. Prince Miklós Esterházy employed Joseph Haydn; Count János Fekete, a fierce protector of noble privileges, bombarded Voltaire with letters and dilettante poems; Count Miklós Pálffy proposed to tax the nobles to finance a standing army. However, most noblemen were unwilling to renounce their privileges. Lesser noblemen also insisted on their traditional way of life and lived in simple houses, made of timber or packed clay.
Maria Theresa did not hold Diets after 1764. She regulated the relationship of landowners and their serfs in a royal decree in 1767. Her son and successor, Joseph II, known as the "king in hat", was never crowned, because he wanted to avoid the coronation oath. He introduced reforms which clearly contradicted local customs. He replaced the counties with districts and appointed royal officials to administer them. He also abolished serfdom, securing all peasants the right to free movement after the revolt of Romanian peasants in Transylvania. He ordered the first census in Hungary in 1784. According to its records, the nobility made up about four-and-a-half percent of the male population in the Lands of the Hungarian Crown (with 155,519 noblemen in Hungary proper, and 42,098 noblemen in Transylvania, Croatia and Slavonia). The nobles' proportion was significantly higher (six–sixteen percent) in the northeastern and eastern counties, and lower (three percent) in Croatia and Slavonia. Poor noblemen, who were mocked as "nobles of the seven plum trees" or "sandal-wearing nobles", made up almost 90% of the nobility. Previous investigations of nobility show that more than half of the noble families received this rank after 1550.
The few reformist noblemen greeted the news of the French Revolution with enthusiasm. József Hajnóczy translated the Declaration of the Rights of Man and of the Citizen into Latin, and János Laczkovics published its Hungarian translation. To appease the Hungarian nobility, Joseph II revoked almost all his reforms on his deathbed in 1790. His successor, Leopold II, convoked the Diet and confirmed the liberties of the Estates of the realm, emphasizing Hungary was a "free and independent" realm, governed by its own laws. News about the Jacobin terror in France strengthened royal power. Hajnóczy and other radical (or "Jacobin") noblemen who had discussed the possibility of the abolishment of all privileges in secret societies were captured and executed or imprisoned in 1795. The Diets voted the taxes and the recruits that Leopold's successor, Francis, demanded between 1792 and 1811.
The last general levy of the nobility was declared in 1809, but Napoleon easily defeated the noble troops near Győr. An agricultural boom encouraged the landowners to borrow money and to buy new estates or to establish mills during the war, but most of them went bankrupt after peace was restored in 1814. The concept of aviticitas prevented both the creditors from collecting their money and the debtors from selling their estates. Radical nobles played a crucial role in the reform movements of the early 19th century. As early as around 1800, Gergely Berzeviczy attributed the backwardness of the local economy to the peasants' serfdom. Ferenc Kazinczy and János Batsányi initiated language reform, fearing the disappearance of the Hungarian language. The poet Sándor Petőfi, who was a commoner, ridiculed the conservative noblemen in his poem The Magyar Noble, contrasting their anachronistic pride with their idle way of life.
From the 1820s, a new generation of reformist noblemen dominated political life. Count István Széchenyi demanded the abolition of the serfs' labour service and the entail system, stating that "We, well-to-do landowners are the main obstacles to the progress and greater development of our fatherland". He established clubs in Pressburg and Pest and promoted horse racing, because he wanted to encourage the regular meetings of magnates, lesser noblemen and burghers. Széchenyi's friend, Baron Miklós Wesselényi, demanded the creation of a constitutional monarchy and the protection of civil rights. A lesser nobleman, Lajos Kossuth, became the leader of the most radical politicians in the 1840s. He emphasized that the Diets and the counties were the institutions of the privileged groups, and that only a wider social movement could secure the development of Hungary.
The official use of the Hungarian language spread from the late 18th century, although ethnic Hungarians made up only about 38% of the population. Kossuth declared that all who wanted to enjoy the liberties of the nation should learn Hungarian. Count Janko Drašković recommended that Croatian should replace Latin as the official language in Croatia and Slavonia. The Slovak Ľudovít Štúr stated that the Hungarian nation consisted of many nationalities and that their loyalty could be strengthened by the official use of their languages.
Revolution and neo-absolutism
News of the uprisings in Paris and Vienna reached Pest on 15 March 1848. Young intellectuals proclaimed a radical program, known as the Twelve Points, demanding equal civil rights for all citizens. Count Lajos Batthyány was appointed the first prime minister of Hungary. The Diet quickly enacted the majority of the Twelve Points, and Ferdinand V sanctioned them in April.
The April Laws abolished the nobles' tax-exemption and the aviticitas, but the 31 fideicommissa remained intact. The peasant tenants received the ownership of their plots, but compensation was promised to the landowners. Adult men who owned more than 0.032 km2 (7.9 acres) of arable lands or urban estates with a value of at least 300 florins – about one quarter of the adult male population – were granted the right to vote in the parliamentary elections. However, the noblemen's exclusive franchise in county elections was confirmed; otherwise, ethnic minorities could have easily dominated the general assemblies in many counties. Noblemen made up about one quarter of the members of the new parliament, which assembled after the general elections on 5 July.
The Slovak delegates demanded autonomy for all ethnic minorities at their assembly in May. Similar demands were adopted at the Romanian delegates' meeting. Ferdinand V's advisors persuaded the ban (or governor) of Croatia, Baron Josip Jelačić, to invade Hungary proper in September. A new war of independence broke out and the Hungarian parliament dethroned the Habsburg dynasty on 14 April 1849. Nicholas I of Russia intervened on the legitimist side and Russian troops overpowered the Hungarian army, forcing it to surrender on 13 August.
Hungary, Croatia (and Slavonia) and Transylvania were incorporated as separate realms in the Austrian Empire. The advisors of the young emperor, Franz Joseph, declared that Hungary had lost its historic rights, and the conservative aristocrats could not persuade him to restore the old constitution. Noblemen who had remained loyal to the Habsburgs were appointed to high offices, but most new officials came from other provinces of the empire.
The vast majority of noblemen opted for passive resistance: they did not hold offices in state administration and tacitly obstructed the implementation of imperial decrees. An untitled nobleman from Zala County, Ferenc Deák, became their leader around 1854. They tried to preserve an air of superiority, but most of them were assimilated into the local peasantry or petty bourgeoisie during the following decades. In contrast, the magnates, who retained about one quarter of all lands, could easily raise funds from the developing banking sector to modernize their estates.
Deák and his followers knew that the great powers did not support the disintegration of the Austrian Empire. Austria's defeat in the Austro-Prussian War accelerated the rapprochement between the king and the Deák Party, which led to the Austro-Hungarian Compromise in 1867. Hungary proper and Transylvania were united and the autonomy of Hungary was restored within the Dual Monarchy of Austria-Hungary. The next year, the Croatian–Hungarian Settlement restored the union of Hungary proper and Croatia, but secured the competence of the sabor in internal affairs, education and justice.
The Compromise strengthened the position of the traditional political elite. Only about six percent of the population could vote in the general elections. More than half of the prime ministers and one-third of the ministers were appointed from among the magnates from 1867 to 1918. Landowners made up the majority of the members of parliament. Half of the seats in municipal assemblies were reserved for the greatest taxpayers. Noblemen also dominated state administration, because tens of thousands of impoverished nobles took jobs at the ministries, or at the state-owned railways and post offices. They were ardent supporters of Magyarization, opposing the use of minority languages.
Only noblemen who owned an estate of at least 1.15 km2 (280 acres) were regarded as prosperous, but the number of estates of that size quickly decreased. The magnates took advantage of lesser noblemen's bankruptcies and bought new estates during the same period. New fideicommissa were created, which enabled the magnates to preserve the entailment of their landed wealth. Aristocrats were regularly appointed to the boards of directors of banks and companies.
Jews were the prime movers of the development of the financial and industrial sectors. Jewish businessmen owned more than half of the companies and more than four-fifths of the banks in 1910. They also bought landed property and had acquired almost one-fifth of the estates of between 1.15–5.75 km2 (280–1,420 acres) by 1913. The most prominent Jewish burghers were awarded nobility, and there were 26 aristocratic families and 320 noble families of Jewish origin in 1918. Many of them converted to Christianity, but other nobles did not regard them as their peers.
Revolutions and counter-revolution
The First World War brought about the disintegration of Austria-Hungary in 1918. The Aster Revolution – a movement of the left-liberal Party of Independence, the Social Democratic Party and the Radical Citizens' Party – persuaded King Charles IV to appoint the leader of the opposition, Count Mihály Károlyi, prime minister on 31 October. After the Lower House dissolved itself, Hungary was proclaimed a republic on 16 November. The Hungarian National Council adopted a land reform setting the maximum size of the estates at 1.15 square kilometres (280 acres) and ordering the distribution of any excess among the local peasantry. Károlyi, whose inherited domains had been mortgaged to banks, was the first to implement the reform.
The Allied Powers authorized Romania to occupy new territories and ordered the withdrawal of Hungarian troops almost as far as the Tisza on 26 February 1919. Károlyi resigned and the Bolshevik Béla Kun announced the establishment of the Hungarian Soviet Republic on 21 March. All estates of over 0.43 km2 (110 acres) and all private companies employing more than 20 workers were nationalized. The Bolsheviks could not stop the Romanian invasion and their leaders fled from Hungary on 1 August. After Gyula Peidl's temporary government, the industrialist István Friedrich formed a coalition government with the support of the Allied Powers on 6 August. The Bolsheviks' nationalization program was abolished.
The social democrats boycotted the general elections in early 1920. The new one-chamber parliament restored the monarchy, but without restoring the Habsburgs. Instead, a Calvinist nobleman, Miklós Horthy, was elected regent on 1 March 1920. Hungary had to acknowledge the loss of more than two thirds of its territory and more than 60% of its population (including one-third of the ethnic Hungarians) in the Treaty of Trianon on 4 June.
Horthy, who was not a crowned king, could not grant nobility, but he established a new order of merit, the Order of Gallantry. Its members received the hereditary title of Vitéz ("brave"). They were also granted parcels of land, which renewed the "medieval link between land tenure and service to the crown". Two Transylvanian aristocrats, Counts Pál Teleki and István Bethlen, were the most influential politicians in the interwar period. The events of 1918–19 convinced them that only a "conservative democracy", dominated by the landed nobility, could secure stability. Most ministers and the majority of the members of the parliament were nobles. A conservative agrarian reform – limited to eight and a half percent of all arable lands – was introduced, but almost one third of the lands remained in the possession of about 400 magnate families. The two-chamber parliament was restored in 1926, with an Upper House dominated by the aristocrats, prelates and high-ranking officials.
Antisemitism was a leading ideology in the 1920s and 1930s. A numerus clausus law limited the admission of Jewish students in the universities. Count Fidél Pálffy was one of the leading figures of the national socialist movements, but most aristocrats disdained the radicalism of "petty officers and housekeepers". Hungary participated in the German invasion of Yugoslavia in April 1941 and joined the war against the Soviet Union after the bombing of Kassa in late June. Fearing the defection of Hungary from the war, the Germans occupied the country on 19 March 1944. Hundreds of thousands of Jews and tens of thousands of Romani were transferred to Nazi concentration camps with the local authorities' assistance. The wealthiest business magnates were forced to renounce their companies and banks to redeem their own and their relatives' lives.
The Soviet Red Army reached the Hungarian borders and took possession of the Great Hungarian Plain by 6 December 1944. Delegates from the region's towns and villages established the Provisional National Assembly in Debrecen, which elected a new government on 22 December. Three prominent anti-Nazi aristocrats had a seat in the assembly. The Provisional National Government soon promised land reform, along with the abolition of all "anti-democratic" laws. The last German troops left Hungary on 4 April 1945.
Imre Nagy, the Communist Minister of Agriculture, announced land reform on 17 March 1945. All domains of more than 5.75 km2 (1,420 acres) were confiscated and the owners of smaller estates could retain a maximum 0.58–1.73 km2 (140–430 acres) of land. The land reform, as Bryan Cartledge noted, destroyed the nobility and eliminated the "elements of feudalism, which had persisted for longer in Hungary than anywhere else in Europe". Similar land reforms were introduced in Romania and Czechoslovakia. In the two countries, ethnic Hungarian aristocrats were sentenced to death or prison as alleged war criminals. Hungarian aristocrats could retain their estates only in Burgenland (in Austria) after 1945.
Soviet military authorities controlled the general elections and the formation of a coalition government in late 1945. The new parliament declared Hungary a republic on 1 February 1946. An opinion poll showed that more than 75% of men and 66% of women were opposed to the use of noble titles in 1946. The parliament adopted an act that abolished all noble ranks and related styles, also banning their use. The new act came into force on 14 February 1947.
- They refer to the Hont-Pázmány, Miskolc and Bogát-Radvány clans.
- The Bár-Kalán, Csák, Kán, Lád and Szemere kindreds regarded themselves as descendants of one of the legendary seven leaders of the Hungarian Conquest.
- Andronicus Aba built a castle at Füzér, and the castle at Kabold (now Kobersdorf in Austria) was erected by Pousa Szák.
- The families from the Aba clan had an eagle on their coat-of-arms, and the Csáks adopted the lion.
- According to a 15th-century land-register, many ecclesiastic nobles in the Bishopric of Veszprém were descended from true noblemen who had sought the bishops' protection.
- The most powerful oligarch, Matthew Csák, dominated more than a dozen counties in northwestern Hungary; Ladislaus Kán was the actual ruler of Transylvania; and Paul Šubić ruled Croatia and Dalmatia.
- The Styrian Hermann of Celje became the greatest landowner in Slavonia; the Pole Stibor of Stiboricz held 9 castles and 140 villages in northeastern Hungary.
- The Báthory, Perényi and Rozgonyi families were among the native beneficiaries of Sigismund's grants.
- Mircea I of Wallachia was awarded Fogaras; Stefan Lazarević, Despot of Serbia, received more than a dozen castles.
- Stephen Bánffy of Losonc held 68 villages in 1459, but the same villages were divided among his 14 descendants in 1526.
- From among the 36 wealthiest families of the late 1430s, 27 families survived until 1490, and only eight families until 1570.
- The marriages of the children and grandchildren of Magdolna Székely by her three husbands established close family links between the Hungarian Széchy and Thurzó, the Croatian-Hungarian Zrinski, the Czech Kolowrat, Lobkowicz, Pernštejn, and Rožmberk, and the Austrian or German Arco, Salm and Ungnad families.
- The Tyrolian Count Pyrcho von Arco (who married the Hungarian Margit Széchy) was naturalized in Hungary in 1559; the Hungarian Baron Simon Forgách (who married the Austrian Ursula Pemfflinger) received citizenship in Lower Austria in 1568 and in Moravia in 1581.
- The Batthyány, Illésházy, Nádasdy and Thurzó families were the first converts.
- The former bodyguard, György Bessenyei, wrote pamphlets about the importance of education and the cultivation of the Hungarian language in the 1770s.
- Counts Emil Dessewffy, Antal Szécsen and György Apponyi were their leaders.
- Count Ferenc Zichy had a seat in the Imperial Council, Count Ferenc Nádasdy was made the Imperial Minister of Justice.
- The number of estates of between 1.15–5.75 km2 (280–1,420 acres) decreased from 20,000 to 10,000 from 1867 to 1900.
- In 1905, 88 counts and 66 barons had a seat in boards of directors.
- Henrik Lévay, who established the first Hungarian insurance company, was ennobled in 1868 and received the title baron in 1897; Zsigmond Kornfeld, who was the "Hungarian financial and industrial giant of the age", was created baron.
- The Chorins, Weisses and Kornfelds.
- Counts Gyula Dessewffy, Mihály Károlyi and Géza Teleki.
- Baron Zsigmond Kemény was imprisoned for initiating the execution of 191 Jews in Romania, although he had actually brought food to them.
- The Batthyány, Batthyány–Strattman, Erdődy, Esterházy and Zichy families.
- Berend, Urbańczyk & Wiszewski 2013, pp. 71–73.
- Engel 2001, pp. 8, 17.
- Zimonyi 2016, pp. 160, 306–308, 359.
- Berend, Urbańczyk & Wiszewski 2013, pp. 76–77.
- Engel 2001, pp. 12–13.
- Berend, Urbańczyk & Wiszewski 2013, pp. 76–78.
- Lukačka 2011, pp. 31, 33–36.
- Georgescu 1991, p. 40.
- Pop 2013, p. 40.
- Wolf 2003, p. 329.
- Engel 2001, pp. 117–118.
- Engel 2001, pp. 8, 20.
- Berend, Urbańczyk & Wiszewski 2013, p. 105.
- Engel 2001, p. 20.
- Constantine Porphyrogenitus: De Administrando Imperio (ch. 39), p. 175.
- Bak 1993, p. 273.
- Wolf 2003, pp. 326–327.
- Wolf 2003, p. 327.
- Berend, Urbańczyk & Wiszewski 2013, p. 107.
- Engel 2001, p. 16.
- Engel 2001, p. 17.
- Révész 2003, p. 341.
- Rady 2000, p. 12.
- Rady 2000, pp. 12–13.
- Rady 2000, pp. 12–13, 185 (notes 7–8).
- Engel 2001, p. 85.
- Cartledge 2011, p. 11.
- Berend, Urbańczyk & Wiszewski 2013, pp. 148–150.
- Wolf 2003, p. 330.
- Berend, Urbańczyk & Wiszewski 2013, pp. 149, 207–208.
- Engel 2001, p. 73.
- Rady 2000, pp. 18–19.
- Berend, Urbańczyk & Wiszewski 2013, pp. 149, 210.
- Berend, Urbańczyk & Wiszewski 2013, p. 193.
- Rady 2000, pp. 16–17.
- Engel 2001, p. 40.
- Rady 2000, p. 28.
- Engel 2001, pp. 85–86.
- Rady 2000, pp. 28–29.
- Rady 2000, p. 29.
- Fügedi & Bak 2012, p. 324.
- Engel 2001, p. 86.
- Fügedi & Bak 2012, p. 326.
- Curta 2006, p. 267.
- Engel 2001, p. 33.
- Magaš 2007, p. 48.
- Curta 2006, p. 266.
- Magaš 2007, p. 51.
- Engel 2001, pp. 76–77.
- Berend, Urbańczyk & Wiszewski 2013, p. 298.
- Rady 2000, pp. 25–26.
- Engel 2001, p. 87.
- Engel 2001, p. 80.
- Berend, Urbańczyk & Wiszewski 2013, p. 299.
- Berend, Urbańczyk & Wiszewski 2013, p. 297.
- Engel 2001, p. 81.
- Engel 2001, pp. 81, 87.
- Wolf 2003, p. 331.
- Berend, Urbańczyk & Wiszewski 2013, p. 201.
- Engel 2001, pp. 71–72.
- Curta 2006, p. 401.
- Engel 2001, pp. 73–74.
- Rady 2000, pp. 128–129.
- Fügedi & Bak 2012, p. 328.
- Rady 2000, p. 129.
- Rady 2000, p. 31.
- Berend, Urbańczyk & Wiszewski 2013, p. 286.
- Cartledge 2011, p. 20.
- Engel 2001, p. 93.
- Engel 2001, p. 92.
- Berend, Urbańczyk & Wiszewski 2013, pp. 426–427.
- Fügedi 1986a, p. 48.
- Rady 2000, p. 23.
- Engel 2001, pp. 86–87.
- Anonymus, Notary of King Béla: The Deeds of the Hungarians (ch. 6.), p. 19.
- Rady 2000, p. 35.
- Rady 2000, p. 36.
- Berend, Urbańczyk & Wiszewski 2013, p. 426.
- Fügedi 1998, p. 35.
- Engel 2001, p. 94.
- Cartledge 2011, p. 21.
- Engel 2001, p. 95.
- Rady 2000, pp. 40, 103.
- Engel 2001, p. 177.
- Berend, Urbańczyk & Wiszewski 2013, p. 429.
- Engel 2001, p. 96.
- Berend, Urbańczyk & Wiszewski 2013, p. 431.
- Rady 2000, p. 41.
- Kontler 1999, pp. 78–80.
- Engel 2001, pp. 103–105.
- Berend, Urbańczyk & Wiszewski 2013, p. 430.
- Fügedi 1986a, p. 51.
- Fügedi 1986a, pp. 52, 56.
- Fügedi 1986a, p. 56.
- Fügedi 1986a, p. 60.
- Fügedi 1986a, pp. 65, 73–74.
- Fügedi 1986a, p. 74.
- Engel 2001, p. 120.
- Rady 2000, p. 86.
- Engel 2001, p. 84.
- Rady 2000, p. 91.
- Engel 2001, pp. 104–105.
- Rady 2000, p. 83.
- Rady 2000, p. 81.
- Makkai 1994, pp. 208–209.
- Rady 2000, p. 46.
- Fügedi 1998, p. 28.
- Rady 2000, p. 48.
- Fügedi 1998, pp. 41–42.
- Fügedi 1986a, pp. 72–73.
- Fügedi 1986a, pp. 54, 82.
- Fügedi 1986a, p. 87.
- Rady 2000, pp. 112–113, 200.
- Fügedi 1986a, pp. 77–78.
- Fügedi 1986a, p. 78.
- Rady 2000, p. 110.
- Rady 2000, p. 112.
- Kontler 1999, p. 76.
- Berend, Urbańczyk & Wiszewski 2013, p. 432.
- Berend, Urbańczyk & Wiszewski 2013, pp. 431–432.
- Rady 2000, p. 42.
- Berend, Urbańczyk & Wiszewski 2013, p. 273.
- Engel 2001, p. 108.
- Engel 2001, p. 122.
- Engel 2001, p. 124.
- Engel 2001, p. 125.
- Engel 2001, pp. 126–127.
- Cartledge 2011, p. 34.
- Kontler 1999, p. 89.
- Engel 2001, pp. 141–142.
- Fügedi 1998, p. 52.
- Rady 2000, p. 108.
- Engel 2001, pp. 178–179.
- Engel 2001, p. 146.
- Engel 2001, p. 147.
- Engel 2001, p. 151.
- Rady 2000, p. 137.
- Engel 2001, pp. 151–153, 342.
- Rady 2000, pp. 146–147.
- Fügedi 1998, p. 34.
- Kontler 1999, p. 97.
- Cartledge 2011, p. 40.
- Engel 2001, p. 178.
- Engel 2001, p. 175.
- Pop 2013, pp. 198–212.
- Rady 2000, p. 89.
- Lukačka 2011, p. 37.
- Rady 2000, pp. 84, 89, 93.
- Rady 2000, pp. 89, 93.
- Pop 2013, pp. 470–471, 475.
- Pop 2013, pp. 256–257.
- Engel 2001, p. 165.
- Makkai 1994, pp. 191–192, 230.
- Rady 2000, pp. 59–60.
- Fügedi 1998, p. 45.
- Fügedi 1998, p. 47.
- Engel 2001, pp. 174–175.
- Rady 2000, p. 57.
- Engel 2001, p. 180.
- Engel 2001, pp. 179–180.
- Cartledge 2011, p. 42.
- Engel 2001, p. 199.
- Kontler 1999, pp. 102, 104–105.
- Engel 2001, pp. 204–205, 211–213.
- Engel 2001, pp. 343–344.
- Fügedi 1986a, p. 143.
- Engel 2001, p. 342.
- Fügedi 1986a, p. 123.
- Cartledge 2011, p. 44.
- Kontler 1999, p. 103.
- Engel 2001, p. 205.
- Kontler 1999, p. 104.
- Rady 2000, p. 150.
- Engel 2001, pp. 232–233, 337.
- Engel 2001, pp. 337–338.
- Engel 2001, p. 279.
- Kontler 1999, p. 112.
- Kontler 1999, p. 113.
- Engel 2001, p. 281.
- Cartledge 2011, p. 57.
- Kontler 1999, p. 116.
- Kontler 1999, p. 117.
- Engel 2001, pp. 288, 293.
- Engel 2001, p. 311.
- Fügedi 1986b, p. IV.14.
- Pálffy 2009, pp. 109–110.
- Engel 2001, p. 338.
- Engel 2001, pp. 338, 340–341.
- Engel 2001, p. 341.
- Engel 2001, p. 339.
- Kontler 1999, p. 134.
- Engel 2001, pp. 349–350.
- Engel 2001, p. 350.
- Kontler 1999, p. 135.
- Engel 2001, p. 351.
- Cartledge 2011, p. 70.
- The Customary Law of the Renowned Kingdom of Hungary in Three Parts (1517) (1.4.), p. 53.
- Fügedi 1998, pp. 32, 34.
- Fügedi 1998, p. 20.
- Fügedi 1998, pp. 21–22.
- The Customary Law of the Renowned Kingdom of Hungary in Three Parts (1517) (1.39.), p. 105.
- Fügedi 1998, p. 26.
- Fügedi 1998, p. 24.
- Fügedi 1998, p. 25.
- Cartledge 2011, p. 71.
- Kontler 1999, p. 129.
- Engel 2001, p. 357.
- Engel 2001, p. 362.
- Engel 2001, p. 363.
- Engel 2001, p. 364.
- Cartledge 2011, p. 72.
- Engel 2001, p. 370.
- Kontler 1999, p. 139.
- Szakály 1994, p. 85.
- Cartledge 2011, p. 83.
- Cartledge 2011, pp. 83, 94.
- Szakály 1994, p. 88.
- Szakály 1994, pp. 88–89.
- Szakály 1994, p. 92.
- Schimert 1995, p. 161.
- Pálffy 2009, p. 231.
- Schimert 1995, p. 162.
- Cartledge 2011, p. 91.
- Kontler 1999, p. 167.
- Szakály 1994, p. 89.
- Pálffy 2009, pp. 72, 86–88.
- Pálffy 2009, pp. 86, 366.
- Kontler 1999, p. 151.
- Murdock 2000, p. 12.
- Kontler 1999, p. 152.
- Murdock 2000, p. 20.
- Murdock 2000, p. 34.
- Kontler 1999, p. 156.
- Schimert 1995, p. 166.
- Schimert 1995, p. 158.
- Rady 2000, p. 155.
- Schimert 1995, p. 167.
- Pálffy 2009, p. 178.
- Cartledge 2011, p. 95.
- Cartledge 2011, p. 113.
- Kontler 1999, p. 183.
- Kontler 1999, p. 184.
- Cartledge 2011, p. 114.
- Kontler 1999, pp. 183–184.
- Á. Varga 1989, p. 188.
- Kontler 1999, p. 185.
- Cartledge 2011, p. 115.
- Schimert 1995, p. 170.
- Schimert 1995, pp. 170–171.
- Cartledge 2011, p. 116.
- Cartledge 2011, p. 123.
- Cartledge 2011, p. 127.
- Magaš 2007, pp. 187–188.
- Vermes 2014, p. 135.
- Schimert 1995, pp. 127, 152–154.
- Kontler 1999, pp. 196–197.
- Nakazawa 2007, p. 2007.
- Kováč 2011, p. 121.
- Kováč 2011, pp. 121–122.
- Kováč 2011, p. 122.
- Georgescu 1991, p. 89.
- Kontler 1999, p. 197.
- Cartledge 2011, p. 130.
- Vermes 2014, p. 33.
- Vermes 2014, pp. 33, 61.
- Kontler 1999, pp. 217–218.
- Schimert 1995, p. 176.
- Cartledge 2011, p. 151.
- Schimert 1995, p. 174.
- Vermes 2014, pp. 94, 136.
- Kontler 1999, p. 206.
- Kontler 1999, p. 218.
- Schimert 1995, pp. 175–176.
- Kontler 1999, p. 210.
- Cartledge 2011, p. 139.
- Cartledge 2011, p. 140.
- Kontler 1999, p. 217.
- Schimert 1995, p. 148.
- Schimert 1995, p. 149.
- Vermes 2014, p. 31.
- Vermes 2014, p. 32.
- Kontler 1999, p. 220.
- Cartledge 2011, p. 143.
- Cartledge 2011, p. 144–145.
- Kontler 1999, p. 221.
- Kontler 1999, pp. 221–222.
- Kontler 1999, p. 223.
- Cartledge 2011, p. 159.
- Cartledge 2011, pp. 159–160.
- Kontler 1999, p. 226.
- Kontler 1999, p. 228.
- Patai 2015, p. 373.
- Cartledge 2011, p. 162.
- Cartledge 2011, p. 164.
- Cartledge 2011, pp. 166–167.
- Kontler 1999, p. 235.
- Cartledge 2011, p. 168.
- Cartledge 2011, p. 179.
- Kontler 1999, p. 242.
- Kontler 1999, p. 179.
- Magaš 2007, p. 202.
- Nakazawa 2007, p. 160.
- Kontler 1999, p. 247.
- Cartledge 2011, p. 191.
- Cartledge 2011, p. 194.
- Cartledge 2011, p. 196.
- Á. Varga 1989, p. 189.
- Kontler 1999, p. 248.
- Kontler 1999, p. 251.
- Nakazawa 2007, p. 163.
- Kováč 2011, p. 126.
- Georgescu 1991, p. 155.
- Kontler 1999, p. 250.
- Magaš 2007, p. 230.
- Kontler 1999, p. 253.
- Kontler 1999, p. 257.
- Cartledge 2011, p. 217.
- Cartledge 2011, p. 219.
- Cartledge 2011, p. 221.
- Cartledge 2011, pp. 220–221.
- Kontler 1999, p. 266.
- Cartledge 2011, p. 222.
- Kontler 1999, p. 270.
- Kontler 1999, p. 268.
- Kontler 1999, pp. 270–271.
- Cartledge 2011, p. 231.
- Georgescu 1991, p. 158.
- Cartledge 2011, p. 232.
- Magaš 2007, pp. 297–298.
- Kontler 1999, p. 281.
- Kontler 1999, p. 305.
- Kontler 1999, p. 285.
- Taylor 1976, p. 185.
- Cartledge 2011, p. 257.
- Taylor 1976, p. 186.
- Cartledge 2011, p. 255.
- Cartledge 2011, p. 256.
- Cartledge 2011, p. 258.
- Cartledge 2011, p. 259.
- Patai 2015, pp. 290–292, 369–370.
- Taylor 1976, pp. 244–251.
- Kontler 1999, pp. 328–329.
- Cartledge 2011, pp. 303–304.
- Cartledge 2011, p. 304.
- Cartledge 2011, p. 305.
- Kontler 1999, pp. 333–334.
- Cartledge 2011, p. 307.
- Cartledge 2011, p. 308.
- Cartledge 2011, p. 309.
- Kontler 1999, p. 338.
- Kontler 1999, p. 339.
- Kontler 1999, pp. 339, 345.
- Cartledge 2011, p. 334.
- Cartledge 2011, p. 352.
- Kontler 1999, p. 345.
- Kontler 1999, pp. 345–346.
- Cartledge 2011, p. 351.
- Kontler 1999, p. 347.
- Kontler 1999, p. 353.
- Cartledge 2011, p. 340.
- Cartledge 2011, p. 353.
- Kontler 1999, p. 348.
- Cartledge 2011, p. 354.
- Kontler 1999, pp. 347–348, 365.
- Kontler 1999, pp. 377–378.
- Cartledge 2011, pp. 395–396.
- Cartledge 2011, p. 398.
- Kontler 1999, p. 386.
- Cartledge 2011, p. 409.
- Kontler 1999, p. 391.
- Gudenus & Szentirmay 1989, p. 43.
- Cartledge 2011, p. 411.
- Cartledge 2011, p. 412.
- Cartledge 2011, p. 414.
- Kontler 1999, p. 394.
- Gudenus & Szentirmay 1989, p. 75.
- Gudenus & Szentirmay 1989, p. 73.
- Cartledge 2011, pp. 417–418.
- Cartledge 2011, p. 421.
- Gudenus & Szentirmay 1989, p. 28.
- Gudenus & Szentirmay 1989, pp. 27–28.
- Gudenus & Szentirmay 1989, p. 27.
- Anonymus, Notary of King Béla: The Deeds of the Hungarians (Edited, Translated and Annotated by Martyn Rady and László Veszprémy) (2010). In: Rady, Martyn; Veszprémy, László; Bak, János M. (2010); Anonymus and Master Roger; CEU Press; ISBN 978-963-9776-95-1.
- Constantine Porphyrogenitus: De Administrando Imperio (Greek text edited by Gyula Moravcsik, English translation by Romillyi J. H. Jenkins) (1967). Dumbarton Oaks Center for Byzantine Studies. ISBN 0-88402-021-5.
- Simon of Kéza: The Deeds of the Hungarians (Edited and translated by László Veszprémy and Frank Schaer with a study by Jenő Szűcs) (1999). CEU Press. ISBN 963-9116-31-9.
- The Customary Law of the Renowned Kingdom of Hungary in Three Parts (1517) (Edited and translated by János M. Bak, Péter Banyó and Martyn Rady, with an introductory study by László Péter) (2005). Charles Schlacks, Jr.; Department of Medieval Studies, Central European University. ISBN 1-884445-40-3.
- The Laws of the Medieval Kingdom of Hungary, 1000–1301 (Translated and edited by János M. Bak, György Bónis, James Ross Sweeney with an essay on previous editions by Andor Czizmadia, Second revised edition, In collaboration with Leslie S. Domonkos) (1999). Charles Schlacks, Jr. Publishers.
- Á. Varga, László (1989). "hitbizomány [fee tail]". In Bán, Péter (ed.). Magyar történelmi fogalomtár, A–L [Thesaurus of Hungarian History]. Gondolat. pp. 188–189. ISBN 963-282-203-X.
- Bak, János (1993). ""Linguistic pluralism" in Medieval Hungary". In Meyer, Marc A. (ed.). The Culture of Christendom: Essays in Medieval History in Memory of Denis L. T. Bethel. The Hambledon Press. pp. 269–280. ISBN 1-85285-064-7.
- Berend, Nora; Urbańczyk, Przemysław; Wiszewski, Przemysław (2013). Central Europe in the High Middle Ages: Bohemia, Hungary and Poland, c. 900-c. 1300. Cambridge University Press. ISBN 978-0-521-78156-5.
- Cartledge, Bryan (2011). The Will to Survive: A History of Hungary. C. Hurst & Co. ISBN 978-1-84904-112-6.
- Curta, Florin (2006). Southeastern Europe in the Middle Ages, 500–1250. Cambridge University Press. ISBN 978-0-521-89452-4.
- Engel, Pál (2001). The Realm of St Stephen: A History of Medieval Hungary, 895–1526. I.B. Tauris Publishers. ISBN 1-86064-061-3.
- Fügedi, Erik (1986a). Castle and Society in Medieval Hungary (1000-1437). Akadémiai Kiadó. ISBN 963-05-3802-4.
- Fügedi, Erik (1986b). "The aristocracy in medieval Hungary (theses)". In Bak, J. M. (ed.). Kings, Bishops, Nobles and Burghers in Medieval Hungary. Variorum Reprints. pp. IV.1–IV.14. ISBN 0-86078-177-1.
- Fügedi, Erik (1998). The Elefánthy: The Hungarian Nobleman and His Kindred (Edited by Damir Karbić, with a foreword by János M. Bak). Central European University Press. ISBN 963-9116-20-3.
- Fügedi, Erik; Bak, János M. (2012). "Foreign knights and clerks in Early Medieval Hungary". In Berend, Nora (ed.). The Expansion of Central Europe in the Middle Ages. Ashgate Publishing. pp. 319–331. ISBN 978-1-4094-2245-7.
- Georgescu, Vlad (1991). The Romanians: A History. Ohio State University Press. ISBN 0-8142-0511-9.
- Gudenus, János; Szentirmay, László (1989). Összetört címerek: a magyar arisztokrácia sorsa és az 1945 utáni megpróbáltatások [Broken Coats-of-Arms: The Hungarian Aristocrats' Fate and the Scourge after 1945]. Mozaik. ISBN 963-02-6114-6.
- Kontler, László (1999). Millennium in Central Europe: A History of Hungary. Atlantisz Publishing House. ISBN 963-9165-37-9.
- Kováč, Dušan (2011). "The Slovak political programme: from Hungarian patriotism to the Czecho–Slovak State". In Teich, Mikuláš; Kováč, Dušan; Brown, Martin D. (eds.). Slovakia in History. Cambridge University Press. pp. 120–136. ISBN 978-0-521-80253-6.
- Lukačka, Ján (2011). "The beginnings of the nobility in Slovakia". In Teich, Mikuláš; Kováč, Dušan; Brown, Martin D. (eds.). Slovakia in History. Cambridge University Press. pp. 30–37. ISBN 978-0-521-80253-6.
- Magaš, Branka (2007). Croatia Through History. SAQI. ISBN 978-0-86356-775-9.
- Makkai, László (1994). "The Emergence of the Estates (1172–1526)". In Köpeczi, Béla; Barta, Gábor; Bóna, István; Makkai, László; Szász, Zoltán; Borus, Judit (eds.). History of Transylvania. Akadémiai Kiadó. pp. 178–243. ISBN 963-05-6703-2.
- Murdock, Graeme (2000). Calvinism on the Frontier, 1600-1660: International Calvinism and the Reformed Church in Hungary and Transylvania. Clarendon Press. ISBN 0-19-820859-6.
- Nakazawa, Tatsuya (2007). "Slovak Nation as a Corporate Body: The Process of the Conceptual Transformation of a Nation without History into a Constitutional Subject during the Revolutions of 1848/49". In Hayashi, Tadayuki; Fukuda, Hiroshi (eds.). Regions in Central and Eastern Europe: Past and Present. Slavic Research Center, Hokkaido University. pp. 155–181. ISBN 978-4-938637-43-9.
- Neumann, Tibor (2016). "Hercegek a középkorvégi Magyarországon [Dukes in Hungary in the Late Middle Ages]". In Zsoldos, Attila (ed.). Hercegek és hercegségek a középkori Magyarországon [Dukes and Duchies in Medieval Hungary] (in Hungarian). Városi Levéltár és Kutatóintézet. pp. 95–112. ISBN 978-963-8406-13-2.
- Pálffy, Géza (2009). The Kingdom of Hungary and the Habsburg Monarchy in the Sixteenth Century. Center for Hungarian Studies and Publications. ISBN 978-0-88033-633-8.
- Patai, Raphael (2015). The Jews of Hungary: History, Culture, Psychology. Wayne State University Press. ISBN 978-0-8143-2561-2.
- Pop, Ioan-Aurel (2013). "De manibus Valachorum scismaticorum...": Romanians and Power in the Mediaeval Kingdom of Hungary: The Thirteenth and Fourteenth Centuries. Peter Lang Edition. ISBN 978-3-631-64866-7.
- Rady, Martyn (2000). Nobility, Land and Service in Medieval Hungary. Palgrave. ISBN 0-333-80085-0.
- Révész, László (2003). "The cemeteries of the Conquest period". In Visy, Zsolt (ed.). Hungarian Archaeology at the Turn of the Millennium. Ministry of National Cultural Heritage, Teleki László Foundation. pp. 338–343. ISBN 963-86291-8-5.
- Schimert, Peter (1995). "The Hungarian Nobility in the Seventeenth and Eighteenth Centuries". In Scott, H. M. (ed.). The European Nobilites in the Seventeenth and Eighteenth Centuries, Volume Two: Northern, Central and Eastern Europe. Longman. pp. 144–182. ISBN 0-582-08071-1.
- Szakály, Ferenc (1994). "The Early Ottoman Period, Including Royal Hungary, 1526–1606". In Sugar, Peter F.; Hanák, Péter; Frank, Tibor (eds.). A History of Hungary. Indiana University Press. pp. 83–99. ISBN 963-7081-01-1.
- Taylor, A. J. P. (1976). The Habsburg Monarchy, 1809–1918: A History of the Austrian Empire and Austria–Hungary. The University of Chicago Press. ISBN 0-226-79145-9.
- Thompson, Wayne C. (2014). Nordic, Central, and Southeastern Europe 2014. Rowman & Littlefield. ISBN 9781475812244.
- Vermes, Gábor (2014). Hungarian Culture and Politics in the Habsburg Monarchy, 1711–1848. CEU Press. ISBN 978-963-386-019-9.
- Wolf, Mária (2003). "10th–11th century settlements; Earthen forts". In Visy, Zsolt (ed.). Hungarian Archaeology at the Turn of the Millennium. Ministry of National Cultural Heritage, Teleki László Foundation. pp. 326–331. ISBN 963-86291-8-5.
- Zimonyi, István (2016). Muslim Sources on the Magyars in the Second Half of the 9th Century: The Magyar Chapter of the Jayhānī Tradition. BRILL. ISBN 978-90-04-21437-8.
- Zsoldos, Attila (2020). The Árpáds and Their People. An Introduction to the History of Hungary from cca. 900 to 1301. Arpadiana IV., Research Centre for the Humanities. ISBN 978-963-416-226-1.
- Tötösy de Zepetnek, Steven (2010). Nobilitashungariae: List of Historical Surnames of the Hungarian Nobility / A magyar történelmi nemesség családneveinek listája. Purdue University Press. ISSN 1923-9580. | https://worddisk.com/wiki/Hungarian_nobility/ | 21 |
Now, according to research conducted by the World Health Organization, 1.1 billion teenagers and young adults are at risk of experiencing hearing loss. Specifically, half of individuals aged 12 to 35 expose their ears to “unsafe” sound levels when using audio devices, and about 40 percent expose their ears to “potentially damaging” sound levels at entertainment venues.
As Dr. Etienne Krug, WHO Director for the Department for Management of Noncommunicable Diseases, Disability, Violence and Injury Prevention told ABC, “As they go about their daily lives doing what they enjoy, more and more young people are placing themselves at risk of hearing loss.” Krug also warns that once an individual experiences hearing loss, “It won’t come back.”
The National Institute on Deafness and Other Communication Disorders carefully explains the science behind noise-induced hearing loss on its website. Essentially, hearing loss occurs when there is damage to the hair cells, sensory cells that rest on top of the basilar membrane and are sensitive to sound waves.
When sound enters the cochlea, hair cells move, causing the microscopic hair-like projections (stereocilia) sitting on top of the hair cells to bend. This bending motion then results in the opening of protein channels that allow chemicals to enter the hair cell, which generates an electrical signal that is sent to the brain and interpreted as sound. Thus, the death of these cells means that some (or all) sound waves cannot be transmitted to the brain via the movement of the hair cells and stereocilia (hence the loss of hearing).
In order to prevent hearing loss, WHO has suggested that people limit the use of headphones to one hour a day and also recommends that individuals avoid spending more than 8 hours in a workplace with 85 (or more) decibels of sound (bars and sporting events typically have around 100 decibels of noise).
Though some hearing loss can be temporary, it is important to note that hearing loss can be experienced due to a sudden, loud noise in addition to repeated, prolonged exposure to certain noises. If a person is experiencing hearing loss, it’s best for him or her to contact a physician to determine the extent of damage, if any.
22 | PROPOSED LESSON PLAN FOR WEEK XX ENDING XXth MONTH, 2021.
SCHOOL: (Put name of your school and the address here)
TERM: Third Term, 2020/2021 Academic Session
TOPIC: DEMAND AND SUPPLY
NUMBER IN CLASS:
TIME TABLE FIT:
Date | Day | Period | Time | Duration | Class
xx-xx-2021 | Monday | 5th | 10:50–11:30 am | 40 minutes | SS1
xx-xx-2021 | Tuesday | 4th | 10:10–10:50 am | 40 minutes | SS1
PREVIOUS KNOWLEDGE: The student has learnt about meaning of demand and supply.
MAIN AIM: to help the students understand the concept of demand, supply and market equilibrium.
SUBSIDIARY AIMS: By the end of the lesson, the Students should be able to:
- Explain the market equilibrium,
- Explain what happens to demand and supply when the market price is lower or higher than the market equilibrium price,
- Differentiate between change in demand and change in quantity demanded, and
- List types of demand and supply.
PERSONAL AIM: to assist the students in understanding the concept of market equilibrium.
ASSUMPTION: the students are familiar with meaning of demand and supply.
ANTICIPATED PROBLEMS: the student may not be conversant with market equilibrium.
POSSIBLE SOLUTION: Teacher explains the meaning of market equilibrium.
TEACHING AIDS: comprehensive economics for senior secondary schools and economics dictionary by John Black.
INTERACTION PATTERN: interactive method.
STEP 1: MEANING OF MARKET EQUILIBRIUM: Market equilibrium, also known as the market clearing price, refers to a perfect balance in the market between supply and demand, i.e. when supply is equal to demand.
When the market is at equilibrium, the price of a product or service will remain the same, unless some external factor changes the level of supply or demand.
According to economic theory, in a market economy there is a single price which brings demand and supply into balance – the equilibrium price.
[Diagram: market demand and supply curves, showing the equilibrium point and the excess supply and excess demand that arise at prices above and below equilibrium]
STEP 2: Illustration of what happens to demand and supply when the market price is higher or lower than the market equilibrium
From the above diagram it is obvious that an increase in price from P0 to P2 leads to a decrease in quantity demanded from 20 to 10, while quantity supplied increases from 20 to 30 at the same price, thereby creating an excess supply of 20 (30 – 10 = 20). When the price decreases from P0 to P1, quantity demanded increases from 20 to 30 while quantity supplied falls to 10, thereby creating an excess demand of 20 as well (30 – 10 = 20). Furthermore, the point where the two curves intersect is the equilibrium point; P0 is the equilibrium price, while Q0 stands for the equilibrium quantity.
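For teachers who want a quick numerical check of the diagram, the short sketch below (in Python) uses hypothetical linear demand and supply schedules chosen only so that both equal 20 units at the equilibrium price; the function names and the specific schedules are illustrative assumptions, not part of the scheme of work.

```python
def quantity_demanded(price):
    # Hypothetical linear demand schedule: falls as price rises.
    return 40 - 2 * price

def quantity_supplied(price):
    # Hypothetical linear supply schedule: rises as price rises.
    return 2 * price

for price in (5, 10, 15):  # below, at and above the equilibrium price
    qd, qs = quantity_demanded(price), quantity_supplied(price)
    if qd > qs:
        state = f"excess demand of {qd - qs}"
    elif qs > qd:
        state = f"excess supply of {qs - qd}"
    else:
        state = "equilibrium"
    print(f"price {price}: demanded {qd}, supplied {qs} -> {state}")
```

Running it reproduces the figures used above: 20 units demanded and supplied at the equilibrium price, an excess demand of 20 below it, and an excess supply of 20 above it.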
STEP3: Change in Quantity Demanded
Definition of a Change in Quantity Demanded:
A Change in the Quantity Demanded is the change in the number of units a person or consumers are willing to purchase that results from a change in the price of that good or service.
The law of demand tells us that a change in the price will result in a change in the quantity demanded of a good or service. When sellers increase their price, consumers normally reduce the quantity they purchase. Conversely, when sellers have a sale it is to attract buyers and sell more. A change in the quantity demanded is illustrated by movement along the demand curve.
It is important to distinguish between a change in the quantity demanded and a change in demand. Many variables can change the demand for a product. These include: a change in income, a change in the price of related products, the number of buyers, future expectations, or a change in tastes. Let’s use an example of babysitting and theater tickets to illustrate the relationship and differences between a change in demand and a change in the quantity demanded by using the diagram below.
The Smith’s love the theater and are thrilled to learn the local theater recently dropped its price from $40 to $25 per ticket. The Smith’s demand curve shows us that they would increase the quantity of tickets they purchase from four to ten per season. This increase in the quantity demanded is illustrated by a movement along the demand curve from point A to point B. The Smiths have a three-year-old daughter and use Jane to babysit when they attend plays. Their demand for Jane’s services just increased as a result of the drop-in ticket prices. Theater tickets and babysitting are complements. Note that the demand for Jane’s babysitting has increased even though she has not changed her price. In fact, the demand for her services increased at all prices. This increase in demand is illustrated by a rightward shift in the demand curve for babysitting from Demand curve A to Demand Curve B.
The impact a price change will have on total revenues depends on the item’s price elasticity of demand. How severely is the change in the quantity demanded impacted by a change in the price? Revenues will decrease following an increase in the price if the product has an elastic demand. The added revenues generated per unit sold are less than the revenues lost from the drop in the quantity demanded. Conversely, revenues will increase following an increase in the price if the product has an inelastic demand, because the added revenues generated from each sale will be greater than the revenues lost from diminished sales.
[Diagram: demand curve showing a movement from point A to point B (change in quantity demanded) and a rightward shift from Demand Curve A to Demand Curve B (change in demand)]
STEP 4: TYPES OF DEMAND AND SUPPLY
TYPES OF DEMAND
3. Complementary or joint demand
TYPES OF SUPPLY
1. Complementary or joint supply
3. Long-run and short-run supply
(i) explain market equilibrium,
(ii) illustrate what happens to demand and supply when the market price is higher or lower than the market equilibrium, and
(iii) with a well-labeled diagram, differentiate between change in demand and change in quantity demanded.
SUMMARY: any price below the equilibrium price will lead to excess demand, while any price fixed above the equilibrium price will lead to excess supply. At the point of intersection of the market demand curve and the market supply curve, quantity demanded and quantity supplied are equal.
(1) With the help of a well-labeled diagram, differentiate between change in demand and change in quantity demanded.
(2) List the factor(s) responsible for each of them.
Teacher Evaluation: ________________________ | https://edupodia.com/lesson-plan-on-concept-of-demand-and-supply-market-equilibrium-third-term-ss1-economics/ | 21 |
An illuminated manuscript is a manuscript in which the text is supplemented with such decoration as initials, borders (marginalia), and miniature illustrations. In the strictest definition, the term refers only to manuscripts decorated with either gold or silver; but in both common usage and modern scholarship, the term refers to any decorated or illustrated manuscript from Western traditions. Comparable Far Eastern and Mesoamerican works are described as painted. Islamic manuscripts may be referred to as illuminated, illustrated, or painted, though using essentially the same techniques as Western works.
The earliest extant substantive illuminated manuscripts are from the period 400 to 600, produced in the Kingdom of the Ostrogoths and the Eastern Roman Empire. Their significance lies not only in their inherent artistic and historical value, but also in the maintenance of a link of literacy offered by non-illuminated texts. Had it not been for the monastic scribes of Late Antiquity, most literature of Greece and Rome would have perished. As it was, the patterns of textual survivals were shaped by their usefulness to the severely constricted literate group of Christians. Illumination of manuscripts, as a way of aggrandizing ancient documents, aided their preservation and informative value in an era when new ruling classes were no longer literate, at least in the language used in the manuscripts.
The majority of extant manuscripts are from the Middle Ages, although many survive from the Renaissance, along with a very limited number from Late Antiquity. The majority are of a religious nature. Especially from the 13th century onward, an increasing number of secular texts were illuminated. Most illuminated manuscripts were created as codices, which had superseded scrolls. A very few illuminated fragments survive on papyrus, which does not last nearly as long as parchment. Most medieval manuscripts, illuminated or not, were written on parchment (most commonly of calf, sheep, or goat skin), but most manuscripts important enough to illuminate were written on the best quality of parchment, called vellum.
Beginning in the Late Middle Ages, manuscripts began to be produced on paper. Very early printed books were sometimes produced with spaces left for rubrics and miniatures, or were given illuminated initials, or decorations in the margin, but the introduction of printing rapidly led to the decline of illumination. Illuminated manuscripts continued to be produced in the early 16th century but in much smaller numbers, mostly for the very wealthy. They are among the most common items to survive from the Middle Ages; many thousands survive. They are also the best surviving specimens of medieval painting, and the best preserved. Indeed, for many areas and time periods, they are the only surviving examples of painting.
Art historians classify illuminated manuscripts into their historic periods and types, including (but not limited to) Late Antique, Insular, Carolingian manuscripts, Ottonian manuscripts, Romanesque manuscripts, Gothic manuscripts, and Renaissance manuscripts. There are a few examples from later periods. The type of book most often heavily and richly illuminated, sometimes known as a "display book", varied between periods. In the first millennium, these were most likely to be Gospel Books, such as the Lindisfarne Gospels and the Book of Kells. The Romanesque period saw the creation of many large illuminated complete Bibles – one in Sweden requires three librarians to lift it. Many Psalters were also heavily illuminated in both this and the Gothic period. Single cards or posters of vellum, leather or paper were in wider circulation with short stories or legends on them about the lives of saints, chivalry knights or other mythological figures, even criminal, social or miraculous occurrences; popular events much freely used by story tellers and itinerant actors to support their plays. Finally, the Book of Hours, very commonly the personal devotional book of a wealthy layperson, was often richly illuminated in the Gothic period. Many were illuminated with miniatures, decorated initials and floral borders. Paper was rare and most Books of Hours were composed of sheets of parchment made from skins of animals, usually sheep or goats. Other books, both liturgical and not, continued to be illuminated at all periods.
The Byzantine world produced manuscripts in its own style, versions of which spread to other Orthodox and Eastern Christian areas. The Muslim World and in particular the Iberian Peninsula, with their traditions of literacy uninterrupted by the Middle Ages, were instrumental in delivering ancient classic works to the growing intellectual circles and universities of Western Europe all through the 12th century, as books were produced there in large numbers and on paper for the first time in Europe, and with them full treatises on the sciences, especially astrology and medicine where illumination was required to have profuse and accurate representations with the text.
The Gothic period, which generally saw an increase in the production of these artifacts, also saw more secular works such as chronicles and works of literature illuminated. Wealthy people began to build up personal libraries; Philip the Bold, who probably had the largest personal library of his time in the mid-15th century, is estimated to have had about 600 illuminated manuscripts, whilst a number of his friends and relations had several dozen.
Up to the 12th century, most manuscripts were produced in monasteries in order to add to the library or after receiving a commission from a wealthy patron. Larger monasteries often contained separate areas for the monks who specialized in the production of manuscripts called a scriptorium. Within the walls of a scriptorium were individualized areas where a monk could sit and work on a manuscript without being disturbed by his fellow brethren. If no scriptorium was available, then "separate little rooms were assigned to book copying; they were situated in such a way that each scribe had to himself a window open to the cloister walk".
By the 14th century, the cloisters of monks writing in the scriptorium had almost fully given way to commercial urban scriptoria, especially in Paris, Rome and the Netherlands. While the process of creating an illuminated manuscript did not change, the move from monasteries to commercial settings was a radical step. Demand for manuscripts grew to such an extent that monastic libraries began to employ secular scribes and illuminators. These individuals often lived close to the monastery and, in some instances, dressed as monks whenever they entered the monastery, but were allowed to leave at the end of the day. In reality, illuminators were often well known and acclaimed, and many of their identities have survived.
First, the manuscript was "sent to the rubricator, who added (in red or other colors) the titles, headlines, the initials of chapters and sections, the notes and so on; and then – if the book was to be illustrated – it was sent to the illuminator". In the case of manuscripts that were sold commercially, the writing would "undoubtedly have been discussed initially between the patron and the scribe (or the scribe’s agent,) but by the time that the written gathering were sent off to the illuminator there was no longer any scope for innovation".
Illumination was a complex and frequently costly process. It was usually reserved for special books: an altar Bible, for example. Wealthy people often had richly illuminated "books of hours" made, which set down prayers appropriate for various times in the liturgical day.
In the early Middle Ages, most books were produced in monasteries, whether for their own use, for presentation, or for a commission. However, commercial scriptoria grew up in large cities, especially Paris, and in Italy and the Netherlands, and by the late 14th century there was a significant industry producing manuscripts, including agents who would take long-distance commissions, with details of the heraldry of the buyer and the saints of personal interest to him (for the calendar of a Book of hours). By the end of the period, many of the painters were women, perhaps especially in Paris.
The text was usually written before the manuscripts were illuminated. Sheets of parchment or vellum were cut down to the appropriate size. These sizes ranged from 'Atlantic' Bibles, large stationary works, to small hand-held works. After the general layout of the page was planned (including the initial capitals and borders), the page was lightly ruled with a pointed stick, and the scribe went to work with ink-pot and either sharpened quill feather or reed pen. The script depended on local customs and tastes. The sturdy Roman letters of the early Middle Ages gradually gave way to scripts such as Uncial and half-Uncial, especially in the British Isles, where distinctive scripts such as insular majuscule and insular minuscule developed. Stocky, richly textured blackletter was first seen around the 13th century and was particularly popular in the later Middle Ages.
Prior to the days of such careful planning, "A typical black-letter page of these Gothic years would show a page in which the lettering was cramped and crowded into a format dominated by huge ornamented capitals that descended from uncial forms or by illustrations". To prevent such poorly made manuscripts and illuminations from occurring a script was typically supplied first, "and blank spaces were left for the decoration. This pre-supposes very careful planning by the scribe even before he put pen to parchment". If the scribe and the illuminator were separate labors the planning period allowed for adequate space to be given to each individual.
The process of illumination
The following steps outline the detailed labor involved to create the illuminations of one page of a manuscript:
- Silverpoint drawing of the design was executed
- Burnished gold dots applied
- The application of modulating colors
- Continuation of the previous three steps in addition to the outlining of marginal figures
- The penning of a rinceau appearing in the border of a page
- The final step, the marginal figures are painted
The illumination and decoration was normally planned at the inception of the work, and space reserved for it. However, the text was usually written before illumination began. In the Early Medieval period the text and illumination were often done by the same people, normally monks, but by the High Middle Ages the roles were typically separated, except for routine initials and flourishes, and by at least the 14th century there were secular workshops producing manuscripts, and by the beginning of the 15th century these were producing most of the best work, and were commissioned even by monasteries. When the text was complete, the illustrator set to work. Complex designs were planned out beforehand, probably on wax tablets, the sketch pad of the era. The design was then traced or drawn onto the vellum (possibly with the aid of pinpricks or other markings, as in the case of the Lindisfarne Gospels). Many incomplete manuscripts survive from most periods, giving us a good idea of working methods.
At all times, most manuscripts did not have images in them. In the early Middle Ages, manuscripts tend to either be display books with very full illumination, or manuscripts for study with at most a few decorated initials and flourishes. By the Romanesque period many more manuscripts had decorated or historiated initials, and manuscripts essentially for study often contained some images, often not in color. This trend intensified in the Gothic period, when most manuscripts had at least decorative flourishes in places, and a much larger proportion had images of some sort. Display books of the Gothic period in particular had very elaborate decorated borders of foliate patterns, often with small drolleries. A Gothic page might contain several areas and types of decoration: a miniature in a frame, a historiated initial beginning a passage of text, and a border with drolleries. Often different artists worked on the different parts of the decoration.
While the use of gold is by far one of the most captivating features of illuminated manuscripts, the bold use of varying colors provided multiple layers of dimension to the illumination. From a religious perspective, "the diverse colors wherewith the book is illustrated, not unworthily represent the multiple grace of heavenly wisdom."
The medieval artist's palette was broad; a partial list of pigments is given below. In addition, unlikely-sounding substances such as urine and earwax were used to prepare pigments.
- Red: insect-based colors, as well as chemical- and mineral-based colors
- Yellow: plant-based colors, as well as mineral-based colors
- Blue: plant-based substances, as well as chemical- and mineral-based colors
On the strictest definition, a manuscript is not considered "illuminated" unless one or many illuminations contained metal, normally gold leaf or shell gold paint, or at least was brushed with gold specks. Gold leaf was from the 12th century usually polished, a process known as burnishing. The inclusion of gold alludes to many different possibilities for the text. If the text is of a religious nature, lettering in gold is a sign of exalting the text. In the early centuries of Christianity, “Gospel manuscripts were sometimes written entirely in gold". The gold ground style, with all or most of the background in gold, was taken from Byzantine mosaics and icons. Aside from adding rich decoration to the text, scribes during the time considered themselves to be praising God with their use of gold. Furthermore, gold was used if a patron who had commissioned a book to be written wished to display the vastness of his riches. Eventually, the addition of gold to manuscripts became so frequent, "that its value as a barometer of status with the manuscript was degraded". During this time period the price of gold had become so cheap that its inclusion in an illuminated manuscript accounted for only a tenth of the cost of production. By adding richness and depth to the manuscript, the use of gold in illuminations created pieces of art that are still valued today.
The application of gold leaf or dust to an illumination is a very detailed process that only the most skilled illuminators can undertake and successfully achieve. The first detail an illuminator considered when dealing with gold was whether to use gold leaf or specks of gold that could be applied with a brush. When working with gold leaf, the pieces would be hammered and thinned until they were "thinner than the thinnest paper". The use of this type of leaf allowed for numerous areas of the text to be outlined in gold. There were several ways of applying gold to an illumination; one of the most popular involved mixing the gold with stag's glue and then "pour it into water and dissolve it with your finger". Once the gold was soft and malleable in the water it was ready to be applied to the page. Illuminators had to be very careful when applying gold leaf to the manuscript. Gold leaf is able to "adhere to any pigment which had already been laid, ruining the design, and secondly the action of burnishing it is vigorous and runs the risk of smudging any painting already around it."
Patrons of illumination
Monasteries produced manuscripts for their own use; heavily illuminated ones tended to be reserved for liturgical use in the early period, while the monastery library held plainer texts. In the early period manuscripts were often commissioned by rulers for their own personal use or as diplomatic gifts, and many old manuscripts continued to be given in this way, even into the Early Modern period. Especially after the book of hours became popular, wealthy individuals commissioned works as a sign of status within the community, sometimes including donor portraits or heraldry: "In a scene from the New Testament, Christ would be shown larger than an apostle, who would be bigger than a mere bystander in the picture, while the humble donor of the painting or the artist himself might appear as a tiny figure in the corner." The calendar was also personalized, recording the feast days of local or family saints. By the end of the Middle Ages many manuscripts were produced for distribution through a network of agents, and blank spaces might be reserved for the appropriate heraldry to be added locally by the buyer.
Displaying the amazing detail and richness of a text, the addition of illumination was never an afterthought. The purpose of illumination was twofold: it added value to the work, but more importantly it provided pictures for the illiterate members of society, to "make the reading seem more vivid and perhaps more credible".
- The untypically early 11th century Missal of Silos is from Spain, near to Muslim paper manufacturing centres in Al-Andaluz. Textual manuscripts on paper become increasingly common, but the more expensive parchment was mostly used for illuminated manuscripts until the end of the period.
- Putnam A.M., Geo. Haven. Books and Their Makers During The Middle Ages. Vol. 1. New York: Hillary House, 1962. Print.
- De Hamel, 45
- De Hamel, 57
- De Hamel, 65
- De Hamel, Christopher. Medieval Craftsmen: Scribes and Illuminations. Buffalo: University of Toronto, 1992. p. 60.
- "Getijdenboek van Alexandre Petau". lib.ugent.be. Retrieved 27 August 2020.
- de Hamel, Christopher (2001). The British Library Guide To Manuscript Illumination History and Techniques. Toronto: British Library. p. 35. ISBN 0-8020-8173-8.
- Anderson, Donald M. The Art of Written Forms: The Theory and Practice of Calligraphy. New York: Holt, Rinehart and Winston, Inc, 1969. Print.
- Calkins, Robert G. "Stages of Execution: Procedures of Illumination as Revealed in an Unfinished Book of Hours." International Center of Medieval Art 17.1 (1978): 61–70. JSTOR.org. Web. 17 April 2010. <https://www.jstor.org/stable/766713>
- Iberian manuscripts (pigments) Archived 29 March 2003 at archive.today
- De Hamel, Christopher. The British Library Guide to Manuscript Illumination: History and Techniques. Toronto: University of Toronto, 2001. Print,52.
- De Hamel, Christopher. Medieval Craftsmen: Scribes and Illuminations. Buffalo: University of Toronto, 1992. Print,49.
- Brehier, Louis. "Illuminated Manuscripts". The Catholic Encyclopedia. Vol.9. New York: Robert Appelton Company, 1910. 17 April 2010 http://www.newadvent.org/cathen/09620a.htm, page 45.
- Blondheim, D.S. "An Old Portuguese Work on Manuscript Illumination." The Jewish Quarterly Review, New Series 19.2 (1928): 97–135. JSTOR. Web. 17 April 2010. <https://www.jstor.org/stable/1451766>.
- Hamel, Christopher de (29 December 2001). The British Library Guide to Manuscript Illumination: History and Techniques (1 ed.). University of Toronto Press, Scholarly Publishing Division. p. 20. ISBN 0-8020-8173-8.
- "Heraldry". Glossary for Illuminated Manuscripts. British Library. n.d. Retrieved 14 December 2015.
- Jones, Susan. "Manuscript Illumination in Northern Europe". In Heilbrunn Timeline of Art History. New York: The Metropolitan Museum of Art, 2000–. http://www.metmuseum.org/toah/hd/manu/hd_manu.htm (October 2002)
- Alexander, Jonathan A.G., Medieval Illuminators and their Methods of Work, 1992, Yale UP, ISBN 0300056893
- Coleman, Joyce, Mark Cruse, and Kathryn A. Smith, eds. The Social Life of Illumination: Manuscripts, Images, and Communities in the Late Middle Ages (Series: Medieval Texts and Cultures in Northern Europe, vol. 21. Turnhout: Brepols Publishing, 2013). xxiv + 552 pp online review
- Calkins, Robert G. Illuminated Books of the Middle Ages. 1983, Cornell University Press, ISBN 0500233756
- De Hamel, Christopher. A History of Illuminated Manuscript (Phaidon, 1986)
- De Hamel, Christopher. Medieval Craftsmen: Scribes and Illuminations. Buffalo: University of Toronto, 1992.
- Kren, T. & McKendrick, Scot (eds), Illuminating the Renaissance – The Triumph of Flemish Manuscript Painting in Europe, Getty Museum/Royal Academy of Arts, 2003, ISBN 1-903973-28-7
- Liepe, Lena. Studies in Icelandic Fourteenth Century Book Painting, Reykholt: Snorrastofa, rit. vol. VI, 2009.
- Morgan, Nigel J., Stella Panayotova, and Martine Meuwese. Illuminated Manuscripts in Cambridge: A Catalogue of Western Book Illumination in the Fitzwilliam Museum and the Cambridge Colleges (London : Harvey Miller Publishers in conjunction with the Modern Humanities Association. 1999– )
- Pächt, Otto, Book Illumination in the Middle Ages (trans fr German), 1986, Harvey Miller Publishers, London, ISBN 0199210608
- Rudy, Kathryn M. (2016), Piety in Pieces: How Medieval Readers Customized their Manuscripts, Open Book Publishers, doi:10.11647/OBP.0094, ISBN 9781783742356
- Wieck, Roger. "Folia Fugitiva: The Pursuit of the Illuminated Manuscript Leaf". The Journal of the Walters Art Gallery, Vol. 54, 1996.
- Illuminated Manuscripts in the J. Paul Getty Museum – Los Angeles
- Illuminating the Manuscript Leaves Digitized illuminated manuscripts from the University of Louisville Libraries
- 15 pages of illuminated manuscripts from the Ball State University Digital Media Repository
- Digitized Illuminated Manuscripts – Complete sets of high-resolution archival images from the Walters Art Museum
- Collection of Armenian Illuminated Manuscripts – A full collection with high resolution images of Armenian Illuminated Manuscripts
- UCLA Library Special Collections collection of Medieval and Renaissance manuscripts
- British Library, catalogue of illuminated manuscripts
- Collection of illuminated manuscripts. From the Koninklijke Bibliotheek and Museum Meermanno-Westreenianum in The Hague.
- Demonstration of the production of an illuminated manuscript from the Fitzwilliam, Cambridge (Flash player needed)
- CORSAIR. Thousands of digital images from the Morgan Library's renowned collection of medieval and Renaissance manuscripts
- Manuscript Miniatures, a collection of illustrations from manuscripts made before 1450
- This page is based on the Wikipedia article Illuminated manuscript; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA. | https://thereaderwiki.com/en/Illuminated_manuscript | 21 |
Interval arithmetic (also known as interval mathematics, interval analysis, or interval computation) is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Numerical methods using interval arithmetic can guarantee reliable, mathematically correct results. Instead of representing a value as a single number, interval arithmetic represents each value as a range of possibilities. For example, instead of estimating the height of someone as exactly 2.0 metres, using interval arithmetic one might be certain that that person is somewhere between 1.97 and 2.03 metres.
Mathematically, instead of working with an uncertain real number x, one works with the two ends of an interval [a, b] that contains x. In interval arithmetic, any variable x lies in the closed interval between a and b. A function f, when applied to x, yields an uncertain result; applied to the interval [a, b], f produces an interval [c, d] which includes all the possible values of f(x) for all x ∈ [a, b].
Interval arithmetic is suitable for a variety of purposes. The most common use is in software, to keep track of rounding errors in calculations and of uncertainties in the knowledge of the exact values of physical and technical parameters. The latter often arise from measurement errors and tolerances for components or due to limits on computational accuracy. Interval arithmetic also helps find guaranteed solutions to equations (such as differential equations) and optimization problems.
The main objective of interval arithmetic is a simple way to calculate upper and lower bounds for the range of a function in one or more variables. These endpoints are not necessarily the true supremum or infimum, since the precise calculation of those values can be difficult or impossible; the bounds need only contain the function's range as a subset.
This treatment is typically limited to real intervals, so quantities of the form

[a, b] = { x ∈ ℝ : a ≤ x ≤ b },

where a = −∞ and b = +∞ are allowed. With one of a, b infinite, the interval would be an unbounded interval; with both infinite, the interval would be the extended real number line. Since a real number r can be interpreted as the interval [r, r], intervals and real numbers can be freely combined.
As with traditional calculations with real numbers, simple arithmetic operations and functions on elementary intervals must first be defined. More complicated functions can be calculated from these basic elements.
As an example, consider the calculation of body mass index (BMI) and assessing whether a person is overweight. BMI is calculated as a person's body weight in kilograms divided by the square of their height in metres. A bathroom scale may have a resolution of one kilogram. Intermediate values cannot be discerned (79.6 kg and 80.3 kg are indistinguishable, for example); the displayed value is the true weight rounded to the nearest whole number. It is unlikely that when the scale reads 80 kg, the person weighs exactly 80.0 kg. In normal rounding to the nearest value, the scale's showing 80 kg indicates a weight between 79.5 kg and 80.5 kg, which corresponds to the interval [79.5, 80.5].
For a man who weighs 80 kg and is 1.80 m tall, the BMI is approximately 24.7. A weight of 79.5 kg and the same height yields approx. 24.537, while a weight of 80.5 kg yields approx. 24.846. Since the function is monotonically increasing in weight, we conclude that the true BMI is in the range [24.537, 24.846]. Since the entire range is less than 25, which is the cutoff between normal and excessive weight, we conclude that the man is of normal weight.
The error in this case does not affect the conclusion (normal weight), but this is not always the case. If the man was slightly heavier, the BMI's range may include the cutoff value of 25. In that case, the scale's precision was insufficient to make a definitive conclusion.
Also, note that the range of BMI values in this example could equally well be reported as [24.5, 24.9], since this interval is a superset of the calculated interval. The range could not, however, be reported as [24.6, 24.8], as that interval does not contain all possible BMI values.
Interval arithmetic states the range of possible outcomes explicitly. Results are no longer stated as numbers, but as intervals that represent imprecise values. The size of the intervals are similar to error bars in expressing the extent of uncertainty.
Height and body weight both affect the value of the BMI. We have already treated weight as an uncertain measurement, but height is also subject to uncertainty. Height measurements in metres are usually rounded to the nearest centimetre: a recorded measurement of 1.79 metres actually means a height in the interval [1.785, 1.795]. Now, all four combinations of possible height/weight values must be considered. Using the interval methods described below, the BMI lies in the interval [24.67, 25.27] (bounds rounded outward).
In this case, the man may have a normal weight or be overweight; the weight and height measurements were insufficiently precise to make a definitive conclusion. This demonstrates interval arithmetic's ability to correctly track and propagate error.
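A minimal sketch in Python of this endpoint-checking computation is given below; the function name and the printed rounding are illustrative assumptions, and the shortcut of testing only the four endpoint combinations is valid here only because BMI is monotone in weight and in height separately.

```python
from itertools import product

def bmi_interval(weight_lo, weight_hi, height_lo, height_hi):
    """Enclose BMI = weight / height**2 when weight and height are only known up to an interval.

    BMI is increasing in weight and decreasing in height, so the extreme
    values occur at endpoint combinations; we simply take the min and max.
    """
    candidates = [w / h**2 for w, h in product((weight_lo, weight_hi),
                                               (height_lo, height_hi))]
    return min(candidates), max(candidates)

# Scale reads 80 kg (true weight in [79.5, 80.5]); height recorded as 1.79 m
# (true height in [1.785, 1.795]).
lo, hi = bmi_interval(79.5, 80.5, 1.785, 1.795)
print(f"BMI lies in [{lo:.3f}, {hi:.3f}]")   # approximately [24.674, 25.265]
```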
A binary operation ⋆ on two intervals, such as addition or multiplication, is defined by

[x₁, x₂] ⋆ [y₁, y₂] = { x ⋆ y : x ∈ [x₁, x₂] and y ∈ [y₁, y₂] }.

In other words, it is the set of all possible values of x ⋆ y, where x and y are in their corresponding intervals. If ⋆ is monotone in each operand on the intervals, which is the case for the four basic arithmetic operations (except division when the denominator contains 0), the extreme values occur at the endpoints of the operand intervals. Writing out all combinations, one way of stating this is

[x₁, x₂] ⋆ [y₁, y₂] = [ min(x₁⋆y₁, x₁⋆y₂, x₂⋆y₁, x₂⋆y₂), max(x₁⋆y₁, x₁⋆y₂, x₂⋆y₁, x₂⋆y₂) ],

provided that x ⋆ y is defined for all x ∈ [x₁, x₂] and y ∈ [y₁, y₂].
For practical applications this can be simplified further:

- Addition: [x₁, x₂] + [y₁, y₂] = [x₁ + y₁, x₂ + y₂]
- Subtraction: [x₁, x₂] − [y₁, y₂] = [x₁ − y₂, x₂ − y₁]
- Multiplication: [x₁, x₂] · [y₁, y₂] = [min(x₁y₁, x₁y₂, x₂y₁, x₂y₂), max(x₁y₁, x₁y₂, x₂y₁, x₂y₂)]
- Division: [x₁, x₂] / [y₁, y₂] = [x₁, x₂] · 1/[y₁, y₂], where 1/[y₁, y₂] = [1/y₂, 1/y₁] if 0 ∉ [y₁, y₂]; for intervals containing zero, 1/[y₁, 0] = [−∞, 1/y₁], 1/[0, y₂] = [1/y₂, +∞], and 1/[y₁, y₂] = [−∞, 1/y₁] ∪ [1/y₂, +∞] ⊆ [−∞, +∞] if y₁ < 0 < y₂.

The last case loses useful information about the exclusion of the open interval (1/y₁, 1/y₂). Thus, it is common to work with [−∞, 1/y₁] and [1/y₂, +∞] as separate intervals. More generally, when working with discontinuous functions, it is sometimes useful to do the calculation with so-called multi-intervals of the form ⋃ᵢ [aᵢ, bᵢ]. The corresponding multi-interval arithmetic maintains a set of (usually disjoint) intervals and also provides for overlapping intervals to unite.
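To make these rules concrete, here is a minimal Python sketch of an interval type implementing the four operations; it is an illustration only (the class and helper names are made up for this example), and it deliberately ignores outward rounding and the special cases of division by an interval containing zero.

```python
class Interval:
    """A closed real interval [lo, hi] with the basic arithmetic operations."""

    def __init__(self, lo, hi=None):
        if hi is None:          # allow Interval(3) for the point interval [3, 3]
            hi = lo
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

    def __add__(self, other):
        other = _as_interval(other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        other = _as_interval(other)
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        other = _as_interval(other)
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        other = _as_interval(other)
        if other.lo <= 0 <= other.hi:
            raise ZeroDivisionError("denominator interval contains 0")
        return self * Interval(1 / other.hi, 1 / other.lo)

def _as_interval(x):
    return x if isinstance(x, Interval) else Interval(x, x)

print(Interval(1, 2) + Interval(5, 7))    # [6, 9]
print(Interval(-1, 1) * Interval(-1, 1))  # [-1, 1]
print(Interval(1, 2) / Interval(4, 8))    # [0.125, 0.5]
```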
Interval multiplication often only requires two multiplications. If x₁ and y₁ are nonnegative, then

[x₁, x₂] · [y₁, y₂] = [x₁ · y₁, x₂ · y₂].
The multiplication can be interpreted as the area of a rectangle with varying edges. The result interval covers all possible areas, from smallest to the largest.
With the help of these definitions, it is already possible to calculate the range of simple functions such as f(a, b, x) = a·x + b. For example, if a = [1, 2], b = [5, 7] and x = [2, 3]:

f(a, b, x) = [1, 2] · [2, 3] + [5, 7] = [min(1·2, 1·3, 2·2, 2·3), max(1·2, 1·3, 2·2, 2·3)] + [5, 7] = [2, 6] + [5, 7] = [7, 13].
To make the notation of intervals smaller in formulae, brackets can be used.
[x] = [x₁, x₂] can be used to represent an interval. Note that in such a compact notation, [x] should not be confused between a single-point interval [x₁, x₁] and a general interval. For the set of all intervals, we can use

[ℝ] := { [x₁, x₂] : x₁ ≤ x₂ }

as an abbreviation. For a vector of intervals ([x]₁, …, [x]ₙ) ∈ [ℝ]ⁿ we can use a bold font: [x].
Interval functions beyond the four basic operators may also be defined.
For monotonic functions in one variable, the range of values is simple to compute. If f: ℝ → ℝ is monotonically increasing (resp. decreasing) on the interval [x₁, x₂], then for all y₁, y₂ ∈ [x₁, x₂] such that y₁ ≤ y₂, we have f(y₁) ≤ f(y₂) (resp. f(y₁) ≥ f(y₂)).

The range corresponding to the interval [y₁, y₂] ⊆ [x₁, x₂] can therefore be calculated by applying the function to its endpoints:

f([y₁, y₂]) = [ min(f(y₁), f(y₂)), max(f(y₁), f(y₂)) ].
From this, the following basic features for interval functions can easily be defined: for functions that are monotonically increasing on the interval in question, such as the exponential function, the logarithm (on positive intervals) and odd integer powers, the image of [x₁, x₂] is simply [f(x₁), f(x₂)].
For even powers, the range of values being considered is important, and needs to be dealt with before doing any multiplication. For example, xⁿ for x ∈ [−1, 1] should produce the interval [0, 1] when n = 2, 4, 6, …. But if [−1, 1]ⁿ is taken by repeating interval multiplication of the form [−1, 1] · [−1, 1] · … · [−1, 1], then the result is [−1, 1], wider than necessary.
More generally one can say that, for piecewise monotonic functions, it is sufficient to consider the endpoints x₁, x₂ of an interval [x₁, x₂], together with the so-called critical points within the interval, being those points where the monotonicity of the function changes direction. For the sine and cosine functions, the critical points are at (1/2 + n)·π or n·π for n ∈ ℤ, respectively. Thus, only up to five points within an interval need to be considered, as the resulting interval is [−1, 1] if the interval includes at least two extrema. For sine and cosine, only the endpoints need full evaluation, as the critical points lead to easily pre-calculated values, namely −1, 0, and 1.
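The following Python sketch illustrates the two points above: an exact interval square (an even power must not be computed as a generic product of two independent intervals) and an interval sine that samples the endpoints plus every critical point falling inside the interval. The function names are illustrative assumptions, and floating-point rounding of the bounds is ignored.

```python
import math

def interval_sqr(lo, hi):
    """Exact range of x**2 over [lo, hi]; treating the square as a dependent
    product rather than a generic interval multiplication avoids over-width."""
    if lo <= 0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def interval_sin(lo, hi):
    """Range of sin over [lo, hi]: evaluate the endpoints and every critical
    point pi/2 + k*pi that falls inside the interval."""
    candidates = [math.sin(lo), math.sin(hi)]
    k = math.ceil((lo - math.pi / 2) / math.pi)
    while math.pi / 2 + k * math.pi <= hi:
        candidates.append(math.sin(math.pi / 2 + k * math.pi))
        k += 1
    return min(candidates), max(candidates)

print(interval_sqr(-1, 1))        # (0.0, 1)
print(interval_sin(0, math.pi))   # roughly (0.0, 1.0)
```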
In general, it may not be easy to find such a simple description of the output interval for many functions. But it may still be possible to extend functions to interval arithmetic. If f : ℝⁿ → ℝ is a function from a real vector to a real number, then [f] : [ℝ]ⁿ → [ℝ] is called an interval extension of f if

[f]([x]) ⊇ { f(y) : y ∈ [x] }.

This definition of the interval extension does not give a precise result. For example, both [f]([x₁, x₂]) = [e^(x₁), e^(x₂)] and [g]([x₁, x₂]) = [−∞, +∞] are allowable extensions of the exponential function. Tighter extensions are desirable, though the relative costs of calculation and imprecision should be considered; in this case, [f] should be chosen as it gives the tightest possible result.
Given a real expression, its natural interval extension is achieved by using the interval extensions of each of its subexpressions, functions and operators.
The Taylor interval extension (of degree k) is defined for a k + 1 times differentiable function f by

[f]([x]) := f(z) + ∑_(i=1..k) (1/i!) · Dⁱf(z) · ([x] − z)ⁱ + [r]([x], [x], z)

for some z ∈ [x], where Dⁱf(z) is the i-th order differential of f at the point z and [r] is an interval extension of the Taylor remainder

r(x, ξ, z) = (1/(k + 1)!) · Dᵏ⁺¹f(ξ) · (x − z)ᵏ⁺¹.

The vector ξ lies between x and z with x, z ∈ [x], so ξ is also enclosed by [x]. Usually one chooses z to be the midpoint of the interval and uses the natural interval extension to assess the remainder.

The special case of the Taylor interval extension of degree k = 0 is also referred to as the mean value form.
An interval can also be defined as a locus of points at a given distance from the centre, and this definition can be extended from real numbers to complex numbers. As is the case with computing with real numbers, computing with complex numbers involves uncertain data. So, given the fact that an interval number is a real closed interval and a complex number is an ordered pair of real numbers, there is no reason to limit the application of interval arithmetic to the measure of uncertainties in computations with real numbers. Interval arithmetic can thus be extended, via complex interval numbers, to determine regions of uncertainty in computing with complex numbers.
The basic algebraic operations for real interval numbers (real closed intervals) can be extended to complex numbers. It is therefore not surprising that complex interval arithmetic is similar to, but not the same as, ordinary complex arithmetic. It can be shown that, as is the case with real interval arithmetic, there is no distributivity between addition and multiplication of complex interval numbers except for certain special cases, and inverse elements do not always exist for complex interval numbers. Two other useful properties of ordinary complex arithmetic fail to hold in complex interval arithmetic: the additive and multiplicative properties of ordinary complex conjugates do not hold for complex interval conjugates.
Interval arithmetic can be extended, in an analogous manner, to other multidimensional number systems such as quaternions and octonions, but with the expense that we have to sacrifice other useful properties of ordinary arithmetic.
The methods of classical numerical analysis can not be transferred one-to-one into interval-valued algorithms, as dependencies between numerical values are usually not taken into account.
To work effectively in a real-life implementation, intervals must be compatible with floating point computing. The earlier operations were based on exact arithmetic, but in general fast numerical solution methods may not be available. As an illustration, take the sum f(x, y) = x + y for x ∈ [0.1, 0.8] and y ∈ [0.06, 0.08]; the exact range of values is [0.16, 0.88]. Where the same calculation is done with single-digit precision, the result would normally be [0.2, 0.9]. But [0.2, 0.9] does not contain [0.16, 0.88], so this approach would contradict the basic principles of interval arithmetic, as a part of the domain of f([0.1, 0.8], [0.06, 0.08]) would be lost. Instead, the outward rounded solution [0.1, 0.9] is used.
The standard IEEE 754 for binary floating-point arithmetic also sets out procedures for the implementation of rounding. An IEEE 754 compliant system allows programmers to round to the nearest floating point number; alternatives are rounding towards 0 (truncating), rounding toward positive infinity (i.e. up), or rounding towards negative infinity (i.e. down).
The required external rounding for interval arithmetic can thus be achieved by changing the rounding settings of the processor in the calculation of the upper limit (up) and lower limit (down). Alternatively, an appropriate small interval can be added.
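A minimal Python sketch of the second option (widening each computed bound outward) is shown below; it leans on math.nextafter, available from Python 3.9, and nudges each bound by one floating-point step, which is a simplification rather than a substitute for a correctly rounded interval library. The function names are illustrative assumptions.

```python
import math

def outward(lo, hi):
    """Widen [lo, hi] by one floating-point step in each direction, so the
    returned interval certainly contains the exactly rounded result."""
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def add_outward(x, y):
    lo = x[0] + y[0]      # each sum is rounded to nearest by the hardware
    hi = x[1] + y[1]
    return outward(lo, hi)

print(add_outward((0.1, 0.8), (0.06, 0.08)))   # slightly wider than [0.16, 0.88]
```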
The so-called dependency problem is a major obstacle to the application of interval arithmetic. Although interval methods can determine the range of elementary arithmetic operations and functions very accurately, this is not always true with more complicated functions. If an interval occurs several times in a calculation using parameters, and each occurrence is taken independently then this can lead to an unwanted expansion of the resulting intervals.
As an illustration, take the function f defined by f(x) = x² + x. The values of this function over the interval [−1, 1] are [−1/4, 2]. As the natural interval extension, it is calculated as:

[−1, 1]² + [−1, 1] = [0, 1] + [−1, 1] = [−1, 2],

which is slightly larger; we have instead calculated the infimum and supremum of the function h(x, y) = x² + y over x, y ∈ [−1, 1]. There is a better expression of f in which the variable x only appears once, namely by rewriting f(x) = x² + x as addition and squaring in the quadratic

f(x) = (x + 1/2)² − 1/4.

So the suitable interval calculation is

([−1, 1] + 1/2)² − 1/4 = [−1/2, 3/2]² − 1/4 = [0, 9/4] − 1/4 = [−1/4, 2]
and gives the correct values.
In general, it can be shown that the exact range of values is obtained if each variable appears only once and if f is continuous inside the box. However, not every function can be rewritten this way.
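The following minimal Python sketch contrasts the two evaluations just described; the helper functions are simple illustrative implementations (exact floating-point arithmetic, no outward rounding), not a production interval library.

```python
def isum(a, b):                    # interval addition
    return (a[0] + b[0], a[1] + b[1])

def isqr(a):                       # interval square, handling intervals that contain 0
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

x = (-1.0, 1.0)

# Naive extension of f(x) = x^2 + x: the two occurrences of x vary independently.
naive = isum(isqr(x), x)                        # (-1.0, 2.0), an over-estimate

# Single-occurrence rewrite f(x) = (x + 1/2)^2 - 1/4 yields the exact range.
shifted = isum(x, (0.5, 0.5))
exact = isum(isqr(shifted), (-0.25, -0.25))     # (-0.25, 2.0)

print(naive, exact)
```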
In the worst case, the over-estimation of the value range caused by the dependency problem can be so large that no meaningful conclusions can be drawn.
An additional increase in the range stems from solution sets that do not take the form of an interval vector. The solution set of the linear system x = p, y = p with the parameter p ∈ [−1, 1] is precisely the line segment between the points (−1, −1) and (1, 1). Using interval methods results in the unit square [−1, 1] × [−1, 1]. This is known as the wrapping effect.
A linear interval system consists of an interval matrix [A] and an interval vector [b]. We want the smallest cuboid (interval vector) [x] containing all vectors x for which there is a pair (A, b) with A ∈ [A] and b ∈ [b] satisfying A · x = b.
For square systems – in other words, when there are as many equations as unknowns – such an interval vector [x], which covers all possible solutions, can often be found simply with the interval Gauss method. This replaces the numerical operations, in that the linear algebra method known as Gaussian elimination becomes its interval version. However, since this method uses the interval entities [A] and [b] repeatedly in the calculation, it can produce poor results for some problems. Hence the result of the interval-valued Gauss elimination only provides a first rough estimate, since although it contains the entire solution set, it also has a large area outside it.
A rough solution [x] can often be improved by an interval version of the Gauss–Seidel method. The motivation is that the i-th row of the interval extension of the linear equation, Σⱼ [aᵢⱼ] · xⱼ = [bᵢ], can be solved for the variable xᵢ if the division by [aᵢᵢ] is allowed (that is, if 0 ∉ [aᵢᵢ]). It is therefore simultaneously true that xⱼ ∈ [xⱼ] and xᵢ ∈ ([bᵢ] − Σ_{j≠i} [aᵢⱼ] · [xⱼ]) / [aᵢᵢ]. So we can now replace [xᵢ] by the intersection [xᵢ] ∩ ([bᵢ] − Σ_{j≠i} [aᵢⱼ] · [xⱼ]) / [aᵢᵢ], and apply this update to the vector [x] element by element. Since the procedure is more efficient for a diagonally dominant matrix, instead of the system [A] · x = [b] one can often try multiplying it by an appropriate rational matrix M, leaving the resulting matrix equation (M · [A]) · x = M · [b] to solve. If one chooses, for example, M as the inverse of the central matrix of [A], then M · [A] is an outer extension of the identity matrix.
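A single sweep of this interval Gauss–Seidel update might look like the sketch below. It is a simplified illustration under stated assumptions: plain tuples for intervals, exact floating-point arithmetic with no outward rounding, diagonal intervals that do not contain zero, and a non-empty intersection at every step; the helper names are ad hoc rather than taken from any particular library.

```python
def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def idiv(a, b):                       # assumes 0 is not contained in b
    p = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(p), max(p))

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def icap(a, b):                       # intersection, assumed non-empty here
    return (max(a[0], b[0]), min(a[1], b[1]))

def gauss_seidel_sweep(A, b, x):
    """One interval Gauss-Seidel sweep for [A] x = [b].

    A is an n-by-n list of interval tuples; b and x are lists of interval tuples.
    Each component of x is intersected with the enclosure implied by its row.
    """
    n = len(x)
    for i in range(n):
        acc = b[i]
        for j in range(n):
            if j != i:
                acc = isub(acc, imul(A[i][j], x[j]))
        x[i] = icap(x[i], idiv(acc, A[i][i]))
    return x
```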
These methods only work well if the widths of the intervals occurring are sufficiently small. For wider intervals it can be useful to reduce the interval-linear system to a finite (albeit large) number of ordinary real-valued linear systems. If all the matrices A ∈ [A] are invertible, it is sufficient to consider all possible combinations (upper and lower) of the endpoints occurring in the intervals. The resulting problems can be resolved using conventional numerical methods. Interval arithmetic is still used to determine rounding errors.
This is only suitable for systems of smaller dimension, since with a fully occupied n × n interval matrix, 2^(n²) real matrices need to be inverted, with 2^n vectors for the right-hand side. This approach was developed by Jiri Rohn and is still being refined.
An interval variant of Newton's method for finding the zeros of a function f in an interval vector [x] can be derived from the mean value extension. For an unknown zero z ∈ [x] and any point y ∈ [x], the mean value form gives f(z) ∈ f(y) + [J_f]([x]) · (z − y), where [J_f]([x]) is an interval enclosure of the Jacobian over [x]. For a zero z we have f(z) = 0, and thus z must satisfy 0 ∈ f(y) + [J_f]([x]) · (z − y). This is equivalent to z ∈ y − [J_f]([x])⁻¹ · f(y). An outer estimate of the right-hand side can be determined using the interval linear methods above.
In each step of the interval Newton method, an approximate starting value [x] is replaced by its intersection with the Newton set y − [J_f]([x])⁻¹ · f(y), and so the result can be improved iteratively. In contrast to traditional methods, the interval method approaches the result by containing the zeros. This guarantees that the result contains all zeros in the initial range. Conversely, it proves that no zeros of f were in the initial range if a Newton step produces the empty set.
The method converges on all zeros in the starting region. Division by zero can lead to separation of distinct zeros, though the separation may not be complete; it can be complemented by the bisection method.
As an example, consider the function f(x) = x² − 2, the starting range [x] = [−2, 2], and the point y = 0. We then have [J_f]([x]) = 2 · [−2, 2] = [−4, 4], and the first Newton step (using extended interval division) gives [−2, 2] ∩ (0 − (−2) / [−4, 4]) = [−2, 2] ∩ ((−∞, −0.5] ∪ [0.5, ∞)) = [−2, −0.5] ∪ [0.5, 2].
More Newton steps are applied separately to [−2, −0.5] and [0.5, 2]. These converge to arbitrarily small intervals around −√2 and +√2.
The interval Newton method can also be used with thick functions such as g(x) = x² − [2, 3], which would in any case have interval results. The result then produces intervals containing ±√[2, 3].
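A minimal one-dimensional sketch of a single interval Newton contraction is given below. To keep it short it sidesteps the extended division used above: it assumes the derivative enclosure over the current interval does not contain zero (as on the sub-box [0.5, 2] for f(x) = x² − 2), that the intersection stays non-empty, and that exact floating-point arithmetic without outward rounding is acceptable; the names and the user-supplied derivative enclosure are illustrative only.

```python
def newton_contract(f, df_box, x):
    """One interval Newton step on the interval x = (lo, hi).

    df_box(x) must return an interval (dlo, dhi) enclosing f' over x,
    with 0 strictly outside it.
    """
    lo, hi = x
    m = 0.5 * (lo + hi)                      # expansion point y
    fm = f(m)
    dlo, dhi = df_box(x)
    cands = [m - fm / dlo, m - fm / dhi]
    n_lo, n_hi = min(cands), max(cands)
    return (max(lo, n_lo), min(hi, n_hi))    # intersect N(x) with x

f = lambda t: t * t - 2.0
df_box = lambda x: (2.0 * x[0], 2.0 * x[1])  # valid enclosure of f' = 2x when x[0] > 0

box = (0.5, 2.0)
for _ in range(6):
    box = newton_contract(f, df_box, box)
print(box)   # a tight interval around sqrt(2) ~ 1.41421
```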
The various interval methods deliver conservative results as dependencies between the sizes of different intervals extensions are not taken into account. However the dependency problem becomes less significant for narrower intervals.
Covering an interval vector [x] by smaller boxes [x₁], …, [x_k], so that [x] = [x₁] ∪ ⋯ ∪ [x_k], is then valid for the range of values, in the sense that f([x]) = f([x₁]) ∪ ⋯ ∪ f([x_k]). So for the interval extensions described above, the following holds: [f]([x]) ⊇ [f]([x₁]) ∪ ⋯ ∪ [f]([x_k]). Since [f]([x]) is often a genuine superset of the right-hand side, this usually leads to an improved estimate.
Such a cover can be generated by the bisection method: thick elements [xᵢ] = [a, b] of the interval vector [x] are split in the centre into the two intervals [a, (a + b)/2] and [(a + b)/2, b]. If the result is still not suitable then further gradual subdivision is possible. Each bisection doubles the number of sub-boxes, so repeated divisions of the vector elements quickly produce a large cover, substantially increasing the computation costs.
With very wide intervals, it can be helpful to split all intervals into several subintervals with a constant (and smaller) width, a method known as mincing. This then avoids the calculations for intermediate bisection steps. Both methods are only suitable for problems of low dimension.
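The effect of such a subdivision on the dependency example from above can be illustrated with a short sketch; again the interval helpers are ad hoc, exact-arithmetic stand-ins rather than a verified library.

```python
def isqr(a):                                   # interval square
    lo, hi = a
    if lo <= 0.0 <= hi:
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def f_ext(x):                                  # natural extension of f(x) = x^2 + x
    sq = isqr(x)
    return (sq[0] + x[0], sq[1] + x[1])

def minced_range(lo, hi, pieces):
    """Union of the natural extension over a uniform subdivision ("mincing")."""
    w = (hi - lo) / pieces
    parts = [f_ext((lo + i * w, lo + (i + 1) * w)) for i in range(pieces)]
    return (min(p[0] for p in parts), max(p[1] for p in parts))

print(minced_range(-1.0, 1.0, 1))     # (-1.0, 2.0): the crude one-box enclosure
print(minced_range(-1.0, 1.0, 64))    # tightens toward the true range (-0.25, 2.0)
```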
Interval arithmetic can be used in various areas (such as set inversion, motion planning, set estimation or stability analysis) to treat estimates with no exact numerical value.
Interval arithmetic is used with error analysis, to control rounding errors arising from each calculation. The advantage of interval arithmetic is that after each operation there is an interval that reliably includes the true result. The distance between the interval boundaries gives a bound on the accumulated rounding error directly: for a computed enclosure [a, b], taking any point of the interval as the result incurs an error of at most the width b − a.
Interval analysis adds to, rather than substitutes for, traditional methods of error reduction, such as pivoting.
Parameters for which no exact figures can be allocated often arise during the simulation of technical and physical processes. The production process of technical components allows certain tolerances, so some parameters fluctuate within intervals. In addition, many fundamental constants are not known precisely.
If the behaviour of such a system affected by tolerances satisfies, for example, an equation f(x, p) = 0 for parameters p known only to lie in an interval vector [p] and an unknown x, then the set of possible solutions { x : f(x, p) = 0 for some p ∈ [p] }
can be found by interval methods. This provides an alternative to traditional propagation of error analysis. Unlike point methods, such as Monte Carlo simulation, interval arithmetic methodology ensures that no part of the solution area can be overlooked. However, the result is always a worst-case analysis for the distribution of error, as other probability-based distributions are not considered.
Interval arithmetic can also be used with membership functions for fuzzy quantities as they are used in fuzzy logic. Apart from the strict statements of membership and non-membership, intermediate values are also possible, to which real numbers in [0, 1] are assigned: a membership value of 1 corresponds to definite membership, while 0 corresponds to non-membership. A distribution function assigns uncertainty, which can be understood as a further interval.
For fuzzy arithmetic only a finite number of discrete membership stages μᵢ ∈ [0, 1] are considered. The form of such a distribution for an indistinct value can then be represented by a nested sequence of intervals [x⁽¹⁾] ⊃ [x⁽²⁾] ⊃ ⋯ ⊃ [x⁽ᵏ⁾].
The interval [x⁽ⁱ⁾] corresponds exactly to the fluctuation range for the stage μᵢ.
The appropriate distribution for a function f(x₁, …, xₙ) concerning indistinct values x₁, …, xₙ and their corresponding nested sequences of intervals can be approximated by the sequence [y⁽¹⁾] ⊃ ⋯ ⊃ [y⁽ᵏ⁾], where each [y⁽ⁱ⁾] = f([x₁⁽ⁱ⁾], …, [xₙ⁽ⁱ⁾]) can be calculated by interval methods. The value [y⁽ⁱ⁾] corresponds to the result of an interval calculation for the stage μᵢ.
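A small sketch of this stage-wise construction: each fuzzy quantity is stored as one interval per membership stage, and a fuzzy sum is obtained by adding the intervals stage by stage. The numbers are purely illustrative.

```python
# Membership stages and the corresponding nested intervals for two fuzzy values.
stages = [0.0, 0.5, 1.0]
about_3 = [(1.0, 5.0), (2.0, 4.0), (3.0, 3.0)]        # "about 3"
about_11 = [(9.0, 13.0), (10.0, 12.0), (11.0, 11.0)]  # "about 11"

def fuzzy_add(p, q):
    """Stage-wise interval addition of two fuzzy quantities."""
    return [(pl + ql, ph + qh) for (pl, ph), (ql, qh) in zip(p, q)]

# Each stage of the result is the interval sum at that stage.
print(fuzzy_add(about_3, about_11))
# [(10.0, 18.0), (12.0, 16.0), (14.0, 14.0)]
```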
Warwick Tucker used interval arithmetic to solve the 14th of Smale's problems, that is, to show that the Lorenz attractor is a strange attractor. Thomas Hales used interval arithmetic in his proof of the Kepler conjecture.
Interval arithmetic is not a completely new phenomenon in mathematics; it has appeared several times under different names in the course of history. For example, Archimedes calculated lower and upper bounds 223/71 < π < 22/7 in the 3rd century BC. Actual calculation with intervals has neither been as popular as other numerical techniques nor been completely forgotten.
Rules for calculating with intervals and other subsets of the real numbers were published in a 1931 work by Rosalind Cicely Young. Arithmetic work on range numbers to improve the reliability of digital systems was then published in a 1951 textbook on linear algebra by Paul S. Dwyer; intervals were used to measure rounding errors associated with floating-point numbers. A comprehensive paper on interval algebra in numerical analysis was published by Teruo Sunaga (1958).
The birth of modern interval arithmetic was marked by the appearance of the book Interval Analysis by Ramon E. Moore in 1966. He had the idea in spring 1958, and a year later he published an article about computer interval arithmetic. Its merit was that, starting with a simple principle, it provided a general method for automated error analysis, not just errors resulting from rounding.
Independently in 1956, Mieczyslaw Warmus suggested formulae for calculations with intervals, though Moore found the first non-trivial applications.
In the following twenty years, German groups of researchers carried out pioneering work around Ulrich W. Kulisch and Götz Alefeld at the University of Karlsruhe and later also at the Bergische University of Wuppertal. For example, Karl Nickel explored more effective implementations, while improved containment procedures for the solution set of systems of equations were due to Arnold Neumaier, among others. In the 1960s, Eldon R. Hansen dealt with interval extensions for linear equations and then provided crucial contributions to global optimisation, including what is now known as Hansen's method, perhaps the most widely used interval algorithm. Classical methods in this field often face the problem that they can only find a local optimum and cannot guarantee that no better global value exists; Helmut Ratschek and Jon George Rokne developed branch and bound methods, which until then had only been applied to integer values, by using intervals to provide applications for continuous values.
In 1988, Rudolf Lohner developed Fortran-based software for reliable solutions for initial value problems using ordinary differential equations.
The journal Reliable Computing (originally Interval Computations) has been published since the 1990s, dedicated to the reliability of computer-aided computations. As lead editor, R. Baker Kearfott, in addition to his work on global optimisation, has contributed significantly to the unification of notation and terminology used in interval arithmetic.
In recent years work has concentrated in particular on the estimation of preimages of parameterised functions and to robust control theory by the COPRIN working group of INRIA in Sophia Antipolis in France.
There are many software packages that permit the development of numerical applications using interval arithmetic. These are usually provided in the form of program libraries. There are also C++ and Fortran compilers that handle interval data types and suitable operations as a language extension, so interval arithmetic is supported directly.
Since 1967, Extensions for Scientific Computation (XSC) have been developed at the University of Karlsruhe for various programming languages, such as C++, Fortran and Pascal. The first platform was a Zuse Z23, for which a new interval data type with appropriate elementary operators was made available. There followed in 1976 Pascal-SC, a Pascal variant on a Zilog Z80 that made it possible to create fast, complicated routines for automated result verification. Then came the Fortran 77-based ACRITH-XSC for the System/370 architecture (FORTRAN-SC), which was later delivered by IBM. Starting from 1991 one could produce code for C compilers with Pascal-XSC; a year later the C++ class library C-XSC followed, supporting many different computer systems. In 1997, all XSC variants were made available under the GNU General Public License. At the beginning of 2000, C-XSC 2.0 was released under the leadership of the working group for scientific computation at the Bergische University of Wuppertal to correspond to the improved C++ standard.
Another C++-class library was created in 1993 at the Hamburg University of Technology called Profil/BIAS (Programmer's Runtime Optimized Fast Interval Library, Basic Interval Arithmetic), which made the usual interval operations more user friendly. It emphasized the efficient use of hardware, portability and independence of a particular presentation of intervals.
The Boost collection of C++ libraries contains a template class for intervals. Its authors are aiming to have interval arithmetic in the standard C++ language.
The Frink programming language has an implementation of interval arithmetic that handles arbitrary-precision numbers. Programs written in Frink can use intervals without rewriting or recompilation.
Gaol is another C++ interval arithmetic library that is unique in that it offers the relational interval operators used in interval constraint programming.
The Moore library is an efficient implementation of interval arithmetic in C++. It provides intervals with endpoints of arbitrary precision and is based on the "concepts" feature of C++.
The Julia programming language has an implementation of interval arithmetic along with high-level features, such as root-finding (for both real and complex-valued functions) and interval constraint programming, via the ValidatedNumerics.jl package.
In addition, computer algebra systems such as FriCAS, Mathematica, Maple, Maxima and MuPAD can handle intervals. The Matlab extension Intlab builds on BLAS routines, and the toolbox b4m provides a Profil/BIAS interface. Moreover, the software Euler Math Toolbox includes an interval arithmetic.
A library for the functional language OCaml was written in assembly language and C.
A standard for interval arithmetic, IEEE Std 1788-2015, was approved in June 2015. Two reference implementations are freely available. These have been developed by members of the standard's working group: the libieeep1788 library for C++, and the interval package for GNU Octave.
A minimal subset of the standard, IEEE Std 1788.1-2017, was approved in December 2017 and published in February 2018. It should be easier to implement and may speed production of implementations.
Several international conferences and workshops on the subject take place every year. The main conference is probably SCAN (International Symposium on Scientific Computing, Computer Arithmetic, and Verified Numerical Computation), but there are also SWIM (Small Workshop on Interval Methods), PPAM (International Conference on Parallel Processing and Applied Mathematics) and REC (International Workshop on Reliable Engineering Computing).
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous, and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
In probability theory, the expected value of a random variable , denoted or , is a generalization of the weighted average, and is intuitively the arithmetic mean of a large number of independent realizations of . The expected value is also known as the expectation, mathematical expectation, mean, average, or first moment. Expected value is a key concept in economics, finance, and many other subjects.
In mathematics, a product is the result of multiplication, or an expression that identifies factors to be multiplied. For example, 30 is the product of 6 and 5, and is the product of and .
In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. For most common functions, the function and the sum of its Taylor series are equal near this point. Taylor's series are named after Brook Taylor who introduced them in 1715.
In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are added, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed. The theorem is a key concept in probability theory because it implies that probabilistic and statistical methods that work for normal distributions can be applicable to many problems involving other types of distributions.
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample in the sample space can be interpreted as providing a relative likelihood that the value of the random variable would equal that sample. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0, the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would equal one sample compared to the other sample.
In mathematics, the affinely extended real number system is obtained from the real number system by adding two infinity elements: and where the infinities are treated as actual numbers. It is useful in describing the algebra on infinities and the various limiting behaviors in calculus and mathematical analysis, especially in the theory of measure and integration. The affinely extended real number system is denoted or or
In mathematics, a Fourier series is a periodic function composed of harmonically related sinusoids, combined by a weighted summation. With appropriate weights, one cycle of the summation can be made to approximate an arbitrary function in that interval. As such, the summation is a synthesis of another function. The discrete-time Fourier transform is an example of Fourier series. The process of deriving weights that describe a given function is a form of Fourier analysis. For functions on unbounded intervals, the analysis and synthesis analogies are Fourier transform and inverse transform.
In calculus, and more generally in mathematical analysis, integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative. It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation.
In statistical mechanics, the Fokker–Planck equation is a partial differential equation that describes the time evolution of the probability density function of the velocity of a particle under the influence of drag forces and random forces, as in Brownian motion. The equation can be generalized to other observables as well. It is named after Adriaan Fokker and Max Planck, and is also known as the Kolmogorov forward equation, after Andrey Kolmogorov, who independently discovered the concept in 1931. When applied to particle position distributions, it is better known as the Smoluchowski equation, and in this context it is equivalent to the convection–diffusion equation. The case with zero diffusion is known in statistical mechanics as the Liouville equation. The Fokker–Planck equation is obtained from the master equation through Kramers–Moyal expansion.
In probability theory and statistics, the moment-generating function of a real-valued random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. However, not all random variables have moment-generating functions.
In mathematics, the Radon transform is the integral transform which takes a function f defined on the plane to a function Rf defined on the (two-dimensional) space of lines in the plane, whose value at a particular line is equal to the line integral of the function over that line. The transform was introduced in 1917 by Johann Radon, who also provided a formula for the inverse transform. Radon further included formulas for the transform in three dimensions, in which the integral is taken over planes. It was later generalized to higher-dimensional Euclidean spaces, and more broadly in the context of integral geometry. The complex analogue of the Radon transform is known as the Penrose transform. The Radon transform is widely applicable to tomography, the creation of an image from the projection data associated with cross-sectional scans of an object.
In mathematics, more specifically in multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function.
In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
In mathematics, a norm is a function from a real or complex vector space to the nonnegative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance of a vector from the origin is a norm, called the Euclidean norm, or 2-norm, which may also be defined as the square root of the inner product of a vector with itself.
In real analysis, the projectively extended real line, is the extension of the set of the real numbers, by a point denoted ∞. It is thus the set with the standard arithmetic operations extended where possible, and is sometimes denoted by The added point is called the point at infinity, because it is considered as a neighbour of both ends of the real line. More precisely, the point at infinity is the limit of every sequence of real numbers whose absolute values are increasing and unbounded.
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers ℝ, or a subset of ℝ that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers.
The dyadic transformation is the mapping of the unit interval to itself that sends x to 2x mod 1; it is also known as the bit-shift map.
In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution.
In mathematics, a line integral is an integral where the function to be integrated is evaluated along a curve. The terms path integral, curve integral, and curvilinear integral are also used; contour integral is used as well, although that is typically reserved for line integrals in the complex plane. | https://wikimili.com/en/Interval_arithmetic | 21 |
39 | Gross Domestic Product (GDP) is the total monetary value of the final goods and services produced within a country during a specified period of time, normally a year. In simple terms, GDP is the measure of the country's economic output in a year. In India, contributions to GDP are mainly divided into three broad sectors — agriculture, industry, and services. GDP is measured at market prices, and a base year is used for the computation. The GDP growth rate measures how fast the economy is growing. It does this by comparing the country's gross domestic product in one quarter with that in the previous one, and with the same quarter of the previous year.
The GDP growth rate is driven by GDP’s four components. The main driver is personal consumption, which includes the critical sector of retail sales. The second component is business investment, including construction and inventory levels. The third is government spending whose largest categories are social security benefits, defense spending, and medicare benefits. The government often increases spending to jump-start the economy during a recession. The fourth is net trade.
When the economy is expanding, the GDP growth rate is positive. If the economy grows, so do businesses, jobs, and personal income. If it contracts, then businesses hold off investing in new purchases. They delay hiring new employees until they are confident that the economy will improve. Those delays further depress the economy. Without jobs, consumers have less money to spend. If the GDP growth rate turns negative, the country's economy is said to be in a state of recession.
The GDP growth rate is the most important indicator of economic health. It changes during the four phases of the business cycle — peak, contraction, trough, and expansion.
Nominal GDP is the value of all final goods and services that an economy produces during a given year; it is not adjusted for inflation. It is calculated using the prices current in the year in which the output is produced. Because it reflects both price and quantity changes, nominal GDP can change from one period to the next even when the quantity of output stays constant, simply because prices have changed.
Real GDP, on the other hand, is the total value of all final goods and services that the economy produces during a given year, adjusted for inflation. It is calculated using the prices of a selected base year. To calculate real GDP, you must determine how much of the change in GDP is due to inflation since the base year and divide that inflation out for each year. Real GDP therefore remains unchanged when prices change but output does not.
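A toy numerical illustration of the distinction, using a hypothetical two-good economy (the goods, prices and quantities are invented purely for the example):

```python
# Hypothetical two-good economy; 2011 is taken as the base year.
base_prices = {"wheat": 10.0, "steel": 50.0}   # base-year prices per unit
curr_prices = {"wheat": 12.0, "steel": 55.0}   # current-year prices per unit
output = {"wheat": 100, "steel": 20}           # quantities produced in the current year

nominal_gdp = sum(curr_prices[g] * q for g, q in output.items())   # at current prices
real_gdp = sum(base_prices[g] * q for g, q in output.items())      # at base-year prices
deflator = 100 * nominal_gdp / real_gdp                            # implicit price deflator

print(nominal_gdp, real_gdp, round(deflator, 1))   # 2300.0 2000.0 115.0
```

If the same quantities were produced the following year at still higher prices, nominal GDP would rise again while real GDP stayed at 2000, which is exactly the sense in which real GDP strips out inflation.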
In January 2015, the government moved to the new base year of 2011-12 from the earlier base year of 2004-05 for national accounts. The base year of national accounts had previously been revised in January 2010. In the new series, the Central Statistics Office (CSO) did away with GDP at factor cost and adopted the international practice of valuing industry-wise estimates as gross value added (GVA) at basic prices.
| https://www.olinone.in/blog-content?title=What-is-Gross-Domestic-Product-(GDP) | 21
146 | Taxes are mandatory payments by individuals and corporations to government. They are levied to finance government services, redistribute income, and influence the behaviour of consumers and investors. The Constitution Act, 1867 gave Parliament unlimited taxing powers and restricted those of the provinces to mainly direct taxation (taxes on income and property, rather than on activities such as trade). Personal income tax and corporate taxes were introduced in 1917 to help finance the First World War. The Canadian tax structure changed profoundly during the Second World War. By 1946, direct taxes accounted for more than 56 per cent of federal revenue. The federal government introduced a series of tax reforms between 1987 and 1991; this included the introduction of the Goods and Services Tax (GST). In 2009, the federal, provincial and municipal governments collected $585.8 billion in total tax revenues.
Today, of the various methods available for financing government activities, only taxation payments are mandatory. Taxes are imposed on individuals, business firms and property. They are used to finance public services or enable governments to redistribute resources. Taxation allows governments to increase expenditures without causing price inflation, because private spending is reduced by an equivalent amount.
The Constitution Act, 1867 gave Parliament unlimited taxing powers. It also restricted those of the provinces to mainly direct taxation (taxes on income and property, rather than on activities such as trade). The federal government was responsible for national defence and economic development; the provinces for education, health, social welfare and local matters which then involved only modest expenditures. (See Distribution of Powers.) The provinces needed access to direct taxation mainly to enable their municipalities to levy property taxes.
Early 20th Century
For more than 50 years after Confederation, customs and excise duties provided the bulk of federal revenues; by 1913, they provided more than 90 per cent of the total. In 1917, however, to help finance the First World War, Parliament introduced personal income tax and corporate taxes. In 1920, a manufacturers’ sales tax and other sales taxes were also introduced.
Provincial revenue at this time came primarily from licences and permits; as well as the sales of commodities and services. In addition, the provinces received substantial federal subsidies. (See also Transfer Payments.) They hesitated to impose direct taxes; but by the late 1800s, they were taxing business profits and successions. Taxes on real and personal property were the bulwark of local government finance. By 1930, total municipal revenues surpassed those of the federal government.
The Great Depression bankrupted some municipalities and severely damaged provincial credit. Customs and excise duties declined by 65 per cent from 1929 to 1934. Parliament resorted more to personal and corporate taxation; it also raised sales taxes dramatically. Before the Depression was over, all provinces were taxing corporate income. All but two provinces levied personal income taxes, and two had retail sales taxes.
Second World War
The Canadian tax structure changed profoundly during the Second World War. To distribute the enormous financial burden of the war equitably, to raise funds efficiently and to minimize the impact of inflation, the major tax sources were gathered under a central fiscal authority. In 1941, the provinces agreed to surrender the personal and corporate income tax fields to the federal government for the duration of the war and for one year after. In exchange, they received fixed annual payments. (See Transfer Payments.) In 1941, the federal government introduced succession duties (on the transfer of assets after death). An excess profits tax was imposed. Other federal taxes increased drastically.
By 1946, direct taxes accounted for more than 56 per cent of federal revenue. The provinces received grants, and the yields from gasoline and sales taxes increased substantially. The financial position of the municipalities improved with higher property tax yields. In 1947, contrary to the 1942 plan, federal control was extended to include succession duties as well. However, Ontario and Quebec opted out; they chose to operate their own corporate income tax procedures. There was public pressure for federal action in many areas. The White Paper on Employment and Income advocated federal responsibility for these areas.
As a result, direct taxes became a permanent feature of federal finance. But the provinces also have a constitutional right to these taxes, and there was growing demand for services under provincial jurisdiction, such as health, education and social welfare. The difficulties of reconciling the legitimate claims of both levels of government for income taxation powers have since dominated many federal-provincial negotiations. (See Intergovernmental Finance; Federal-Provincial Relations.)
Late 20th Century
From 1947 to 1962, the provinces, with mounting reluctance, accepted federal grants as a substitute for levying their own direct taxes. In 1962, however, Ottawa reduced its own personal and corporate income tax rates to make tax room available to the provinces. Because taxpayers would pay the same total amount, provincial tax rates would not be risky politically. Further federal concessions between 1962 and 1977 raised the provincial share of income tax revenues significantly.
Provincial income tax calculations were traditionally integrated into federal tax returns. All provinces except Quebec used the federal definition of taxable income. (Quebec has operated its own income tax since 1954.) Provincial tax rates, which now differ considerably among the provinces, were simply applied to basic federal tax. In recent years, that trend has been weakening. For example, in Ontario, personal income tax payable is now calculated separately from federal income tax payable.
Principles of Taxation
The criteria by which a tax system is judged include equity; efficiency; economic growth; stabilization; and ease of administration and compliance. According to one view, taxes, to be fair, should be paid in accordance with the benefits received. But the difficulty of assigning the benefits of certain government expenditures — such as defence — restricts the application of this principle. Provincial gasoline taxes are one instance of the benefit principle, with fuel taxes providing revenue for roads and highways.
According to another view, individuals should be taxed based on their ability to pay (typically indicated by income). The personal income tax is in part a reflection of this principle. Horizontal equity (individuals with equal incomes are treated equally) is not easily achieved; this is because income alone is an imperfect measure of an individual’s ability to pay.
Vertical equity (higher incomes are taxed at higher rates than lower incomes — a principle not at odds with horizontal equity) has been opposed by business and those with higher incomes. They claim that progressive tax rates discourage initiative and investment. At the same time, under a progressive tax system, deductions benefit those with high taxable incomes. In recent years, this realization has led governments to convert many deductions to tax credits. However, this significantly complicates the tax preparation process.
Taxes can affect the rate of economic growth as well. Income taxes limit capital accumulation. Corporate and capital taxes reduce capital investment. Payroll taxes reduce job creation. Businesses in Canada have strongly opposed the full inclusion of capital gains as taxable income. As a result, only 50 per cent of capital gains were taxable when the capital gains tax was introduced in 1972. The inclusion rate for capital gains was raised to 75 per cent by 1990. It was cut back to 50 per cent in 2000.
Shifting and Incidence
Taxes levied on some persons but paid ultimately by others are “shifted” forward to consumers wholly or partly by higher prices; or, they are “shifted” backward on workers if wages are lowered to compensate for the tax. Some part of corporate income taxes, federal sales and excise taxes, payroll taxes and local property taxes is shifted. This alters and obscures the final distribution of the tax burden.
The more elasticity (the percentage change in tax revenue resulting from a change in national income) a tax has, the greater its contribution to economic stabilization policy. Income taxes with fixed monetary exemptions and rate brackets have an automatic stabilization effect. This is because tax collections will grow faster than income in times of economic growth; conversely, they will fall more sharply than income in a recession.
In Canada, the revenue elasticity of personal income tax is weakened by indexing. Since 1974, both personal exemptions and tax brackets have been adjusted according to changes in the Consumer Price Index. But sales taxes have less revenue elasticity because consumption changes less rapidly in response to changes in income, and these taxes are not progressive in relation to consumption. Property tax yields do not grow automatically with rising national income; but they do exhibit some revenue elasticity.
Current Tax System
Taxes levied by all levels of government in Canada account for most of their revenues. The remainder comes from intergovernmental transfers (particularly from the federal government to the provinces), as well as investment income and other sources. In 2009, the federal, provincial and municipal governments collected $585.8 billion in total tax revenues. This amount included income tax; property tax; sales and other consumption taxes; payroll taxes; social security plans and health insurance premiums; and corporate taxes.
Federal Tax Revenues
In 2009, federal government revenues totalled $237.4 billion. Roughly 90 per cent was raised through taxes; $153 billion came from income taxes and $42.5 billion from consumption taxes and a range of other levies.
Personal income tax applies to all income sources of residents of Canada; except for such amounts as gifts, inheritances, lottery winnings, and veterans disability pensions. In addition, certain other amounts, such as workers’ compensation payments and some income-tested or needs-tested social assistance payments, must be reported as income; but they are not taxed.
Provincial Tax Revenues
In 2009, combined provincial government revenues totaled $308 billion. This included $95.7 billion in income taxes; $64.5 billion in consumption taxes; and more than $28 billion in property and other taxes. Another $60.5 billion in provincial revenues came in the form of transfer payments, mostly from the federal government.
Municipal Tax Revenues
In 2009, combined local government revenues across Canada totaled $121.8 billion. This included $46.2 billion in property taxes and approximately $1 billion in other taxes. The largest source of local government revenue was transfers from other levels of government, mostly the provinces; this amount totalled $51.7 billion in 2009. (See also Municipal Finance.)
Municipal tax bases vary considerably throughout Canada. The principal component of the municipal tax base in all provinces and territories is real property; this includes land, buildings and structures. Machinery and equipment affixed to property are included in the property tax base in Newfoundland and Labrador, Nova Scotia, Quebec, Ontario, Manitoba, Alberta (where a municipal business tax does not exist), the Northwest Territories and the Yukon.
In Prince Edward Island, New Brunswick and Saskatchewan, machinery, equipment and other fixtures are liable to property taxation only when they provide services to the buildings. British Columbia removed all machinery and equipment from its property tax base in 1987.
Probably the strongest criticism against the residential property tax is that it is regressive. Since the 1960s, provincial and local government commissions have recommended changes in the existing property tax system to make it more equitable and efficient. In response, Quebec, Ontario, Manitoba, Alberta and British Columbia introduced a property tax credit. Other general reforms have included broadening the tax base by reducing or eliminating some exemptions; as well as implementing equalized assessment.
Federal Tax Reform, 1987–91
In June 1987, the federal government introduced Stage One of Tax Reform. It included proposals for reform of the personal and corporate income tax structure. Bill C-139 took effect on 1 January 1988, although some changes were to be phased in over a longer period.
In line with tax reform in other countries, Bill C-139 broadened the tax base for both personal and corporate income. It also reduced the rates applicable to taxable income. The bill replaced exemptions with credits and eliminated some deductions for personal income tax. It also replaced the 1987 rate schedule, with its 10 brackets and rates ranging from 6 to 34 per cent, with a schedule containing only three brackets with rates of 17 per cent, 26 per cent, and 29 per cent. (As of 2015, there were four federal brackets with rates of 15 per cent, 22 per cent, 26 per cent and 29 per cent. The 2015 rates in the provinces and territories ranged from 4 per cent to 25.75 per cent, according to income level.)
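To make the mechanics of such a bracket schedule concrete, the sketch below computes tax owing by applying each rate only to the slice of income that falls within its bracket. The four rates are the 2015 federal rates quoted above, but the bracket thresholds used here are round hypothetical figures chosen only for illustration, and no credits or provincial tax are included.

```python
# (upper limit of bracket, marginal rate); thresholds are illustrative only.
brackets = [
    (45_000, 0.15),
    (90_000, 0.22),
    (140_000, 0.26),
    (float("inf"), 0.29),
]

def federal_tax(income):
    """Tax under a progressive schedule: each rate applies only to income inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

print(federal_tax(60_000))            # 45000*0.15 + 15000*0.22 = 10050.0
print(federal_tax(60_000) / 60_000)   # average rate ~16.75%, below the 22% marginal rate
```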
Capital Gains, Dividends and Business Taxes
Bill C-139 also capped the lifetime capital gains exemption at $100,000. (As of 2013, the lifetime exemption was $750,000. This was available only to owners of businesses, farms or fishing properties). The Bill also reduced capital cost allowances; introduced limitations on deductible business expenses; and lowered the dividend tax credit.
Goods and Services Tax (GST)
In 1991, the federal government introduced Stage Two of Tax Reform. As part of this reform effort, Ottawa initially proposed a national value-added tax; it would merge the new federal sales tax and the provincial retail sales taxes. The federal government was unable to get approval from the provincial governments for this proposal; instead, it continued with Stage Two of Tax Reform and replaced the manufacturers’ sales tax with the Goods and Services Tax (GST).
The manufacturers’ sales tax was difficult to administer; it was also widely criticized for placing an unequal tax burden on different consumer purchases. With a broadly based, multi-stage sales tax such as the GST, tax is collected from all businesses in stages, as goods (or services) move from primary producers and processors to wholesalers, retailers and finally to consumers.
The GST has some advantages over the old manufacturers’ sales tax. It eliminates tax on business inputs and treats all businesses in a consistent manner. It ensures uniform and effective tax rates on the final sale price of products. Finally, it treats imports in the same manner as domestically produced goods. It also completely removes hidden federal taxes from Canadian exports.
When the GST came into effect on 1 January 1991, provincial governments (except Alberta, which has no Provincial Sales Tax) had to decide how to manage the relationship between the federal sales tax (GST) and their own provincial sales taxes (PST). Quebec and the Atlantic Provinces chose to impose their retail sales tax on the selling price, including the goods and services tax. This raised their retail sales tax base. The remaining provinces, however, opted to impose their sales taxes on the price before the goods and services tax was added. This reduced their retail sales tax base.
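The difference between the two provincial choices is easiest to see with a small worked example. The 7 per cent GST rate is the rate at which the tax was introduced in 1991; the 8 per cent provincial rate and the $100 price are hypothetical.

```python
price = 100.00   # pre-tax selling price (hypothetical)
gst = 0.07       # federal GST rate when introduced in 1991
pst = 0.08       # hypothetical provincial sales tax rate

# PST applied to the GST-inclusive price (Quebec and the Atlantic provinces):
tax_on_gst_inclusive = price * gst + price * (1 + gst) * pst   # 7.00 + 8.56 = 15.56

# PST applied to the price before GST (the remaining PST provinces):
tax_on_pre_gst_price = price * gst + price * pst               # 7.00 + 8.00 = 15.00

print(round(tax_on_gst_inclusive, 2), round(tax_on_pre_gst_price, 2))
```

The first approach effectively broadens the provincial tax base; the second leaves it unchanged.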
Following the introduction of the GST, the federal government continued discussions on its original proposal with several provinces. The aim was to harmonize the GST with the provincial sales taxes. Initially, only Quebec agreed to merge its provincial sales tax with the federal sales tax. As of 2015, the Atlantic Provinces and Ontario had also merged their sales taxes; this created the single Harmonized Sales Tax (HST) in those provinces rather than two sales taxes (PST and GST). Quebec, meanwhile, administers its own harmonized system with the Quebec Sales Tax (QST) and the GST. Alberta is the only province without a sales tax.
Federal-Provincial Fiscal Arrangements
Three major money transfer programs have formed the bedrock of federal-provincial fiscal arrangements: Established Programs Financing (introduced in 1977); equalization payments; and the Canada Assistance Plan (introduced in 1966). (See also Intergovernmental Finance; Federal-Provincial Relations.) Formulas have changed over the years. But the broad goal of these programs and their successors has been to foster more equality among Canada’s regions. This is achieved by transferring funds, via the tax system, from the richer provinces to those that are less well off.
As of 2015 there were four major transfer programs. These included the Canada Health Transfer (CHT); the Canada Social Transfer (CST); equalization payments; and Territorial Formula Financing (TFF). The CHT and CST are federal transfers to the provinces. They support specific policy areas such as health care; post-secondary education; social assistance and social services; early childhood development; and child care. In 2015–16, total federal transfers to provinces were projected to reach $68 billion.
Trends in Government Financing
One of the biggest challenges facing all levels of government in the 21st century has been Canadians’ increasing reluctance to pay the higher taxes needed to fund the services they want. This trend began to manifest as far back as the early 1990s in the mounting dissatisfaction with Prime Minister Brian Mulroney’s government, following implementation of the GST. In the 2008 federal election, Opposition leader Stéphane Dion tried but failed to convince the public of the need for a carbon tax; despite Canadians’ support for action on climate change.
Politicians of all stripes took their cues from these and similar developments. In recent years, governments have thus shifted revenue collection strategies from visible tax increases to less transparent techniques; these include increased borrowing, money printing, and the use of accounting standards that do not require full recording of public-sector liabilities.
Borrowing to finance public spending has proven to be particularly popular among governments around the Western world. (See also Budgetary Process.) This technique enables politicians to take the credit for the benefits the spending generates. Debt burdens, meanwhile, are transferred to the country's youth, who are either too young to vote or not as well organized into strong lobby groups as seniors, the wealthy or various business interests.
Incurring unfunded liabilities (e.g., making promises related to public sector pensions and health care benefits, without setting aside the funds to pay for them) is another favored way of financing spending. It does not involve immediate taxation.
In the era after the 2008–09 recession, “money printing,” in the form of central bank purchases of government bonds, became increasingly popular. This is due to the challenges many governments face in raising money thanks to the near-zero levels of real interest rates they are paying on many of their bonds. The United States, the European Union and the Bank of Japan have all made major initiatives in this area. | https://www.thecanadianencyclopedia.ca/en/article/taxation | 21 |
14 | The gap between the richest and the poorest has never been wider. In many countries, income inequality has increased as poverty also increases.
Income inequality has both economic and political impacts on a nation. These include political polarization, negative attitudes towards the wealthy, slower GDP growth, reduced income mobility, higher poverty rates, and greater household debt.
The Gini Coefficient
The Gini coefficient or Gini index is a statistical measure of distribution used to represent the income or wealth distribution of a country's residents. Developed by Italian statistician Corrado Gini in 1912, the Gini coefficient is the most commonly used measure of inequality.
The Gini coefficient measures the distribution of incomes across income percentiles. The coefficient ranges from 0 (0%) to 1 (100%), with 0 representing perfect equality and 1 representing perfect inequality. In a country where everyone has the same income, the Gini coefficient would be 0. If a single resident earned all of the income while everyone else earned nothing, the coefficient would be 1.
Mathematically, the Gini coefficient is defined based on the Lorenz curve. The Lorenz curve plots the percentiles of the population on the horizontal axis of the graph according to income or wealth, whichever is being measured. The cumulative income or wealth of the population is plotted on the vertical axis.
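For a small list of individual incomes, the coefficient can be computed directly from the mean absolute difference between all pairs, as in the sketch below; statistical agencies work instead with large weighted survey samples, so this is only a minimal illustration.

```python
def gini(incomes):
    """Gini coefficient of a list of non-negative incomes (0 = perfect equality).

    Uses the mean-absolute-difference form:
        G = sum_i sum_j |x_i - x_j| / (2 * n^2 * mean).
    """
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))      # 0.0: everyone earns the same
print(gini([0, 0, 0, 100]))        # 0.75: one person earns everything (tends to 1 as n grows)
print(gini([20, 30, 40, 50, 60]))  # 0.2: a mildly unequal distribution
```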
It is important to note that while the Gini coefficient is a useful tool for analyzing wealth or income distribution, is not an absolute measurement of a country’s wealth. High-income and low-income countries can have the same Gini coefficient. Additionally, the Gini coefficient may be inaccurate and overstate income inequality due to limitations such as a lack of reliable and up-to-date GDP and income information.
1. Lesotho

Lesotho has a Gini coefficient of .632, the highest income inequality in the world. Lesotho is a lower-middle-income country with high poverty and unemployment rates. Over the past two decades, Lesotho has reduced its poverty rate significantly, making strides to reduce its Gini coefficient and create more equality; however, it remains the country with the most unequal income distribution in the world.
2. South Africa
With a Gini coefficient of .625, South Africa is the second-most unequal country in the world. In South Africa, the wealthiest 10% own 71% of the wealth, while the poorest 60% own just 7%. Additionally, about 55.5% of South Africans live in poverty, earning less than $83 per month.
3. Haiti

Haiti's Gini coefficient is .608. The top 20% of households in Haiti hold 4% of the total wealth in the country. Poverty is high in Haiti, with about 59% of Haitians living on less than $2 per day, and GDP growth is very slow. With only about 50% of children attending school, the lack of education in Haiti has made it difficult for about two-thirds of people to find formal jobs that pay them well.
4. Botswana

One of the world's poorest countries at its independence in 1966, Botswana has since made great strides in development and expects to become a high-income country by 2036. Despite this, Botswana has a Gini coefficient of .605. Income inequality is declining in Botswana due to regional convergence caused by fast growth in rural areas.
5. Namibia

Namibia has very high rates of poverty and unemployment, at 29.9% and 26.6% respectively, despite the country's relatively high economic growth. Namibia's Gini coefficient is .597, a big improvement from its 2003 coefficient of .633. It is believed that as much as 70% of the wealth is held by the wealthiest residents. The inequality in Namibia is considered to be a ticking time bomb, and fixing it will involve improving education and investing in creating sustainable jobs.
6. Zambia

Zambia's Gini coefficient is .575. Zambia's income inequality has slowly risen over the past several years. While unemployment is at 10% in Zambia, about 84% of those employed are in the informal sector, such as agriculture (which occupies about 50% of the population), with low earnings. Those employed in the formal sector earn about 2.5 times what informal employees earn. This is, unsurprisingly, rooted in low levels of education or a lack of it.
7. Comoros

Comoros has the seventh-highest income inequality in the world with a Gini coefficient of .559. Poverty is relatively widespread, with about 42.4% of the population living below the poverty line and about 23.5% of the population living in extreme poverty. Income inequality is most obvious in rural areas, where more poverty and intergenerational inequality exist. Despite this, younger men and women are increasingly working in sectors that are more productive and pay better, helping to close the income gap.
8. Hong Kong
Hong Kong’s Gini coefficient is .539, the highest of any developed economy. The income inequality has reached its highest level in more than 40 years and is fueling social tensions among its residents. About 1 in 5 Hong Kong residents live below the poverty line. The wealthiest 10% earn nearly 44 times more than the poorest 10%.
9. Guatemala

Guatemala's Gini coefficient is .53. The wealthiest 10% of the population hold about 50% of the wealth and the poorest 10% own less than 1%. The indigenous, non-Spanish-speaking population has limited access to education and opportunities, and 90% of the indigenous population live below the poverty line. Those living in poverty fall into the cycle of not being able to afford education past elementary school and being forced to work informal jobs that don't pay well.
10. Paraguay

Paraguay has a Gini coefficient of .517. Despite the widespread inequality, Paraguay has reduced its poverty significantly in recent years as its economy continues to grow, and this is expected to continue in the coming years. The unequal distribution of income and wealth is closely related to access to land: big landowners are enjoying the benefits of an improving economy while other sectors of society are excluded from the improved economic situation.
The ten countries with the lowest income inequality are:
- Faroe Islands .227
- Slovakia .237
- Slovenia .244
- Sweden .249
- The Czech Republic .25
- Ukraine .255
- Belgium .259
- Kazakhstan .263
- Belarus .265
- Moldova .268
The United States’ Gini coefficient is .485, the highest it’s been in 50 years according to the U.S. Census Bureau. The U.S. has the highest Gini coefficient among the G7 nations. The top 1% of earners in the United States earn about 40 times more than the bottom 90% of earners. About 33 million U.S. workers earn less than $10 per hour, placing a family of four below the poverty line. | https://worldpopulationreview.com/country-rankings/income-inequality-by-country | 21 |
27 | Prospecting is the first stage of the geological analysis (followed by exploration) of a territory. It is the search for minerals, fossils, precious metals, or mineral specimens. It is also known as fossicking.
Traditionally prospecting relied on direct observation of mineralization in rock outcrops or in sediments. Modern prospecting also includes the use of geologic, geophysical, and geochemical tools to search for anomalies which can narrow the search area. Once an anomaly has been identified and interpreted to be a potential prospect direct observation can then be focused on this area.
In some areas a prospector must also make claims, meaning they must erect posts with the appropriate placards on all four corners of the land they wish to prospect and register this claim before they may take samples. In other areas publicly held lands are open to prospecting without staking a mining claim.
The traditional methods of prospecting involved combing through the countryside, often through creek beds and along ridgelines and hilltops, often on hands and knees looking for signs of mineralisation in the outcrop. In the case of gold, all streams in an area would be panned at the appropriate trap sites looking for a show of 'colour' or gold in the river trail.
Once a small occurrence or show was found, it was then necessary to intensively work the area with pick and shovel, and often via the addition of some simple machinery such as a sluice box, races and winnows, to work the loose soil and rock looking for the appropriate materials (in this case, gold). For most base metal shows, the rock would have been mined by hand and crushed on site, the ore separated from the gangue by hand.
Often, these shows were short-lived, exhausted and abandoned quite soon, requiring the prospector to move onwards to the next and hopefully bigger and better show. Occasionally, though, the prospector would strike it rich and be joined by other prospectors and larger-scale mining would take place. Although these are thought of as "old" prospecting methods, these techniques are still used today but usually coupled with more advanced techniques such as geophysical magnetic or gravity surveys.
In most countries in the 19th and early 20th century, it was very unlikely that a prospector would retire rich even if he was the one who found the greatest of lodes. For instance, Patrick (Paddy) Hannan, who discovered the Golden Mile at Kalgoorlie, died having received only a tiny fraction of the value of the gold contained in the lodes. The same story was repeated at Bendigo, Ballarat, the Klondike, and in California.
In the United States and Canada prospectors were lured by the promise of gold, silver, and other precious metals. They traveled across the mountains of the American West, carrying picks, shovels and gold pans. The majority of early prospectors had no training and relied mainly on luck to discover deposits.
Other gold rushes occurred in Papua New Guinea, Australia at least four times, and in South Africa and South America. In all cases, the gold rush was sparked by idle prospecting for gold and minerals which, when the prospector was successful, generated 'gold fever' and saw a wave of prospectors comb the countryside.
Modern prospectors today rely on training, the study of geology, and prospecting technology.
Knowledge of previous prospecting in an area helps in determining location of new prospective areas. Prospecting includes geological mapping, rock assay analysis, and sometimes the intuition of the prospector.
Metal detectors are invaluable for gold prospectors, as they are quite effective at detecting gold nuggets within the soil down to around 1 metre (3 feet), depending on the acuity of the operator's hearing and skill.
Magnetic separators may be useful in separating the magnetic fraction of a heavy mineral sand from the nonmagnetic fraction, which may assist in the panning or sieving of gold from the soil or stream.
Prospecting pickaxes are used to scrape at rocks and minerals, obtaining small samples that can be tested for trace amounts of ore. Modern prospecting pickaxes are also sometimes equipped with magnets, to aid in the gathering of ferromagnetic ores. Prospecting pickaxes are usually equipped with a triangular head, with a very sharp point.
The introduction of modern gravity and magnetic surveying methods has greatly facilitated the prospecting process. Airborne gravimeters and magnetometers can collect data from vast areas and highlight anomalous geologic features. Three-dimensional inversions of audio-magnetotellurics (AMT) are used to find conductive materials up to a few kilometres into the Earth, which has been helpful in locating kimberlite pipes, as well as tungsten and copper.
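As an illustration of how survey data can "highlight anomalous geologic features", the following sketch separates a synthetic profile into a smooth regional trend and a local residual anomaly by removing a low-order polynomial fit. All numbers are invented, and real interpretation workflows are considerably more involved than this.

```python
import numpy as np

# Synthetic profile: a broad regional gradient plus a narrow local anomaly
# (all values are invented and purely illustrative).
x = np.linspace(0.0, 10.0, 201)                   # distance along survey line (km)
regional = 5.0 + 0.8 * x                          # smooth regional field
local = 1.5 * np.exp(-((x - 6.0) ** 2) / 0.5)     # local anomaly over a buried source
observed = regional + local

# Remove a low-order polynomial fit as an estimate of the regional field.
coeffs = np.polyfit(x, observed, deg=1)
residual = observed - np.polyval(coeffs, x)

# The residual peaks near the buried source at x ~ 6 km.
print("peak residual at x =", round(float(x[np.argmax(residual)]), 2), "km")
```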
Another relatively new prospecting technique uses low frequency electromagnetic (EM) waves for 'sounding' into the Earth's crust. These low frequency waves respond differently based on the material they pass through, allowing analysts to create three-dimensional images of potential ore bodies or volcanic intrusions. The technique is applied to a variety of prospecting targets but is mainly suited to finding conductive materials. So far these low frequency EM techniques have been proven for geothermal exploration as well as for coal bed methane analysis.
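The reason low frequencies are chosen for deep sounding is usually expressed through the electromagnetic skin depth, the depth at which a plane-wave field decays to 1/e of its surface amplitude: lower frequencies and more resistive ground give deeper penetration. A minimal sketch of that standard relation follows; the resistivity and frequency values are arbitrary examples.

```python
import math

MU0 = 4.0e-7 * math.pi  # magnetic permeability of free space (H/m)

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    """EM skin depth: delta = sqrt(2 * rho / (mu0 * omega)), roughly 503 * sqrt(rho / f) metres."""
    omega = 2.0 * math.pi * frequency_hz
    return math.sqrt(2.0 * resistivity_ohm_m / (MU0 * omega))

# Illustrative only: 100 ohm-m ground sounded at 1 Hz versus 1000 Hz.
for f in (1.0, 1000.0):
    print(f"{f:6.0f} Hz -> skin depth ~ {skin_depth_m(100.0, f):7.0f} m")
```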
Geochemical prospecting involves analyzing the chemical properties of rock samples, drainage sediments, soils, surface and ground waters, mineral separates, atmospheric gases and particulates, and even plants and animals. Properties such as trace element abundances are analyzed systematically to locate anomalies.
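A simple, commonly taught way to flag such anomalies is to compare each sample against a background threshold, for example the mean plus two standard deviations of the survey values (often computed on log-transformed data, since assay values tend to be near-lognormal). The sketch below uses invented copper concentrations purely for illustration.

```python
import numpy as np

# Hypothetical stream-sediment copper assays in ppm (values are invented).
cu_ppm = np.array([18, 22, 25, 19, 30, 27, 21, 24, 310, 26, 23, 280, 20], dtype=float)

# Work in log space because trace-element data are commonly near-lognormal.
logs = np.log10(cu_ppm)
threshold = logs.mean() + 2.0 * logs.std()   # background mean + 2 standard deviations

anomalous = cu_ppm[logs > threshold]
print("anomaly threshold ~", round(10 ** threshold), "ppm")
print("anomalous samples:", anomalous)
```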
In archaeology, geophysical survey is ground-based physical sensing techniques used for archaeological imaging or mapping. Remote sensing and marine surveys are also used in archaeology, but are generally considered separate disciplines. Other terms, such as "geophysical prospection" and "archaeological geophysics" are generally synonymous.
A telluric current, or Earth current, is an electric current which moves underground or through the sea. Telluric currents result from both natural causes and human activity, and the discrete currents interact in a complex pattern. The currents are extremely low frequency and travel over large areas at or near the surface of the Earth.
Mining in the engineering discipline is the extraction of minerals from underneath, above or on the ground. Mining engineering is associated with many other disciplines, such as mineral processing, exploration, excavation, geology, and metallurgy, geotechnical engineering and surveying. A mining engineer may manage any phase of mining operations, from exploration and discovery of the mineral resources, through feasibility study, mine design, development of plans, production and operations to mine closure.
Gold prospecting is the act of searching for new gold deposits. Methods used vary with the type of deposit sought and the resources of the prospector. Although traditionally a commercial activity, in some developed countries placer gold prospecting has also become a popular outdoor recreation.
Exploration geophysics is an applied branch of geophysics and economic geology, which uses physical methods, such as seismic, gravitational, magnetic, electrical and electromagnetic at the surface of the Earth to measure the physical properties of the subsurface, along with the anomalies in those properties. It is most often used to detect or infer the presence and position of economically useful geological deposits, such as ore minerals; fossil fuels and other hydrocarbons; geothermal reservoirs; and groundwater reservoirs.
Magnetotellurics (MT) is an electromagnetic geophysical method for inferring the earth's subsurface electrical conductivity from measurements of natural geomagnetic and geoelectric field variation at the Earth's surface. Investigation depth ranges from 300 m below ground by recording higher frequencies down to 10,000 m or deeper with long-period soundings. Proposed in Japan in the 1940s, and France and the USSR during the early 1950s, MT is now an international academic discipline and is used in exploration surveys around the world. Commercial uses include hydrocarbon exploration, geothermal exploration, carbon sequestration, mining exploration, as well as hydrocarbon and groundwater monitoring. Research applications include experimentation to further develop the MT technique, long-period deep crustal exploration, deep mantle probing, and earthquake precursor prediction research.
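At its core, MT converts the ratio of measured electric and magnetic field components at each frequency into an apparent resistivity via the Cagniard formula, ρa = |E/H|² / (μ0·ω). The snippet below is a bare-bones illustration with invented field values; real MT processing works with full impedance tensors, remote referencing, and robust statistics.

```python
import math

MU0 = 4.0e-7 * math.pi  # magnetic permeability of free space (H/m)

def apparent_resistivity(e_v_per_m, h_a_per_m, frequency_hz):
    """Cagniard apparent resistivity: rho_a = |E/H|^2 / (mu0 * omega)."""
    omega = 2.0 * math.pi * frequency_hz
    z = e_v_per_m / h_a_per_m          # scalar impedance estimate (ohms)
    return abs(z) ** 2 / (MU0 * omega)

# Invented example values: a 0.1 Hz sounding.
print(round(apparent_resistivity(1.0e-5, 1.0e-3, 0.1), 1), "ohm-m")
```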
Geophysical survey is the systematic collection of geophysical data for spatial studies. Detection and analysis of geophysical signals forms the core of geophysical signal processing. The magnetic and gravitational fields emanating from the Earth's interior hold essential information concerning seismic activities and the internal structure, so detection and analysis of the electric and magnetic fields is crucial. Because electromagnetic and gravitational waves are multi-dimensional signals, 1-D transformation techniques can be extended to their analysis, and multi-dimensional signal processing techniques also apply.
An aeromagnetic survey is a common type of geophysical survey carried out using a magnetometer aboard or towed behind an aircraft. The principle is similar to a magnetic survey carried out with a hand-held magnetometer, but allows much larger areas of the Earth's surface to be covered quickly for regional reconnaissance. The aircraft typically flies in a grid-like pattern with height and line spacing determining the resolution of the data.
Transient electromagnetics (TEM), also known as time-domain electromagnetics (TDEM), is a geophysical exploration technique in which electric and magnetic fields are induced by transient pulses of electric current and the subsequent decay response is measured. TEM/TDEM methods are generally able to determine subsurface electrical properties, but are also sensitive to subsurface magnetic properties in applications like UXO detection and characterization. TEM/TDEM surveys are a very common surface EM technique for mineral exploration, groundwater exploration, and environmental mapping, used throughout the world in both onshore and offshore applications.
Induced polarization (IP) is a geophysical imaging technique used to identify the electrical chargeability of subsurface materials, such as ore.
The following outline is provided as an overview of and topical guide to geology:
A geologist is a scientist who studies the solid, liquid, and gaseous matter that constitutes the Earth and other terrestrial planets, as well as the processes that shape them. Geologists usually study geology, although backgrounds in physics, chemistry, biology, and other sciences are also useful. Field work is an important component of geology, although many subdisciplines incorporate laboratory work.
Magnetic surveying is one of a number of methods used in archaeological geophysics. Magnetic surveys record spatial variation in the Earth's magnetic field. In archaeology, magnetic surveys are used to detect and map archaeological artefacts and features. Magnetic surveys are used in both terrestrial and marine archaeology.
The Turam method is one of the oldest geophysical electro-magnetic methods used for mineral exploration, devised by Erik Helmer Lars Hedstrom in 1937. Its name is derived from Swedish "TU" (two) and "RAM" (frame), referring to the two receiving coils.
Near-surface geophysics is the use of geophysical methods to investigate small-scale features in the shallow subsurface. It is closely related to applied geophysics or exploration geophysics. Methods used include seismic refraction and reflection, gravity, magnetic, electric, and electromagnetic methods. Many of these methods were developed for oil and mineral exploration but are now used for a great variety of applications, including archaeology, environmental science, forensic science, military intelligence, geotechnical investigation, treasure hunting, and hydrogeology. In addition to the practical applications, near-surface geophysics includes the study of biogeochemical cycles.
The following outline is provided as an overview of and topical guide to geophysics:
Dan Hausel is a polymath of martial arts, geology, writing, astronomy, art, and public speaking: Hall-of-Fame 10th degree black belt grandmaster of Shorin-Ryu Karate and Kobudo, mineral exploration geologist who made several gold, colored gemstone, and diamond deposit discoveries in Alaska, Colorado, Montana and Wyoming, author of more than 600 publications including books, maps, professional papers and magazine articles, public speaker, artist, former astronomy lecturer for the Hansen Planetarium in Utah, and former rock musician.
A primary mineral is any mineral formed during the original crystallization of the host igneous primary rock and includes the essential mineral(s) used to classify the rock along with any accessory minerals. In ore deposit geology, hypogene processes occur deep below the earth's surface, and tend to form deposits of primary minerals, as opposed to supergene processes that occur at or near the surface, and tend to form secondary minerals.
EMIGMA is a geophysics interpretation software platform developed by Petros Eikon Incorporated for data processing, simulation, inversion and imaging as well as other associated tasks. The software focuses on non-seismic applications and operates only on the Windows operating system. It supports files standard to the industry, instrument native formats as well as files used by other software in the industry such as AutoCAD, Google Earth and Oasis montaj. There is a free version of EMIGMA called EMIGMA Basic developed to allow viewing of databases created by licensed users. It does not allow data simulation nor modeling nor data import. The software is utilized by geoscientists for exploration and delineating purposes in mining, oil and gas and groundwater as well as hydrologists, environmental engineers, archaeologists and academic institutions for research purposes. Principal contributors to the software are R. W. Groom, H. Wu, E. Vassilenko, R. Jia, C. Ottay and C. Alvarez.
The Decennial Mineral Exploration Conferences (DMEC) is a Canadian voluntary association dedicated to the advancement of geoscience applied to exploration for mineral resources. The inaugural 1967 conference "Canadian Centennial Conference on Mining and Groundwater Geophysics", held in Ottawa, Canada, was organized by the Geological Survey of Canada as part of the Canadian Centennial. While the original session focussed on mining and groundwater geophysics, the purpose of subsequent conferences expanded to include geochemistry, and other geoscience disciplines as they are applied in mineral exploration.
Slavery had existed in the Middle East for a long time, and Muslim expansion ensured a steady flow of slaves captured in war.
The Old and New Testaments accepted slavery the same way the Qur'an did.
The Qur'an encourages the freeing of slaves, prescribes just and humane treatment of slaves, and encourages owners to give their slaves the chance to buy their freedom.
The freeing of slaves was thought to lead to paradise.
Women slaves were employed as cooks, cleaners, laundresses, and nursemaids.
Some of them performed as singers, musicians, dancers, and reciters of poetry.
Many female slaves were concubines.
Rich merchants and high officials owned many concubines.
Down the economic ladder, concubines often assumed domestic and sexual duties.
The harem was secured by eunuch guards, who kept men out of the women's quarters.
Eunuchs were said to be more manageable and dependable than men with ordinary desires, so Muslims employed eunuchs as secretaries, tutors, and commercial agents.
Male slaves, eunuchs or not, were also set to work as longshoremen on the docks, as oarsmen on ships, in construction crews, in workshops, and in gold and silver mines.
Slaves fought as soldiers.
Slavery in the Islamic world was different from slavery in the Americas.
Race had no connection to slavery among Muslims, who were prepared to take slaves from Europe as well as Africa.
Slavery in the Islamic world was not the basis for plantation agriculture as it was in the southern United States, the Caribbean, and Brazil in the 18th and 19th century.
Slavery was not common in the Islamic world.
Most slaves who were taken from non-Muslims converted to Islam.
To give Muslim slavery the most positive interpretation, one could say that it provided a means to fill certain needs and that it was not segregation.
A few women slaves performed as dancers, singers, and musicians before an elite audience of rulers, officials, and wealthy merchants.
The harem in the royal palace in Samarra was adorned by a wall painting from the ninth century.
Arab tribal law gave women no legal power before Islam.
Parents paid for their daughters, and their husbands could end the union at will.
There were no property or succession rights for women.
The Qur'an wanted to improve the social position of women.
The Qur'an emphasizes moral precepts, not descriptions of social practice, and the text is open to different interpretations.
Modern scholars agree that the Islamic sacred book intended women to be the spiritual equals of men and gave them economic rights.
The early Umayyad period had active roles for women in the religious, economic, and political life of the community.
They owned property, traveled widely, and were involved with men in public religious rituals.
The Islamic ideal of equal value to the community did not last.
The supply of slave women increased.
Some scholars theorize that as wealth replaced ancestry as the main criterion of social status, men more and more viewed women as possessions.
The precepts of the Qur'an were seen in more patriarchal ways as society changed.
In this midsixteenth-century illustration of the interior of a mosque, a screen separates the women who are wearing veils and tending children from the men.
The women can hear what is being said, but the men cannot see them.
Men were seen as more dominant in their marriages.
The Qur'an states that men are in charge of women because Allah made the one to excel the other and they spent their property for the support of women.
Good women guard in secret that which Allah has guarded.
The practices of veiling and seclusion of women have their roots in pre-Islamic times.
Some of the peoples' customs were adopted by the Arab conquerors.
It was probably Byzantine or Persian.
The practice of secluding women is a result of contacts with Persia and other Eastern cultures.
In more prosperous households, women stayed out of sight.
The harem became a symbol of male prestige and prosperity, as well as a way to distinguish upper-class from lower-class women.
A prolific author of more than seventy books, Abu Hamid al-Ghazali was a Persian philosopher, theologian, jurist, and Sufi.
The trend toward more patriarchal readings of Muslim teachings is reflected in his writings.
There are five benefits to marriage: a) children, b) stilling of passion, c) good housekeeping, d) extended family ties, and e) spiritual training.
The purpose of marriage was to continue procreation so that the world should never be devoid of humankind.
For the sake of inner tranquility, it is permissible to marry a slave girl, despite the fact that any offspring will carry slave status, which is a kind of perdition.
Anyone who can afford to marry a free woman is forbidden from marrying a slave.
Slavery for the children is not as grave as the ruination of faith.
The child's slave status is only an annoyance in this life, whereas the sin of fornication leads to the loss of the Life Hereafter.
If a man's nature is so dominated by sexual desire that one wife alone would not suffice to keep him chaste, it is recommended that he take more than one.
All is well if he enjoys the love and mercy of Allah and feels content with his wives.
If not, substitution is recommended.
She won't engage in a tender conversation with just anyone if she is vain.
Certain men would allow their daughters to marry only if the prospective husbands had first seen them.
Everyone knows that visual inspection is only used to distinguish beauty from ugliness.
He forbade the giving of excessive dower.
The marriage guardian has a duty to look at the qualities of the prospective husband.
He should look to the interest of his precious one, and not give her in marriage to a man of bad character, or who will fail to give her all her due, or who is not her equal in lineage.
He showed his approval of her words.
If a man has several wives, he must treat them equally.
If he wants to take one of them with him on a journey, he should draw lots of them.
Allah's Messenger used to do that.
If he wrongs one of them, he should make it up to her.
The husband should kiss.
The Prophet once said that a man should not fall upon his wife like an animal.
A man shouldn't be too happy at getting a boy or sad at getting a girl, for he doesn't know which one will be better for him.
Girls give more peace and the reward they bring is more plentiful.
It is permissible to divorce, but divorce is not pleasing to Allah.
As in medieval Europe and traditional India and China, marriage in Muslim society was considered too important to be left to the romantic feelings of the young.
The prospective bride and groom had to find suitable partners before the contract was finalized.
Marriages were arranged after puberty because the bride needed to be a virgin.
Ten to fifteen years older were the ages of the husbands.
A long period of fertility was ensured by youthful marriages.
A wife's responsibilities were dependent on her husband's wealth.
In rural life, a farmer's wife helped in the fields, ground the corn, carried water, and prepared food.
Shopkeepers' wives helped in business.
In an upper-class household, the wife supervised servants, looked after all domestic arrangements, and did whatever was needed for her husband's comfort.
The children were the wife's special domain.
A mother had authority over her children.
As in Chinese culture, the prestige of the young wife depended on the production of children as quickly as possible.
A wife's failure to have children was one of the main reasons for a man to divorce or take a second wife.
Like the Jewish tradition, Muslim law allows divorce.
Divorce is not encouraged.
The Prophet said that of all permitted things, divorce is the most hateful to God.
In contrast to the traditional Christian view of sexual activity as shameful and only a cure for lust, Islam maintains a healthy acceptance of sexual pleasure for both males and females.
The Qur'an allows a man to have four wives if they are treated justly.
The majority of Muslim males were monogamous because they couldn't afford to support more than one wife.
The main commercial routes of the Islamic world were waterways.
Islam spread throughout North and East Africa, the Balkans, the Caucasus, Central Asia, India, and the islands of Southeast Asia by the year 1500.
Muslim merchants brought their religion to their trade networks.
They were active in the Indian Ocean before Europeans.
Cairo was a major hub for trade in the Mediterranean.
Foreign merchants sailed up the Nile to the Aswan region, traveled east from Aswan by caravan to the Red Sea, and then sailed down the Red Sea to India.
They exchanged textiles, glass, gold, silver, and copper for Asian spices, dyes, and drugs.
Muslims and Jews dominated the trade with India.
The bill of exchange, a written order from one person to another to pay a specified sum of money, and the idea of the joint stock company were both developed by Muslims.
Improvements in technology helped trade.
Navigation of the Arabian Sea and the Indian Ocean was greatly aided by the adoption of the magnetic compass from the Chinese, an instrument for determining directions at sea by means of a magnetic needle turning on a pivot.
The construction of larger ships led to a shift in long-distance cargo from luxury goods such as pepper, spices, and drugs to bulk goods such as sugar, rice, and timber.
The wood for Arab ships came from western India.
The Persian and Arab seamen sailed down the east coast of Africa in the late twelfth century to establish trading towns.
Merchants linked Zimbabwe in southern Africa with the Indian Ocean trade and the Middle Eastern trade in these urban centers.
Useful plants were spread as a result of the extensive trade through Islamic lands.
Southeast Asia and India supplied fruit to Muslim Spain.
The prosperity of the Abbasid era was due to the value of this trade.
Arab and Persian merchants were active in the Indian Ocean during the time of Islam.
Wares and products from India and China were in high demand, and they were shipped in stages through a series of exchanges.
The trade is illuminated by traveler accounts and discoveries.
The man known as the Merchant traveled to India and China from the coast of Persia.
He wrote about the piracy and extreme weather he experienced on his many daring voyages, as well as the life of foreign traders in China.
The rarity of Chinese goods is attributed to the frequent fires at Khanfu, the port for ships and the trading center for merchandise of the Arabs and the Chinese.
Sometimes, the wind throws them on to al-Yaman or other places and they sell their goods there; sometimes they make a long stop to repair their ships, and so on.
A native of Jerusalem, al-Muqaddasi was an Arab geographer who took many journeys to learn about distant regions.
Suhar is a flourishing and populous city.
It is a city with a lot of merchants.
The markets are located along the shore of the sea.
The houses in Suhar are built of burned bricks and wood.
The Persians are masters at it.
The ninth-century Arab or Persian ship that sank in Indonesian waters on its way from China to the Middle East was discovered in 1998.
The ship was 70 feet long and 16 feet wide.
African timber and sails of woven palm leaves were used in the creation of this replica.
The pottery was recovered from the ninth-century Belitung wreck.
The bulk of the cargo was mass-produced Chinese ceramics.
The cargo included an octagonal gold cup decorated with Central Asian figures, as well as silver boxes, silver ingots, and star anise.
A small sample of ceramics can be seen in this photo.
Marco Polo traveled through Southeast Asia on his return trip from China to Italy in 1295.
He talks about the situation on the west coast of India.
There are more than 100 ships that cruise out every year as corsairs, seizing other ships and robbing the merchants in Gujarat and Malabar.
They are pirates on a large scale.
The merchants, who are familiar with the habits of the corsairs and know that they are going to encounter them, are not afraid to face them after they have been detected.
They damage their attackers by defending themselves.
One should be captured now and then.
When the corsairs capture a merchant ship, they help themselves to the ship and the cargo, but they don't hurt the men.
They told them to fetch another cargo.
There was a ship that was found in 2003 not far from where the Belitung wreck was found.
Lashed boards were used to build it, like the Belitung ship.
There were ceramics from Thailand, Vietnam, Persia, and China as well as Chinese bronze mirrors and Indonesian bronze statues.
glassware came from Egypt, Iran, and Mesopotamia.
Using the sources above, along with what you have learned in class and in this chapter, write a short essay on maritime trade in the Indian Ocean between 800 and 1400 and its historical significance.
The arts and sciences were made possible by long-distance trade and Sufism brought a new spiritual and intellectual tradition.
Travelers to Baghdad would have seen slave markets.
In 1354, the Sultan of Morocco ordered a sociologist to write an account of the travels of Abu 'Abdallah Ibn Battuta, who had traveled through most of the Islamic world.
The two men worked together.
A travel book written in Arabic was hailed as the richest account of fourteenth-century Islamic culture.
A family of legal scholars had a child named Ibn Battuta.
He gained knowledge of Muslim law, Arabic, and social polish as a youth, and these qualities are considered essential for a civilized Muslim gentleman.
He left Tangiers at the age of twenty-one to go to Mecca.
He went to Alexandria, Cairo, Damascus, and Medina in North Africa.
He kissed the Holy Stone at the Ka'ba, and performed the ritual prayers after reaching Mecca.
He went to see more of the world.
In the next four years, Ibn Battuta traveled to Iraq and to Basra and Baghdad in Persia, then returned to Mecca before sailing down the coast of Africa.
The Persian Gulf region traveled by land to Mecca.
He decided to go to India by way of Egypt, Syria, and Anatolia, across the Black Sea to the plains of western Central Asia, and then back to the Asian steppe.
The sultan of Delhi had Ibn Battuta serve as a judge.
He was chosen by the sultan to lead a diplomatic mission to China.
After the wreck of the expedition off the southeastern coast of India, Ibn Battuta traveled through southern India, Sri Lanka, and the Maldive Islands.
After stopping in Bengal and Sumatra, he traveled to the southern coast of China, under the rule of the Mongols.
Returning to Mecca in 1346, he headed for home.
He traveled about 75 thousand miles after crossing the Strait of Gibraltar and taking a camel caravan to his final destination.
Ibn Battuta was interested in seeing and understanding the world.
He went to the mosques and madrasas to look for the learned jurists.
He was fascinated by the Lighthouse of Alexandria, which was in ruins, as well as the harbor at Kaffa, where two hundred Genoese ships were loaded with silks and slaves for the markets at Venice, Cairo, and Damascus.
An iron constitution is what Ibn Battuta must have had.
His thirst for adventure was stronger than his fear of storms and pirates at sea.
The cities of Baghdad and Cordoba, at their peak in the tenth century, were the finest examples of cosmopolitan Muslim civilization.
There was a kaleidoscope of races, creeds, costumes, and cultures on Baghdad's streets.
There was a wide range of goods from all over the world in the shops and marketplaces.
The court was presided over by the caliph.
He invited writers, dancers, musicians, poets, and artists to live in Baghdad, and he is said to have given one singer one hundred thousand silver pieces for a single song.
Shahryar, the legendary king of Samarkand, is the focus of the central story of this fictional collection, The Thousand and One Nights, as he tries to keep his new bride, Scheherazade, from being unfaithful like his first wife.
She entertained him with one tale a night for 1,001 nights in an effort to delay her execution.
In the end, her husband pardons her.
The cultural leadership of the Islamic world was up for grabs.
With a population of about 1 million, Cordoba had over 200,000 houses for ordinary people and over 60,000 mansions for generals, officials and the wealthy.
Thousands of weavers produced silks, woolens, and brocades that were internationally famous.
There were 27 free schools in Cordoba and a library with 400,000 volumes.
Medicine and surgery, music, philosophy, and mathematics are some of the things that Cordoba's scholars made contributions to.
The Indian game of chess entered western Europe through Cordoba and Persia.
The contemporary Saxon nun Hrotsvitha of Gandersheim said that Cordoba was the "ornament of the world".
Formal education for young men involved reading, writing, and the study of the Qur'an was important for its religious message.
The schools were urban phenomena.
They were endowed with salaries for teachers, stipends for students, and living accommodations by wealthy merchants.
In Islamic higher education, what mattered was the character and intellectual reputation of the individual teacher rather than that of the institution.
Students built their careers on their teachers' reputation.
Learning depended on being able to remember things.
A boy in primary school memorised the entire Qur'an.
A student learns an introductory work in one of the branches of knowledge in adolescence.
He looked at the texts in detail.
The teacher examined the student on the previous day's learning to see if he understood what he had memorised.
The students had to record the teacher's commentary on a particular text in order to learn to write.
The main focus was on the oral transmission of knowledge.
The teacher issued a certificate to the student if he studied the book or collection of traditions with his teacher because Islamic education focused on particular books.
The student was able to transmit a text to his friends on the authority of his teacher.
The Muslim transmission and improvement of papermaking techniques had a special significance to education.
After Chinese papermaking techniques spread west, Muslim papermakers improved on them by adding starch to the sheets.
Papermaking had a huge impact on the collection of knowledge.
Fatwas are legal opinions issued by judges in the public courts, who were trained in the Qur'an, hadith, or some text forming part of the shari'a.
Islamic culture was mixed on the issue of female education.
The law excluded women from participating in legal, religious, or civic life because of the basic Islamic principle that "men are the guardians of women, because God has set the one over the other."
Tradition holds that Muhammad said, "The seeking of knowledge is a duty of every Muslim," but educational theorists wanted men to study in a sexually isolated environment.
Many young women were educated at home.
According to one biographical dictionary covering the lives of 1,125 women, 411 of them had received a certificate after studying the Qur'an.
There are some striking similarities between Islamic higher education in the 12th to 14th century and that available in Europe or China at the same time.
In Europe and the Islamic countries, the religious authorities ran most schools, while in China the government, local villages, and lineages ran schools.
The personal relationship of teacher and student was seen as key to education in the Islamic world.
The degree granted by the university was the reward for completing a course of study.
At the very highest levels in China, the state ran a civil service examination system that rewarded achievement with appointments in the state bureaucracy.
In Muslim culture, the teacher's evaluation was more important than the school or the state.
There were some striking similarities in the practice of education.
Students in all three cultures had to master a language.
Basic religious, legal, or philosophical texts were the focus of education in all three cultures.
The acquisition and transmission of learning was a big part of all three cultures.
Teachers in all three societies lectured on particular passages, and leading teachers might disagree about the correct interpretations of a particular text, forcing students to question, to think critically, and to choose among differing opinions.
Religious scholars debated the correct interpretation of a particular text, despite the fact that Islamic education relied heavily on memorization of the Qur'an.
The students in this book are learning to think critically and creatively.
The creation of a common culture in the Islamic world was dependent on the spread of the Arabic language among all the people.
The spread of the Arabic language was more important in fostering cultural change after the establishment of the Islamic empire, according to recent scholarship.
The linguistic conversion to Arabic was much quicker than the gradual religious conversion to Islam.
The official language of the state was Arabic.
The Islamic rulers did not force the Greeks and Persians to change their religions.
The conquered peoples were compelled to use the Arabic language.
Over a large part of the world, Arabic produced a cohesive and international culture.
Gregory Bar-Hebraeus, a bishop of the Syrian Orthodox Church, wrote in Arabic.
Modern scholars consider the years from 800 to 1300 to be one of the most brilliant periods in the history of the world.
The basis for later Eastern and Western research was formed from the Greek and Indian findings.
The Muslim medical knowledge was much better than the West's.
The Baghdad physician al-Razi was the first to make the distinction between smallpox and measles, and his work was translated into Latin and spread in the West.
He talked about the cauterization of wounds and the crushing of stones in the bladder in an important work.
Muslim medical science reached its peak in the work of Ibn Sina of Bukhara, known in the West as Avicenna.
The funeral procession of the hero Isfandyar is depicted on this page.
The mourners are wailing and pulling at their hair, which is a sign of mourning.
The poem was written by Ferdowsi.
Muslim scholars wrote works on geography, jurisprudence, and philosophy.
The first Muslim thinker to try to harmonize the principles of ethical and social conduct was 870.
Averroes, also known as Ibn Rushid, was a judge in Andalusia and later a royal court physician.
Gregory Bar-Hebraeus, a Syrian writer in the 13th century, wrote on a wide range of subjects, including religion and philosophy, but also on more playful subjects, and his works include a large collection of amusing stories.
Persian, Hebrew, Indian, and Christian wise men are some of the characters in these tales.
Two of his fables are compared with Bar-Hebraeus's tales.
A wolf, a fox, and a lion banded together to slay a goat, a deer, and a hare.
The lion leaped upon the wolf and killed him.
The hare said, "I was born before God created the heavens and the earth," and the fox said, "You are right, for I was present when you were born."
The Birds and the Beasts went to war, and each side was in turn the conqueror.
A Bat, fearing the uncertain issues of the fight, always fought on the side that he felt was the strongest.
His conduct was obvious to both people when peace was declared.
Being condemned by each for his treachery, he was driven forth from the light of day and hid in dark hiding places.
A Wolf was carrying a lamb he stole from a fold.
The lamb was taken from him by a Lion who met him in the path.
Not to be confused with Patriotism.
Nationalism is an ideology that holds that a nation is the fundamental unit for human social life, and takes precedence over any other social and political principles. Nationalism makes certain political claims based upon this belief: above all, the claim that the nation is the only legitimate basis for the state, that each nation is entitled to its own state, and that the borders of the state should be congruent with the borders of the nation. Nationalism refers to both a political doctrine and any collective action by political and social movements on behalf of specific nations. Nationalism has had an enormous influence upon world history, since the nation-state has become the dominant form of state organization. Most of the world's population now lives in states which are, at least nominally, nation-states. Historians also use the term 'nationalism' to refer to this historical transition, and to the emergence of nationalist ideology and movements.
Principles of Nationalism
This section sets out the components of nationalist ideology as seen by nationalists themselves. (Academic theories of nationalism are sceptical of some of these beliefs and principles, see below).
Nationalism is a form of universalism when it makes universal claims about how the world should be organised, but it is particularistic with regard to individual nations. The combination of both is characteristic for the ideology, for instance in these assertions:
- "in a nation-state, the language of the nation should be the official language, and all citizens should speak it, and not a foreign language."
- "the official language of Denmark should be Danish, and all Danish citizens should speak it."
The universalistic principles bring nationalism into conflict with competing forms of universalism, the particularistic principles bring specific nationalist movements into conflict with rival nationalisms - for instance, the Danish-German tensions over their reciprocal linguistic minorities.
The starting point of nationalism is the existence of nations, which it takes as a given. Nations are typically seen as entities with a long history: most nationalists do not believe a nation can be created artificially. Nationalist movements see themselves as the representative of an existing, centuries-old nation. However, some theories of nationalism imply the reverse order - that the nationalist movements created the sense of national identity, and then a political unit corresponding to it, or that an existing state promoted a 'national' identity for itself.
Nationalists see nations as an inclusive categorisation of human beings - assigning every individual to one specific nation. In fact, nationalism sees most human activity as national in character. Nations have national symbols, a national culture, a national music and national literature; national folklore, a national mythology and - in some cases - even a national religion. Individuals share national values and a national identity, admire the national hero, eat the national dish and play the national sport.
Nationalists define individual nations on the basis of certain criteria, which distinguish one nation from another; and determine who is a member of each nation. These criteria typically include a shared language, culture, and/or shared values which are predominantly represented within a specific ethnic group. National identity refers both to these defining criteria, and to the shared heritage of each group. Membership in a nation is usually involuntary and determined by birth. Individual nationalisms vary in their degree of internal uniformity: some are monolithic, and tolerate little variance from the national norms. Academic nationalism theory emphasises that national identity is contested, reflecting differences in region, class, gender, and language or dialect. A recent development is the idea of a national core culture, in Germany the Leitkultur, which emphasises a minimal set of non-negotiable values: this is primarily a strategy of cultural assimilation in response to immigration.
Nationalism has a strong territorial component, with an inclusive categorisation of territory corresponding to the categorisation of individuals. For each nation, there is a territory which is uniquely associated with it, the national homeland, and together they account for most habitable land. This is reflected in the geopolitical claims of nationalism, which seeks to order the world as a series of nation-states, each based on the national homeland of its respective nation. Territorial claims characterise the politics of nationalist movements. Established nation-states also make an implicit territorial claim, to secure their own continued existence: sometimes it is specified in the national constitution. In the nationalist view, each nation has a moral entitlement to a sovereign state: this is usually taken as a given.
The nation-state is intended to guarantee the existence of a nation, to preserve its distinct identity, and to provide a territory where the national culture and ethos are dominant - nationalism is also a philosophy of the state. It sees a nation-state as a necessity for each nation: secessionist national movements often complain about their second-class status as a minority within another nation. This specific view of the duties of the state influenced the introduction of national education systems, often teaching a standard curriculum, national cultural policy, and national language policy. In turn, nation-states appeal to a national cultural-historical mythos to justify their existence, and to confer political legitimacy - acquiescence of the population in the authority of the government.
Nationalists recognise that 'non-national' states exist and existed, but do not see them as a legitimate form of state. The struggles of early nationalist movements were often directed against such non-national states, specifically multi-ethnic empires such as Austria-Hungary and the Ottoman Empire. Most multi-ethnic empires have disappeared, but some secessionist movements see Russia and China as comparable non-national, imperial states. At least one modern state is clearly not a nation-state: the Vatican City exists solely to provide a sovereign territorial unit for the Catholic Church.
Nationalism as ideology includes ethical principles: that the moral duties of individuals to fellow members of the nation override those to non-members. Nationalism claims that national loyalty, in case of conflict, overrides local loyalties, and all other loyalties to family, friends, profession, religion, or class.
Theory of nationalism
Background and problems
Specific examples of nationalism are extremely diverse, the issues are emotional, and the conflicts often bloody. The theory of nationalism has always been complicated by this background, and by the intrusion of nationalist ideology into the theory. There are also national differences in the theory of nationalism, since people define nationalism on the basis of their local experience. Theory (and media coverage) may overemphasise conflicting nationalist movements, ethnic tension, and war - diverting attention from general theoretical issues; for instance, the characteristics of nation-states.
Nationalist movements are surrounded by other nationalist movements and nations, and this may colour their version of nationalism. It may focus purely on self-determination, and ignore other nations. When conflicts arise, however, ideological attacks upon the identity and legitimacy of the 'enemy' nationalism may become the focus. In the Israeli-Palestinian conflict, for instance, both sides have claimed that the other is not a 'real' nation, and therefore has no right to a state. Jingoism and chauvinism make exaggerated claims about the superiority of one nation over another. National stereotypes are also common, and are usually insulting. This kind of negative nationalism, directed at other nations, is certainly a nationalist phenomenon, but not a sufficient basis for a general theory of nationalism.
Issues in nationalism theory
The first studies of nationalism were generally historical accounts of nationalist movements. At the end of the 19th century, Marxists and other socialists produced political analyses that were critical of the nationalist movements then active in central and eastern Europe. Most sociological theories of nationalism date from after the Second World War. Some nationalism theory is about issues which concern nationalists themselves, such as who belongs to the nation and who does not, as well as the precise meaning of 'belonging'.
Origin of nations and nationalism
Recent general theory has looked at underlying issues, and above all the question of which came first, nations or nationalism. Nationalist activists see themselves as representing a pre-existing nation, and the primordialist theory of nationalism agrees. It sees nations, or at least ethnic groups, as a social reality dating back twenty thousand years.
The modernist theories imply that until around 1800, almost no-one had more than local loyalties. National identity and unity were originally imposed from above, by European states, because they were necessary to modernise the economy and society. In this theory, nationalist conflicts are an unintended side-effect. For example, Ernest Gellner argued that nations are a by-product of industrialization, which required a large literate and culturally homogeneous population. According to Charles Tilly, states promoted nationalism in order to assure popular consent to conscription into large modern armies and taxation, which was necessary to maintain such armies. According to the modernist view, the first true nation state was created by the French Revolution, though the tendencies had existed since the beginning of the Modern Era. In addition to this top-down nationalism, there were also cases of bottom-up nationalism, such as the German Romantic nationalism, which materialized in the resistance against Napoleon.
More recent theorists of nationalism emphasise that nations are a socially constructed phenomenon. Benedict Anderson, for example, described nations as "imagined communities". Gellner comments: "Nationalism is not the awakening of nations to self-consciousness: it invents nations where they do not exist." (Anderson and Gellner deploy terms such as 'imagined' and 'invent' in a neutral, descriptive manner. The use of these terms in this context is not intended to imply that nations are fictional or fantastic.) Modernisation theorists see such things as the printing press and capitalism as necessary conditions for nationalism.
Anthony D. Smith proposed a synthesis of primordialist and modernist views. According to Smith, the preconditions for the formation of a nation are as follows:
- A fixed homeland (current or historical)
- High autonomy
- Hostile surroundings
- Memories of battles
- Sacred centres
- Languages and scripts
- Special customs
- Historical records and thinking
Those preconditions may create powerful common mythology. Therefore, the mythic homeland is in reality more important for the national identity than the actual territory occupied by the nation. Smith also posits that nations are formed through the inclusion of the whole populace (not just elites), constitution of legal and political institutions, nationalist ideology, international recognition and drawing up of borders.
Theoretical literature on nationalism
There is a large amount of theoretical and empirical literature on nationalism. The following is a minimal selection, and a series of capsule summaries that do not do justice to the range of views expressed.
- Anderson, Benedict. 1991. Imagined Communities. 2nd ed. London: Verso. Anderson argues that nations are imagined political communities, and are imagined to be limited and sovereign. Their development is due to the decline of other types of imagined community, especially in the face of capitalist production of print media.
- Armstrong, John. 1982. Nations Before Nationalism. Armstrong traces the development of national identities from origins in antiquity and the medieval world.
- Breuilly, John. 1992. Nationalism and the State. 2nd ed. Manchester: Manchester University Press. This approach focuses on the politics of nationalism, in particular on nationalism as a response to the imperatives of the modern state. It employs the mode of comparative history to study a large number of different cases of nationalism.
- Gellner, Ernest. 1983. Nations and Nationalism. Oxford: Blackwell. This work links nationalism to the homogenising imperatives of industrial society and the reactions of minority cultures to those imperatives.
- Greenfeld, Liah. 1992. Nationalism: Five Roads to Modernity. Cambridge: Harvard University Press. Greenfeld argues that nationalism existed at an earlier age than previously thought: as early as the sixteenth century in the case of England.
- Hechter, Michael. 1975. Internal Colonialism. London: Routledge and Kegan Paul. Hechter attributes nationalism in the "Celtic fringe" of Britain and Ireland to the reinforcing divisions of culture and the division of labour.
- Hobsbawm, Eric, and Ranger, Terence, eds. 1983. The Invention of Tradition. Cambridge: Cambridge University Press. This collection of essays, especially Hobsbawm's introduction and chapter on turn-of-the-century Europe, argues that the nation is a prominent type of invented tradition.
- Kedourie, Elie. 1960. Nationalism. London: Hutchinson. Kedourie focuses on the role of disaffected German intellectuals in developing the doctrine of nationalism at the beginning of the nineteenth century from Kant's idea of the autonomy of the will and Herder's belief in the primacy of linguistic communities in establishing modes of thought.
- Kedourie, Elie, ed. 1971. Nationalism in Asia and Africa. London: Weidenfeld and Nicolson. Kedourie's introduction to this volume of nationalist texts extends his analysis in his earlier work to the efforts of intellectuals in colonial states.
- Nairn, Tom. 1977. The Break-up of Britain. London: New Left Books. Marxist historian Nairn traces nationalism to the confrontation of colonialism, which leaves indigenous elites without recourse to any resources but their own population.
- Smith, Anthony D. 1986. The Ethnic Origins of Nations. Oxford: Blackwell. Smith traces modern nations and nationalism to pre-modern ethnic sources, arguing for the existence of an "ethnic core" in modern nations.
Historical evolution of nationalism
Prior to 1900
Most theories of nationalism assume a European origin of the nation-state. The modern state is often seen as emerging with the Treaty of Westphalia in 1648, though this view is disputed. This treaty created the 'Westphalian system' of states, which recognised each other's sovereignty and territory. Some of the signatories, such as the Dutch Republic, qualify as a nation-state, but in 1648 most states in Europe were still non-national.
Many, but not all, see the major transition to nation-states as originating in the late 18th and 19th centuries. Beginning with romantic nationalism, nationalist movements arose throughout Europe, a process accelerated by the French Revolution and the conquests of Napoleon Bonaparte. Some of these movements were separatist, directed against large empires: an early example is the Greek Revolution (1821-1829). Others sought to unify a divided or fragmented territory, as in the Italian unification under the rule of Piedmont-Sardinia. These movements promoted a national identity and culture: in the 1848 Revolutions in Europe they were often associated with liberal demands. By the end of the 19th century most people accepted that Europe was divided into nations, and personally identified with one of these nations. The collapse of the Austro-Hungarian Empire and the Ottoman Empire after the First World War accelerated the formation of nation-states.
According to the standard view, before the 19th century people had local, regional, or religious loyalties, but no idea of nationhood. The typical state in Europe was a dynastic state, ruled by a royal house: if there were any loyalties above regional level, they were owed to the king and the ruling house. Dynastic states could acquire territory by royal marriage, and lose it by division of inheritance - which is now seen as absurd. Nationalism introduced the idea that each nation has a specific territory, and that beyond this point the claims of other nations apply. Nation-states, in principle, do not seek to conquer territory. However, nationalist movements rarely agreed on where the border should be. As the nationalist movements grew, they introduced new territorial disputes in Europe.
Nationalism also determined the political life of 19th century Europe. Where the nation was part of an empire, the national liberation struggle was also a struggle against older autocratic regimes, and nationalism was allied with liberal anti-monarchical movements. Where the nation-state was a consolidation of an older monarchy, as in Spain, nationalism was itself conservative and monarchical. Most nationalist movements began in opposition to the existing order, but by the 20th century, there were regimes which primarily identified themselves as nationalist.
The standard theory of the 19th-century origin of nation-states is disputed. One problem with it is that the South American independence struggles and the American Revolution (American War of Independence) predate most European nationalist movements. Some countries, such as the Netherlands and England, seem to have had a clear national identity well before the 19th century.
20th Century nationalism
By the end of the 19th century, nationalist ideas had begun to spread to Asia. In India, nationalism began to encourage calls for the end of British rule. The 20th century nationalist movement in India is generally associated with Mahatma Gandhi, although many other leaders were involved as well. In China, nationalism influenced the 1911 Revolution. In Japan, nationalism and Japanese "exceptionalism" influenced Japanese imperialism.
World War I led to the creation of new nation-states in Europe. This was encouraged by the United States, which rejected the legitimacy of the former multi-ethnic empires (see Wilsonianism). France, which sought to isolate Germany and Austria, also encouraged the creation of potential client states. The Ottoman Empire and the Austro-Hungarian Empire disintegrated. The Versailles Treaty, based upon US President Wilson's Fourteen Points, partially confirmed the division into new nation-states. In the Middle East, the Arab Revolt did not lead to new independent states: the victorious western powers secured a League of Nations mandate for Iraq, Lebanon, Palestine including Transjordan, and Syria. The Turkish War of Independence (1919-1923) created a new nation state from the core of the Ottoman Empire. In the east of Europe, the Russian Empire had collapsed, as a result of the Russian Revolution of 1917. The Anglo-Irish War led to the partition of Ireland into the Irish Free State and Northern Ireland.
However, multi-nation and multi-ethnic states survived in Europe; and two new ones emerged, Czechoslovakia (where the more prosperous Czech half dominated), and the Kingdom of Yugoslavia, (dominated by Serbia). In the interwar period, the extreme nationalist movements of fascism and Nazism came to power in Italy and Germany respectively, and similar groups took over several other European countries during the late 1930s. This new wave of nationalism had powerful racist undertones, and it culminated in World War II and the Holocaust.
The horrors of World War II discredited militant nationalism as an ideology, but scarcely altered the division of Europe into nation-states. Outside Europe, the war initiated a new wave of nation-state formation, through the independence of African and Asian nations from European colonial empires. The most dramatic decolonisation began in the late 1950s in Africa, which was transformed from a collection of European colonies into a continent of nation-states. Few of them corresponded to the ideal nation-state (one nation, one language, one culture), but most still exist. Ironically, the one that best met those criteria, Somalia, disintegrated. The Algerian War of Independence was the most bloody of the decolonisation wars in Africa: some decolonisations were peaceful. Rhodesia and the Portuguese colonies of Mozambique and Angola delayed decolonisation for a time.
The collapse of the Soviet Union led to an unexpected revival of national movements in Europe around 1990. Its constituent states - Belarus, Ukraine, Moldova, Kazakhstan, Turkmenistan, Uzbekistan, Tajikistan, Kyrgyzstan, Armenia, Azerbaijan, Georgia, Latvia, Estonia and Lithuania - became independent, for the second time in modern history in the case of the Baltic states. The second Yugoslavia broke up into nation states, some with predecessor states such as the Nazi-oriented Independent State of Croatia, some as new sovereign states. Within established nation-states, there are many secessionist movements, some of them seeking the creation of a new sovereign state, for instance in Quebec. The unresolved status of Northern Ireland led to protracted violence known as The Troubles, but without changes in the border.
In the second half of the 20th century, some trends emerged which might indicate a weakening of the nation-state and nationalism. The European Union is widely seen as transferring power from the national level to both sub-national and supra-national levels. Critics of globalization often appeal to feelings of national identity, culture, and sovereignty. Free trade agreements, such as NAFTA and the GATT, and the increasing internationalisation of trade markets, are seen as damaging to the national economy, and have led to a revival of economic nationalism. Protest movements vehemently oppose these negative aspects of globalization (see Anti-globalisation).
Not all anti-globalists are nationalists, but nationalism continues to assert itself in response to those trends. Nationalist parties continue to do well in elections, and most people continue to have a strong sense of attachment to their nationality. Moreover, globalism and European federalism are not always opposed to nationalism. For example, theorists of Chinese nationalism within the People's Republic of China have articulated the idea that China's national power is substantially enhanced, rather than being reduced, by engaging in international trade and multinational organizations. For a time sub-national groups such as Catalan autonomists and Welsh nationalists supported a stronger European Union in the hope that a Europe of the regions would limit the power of the present nation-states. However, with Euroscepticism now widespread in the EU, this transformation is no longer on its political agenda.
Language and Nationalism
A common language has been a defining characteristic of the nation, and an ideal for nationalists. For example, in France before the French Revolution, regional languages such as Breton and Occitan were spoken, which were mutually incomprehensible. Standard French was also spoken in large parts of the country and had also been the language of administration, but after the Revolution it was imposed as the national language in non-French-speaking regions. For instance, in Brittany, Celtic names were forbidden. The formation of nation-states, and their consolidation after independence, is generally accompanied by policies to restrict, replace, or abandon minority languages. This accelerates the tendency noted in sociolinguistic research that high-status languages displace low-status languages. See also: Language policy in France.
Some theorists believe that nationalism became pronounced in the 19th century simply because language became a more important unifier due to increased literacy. With more people reading newspapers, books, pamphlets and so on, which were increasingly widely available to read since the spread of the printing press, it became possible for the first time to develop a broader cultural attachment beyond the local community. At the same time, differences in language solidified, breaking down old dialects, and excluding those from completely different language groups.
The United States, a country which historically welcomed immigrants of varying nationality, has what can be seen as a pattern of discrimination against languages other than English. Prominent examples are the German language, which was nearly eradicated during World War I, and French and Italian, which have nearly disappeared from everyday life. Today Spanish is a second language across a large portion of the country. Some politicians, such as Pat Buchanan, have consciously opposed the rise of Spanish as a second American language, for fear that it would undermine unity in the American national character.
In the Arab World during the colonial period, the Turkish language, French language, Spanish language and English language were often imposed, although the intensity of imposition varied widely. When the colonial period ended (mostly after World War Two), a process of "Arabisation" began; reviving Arabic to unify their states and to facilitate a broader Arab identity, motivated by Pan-Arabism. Countries such as Algeria and Western Sahara underwent large scale Arabisations, changing from French and Spanish to Arabic respectively.
However, within the Arab World, some nationalistic attempts were made to emancipate a domestic vernacular and treat classical Arabic as a formal foreign language, since classical Arabic was often incomprehensible to the non-literate population of nominally Arab countries, which were politically - but not necessarily linguistically, culturally or ethnically - Arabized. These policies were first promoted in Egypt in the early 20th century by the Egyptian scholar and nationalist Ahmad Lutfi al-Sayyid, who called for the formalization of the Egyptian vernacular as the native language of the Egyptian people.
Similar attempts to emphasise minority languages completely independent of Arabic were made by the Nubians, speakers of Nobiin, who are split between Egypt and Sudan, and relatively more successfully by the Imazighen (commonly known as Berber) in Morocco.
Types of nationalism
Nationalism may manifest itself as part of official state ideology or as a popular (non-state) movement and may be expressed along civic, ethnic, cultural, religious or ideological lines. These self-definitions of the nation are used to classify types of nationalism. However such categories are not mutually exclusive and many nationalist movements combine some or all of these elements to varying degrees. Nationalist movements can also be classified by other criteria, such as scale and location.
Some political theorists make the case that any distinction between forms of nationalism is false. In all forms of nationalism, the populations believe that they share some kind of common culture, and culture can never be wholly separated from ethnicity. The United States, for example, has "God" on its coinage and in its Pledge of Allegiance, and designates official holidays which are seen by some to promote cultural biases. The United States has an ethnic theory of being American (nativism), and, for a short period in the 20th Century, had a committee to investigate Un-American Activities.
Civic nationalism
Civic nationalism (or civil nationalism) is the form of nationalism in which the state derives political legitimacy from the active participation of its citizenry, from the degree to which it represents the "will of the people". It is often seen as originating with Jean-Jacques Rousseau and especially the social contract theories which take their name from his 1762 book The Social Contract. Civic nationalism lies within the traditions of rationalism and liberalism, but as a form of nationalism it is contrasted with ethnic nationalism. Membership of the civic nation is considered voluntary. Civic-national ideals influenced the development of representative democracy in countries such as the United States and France.
Ethnic nationalism
Ethnic nationalism, or ethnonationalism, defines the nation in terms of ethnicity, which always includes some element of descent from previous generations. It also includes ideas of a culture shared between members of the group and with their ancestors, and usually a shared language. Membership in the nation is hereditary. The state derives political legitimacy from its status as homeland of the ethnic group, and from its function to protect the national group and facilitate its cultural and social life, as a group. Ideas of ethnicity are very old, but modern ethnic nationalism was heavily influenced by Johann Gottfried von Herder, who promoted the concept of the Volk, and Johann Gottlieb Fichte. Ethnic nationalism is now the dominant form, and is often simply referred to as "nationalism". Note that the theorist Anthony Smith uses the term 'ethnic nationalism' for non-Western concepts of nationalism, as opposed to Western views of a nation defined by its geographical territory. (The term "ethnonationalism" is generally used only in reference to nationalists who espouse an explicit ideology along these lines; "ethnic nationalism" is the more generic term, and used for nationalists who hold these beliefs in an informal, instinctive, or unsystematic way. The pejorative form of both is "ethnocentric nationalism" or "tribal nationalism," though "tribal nationalism" can have a non-pejorative meaning when discussing African, Native American, or other nationalisms that openly assert a tribal identity.)
Romantic nationalism
Romantic nationalism (also organic nationalism, identity nationalism) is the form of ethnic nationalism in which the state derives political legitimacy as a natural ("organic") consequence and expression of the nation, or race. It reflected the ideals of Romanticism and was opposed to Enlightenment rationalism. Romantic nationalism emphasised a historical ethnic culture which meets the Romantic Ideal; folklore developed as a Romantic nationalist concept. The Brothers Grimm were inspired by Herder's writings to create an idealised collection of tales which they labeled as ethnically German. Historian Jules Michelet exemplifies French romantic-nationalist history.
Cultural nationalism
Cultural nationalism defines the nation by shared culture. Membership in the nation is neither entirely voluntary (you cannot instantly acquire a culture), nor hereditary (children of members may be considered foreigners if they grew up in another culture). Chinese nationalism is one example of cultural nationalism, partly because of the many national minorities in China. (The 'Chinese nationalists' include those on Taiwan who reject the mainland Chinese government but claim the mainland Chinese state).
Liberal nationalism
Liberal nationalism is a kind of nationalism defended recently by political philosophers who believe that there can be a non-xenophobic form of nationalism compatible with liberal values of freedom, tolerance, equality, and individual rights (Tamir 1993; Kymlicka 1995; Miller 1995). Ernest Renan (1882) and John Stuart Mill (1861) are often thought to be early liberal nationalists. Liberal nationalists often defend the value of national identity by saying that individuals need a national identity in order to lead meaningful, autonomous lives (Kymlicka 1995; for criticism see Patten 1999) and that liberal democratic polities need national identity in order to function properly (Miller 1995; for criticism see Abizadeh 2002, 2004).
State nationalism
State nationalism is a variant on civic nationalism, very often combined with ethnic nationalism. It implies that the nation is a community of those who contribute to the maintenance and strength of the state, and that the individual exists to contribute to this goal. Italian fascism is the best example, epitomised in this slogan of Mussolini: "Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato." ("Everything in the State, nothing outside the State, nothing against the State"). It is no surprise that this conflicts with liberal ideals of individual liberty, and with liberal-democratic principles. The revolutionary (liberal) Jacobin creation of a unitary and centralist French state is often seen as the original version of state nationalism. Franquist Spain, and contemporary Turkish nationalism are later examples of state nationalism.
However, the term "state nationalism" is often used in conflicts between nationalisms, and especially where a secessionist movement confronts an established nation state. The secessionists speak of state nationalism to discredit the legitimacy of the larger state, since state nationalism is perceived as less authentic and less democratic. Flemish separatists speak of Belgian nationalism as a state nationalism. Basque separatists and Corsican separatists refer to Spain and France, respectively, in this way. There are no undisputed external criteria to assess which side is right, and the result is usually that the population is divided by conflicting appeals to its loyalty and patriotism.
Religious nationalism
Religious nationalism defines the nation in terms of shared religion. If the state derives political legitimacy from adherence to religious doctrines, then it is more of a theocracy than a nation-state. In practice, many ethnic and cultural nationalisms are in some ways religious in character. The religion is a marker of group identity, rather than the motivation for nationalist claims. Irish nationalism is associated with Roman Catholicism, and most Irish nationalist leaders of the last 100 years were Catholic, but many of the early (18th century) nationalists were Protestant. Irish nationalism never centred on theological distinctions like transubstantiation, the status of the Virgin Mary, or the primacy of the Pope, but for some Protestants in Northern Ireland, these pre-Reformation doctrines are indeed part of Irish culture. Similarly, although Religious Zionism exists and influences many, the mainstream of Zionism is more secular in nature, and based on culture and Jewish ethnicity. Since the partition of British India, Indian nationalism is associated with Hinduism. In modern India, a contemporary form of Hindu nationalism, or Hindutva, has been prominent among many followers of the Bharatiya Janata Party and Rashtriya Swayamsevak Sangh. Religious nationalism characterized by communal adherence to Eastern Orthodoxy and national Orthodox Churches is still prevalent in many states of Eastern Europe and Russia.
Pan-nationalism
Pan-nationalism is usually an ethnic and cultural nationalism, but the 'nation' is itself a cluster of related ethnic groups and cultures, such as Turkic peoples. Occasionally pan-nationalism is applied to mono-ethnic nationalism, when the national group is dispersed over a wide area and several states - as in Pan-Germanism.
Diaspora nationalism
Diaspora nationalism (or, as Benedict Anderson terms it, "long-distance nationalism") generally refers to nationalist feeling among a diaspora such as the Irish in the United States, or the Lebanese in the Americas and Africa, and the Armenians in Europe and the United States. Anderson states that this sort of nationalism acts as a "phantom bedrock" for people who want to experience a national connection, but who do not actually want to leave their diaspora community. The essential difference between pan-nationalism and diaspora nationalism is that members of a diaspora, by definition, are no longer resident in their national or ethnic homeland. In the specific case of Zionism, the national movement advocates migration to the claimed national homeland, which would - if 100% effected - end the diaspora.
Nationalism within a nation
With the establishment of a nation-state, the primary goal of any nationalist movement has been achieved. However, nationalism does not disappear but remains a political force within the nation, and inspires political parties and movements. The terms 'nationalist' and 'nationalist politician' are often used to describe these movements; 'nationalistic' would be more accurate. Nationalists in this sense typically campaign for:
- strengthening national unity, including campaigns for national salvation in times of crisis.
- emphasising the national identity and rejecting foreign influences, influenced by cultural conservatism and in extreme cases, xenophobia.
- limiting non-national populations on the national territory, especially by limiting immigration and, in extreme cases, by ethnic cleansing.
- annexing territory which is considered part of the national homeland. This is called irredentism, from the Italian movement Italia irredenta.
- economic nationalism, which is the promotion of the national interest in economic policy, especially through protectionism and in opposition to free trade policies.
The term 'nationalism' is also used by extension, or as a metaphor, to describe movements which promote a group identity of some kind. This use is especially common in the United States, and includes black nationalism and white nationalism in a cultural sense. They may overlap with nationalism in the classic sense, including black secessionist movements and pan-Africanism.
Nationalists obviously have a positive attitude toward their own nation, although this is not a definition of nationalism. The emotional appeal of nationalism is visible even in established and stable nation-states. The social psychology of nations includes national identity (the individual’s sense of belonging to a group), and national pride (self-association with the success of the group). National pride is related to the cultural influence of the nation, and its economic and political strength - although they may be exaggerated. However the most important factor is that the emotions are shared: nationalism in sport includes the shared disappointment if the national team loses.
The emotions can be purely negative: a shared sense of threat can unify the nation. However, dramatic events, such as defeat in war, can qualitatively affect national identity and attitudes to non-national groups. The defeat of Germany in World War I, and the perceived humiliation by the Treaty of Versailles, economic crisis and hyperinflation, created a climate for xenophobia, revanchism, and the rise of Nazism. The solid bourgeois patriotism of the pre-1914 years, with the Kaiser as national father-figure, was no longer relevant.
Nationalism and extremism
Although nationalism influences many aspects of life in stable nation-states, its presence is often invisible, since the nation-state is taken for granted. Michael Billig speaks of banal nationalism, the everyday, less visible forms of nationalism, which shape the minds of a nation's inhabitants on a day-to-day basis. Attention concentrates on extreme aspects, and on nationalism in unstable regions. Nationalism may be used as a derogatory label for political parties, or they may use it themselves as a euphemism for xenophobia, even if their policies are no more specifically nationalist than those of other political parties in the same country. In Europe, some 'nationalist' anti-immigrant parties have a large electorate, and are represented in parliament. Smaller but highly visible groups, such as far right skinheads, also self-identify as 'nationalist', although it may be a euphemism for neo-Nazis or white supremacists. Activists in other countries are often referred to as ultra-nationalists, with a clearly pejorative meaning. See also chauvinism and jingoism.
Nationalism is a component of other political ideologies, and in its extreme form, of fascism. However, it is not accurate to describe fascism simply as a more extreme form of nationalism, even though nationalism is one of its components. Fascism in the general sense, and the Italian original, were marked by a strong combination of ethnic nationalism and state nationalism, often combined with a form of economic and ethical socialism. That was certainly evident in Nazism. However, the geopolitical aspirations of Adolf Hitler are probably better described as imperialist, and Nazi Germany ultimately ruled over vast areas where there was no historic German presence. The Nazi state was so different from the typical European nation-state that it was sui generis (in a category of its own).
Racism
Nationalism does not necessarily imply a belief in the superiority of one nation over others, but in practice some (but not all) nationalists do think that way about their own nation. Occasionally they believe another nation can serve as an example for their own nation, see Anglophilia. There is a specific racial nationalism which can be considered an ethnic nationalism, but some form of racism can be found within almost all nationalist movements. It is usually directed at neighbouring nations and ethnic groups.
Racism was also a feature of colonialist ideologies, which were especially strong at the end of the 19th century. Strictly speaking, overseas colonies conflict with the principles of the nation-state, since they are not part of the historic homeland of the nation, and their inhabitants clearly do not belong to the same ethnic group, speak its language, or share its culture. In practice, nationalists sometimes combined a belief in self-determination in Europe, with colonisation in Africa or Asia.
Explicit biological race theory was influential from the end of the 19th century. Nationalist and fascist movements in the first half of the 20th century often appealed to these theories. The Nazi ideology was probably the most comprehensively racial ideology in history, and race influenced all aspects of policy in Nazi Germany.
Nevertheless, racism continues to be an influence on nationalism. Ethnic cleansing is often seen as both a nationalist and racist phenomenon. It is part of nationalist logic that the state is reserved for one nation, but not all nation-states expel their minorities. The best known recent examples of ethnic cleansing are those during the Yugoslav secession war in the 1990s. Other examples seen as related to racism include the deportation of the Germans from the Volga Republic in 1941, and the Armenian Genocide in the Ottoman Empire in 1915.
Opposition and critique
Nationalism is an extremely assertive ideology, which makes far-reaching demands, including the disappearance of entire states. It is not surprising that it has attracted vehement opposition. Much of the early opposition to nationalism was related to its geopolitical ideal of a separate state for every nation. The classic nationalist movements of the 19th century rejected the very existence of the multi-ethnic empires in Europe. This resulted in severe repression by the (generally autocratic) governments of those empires. That tradition of secessionism, repression, and violence continues, although by now a large nation typically confronts a smaller nation. Even in that early stage, however, there was an ideological critique of nationalism. That has developed into several forms of anti-nationalism in the western world. The Islamic revival of the 20th century also produced an Islamic critique of the nation-state.
In the liberal political tradition there is widespread criticism of ‘nationalism’ as a dangerous force and a cause of conflict and war between nation-states. Liberals do not generally dispute the existence of the nation-states. The liberal critique also emphasises individual freedom as opposed to national identity, which is by definition collective (see collectivism).
The pacifist critique of nationalism also concentrates on the violence of nationalist movements, the associated militarism, and on conflicts between nations inspired by jingoism or chauvinism. National symbols and patriotic assertiveness are in some countries discredited by their historical link with past wars, especially in Germany.
The anti-racist critique of nationalism concentrates on the attitudes to other nations, and especially on the doctrine that the nation-state exists for one national group, to the exclusion of others. It emphasises the chauvinism and xenophobia of many nationalisms.
Political movements of the left have often been suspicious of nationalism, again without necessarily seeking the disappearance of the existing nation-states. Marxism has been ambiguous towards the nation-state, and in the late 19th century some Marxist theorists rejected it completely. For some Marxists the world revolution implied a global state (or global absence of state); for others it meant that each nation-state had its own revolution. A significant event in this context was the failure of the social-democratic and socialist movements in Europe to mobilise a cross-border workers' opposition to World War I. At present most, but certainly not all, left-wing groups accept the nation-state, and see it as the political arena for their activities.
In the Western world the most comprehensive current ideological alternative to nationalism is cosmopolitanism. Ethical cosmopolitanism rejects one of the basic ethical principles of nationalism: that humans owe more duties to a fellow member of the nation, than to a non-member. It rejects such important nationalist values as national identity and national loyalty. However there is also a political cosmopolitanism, which has a geopolitical programme to match that of nationalism: it seeks some form of world state, with a world government. Very few people openly and explicitly support the establishment of a global state, but political cosmopolitanism has influenced the development of international criminal law, and the erosion of the status of national sovereignty. In turn, nationalists are deeply suspicious of cosmopolitan attitudes, which they equate with treason and betrayal.
While internationalism in the cosmopolitanist context by definition implies cooperation among nations, and therefore the existence of nations, proletarian internationalism is different, in that it calls for the international working class to follow its brethren in other countries irrespective of the activities or pressures of the national government of a particular sector of that class. Meanwhile, anarchism rejects nation-states on the basis of self-determination of the majority social class, and thus rejects nationalism. Instead of nations, anarchists usually advocate the creation of cooperative societies based on free association and mutual aid without regard to ethnicity or race.
Islamism and Nationalism
Some radical Islamists reject the existence of any state on any basis other than the Islamic caliphate. For them, the unity of Islam means that there can be only one government on Earth, in the form usually titled caliphate (khilafah). It is not a state in the usual Western sense, but all existing states are incompatible with this ideal, including the Islamic nation-states with Islam as the official religion. Only a minority of Islamists take this view, but insofar as Al-Qaeda has an ideology, it includes the goal of the caliphate. The Ba'ath Party and related groups have historically offered a secular Arab Nationalist opposition to Islamism in Arab countries.
As a universal religion, Islam is nominally opposed to any categorisation of people not based on one's beliefs. Islam promotes a strong feeling of community among all Muslims, who collectively constitute the Ummah. The word "Ummah" is often incorrectly translated into English as "Islamic nation" but it is not a nation in this sense and not a synonym of 'caliphate', although the idea is associated with the historic caliphates. There is no doubt that many Muslims do strongly identify with the religious community, probably more so than Christians. The confusion may arise because in other cases it does translate to the English word "nation", as in the Arabic name of the United Nations,الأمم المتحدة, Al Umam al Mutahidah. Shared observances such as the holy month of Ramadan and the Hajj (the pilgrimage to Mecca), contribute to this common Muslim identification. The Nation of Islam in the United States has been criticised by some Muslims, who find the comparison between Islam and an earthly nation offensive.
See also
- Cultural identity
- Ethnic autonomous regions
- List of active autonomist and secessionist movements
- List of historical autonomist and secessionist movements.
- List of historical effects of nationalism
- List of nationalistic musical pieces
- List of nationalist conflicts and organizations
- List of prominent figures in nationalism
- Historiography and nationalism
- Identity politics
- National flag
- National liberation movements
- National mysticism
- National personification
- National romanticism
- National Socialism or Nazism
- Nationalism and sport
External links
- The Stanford Encyclopedia of Philosophy entry
- Internet Modern History Sourcebook: Nationalism — Resources
- The Nationalism Project is the world's most comprehensive English-language website on nationalism.
- Nation and Nationalism (2 parts)
- Animated map of German Unification
- What is a Nation? - Nadesan Satyendra,
- Religious Nationalism and Human Rights, David Little, United States Institute of Peace, also briefly discusses history of nationalism
- Alfred Verdross and Othmar Spann: German Romantic Nationalism, National Socialism and International Law, Anthony Carty, European Journal of International Law.
- Johann Gottfried Herder (1784): Materials for the Philosophy of the History of Mankind
- The Prohibition of Nationalism in Islam
- Notes on Nationalism Essay by George Orwell
- The Sabanci University: School of Languages Podcasts: Nationalism (Part 1) and Theories of Nationalism (Part 2)
- America's New Nationalism Book review of Anatol Lieven's book, America Right or Wrong: An Anatomy of American Nationalism, published in The American Conservative
References
- "Nationalism I would define as an ideology claiming that a given human population has a natural solidarity based on shared history and a common destiny. This collective identity as a historically constituted “people” crucially entails the right to constitute an independent or autonomous political community. The idea of nationalism takes form historically in tandem with the doctrine of popular sovereignty: that the ultimate source of authority lies in the people, not the ruler or government. The foregoing definition of nationalism will be found in any classic text with minor variations." M. Crawford Young, 2004. Revisiting nationalism and ethnicity in Africa. UCLA International Institute, James S. Coleman Memorial Lecture Series. Or: Handler, Richard. "Nationalism is an ideology about individuated being. It is an ideology concerned with boundedness, continuity, and homogeneity encompassing diversity. It is an ideology in which social reality, conceived in terms of nationhood, is endowed with the reality of natural things." Nationalism and the Politics of Culture in Quebec. New Directions in Antropological Writing: History, Poetics, Cultural Criticism, ed. George E.; Clifford Marcus, James. Madison: The University of Wisconsin Press, 1988. Passage online at . Specifically on the issue: M. Freeden, 1998. Is Nationalism a Distinct Ideology? Political Studies, Volume 46, Number 4, September 1998, pp. 748-765(18).
- Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press.
- Hechter, Michael. 2001. Containing Nationalism. ISBN 0-19-924751-X .
- Tilly, Charles. 1990. Coercion, Capital and European States AD 990-1992. Cambridge, MA: Basil Blackwell.
Further reading
- Abizadeh, Arash. 2002. "Does Liberal Democracy Presuppose a Cultural Nation? Four Arguments." American Political Science Review 96(3): 495-509.
- Abizadeh, Arash. 2004. "Liberal Nationalist versus Postnational Social Integration." Nations and Nationalism 10(3): 231-250.
- Anderson, Benedict. 1991. Imagined Communities. ISBN 0-86091-329-5 .
- Anderson, Benedict. 1998. The Spectre of Comparison: Nationalism, Southeast Asia and the World. London: Verso. ISBN 1-85984-184-8 .
- Balakrishnan, Gopal, ed. 1996. Mapping the Nation. London: Verso. ISBN 1-85984-960-1 .
- Billig, Michael. Banal Nationalism. ISBN 0-8039-7525-2 .
- Blattberg, Charles. 2006. "Secular Nationhood? The Importance of Language in the Life of Nations." Nations and Nationalism 12(4): 597-612.
- Breuilly, John. 1994. Nationalism and the State. 2nd ed. Chicago: Chicago University Press. ISBN 0-226-07414-5 .
- Brubaker, Rogers. 1996. Nationalism Reframed: Nationhood and the National Question in the New Europe. Cambridge University Press. ISBN 0-521-57224-X .
- Calhoun, Craig. 1993. "Nationalism and Ethnicity." Annual Review of Sociology 19: 211-239.
- Canovan, Margaret. 1996. Nationhood and Political Theory. Cheltenham, UK: Edward Elgar. ISBN 1-85278-852-6 .
- Fitzgerald, Francis. 1972. Fire in the Lake: The Vietnamese and the Americans in Vietnam. Boston: Back Bay Books. ISBN 0-316-15919-0 .
- Freeden, Michael. 1998. "Is Nationalism a Distinct Ideology?" Political Studies 46: 748-765.
- Geary, Patrick J. 2002. The Myth of Nations: The Medieval Origins of Europe. Princeton University Press. ISBN 0-691-11481-1 .
- Gellner, Ernest. 1983. Nations and Nationalism. Ithaca: Cornell University Press. ISBN 0-8014-1662-0 .
- Greenfeld, Liah. 1992. Nationalism: Five Roads to Modernity Cambridge: Harvard University Press. ISBN 0-674-60319-2 .
- Hobsbawm, Eric J. 1992. Nations and Nationalism Since 1780: Programme, Myth, Reality. 2nd ed. Cambridge University Press. ISBN 0-521-43961-2 .
- Juergensmeyer, Mark. 1993. The New Cold War: Religious Nationalism Confronts the Secular State. Berkeley: University of California Press. ISBN 0-520-08651-1 .
- Kymlicka, Will. 1995. Multicultural Citizenship. Oxford University Press. ISBN 0-19-827949-3 .
- McKim, Robert, and Jeff McMahan. 1997. The Morality of Nationalism. Oxford University Press. ISBN 0-19-510391-2 .
- Mill, John Stuart. 1861. Considerations on Representative Government.
- Miller, David. 1995. On Nationality. Oxford University Press. ISBN 0-19-828047-5 .
- Patten, Alan. 1999. "The Autonomy Argument for Liberal Nationalism." Nations and Nationalism. 5(1): 1-17.
- Renan, Ernest. 1882. "Qu'est-ce qu'une nation?"
- Smith, Anthony D. 1986. The Ethnic Origins of Nations London: Basil Blackwell. pp 6–18. ISBN 0-631-15205-9 .
- Tamir, Yael. 1993. Liberal Nationalism. Princeton University Press. ISBN 0-691-07893-9 .
This page uses Creative Commons Licensed content from Wikipedia.
https://psychology.wikia.org/wiki/Nationalism | 21
40 | Deflation is a decrease in the general prices of goods and services within an economy. It occurs when the rate of inflation becomes negative. This differs from disinflation, which is only a slowdown in the rate of inflation (prices still rise, just more slowly). With deflation comes a gain in the buying power of currency. In other words, you may have the same amount of money, but since prices are lower, your dollar will stretch further.
Learn more about deflation, how it occurs, and the effects it can have on stocks, bonds, and other market metrics.
What Is Deflation?
Deflation is an increase in the real value of money as it relates to goods and services. This means you can purchase more with $1 in a negative inflation economy than you could in a positive inflation economy. Inflation and deflation are both measured using the Consumer Price Index (CPI), which measures the prices of a selection of goods and services that a typical consumer might purchase, spread over a set amount of time.
The rate of deflation can be calculated as follows:
- First, look at the price index of the current year (CPIc) and the price index of the previous year (CPIp). Subtract the current year (CPIc) from the previous year (CPIp).
- Next, divide the result by the CPI from the previous period.
- Lastly, multiply the result by 100 to get a percentage.
The formula for the rate of deflation looks like this:
(( CPIp - CPIc ) / CPIp ) * 100 = Deflation Rate
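To make the arithmetic concrete, here is a minimal sketch in Python of the calculation described above. The function name and the CPI figures in the example are hypothetical, chosen only for illustration.

def deflation_rate(cpi_previous, cpi_current):
    # Positive result = deflation (prices fell); negative result = inflation (prices rose).
    return (cpi_previous - cpi_current) / cpi_previous * 100

# Hypothetical CPI values: the index fell from 102.0 to 100.5 over the period.
print(deflation_rate(102.0, 100.5))  # about 1.47, i.e. a deflation rate of roughly 1.47%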
How Deflation Works
Deflation can be caused in a number of ways. It is often caused by a fall in the total demand for goods and services, or an increase in supply; either way, sellers must lower prices to clear their goods. It can also be caused by a contraction in the money supply. It works like this: if consumers reduce their spending, demand goes down, unsold inventory builds up, and prices fall. Investors see prices falling and begin to sell. Panic ensues, and the market takes a nosedive.
When prices are falling, people often curb their spending even more until prices bottom out. This pattern can compound the problem further.
There are several ways to counteract deflation and its effects, but not everyone agrees on the methods. In fact, this is the topic of an ongoing debate in various economic camps. Many believe in flooding the economy with cash, which will in turn promote spending. By this logic, injecting more money into an economy is the only sure way to reverse deflation, since it changes the one part of the equation that policymakers can directly control: the money supply.
In recent years, the Federal Reserve introduced a method called quantitative easing. This approach attempts to increase inflation from the market end. To conduct quantitative easing, the Fed starts by cutting the federal funds rate. This is the interest rate banks charge each other for overnight loans. The Fed then purchases a large quantity of long-term bonds, which pushes their yields down, injects money into the economy, and is intended to raise inflation.
Whether or not this unconventional tactic has the desired effect is still up for debate. The aim of policies such as these is to combat deflation by using the powers of the Fed to decrease the dollar's value. The Fed can decrease the value of the dollar through an increase in the money supply or by driving down long-term bond yields.
How Deflation Affects the Market
For the most part, people agree that deflation has a negative impact on stocks, since lower prices over a long span of time tend to hurt bottom-line corporate net income. To add to the problem, deflation might encourage consumers to save money and reduce spending. This practice also has a negative impact on top-line revenues, and erodes shareholder value.
While deflation is bad for stocks, it can have a positive impact on some types of bonds. Government debt, such as that bought and sold in the form of U.S. Treasury Bonds, is worth more because fixed payments take on greater value. This happens because interest rates tend to decrease during a deflationary period, which leads to increases in bond prices and profits for people who have bonds.
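To make the link between falling interest rates and rising bond prices concrete, here is a minimal sketch in Python that prices a fixed-payment bond at two different yields. The function and every number in it (face value, coupon, maturity, yields) are hypothetical values chosen only for illustration; they do not come from the article.

def bond_price(face, annual_coupon, years, yield_rate):
    # Present value of the coupon stream plus the face value repaid at maturity.
    coupons = sum(annual_coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    principal = face / (1 + yield_rate) ** years
    return coupons + principal

print(bond_price(1000, 30, 10, 0.03))  # about 1000.0 at a 3% yield
print(bond_price(1000, 30, 10, 0.01))  # about 1189.4 at a 1% yield: same payments, higher price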
Deflation isn't always a good thing for corporate bonds, however, especially those in companies that aren't blue-chip stocks. Deflation makes it tougher to make debt payments each year, since those fixed payments become more costly in real terms. This puts companies at risk because in time they may not be able to pay their debts.
Pros and Cons of Deflation
Pros
- Lower prices on goods and services
- Cheaper to borrow money
- Shrinks wealth gap

Cons
- Lower wages for workers
- Rise in unemployment levels
- Lower prices: When deflation occurs, consumers spend less money, which drives down demand. This drop in demand and increase in supply leads to a decline in prices because businesses have to lower prices to get rid of their inventory.
- Cheaper to borrow money: As a way of combating deflation, the Federal Reserve will often lower interest rates to try to get people to spend more and invest less in fixed-income investments like bonds. The low interest rates also mean people can borrow money for much cheaper, which is helpful for big-ticket purchases like cars, homes, or other items that may need to be purchased with a loan.
- Shrinks wealth gap: The value of most assets falls during deflation. Since people with more wealth are more likely to hold assets than cash, they will suffer a greater loss compared to people with less wealth. On the flip side, people with lower income and mostly cash assets (rather than stocks or bonds) will benefit from the rising value of the dollar.
- Lower wages for workers: As people hold on to their money and start to spend less, businesses also lose money. Drops in profit mean they don't have as much to pay employees, let alone offer raises.
- Higher unemployment: An increase in supply means that companies have to reduce their production of goods. Cutting down production means less labor is needed, and may lead to layoffs. In some cases factories or retail stores may permanently close. This not only hurts current workers, but it limits the pool of jobs open for people just starting to enter the workforce.
- Deflation occurs when the value of the dollar increases and the cost of goods and services drops.
- Deflation can cause an increase in unemployment figures and wage drops.
- People who are wealthy will suffer from greater losses during deflation because assets are more likely to decrease in value.
- The Federal Reserve tries to slow down deflation by increasing the money supply and encouraging spending. | https://www.thebalance.com/what-is-deflation-and-how-does-it-affect-investments-1978985 | 21 |
16 | The incredible vitality and suppleness of Chinese culture and civilization have had a profound influence on the whole of the East Asia region and beyond, in ways that no other civilization can match in global history. Although its full influence on world history was not fully felt or experienced until the last couple of centuries, the transmission and adoption of fundamental elements of Chinese culture to other regions in East Asia, specifically Korea, Japan and Vietnam, undoubtedly provide one of the most essential illustrations of the spread of civilization from an inner core locale to other neighboring and distant regions (Adler & Pouwels 214). It is fascinating how Chinese culture has been able to spread its wings beyond the East Asia region to find homage in other civilizations thousands of miles away from mainland China. This essay aims to prove that the Koreans have been greatly influenced by Chinese culture and civilization due to close geographical proximity and recurrent cultural communications between China and Korea in ancient times. Historians trace the interactions between the two countries – China and Korea – back to the Han Dynasty due to its great expansion into Manchuria and Korea (Gernet 121).
However, others are of the opinion that it is difficult to lay a definite timeline on when the two countries started interacting since Korea was greatly viewed as an extension of China during ancient times (Sterns et al 299). It is worthwhile to note that dynasties formed the basic political units in the history of ancient China, with the government being led by an emperor who, “although by no means divine, was able to inspire the loyalty of a great many talented and ambitious servants in his bureaucracy” (Adler & Pouwels 214). These bureaucracies, based more on merit than on characteristics such as family connections, played a significant role in spreading the influence of Chinese culture to other regions. Due to the increased overtures of these dynasties in attempting to conquer new regions, the Koreans and Japanese have no less understanding and perception of ancient Chinese culture than the Chinese themselves.
A majority of historians consider the Tang Dynasty (618-907) as the epitome of all Chinese civilizations throughout history (Forte 297). It was during this era that reforms intended to centralize all government functions were carried out, and a large civil service initiated to run government affairs. The Dynasty also engaged in an ambitious territorial expansion, thereby spreading its philosophies, especially the Confucian principles, further into the Diaspora.
The Tang Dynasty flourished mainly through education, agriculture, trade, craftsmanship and inventions. This particular Dynasty was able to influence other civilizations within the region by employing a friendly foreign policy as well as intense trading with over seventy countries. It is imperative to note that the Tang Dynasty was also influenced by the cultures and civilizations of the trading partners.
For example, the Chinese were able to incorporate mathematics and astronomy into their education system by interacting with trading partners during this era (Forte 299). The Tang Dynasty was once disrupted by Wu Zetian, the only distinguished female empress in the history of China. However, she was ousted, and the Dynasty continued to rule (Benn 26). Neighboring countries, specifically Korea, Vietnam and Japan, respected the Tang Dynasty and its emperors to an extent of paying them tribute (Adler & Pouwels 220).
According to Gernet, “…China’s influence in Asia [during the Tang Dynasty] was at its Zenith” (258). Various kingdoms and countries within the region paid homage to the Tang court, assimilating a multiplicity of legal and cultural elements in the process. Many historians will buy the argument that besides political domination, the Tang Dynasty also exercised a powerful cultural influence over Korea (Forte 297). For example, the Korean traditional architecture was developed through the absorption or adaptation of various cultural rudiments learned from ancient Chinese culture originating from the Han Dynasty (Sup para. 3).
It is worth noting that the Korean architecture surpassed its preceding rustic simplicity immediately after the amalgamation of the Three Kingdoms by the famous Silla Dynasty in 668 C.E. as a direct result of constant interaction between the two cultures (Sup para. 4).
The architecture influence is still felt and witnessed to date. For instance, the Pulguk temple, located in Kyonju, mirrors the marvelous architecture of the Tang Dynasty. Buddhism, introduced in China from India and adopted by Tang rulers, played an important function in the cultural transformation that took place in ancient Korea. The religious doctrines practiced by Buddhists had great ramifications on Koreans, including influencing their way of life, religious orientation, art, education, lifestyle and architecture. Indeed, Buddhism can be credited for providing the vital links of Chinese culture and civilization to the Koreans. For instance, Buddhism played a fundamental artistic influence during the period of Tang, Silla and Koryo dynasties.
The artistic themes mainly originated from India, passed through Central Asia and China before being adopted or assimilated in Korea (Hadar para. 2). The artistic expertise of making household items such as porcelains was assimilated from mainland China and mixed up with local Korean technology and expertise to occasion superb results. For example, the outstanding bluish-green porcelains made by the Koryo Dynasty were originally modeled around Chinese porcelains. However, the masses under the Tang dynasty engineered a revolt against Buddhism in 845 due to its foreign origins and teachings that went against the Chinese traditional concept of family life (Adler & Pouwels 221). The Koreans accomplished their historical duty of assimilating the entry of foreign cultural elements with traditional and inherent aspirations under the cultural manipulation of the Tang Dynasty (Sup Para.
4). In education, Tang rulers in the indigenous Koguryo kingdom had endeavored to introduce the Chinese examination system and writing style immediately before the Chinese domination over Korea started to wane during the Tang Dynasty (Stearns et al 269). However, a determined resistance to Sinification by aristocratic leaders in the peninsula led to the collapse of the plan. The bureaucratic structure of the Tang Dynasty was also adapted by the Koreans. It was during the Tang dynasty that government functions were fully structured and organized to include a civil service that was evaluated based on merit rather than social status or family linkages. Citizens aspiring for imperial offices had to first undertake competitive examinations based on Confucian classics to determine their suitability (Hearne para. 8; “Medieval China,” 14).
In Chinese, the imperial examinations were known as Jinshi. During this era of Chinese civilization, the military found themselves losing power and influence to the trained civil servants. All the above elements – bureaucratic structure, robust civil service, and education system – soon found their way into mainstream Korean culture through assimilation. Although the Koreans made several innate changes and innovations into the assimilated elements, it is safe to assume that their ancient bureaucratic structure, civil service and education system were modeled around the Chinese structures (Adler & Pouwels 221). A general assessment of the Tang Dynasty would not be complete without making reference to the invariable enrichment guaranteed by the contacts of the Tang law codes with civilizations alien to the dynasty in terms of culture and way of life. The ancient Koreans and Japanese were especially known to pay tribute to Tang courts. As such, they became accustomed to the Tang law codes that had been revised by emperor Tang Taizong, also known as the Great ancestor (“Medieval China,” 10).
Some of these law codes are still in use in China today. The legal codes served as models across East Asia, including Korea. To date, some of the Tang law codes assimilated by Koreans during Tang dynasty continue to inform legal matters in the peninsula. There were very few areas of divergence between ancient Chinese culture and civilization on the one hand and what was assimilated or absorbed by Korean culture on the other.
This is initially because the Tang dynasty was very strong, and expanded its tentacles deep into these regions using an elaborate civil service, robust trade strategies and good international relations (Benn 16). Still, there exist some areas of divergence. For instance, although the Korean spoken language was heavily influenced by the Chinese language, it still remained unique to the Koreans and genetically unrelated to Chinese (“Chinese Language Facts,” para. 5). Divergent architectural decorations and colors used in China and Korea especially during the Tang dynasty points to the fact that Koreans maintained independence when assimilating some Chinese elements.
During the Tang rule, decorations tended to be exceptionally complicated and superfluous. However, Koreans maintained the splendid exquisiteness of moderation in their application of color and decoration. Lastly, despite undertaking Confucian examinations, government positions in Korea were determined by such characteristics as birth and family connections rather than meritocracy (Stearns et al 300).
The Korean Peninsula has a long and interesting history, at times catastrophic, and it remains desolately divided to this day. Korea has witnessed constant divisions followed by reunifications over many centuries (Lee & Yi 72). Although the peninsula has been ruled by numerous kingdoms and monarchies, this section will focus on the Silla period between 668 and 935 C.
E., a period in history that was deservedly known as Korea’s Golden Age. The Silla Kingdom was located in Gyeongju, approximately 45 miles north of Busan (Freedman para.
2). It is of significance to note that the Silla Kingdom allied itself with the Tang emperors in the latter’s attempt to conquer Korea for the second time after the Han dynasty (Adler & Pouwels 222). The Tang rulers decided to distinguish the Silla Kingdom as a vassal on condition that the latter would reciprocate by paying tribute to the Tang Dynasty and its courts. However, the Chinese withdrew their military personnel in 668 C.E.
, effectively leaving the Silla rulers to run their own jurisdiction independently. The Korean culture and mode of life were heavily influenced by the much more developed and complex Chinese culture, especially during the interactions that took place between the Tang Dynasty and the Silla Kingdom. Indeed, the Koreans had already assimilated many Chinese elements long before Tang rulers withdrew from Korea in 668, as they were expected to pay tribute to the Tang Dynasty, including the courts and government structures (Lee & Yi 74). This section will focus on three critical areas of interest – religion, government, and economy – to prove that the Koreans were greatly influenced by the Chinese culture due to close geographical proximity and constant cultural communications between the two neighbors.
In religion, it is safe to postulate that the Silla period (668-935) was one of religious maturity. The period characterized the culmination of Buddhist influence in the peninsula due to close interactions with Tang China. It is imperative to note that Buddhism was also alien in China as it originated from India and the Middle East.
However, it found its way into China and later into Korea due to close geographical proximity and constant cultural communications. The Tang dynasty spread the Buddhist religious philosophy into Korea after it conquered the country for the second time. Furthermore, Korean monks and scholars journeyed to Tang China for purposes of studying Buddhist philosophy and the Confucian classics (Lee & Yi 73). On their return, the monks and students contributed immensely to the cultural development of Korea.
As such, Buddhism copiously blossomed and flourished in many regions of the Peninsula. Seondeok, Queen of Silla between 632 and 647, contributed immensely to the growth of Buddhism in Korea. Seondeok, also known as Sondok, served as the twenty-seventh ruler of the kingdom, and the first queen to ever rule Silla (Woo 27). Her reign coincided with a period of rivalry between the three kingdoms – Silla, Baekje Kingdom and Goguryeo. It was during her reign that Seon (Zen) Buddhism was fully initiated in Silla, a direct result of the interactions between her kingdom and the Tang Chinese.
She encouraged monks to travel to China so that they may have a deep understanding of the religion, and supervised the construction of temples. On their return, the monks brought many scriptures about Buddhism, further facilitating the cultural borrowing. She was also instrumental in sending numerous Hwarang warriors to China for purposes of studying martial arts.
The Hwarang were a military community of professional Buddhist warriors in the Silla and later Unified Silla kingdoms who played an influential role in Silla's victories (Eckert 37). Indeed, the Hwarang, also known as righteous soldiers, helped Silla avoid being subdued by the Tang Chinese. In government, it is imperative to note that the liaison of Koguryo, Paekche and Silla on the one hand and Tang China on the other hand during the Three Kingdoms period was predominantly based on armed conflict (Lee & Yi 73). However, contact of the Three Kingdoms with the Tang Dynasty took other forms such as diplomatic coalitions and cultural borrowing, especially after the Silla Unification. All this was made possible due to the close geographical proximity and continued communications between the two neighbors. After the Unification, Silla gained authority over a larger territory and population, and as such, a new government structure and dispensation was needed to run the administrative affairs of the peninsula.
According to Lee & Yi, “…a growing authoritarianism in the power exercised by the throne was the most important change accompanying the Silla Unification” (74). In keeping with the greatly reinforced authority of the throne, sweeping changes were made to the functions of the main organs of the central government. Importantly, however, Unified Silla retained most of the organs assimilated from the Tang Dynasty during the Three Kingdoms era (Lee & Yi 75).
For example, the administrative structure adopted by the Unified Silla was predominantly similar to that of Tang Chinese in terms of main organs such as the ministries of “military affairs, disbursements, rites, tax collections, official surveillance and justice” (Lee & Yi 75). Contact with the Tang Dynasty also introduced other political advances such as the civil service, written examinations and a more ordered government. However, Silla upheld an exclusive political mechanism referred to as the ‘bone-rank’ system.
Through this political approach, three women were able to rule the monarchy as sovereigns and safeguard stability. A council of nobles known as the hwabaek aided in the preservation of peace and stability by allowing people to appeal and participate in all government functions (Eckert et al 34). Despite encouraging students to undertake Confucian examinations, most government positions in Silla were determined by innate characteristics such as birth and family connections (Stearns et al 300).

The economic exchange between the two neighbors was implemented within the structure of Tang’s tributary system. Here, the Silk Road was fundamentally important in ensuring the exchange of goods and services between Silla and greater China. The distribution of economic exchange inarguably favored China, with much of Korea’s exports comprising raw materials, glass, silk, and handcrafted items (Lee & Yi 73).
According to the authors, “the demand for imported goods remained the stronger impetus for Korean trade with China, as many kinds of luxury fabrics and handcrafted goods were eagerly sought for consumption by members of the [Korean] aristocracy” (73). To date, many Korean fabrics, handcrafted items, and general merchandise such as porcelains bear the hallmarks of Chinese models. The Silla also built their capital, Kumsong, and other economic markets using Tang models. The Silk Road played a pivotal role in Silla’s economic exchange and development. Indeed, the road can be credited for Silla’s golden age, a period marked by numerous cultural exchanges resulting from communication and interaction between the two neighbors.
It should also be noted that the Silla Kingdom not only reaped many benefits from Tang China’s dazzling culture, politics, and economy, but also assimilated many economic practices from China, further spurring Korea’s economic wellbeing (Eckert et al 39).
From the discussion, it is clear that Chinese culture and civilization had a profound influence on Korean civilization, mainly because of the countries’ close geographical proximity and constant cultural communication in ancient times, especially during the golden era of the Tang Dynasty. Through political, military, and powerful cultural domination, the Tang Dynasty was able to penetrate the Korean socio-economic, religious, and political fabric, triggering large-scale adaptation and assimilation of Chinese elements (Eckert et al 32). For instance, An Lushan, a military leader during the Tang Dynasty, helped to protect the northeastern border from attacks after the invasion, an act that earned him the favor of Emperor Xuanzong (Benn 9). The influences discussed in this paper are many and varied. In particular, Chinese culture and civilization during the Tang Dynasty influenced Korean culture in the areas of traditional architecture, Buddhism, artistic expertise, the education and examination system, writing style, government bureaucracy, the civil service, and Tang law codes.
However, despite adopting the Chinese examination model, government positions in Korea were still determined by birth and social status rather than by merit, as was the case in Tang China. The Korean spoken language and architectural decoration also maintained their originality. The essay has also discussed three critical areas – religion, government, and economy – to demonstrate that Chinese culture and civilization indeed influenced ancient Korean culture and ways of life by virtue of the countries’ close geographical proximity and recurrent cultural communication.
Works Cited

Adler, P.J., & Pouwels, R.L. World Civilizations: To 1700. Boston, MA: Thomson Learning, Inc. 2008. ISBN: 0495502618

Benn, C. China's Golden Age: Everyday Life in the Tang Dynasty. Oxford University Press. ISBN: 0195176650

Chinese Language Facts. (n.d.). Retrieved 10 Feb 2009.

Eckert, C., Lee, K., Lew, Y., Robinson, W., & Wagner, E.W. Korea Old and New: A History. Harvard Korea Institute. 1991. ISBN: 0962771309

Freedman, J. Gyeongju, Cradle of the Great Silla Kingdom. 2010. Retrieved 11 Feb 2009.

Ancient China: Chinese Civilization from its Origins to the Tang Dynasty. Journal of the American Oriental Society 23.2 (2003): 292-302.

Gernet, J. A History of Chinese Civilization. Cambridge: Cambridge University Press. 1998. ISBN: 0521497817

Hadar, O. South Korea: Characteristics of Society under the Dynasties. 1990. Retrieved 10 Feb 2010 <http://www.country-data.com/cgi-bin/query/r-12229.html>

Hearne, C.F. Tang Dynasty History: Chinese Culture was Unparalleled in the Middle Ages. 2009. Retrieved 10 Feb 2009.

Lee, K., & Yi, K. A New History of Korea. Harvard University Press. 1984. ISBN: 067461576X

Medieval China: Sui, Tang, and Song Dynasties. (n.d.). Retrieved 10 Feb 2009.

Stearns, P.N., Adas, M., Schwartz, S.B., & Gilbert, M.J. World Civilizations: The Global Experience, Volume 1 – Beginnings to 1750 (4th Ed.). Longman. 2003. ISBN: 0321182804

Sup, Y.C. Brief History of Korean Architecture. (n.d.). Retrieved 10 Feb 2010 <gsnu.ac.kr/~mirkoh/ob1.html>

Woo, H.Y. A Review of Korean History Vol. 1: Ancient/Goryeo Era. Kyongsaewon Publishers. 2010. ISBN: 9788983410917
| https://graceplaceofwillmar.org/chinese-and-korean-civilizations-comparing-areas-of-influence-and-divergence-during-tang-and-silla-dynasties/ | 21
28 | The Xiongnu (Chinese: 匈奴; Wade–Giles: Hsiung-nu, [ɕjʊ́ŋ.nǔ]) were a tribal confederation of nomadic peoples who, according to ancient Chinese sources, inhabited the eastern Eurasian Steppe from the 3rd century BC to the late 1st century AD. Chinese sources report that Modu Chanyu, the supreme leader after 209 BC, founded the Xiongnu Empire.
After their previous rivals, the Yuezhi, migrated into Central Asia during the 2nd century BC, the Xiongnu became a dominant power on the steppes of East Asia, centred on an area known later as Mongolia. The Xiongnu were also active in areas now part of Siberia, Inner Mongolia, Gansu and Xinjiang. Their relations with adjacent Chinese dynasties to the south-east were complex, with repeated periods of conflict and intrigue, alternating with exchanges of tribute, trade and marriage treaties (heqin). During the Sixteen Kingdoms era, as one of the Five Barbarians, they founded several dynastic states in northern China, such as Former Zhao, Northern Liang and Xia.
Attempts to identify the Xiongnu with later groups of the western Eurasian Steppe remain controversial; the Scythians and Sarmatians lived concurrently to their west. The identity of the ethnic core of the Xiongnu has been the subject of varied hypotheses, because only a few words, mainly titles and personal names, were preserved in the Chinese sources. The name Xiongnu may be cognate with that of the Huns or the Huna, although this is disputed. Other proposed linguistic links, all of them also controversial, include Iranian, Mongolic, Turkic, Uralic, Yeniseian, Tibeto-Burman, or a multi-ethnic origin.
An early reference to the 匈奴 Xiongnu (the Mandarin pronunciation; notably, the Mandarin dialect now spoken in Beijing came into existence less than 1,000 years ago), or Hungnou (the Cantonese pronunciation; the initial H rather than X in Cantonese, as in some other southern Chinese dialects and in the Korean and Japanese readings of the character 匈, is closer to the Middle Chinese spoken in the ancient capital Chang'an, now Xi'an, from about 200 BCE to 900 AD), was made by the Han dynasty historian Sima Qian, who wrote about the Xiongnu in the Records of the Grand Historian (c. 100 BC). In it, the ancestor of the Xiongnu is said to have been a possible descendant of the rulers of the Xia dynasty named Chunwei. It also draws a distinct line between the settled Huaxia people (Chinese) and the pastoral nomads (Xiongnu), characterizing them as two polar groups, a civilized society versus an uncivilized one: the Hua–Yi distinction. Pre-Han sources often classify the Xiongnu as a Hu people, a blanket term for nomadic peoples; it only became an ethnonym for the Xiongnu during the Han.
Ancient China often came into contact with the nomadic Xianyun and Xirong peoples. In later Chinese historiography, some groups of these peoples were believed to be possible progenitors of the Xiongnu. These nomads had repeated military confrontations with the Shang and especially the Zhou, who often conquered and enslaved them in an expansionist drive. During the Warring States period, armies from the Qin, Zhao, and Yan states encroached upon and conquered various nomadic territories inhabited by the Xiongnu and other Hu peoples.
Sinologist Edwin Pulleyblank argued that the Xiongnu were part of a Xirong group called Yiqu, who had lived in Shaanbei and had been influenced by China for centuries before they were driven out by the Qin dynasty. Qin campaigns against the Xiongnu expanded Qin territory at the Xiongnu's expense. After the Qin unification of China, the Xiongnu remained a threat to Qin's northern border, and were especially likely to attack when they suffered natural disasters. In 215 BC, Qin Shi Huang sent General Meng Tian to conquer the Xiongnu and drive them from the Ordos Loop, which he did later that year. After the catastrophic defeat at the hands of Meng Tian, the Xiongnu leader Touman was forced to flee far into the Mongolian Plateau. The Qin empire had become a threat to the Xiongnu, which ultimately led to the reorganization of the many tribes into a confederacy.
In 209 BC, three years before the founding of Han China, the Xiongnu were brought together in a powerful confederation under a new chanyu, Modu Chanyu. This new political unity transformed them into a more formidable state by enabling the formation of larger armies and the ability to exercise better strategic coordination. The Xiongnu adopted many Chinese agricultural techniques, used slaves for heavy labor, wore silk like the Chinese, and lived in Chinese-style homes. The reason for creating the confederation remains unclear. Suggestions include the need for a stronger state to deal with the Qin unification of China, which had resulted in the loss of the Ordos region at the hands of Meng Tian, or the political crisis that overtook the Xiongnu in 215 BC when Qin armies evicted them from their pastures on the Yellow River.
After forging internal unity, Modu Chanyu expanded the empire on all sides. To the north he conquered a number of nomadic peoples, including the Dingling of southern Siberia. He crushed the power of the Donghu people of eastern Mongolia and Manchuria as well as the Yuezhi in the Hexi Corridor of Gansu, where his son, Jizhu, made a skull cup out of the Yuezhi king. Modu also reoccupied all the lands previously taken by the Qin general Meng Tian.
Under Modu's leadership, the Xiongnu threatened the Han dynasty, almost causing Emperor Gaozu, the first Han emperor, to lose his throne in 200 BC. By the time of Modu's death in 174 BC, the Xiongnu had driven the Yuezhi from the Hexi Corridor, killing the Yuezhi king in the process and asserting their presence in the Western Regions.
The Xiongnu were recognized as the most prominent of the nomads bordering the Chinese Han empire, and during early relations between the Xiongnu and the Han, the former held the balance of power. According to the Book of Han, later quoted in Duan Chengshi's ninth-century Miscellaneous Morsels from Youyang:
Also, according to the Han shu, Wang Wu (王烏) and others were sent as envoys to pay a visit to the Xiongnu. According to the customs of the Xiongnu, if the Han envoys did not remove their tallies of authority, and if they did not allow their faces to be tattooed, they could not gain entrance into the yurts. Wang Wu and his company removed their tallies, submitted to tattoo, and thus gained entry. The Shanyu looked upon them very highly.
After Modu, later leaders formed a dualistic system of political organisation, with the left and right branches of the Xiongnu divided on a regional basis. The chanyu or shanyu, a ruler equivalent to the Emperor of China, exercised direct authority over the central territory. Longcheng (蘢城) became the annual meeting place and served as the Xiongnu capital. The ruins of Longcheng were found south of Ulziit District, Arkhangai Province, in 2017.
The ruler of the Xiongnu was called the Chanyu. Under him were the Tuqi Kings. The Tuqi King of the Left was normally the heir presumptive. Next lower in the hierarchy came more officials in pairs of left and right: the guli, the army commanders, the great governors, the danghu and the gudu. Beneath them came the commanders of detachments of one thousand, of one hundred, and of ten men. This nation of nomads, a people on the march, was organized like an army.
Yap, apparently describing the early period, places the Chanyu's main camp north of Shanxi with the Tuqi King of the Left holding the area north of Beijing and the Tuqi King of the Right holding the Ordos Loop area as far as Gansu. Grousset, probably describing the situation after the Xiongnu had been driven north, places the Chanyu on the upper Orkhon River near where Genghis Khan would later establish his capital of Karakorum. The Tuqi King of the Left lived in the east, probably on the high Kherlen River. The Tuqi King of the Right lived in the west, perhaps near present-day Uliastai in the Khangai Mountains.
Marriage diplomacy with Han China
In the winter of 200 BC, following a Xiongnu siege of Taiyuan, Emperor Gaozu of Han personally led a military campaign against Modu Chanyu. At the Battle of Baideng, he was reputedly ambushed by Xiongnu cavalry. The emperor was cut off from supplies and reinforcements for seven days, only narrowly escaping capture.
The Han sent princesses to marry Xiongnu leaders in their efforts to stop the border raids. Along with arranged marriages, the Han sent gifts to bribe the Xiongnu to stop attacking. After the defeat at Pingcheng in 200 BC, the Han emperor abandoned a military solution to the Xiongnu threat. Instead, in 198 BC, the courtier Liu Jing was dispatched for negotiations. The peace settlement eventually reached between the parties included a Han princess given in marriage to the chanyu (called heqin) (Chinese: 和親; lit. 'harmonious kinship'); periodic gifts to the Xiongnu of silk, distilled beverages and rice; equal status between the states; and a boundary wall as mutual border.
This first treaty set the pattern for relations between the Han and the Xiongnu for sixty years. Up to 135 BC, the treaty was renewed nine times, each time with an increase in the "gifts" to the Xiongnu Empire. In 192 BC, Modu even asked for the hand of Emperor Gaozu's widow, Empress Lü Zhi. His son and successor, the energetic Jiyu, known as the Laoshang Chanyu, continued his father's expansionist policies. Laoshang succeeded in negotiating with Emperor Wen terms for the maintenance of a large-scale, government-sponsored market system.
While the Xiongnu benefited handsomely, from the Chinese perspective marriage treaties were costly, very humiliating and ineffective. Laoshang Chanyu showed that he did not take the peace treaty seriously. On one occasion his scouts penetrated to a point near Chang'an. In 166 BC he personally led 140,000 cavalry to invade Anding, reaching as far as the imperial retreat at Yong. In 158 BC, his successor sent 30,000 cavalry to attack Shangdang and another 30,000 to Yunzhong.
The Xiongnu also practiced marriage alliances with Han dynasty officers and officials who defected to their side. The older sister of the Chanyu (the Xiongnu ruler) was married to the Xiongnu General Zhao Xin, the Marquis of Xi who was serving the Han dynasty. The daughter of the Chanyu was married to the Han Chinese General Li Ling after he surrendered and defected. Another Han Chinese General who defected to the Xiongnu was Li Guangli, general in the War of the Heavenly Horses, who also married a daughter of the Chanyu. The Han Chinese diplomat Su Wu married a Xiongnu woman given by Li Ling when he was arrested and taken captive. Han Chinese explorer Zhang Qian married a Xiongnu woman and had a child with her when he was taken captive by the Xiongnu.
When the Eastern Jin dynasty ended, the Xianbei Northern Wei received the Han Chinese Jin prince Sima Chuzhi 司馬楚之 as a refugee. A Northern Wei Xianbei princess married Sima Chuzhi, giving birth to Sima Jinlong 司馬金龍. Northern Liang Xiongnu King Juqu Mujian's daughter married Sima Jinlong.
The Han dynasty made preparations for war when the Han Emperor Wu dispatched the Han Chinese explorer Zhang Qian to explore the mysterious kingdoms to the west and to form an alliance with the Yuezhi people in order to combat the Xiongnu. During this time Zhang married a Xiongnu wife, who bore him a son, and gained the trust of the Xiongnu leader. While Zhang Qian did not succeed in this mission, his reports of the west provided even greater incentive to counter the Xiongnu hold on westward routes out of China, and the Chinese prepared to mount a large scale attack using the Northern Silk Road to move men and material.
While Han China was making preparations for a military confrontation since the reign of Emperor Wen, the break did not come until 133 BC, following an abortive trap to ambush the chanyu at Mayi. By that point the empire was consolidated politically, militarily and economically, and was led by an adventurous pro-war faction at court. In that year, Emperor Wu reversed the decision he had made the year before to renew the peace treaty.
Full-scale war broke out in autumn 129 BC, when 40,000 Chinese cavalry made a surprise attack on the Xiongnu at the border markets. In 127 BC, the Han general Wei Qing retook the Ordos. In 121 BC, the Xiongnu suffered another setback when Huo Qubing led a force of light cavalry westward out of Longxi and within six days fought his way through five Xiongnu kingdoms. The Xiongnu Hunye king was forced to surrender with 40,000 men. In 119 BC both Huo and Wei, each leading 50,000 cavalrymen and 100,000 footsoldiers (in order to keep up with the mobility of the Xiongnu, many of the non-cavalry Han soldiers were mobile infantrymen who traveled on horseback but fought on foot), and advancing along different routes, forced the chanyu and his Xiongnu court to flee north of the Gobi Desert. Major logistical difficulties limited the duration and long-term continuation of these campaigns. According to the analysis of Yan You (嚴尤), the difficulties were twofold. Firstly there was the problem of supplying food across long distances. Secondly, the weather in the northern Xiongnu lands was difficult for Han soldiers, who could never carry enough fuel. According to official reports, the Xiongnu lost 80,000 to 90,000 men, and out of the 140,000 horses the Han forces had brought into the desert, fewer than 30,000 returned to China.
In 104 and 102 BC, the Han fought and won the War of the Heavenly Horses against the Kingdom of Dayuan. As a result, the Han gained many Ferghana horses, which further aided them in their battles against the Xiongnu. As a result of these campaigns, the Chinese controlled the strategic region from the Ordos and Gansu corridor to Lop Nor. They succeeded in separating the Xiongnu from the Qiang peoples to the south, and also gained direct access to the Western Regions. Under this strong Chinese pressure, the Xiongnu became unstable and were no longer a threat to the Han Chinese.
Ban Chao, Protector General (都護; Duhu) of the Han dynasty, embarked with an army of 70,000 soldiers in a campaign against the Xiongnu remnants who were harassing the trade route now known as the Silk Road. His successful military campaign saw the subjugation of one Xiongnu tribe after another. Ban Chao also sent an envoy named Gan Ying to Daqin (Rome). Ban Chao was created the Marquess of Dingyuan (定遠侯, i.e., "the Marquess who stabilized faraway places") for his services to the Han Empire and returned to the capital Luoyang at the age of 70 years and died there in the year 102. Following his death, the power of the Xiongnu in the Western Regions increased again, and the emperors of subsequent dynasties did not reach as far west until the Tang dynasty.
Xiongnu Civil War (60–53 BC)
When a Chanyu died, power could pass to his younger brother if his son was not of age. This system, which can be compared to Gaelic tanistry, normally kept an adult male on the throne, but could cause trouble in later generations when there were several lineages that might claim the throne. When the 12th Chanyu died in 60 BC, power was taken by Woyanqudi, a grandson of the 12th Chanyu's cousin. Being something of a usurper, he tried to put his own men in power, which only increased the number of his enemies. The 12th Chanyu's son fled east and, in 58 BC, revolted. Few would support Woyanqudi and he was driven to suicide, leaving the rebel son, Huhanye, as the 14th Chanyu. The Woyanqudi faction then set up his brother, Tuqi, as Chanyu (58 BC). In 57 BC three more men declared themselves Chanyu. Two dropped their claims in favor of the third who was defeated by Tuqi in that year and surrendered to Huhanye the following year. In 56 BC Tuqi was defeated by Huhanye and committed suicide, but two more claimants appeared: Runzhen and Huhanye's elder brother Zhizhi Chanyu. Runzhen was killed by Zhizhi in 54 BC, leaving only Zhizhi and Huhanye. Zhizhi grew in power, and, in 53 BC, Huhanye moved south and submitted to the Chinese. Huhanye used Chinese support to weaken Zhizhi, who gradually moved west. In 49 BC, a brother to Tuqi set himself up as Chanyu and was killed by Zhizhi. In 36 BC, Zhizhi was killed by a Chinese army while trying to establish a new kingdom in the far west near Lake Balkhash.
Tributary relations with the Han
In 53 BC Huhanye (呼韓邪) decided to enter into tributary relations with Han China. The original terms insisted on by the Han court were that, first, the Chanyu or his representatives should come to the capital to pay homage; secondly, the Chanyu should send a hostage prince; and thirdly, the Chanyu should present tribute to the Han emperor. The political status of the Xiongnu in the Chinese world order was reduced from that of a "brotherly state" to that of an "outer vassal" (外臣). During this period, however, the Xiongnu maintained political sovereignty and full territorial integrity. The Great Wall of China continued to serve as the line of demarcation between Han and Xiongnu.
Huhanye sent his son, the "wise king of the right" Shuloujutang, to the Han court as hostage. In 51 BC he personally visited Chang'an to pay homage to the emperor on the Lunar New Year. In the same year, another envoy Qijushan (稽居狦) was received at the Ganquan Palace in the north-west of modern Shanxi. On the financial side, Huhanye was amply rewarded in large quantities of gold, cash, clothes, silk, horses and grain for his participation. Huhanye made two further homage trips, in 49 BC and 33 BC; with each one the imperial gifts were increased. On the last trip, Huhanye took the opportunity to ask to be allowed to become an imperial son-in-law. As a sign of the decline in the political status of the Xiongnu, Emperor Yuan refused, giving him instead five ladies-in-waiting. One of them was Wang Zhaojun, famed in Chinese folklore as one of the Four Beauties.
When Zhizhi learned of his brother's submission, he also sent a son to the Han court as hostage in 53 BC. Then twice, in 51 BC and 50 BC, he sent envoys to the Han court with tribute. But having failed to pay homage personally, he was never admitted to the tributary system. In 36 BC, a junior officer named Chen Tang, with the help of Gan Yanshou, protector-general of the Western Regions, assembled an expeditionary force that defeated him at the Battle of Zhizhi and sent his head as a trophy to Chang'an.
Tributary relations were discontinued during the reign of Huduershi (18 AD–48), corresponding to the political upheavals of the Xin Dynasty in China. The Xiongnu took the opportunity to regain control of the western regions, as well as of neighboring peoples such as the Wuhuan. In 24 AD, Huduershi even talked about reversing the tributary system.
Southern Xiongnu and Northern Xiongnu
The Xiongnu's new power was met with a policy of appeasement by Emperor Guangwu. At the height of his power, Huduershi even compared himself to his illustrious ancestor, Modu. Due to growing regionalism among the Xiongnu, however, Huduershi was never able to establish unquestioned authority. In contravention of a principle of fraternal succession established by Huhanye, Huduershi designated his son Punu as heir-apparent. However, as the eldest son of the preceding chanyu, Bi (Pi)—the Rizhu King of the Right—had a more legitimate claim. Consequently, Bi refused to attend the annual meeting at the chanyu's court. Nevertheless, in 46 AD, Punu ascended the throne.
In 48 AD, a confederation of eight Xiongnu tribes in Bi's power base in the south, with a military force totalling 40,000 to 50,000 men, seceded from Punu's kingdom and acclaimed Bi as chanyu. This kingdom became known as the Southern Xiongnu.
The Northern Xiongnu
The rump kingdom under Punu, around the Orkhon (modern north central Mongolia) became known as the Northern Xiongnu. Punu, who became known as the Northern Chanyu, began to put military pressure on the Southern Xiongnu.
In 49 AD, Tsi Yung, a Han governor of Liaodong, allied with the Wuhuan and Xianbei and attacked the Northern Xiongnu. The Northern Xiongnu suffered two major defeats: one at the hands of the Xianbei in 85 AD, and one by the Han during the Battle of Ikh Bayan in 89 AD. The northern chanyu fled to the north-west with his subjects.
According to the fifth-century Book of Wei, the remnants of Northern Chanyu's tribe settled as Yueban (悅般), near Kucha and subjugated the Wusun; while the rest fled across the Altai mountains towards Kangju in Transoxania. It states that this group later became the Hephthalites.
The Southern Xiongnu
Coincidentally, the Southern Xiongnu were plagued by natural disasters and misfortunes—in addition to the threat posed by Punu. Consequently, in 50 AD, the Southern Xiongnu submitted to tributary relations with Han China. The system of tribute was considerably tightened by the Han, to keep the Southern Xiongnu under control. The chanyu was ordered to establish his court in the Meiji district of Xihe Commandery and the Southern Xiongnu were resettled in eight frontier commanderies. At the same time, large numbers of Chinese were also resettled in these commanderies, in mixed Han-Xiongnu settlements. Economically, the Southern Xiongnu became reliant on trade with the Han.
Tensions were evident between Han settlers and practitioners of the nomadic way of life. Thus, in 94, Anguo Chanyu joined forces with newly subjugated Xiongnu from the north and started a large scale rebellion against the Han.
During the late 2nd century AD, the southern Xiongnu were drawn into the rebellions then plaguing the Han court. In 188, the chanyu was murdered by some of his own subjects for agreeing to send troops to help the Han suppress a rebellion in Hebei—many of the Xiongnu feared that it would set a precedent for unending military service to the Han court. The murdered chanyu's son Yufuluo, entitled Chizhisizhu (持至尸逐侯), succeeded him, but was then overthrown by the same rebellious faction in 189. He travelled to Luoyang (the Han capital) to seek aid from the Han court, but at this time the Han court was in disorder from the clash between Grand General He Jin and the eunuchs, and the intervention of the warlord Dong Zhuo. The chanyu had no choice but to settle down with his followers in Pingyang, a city in Shanxi. In 195, he died and was succeeded as chanyu by his brother Huchuquan Chanyu.
In 215–216 AD, the warlord-statesman Cao Cao detained Huchuquan Chanyu in the city of Ye, and divided his followers in Shanxi into five divisions: left, right, south, north and centre. This was aimed at preventing the exiled Xiongnu in Shanxi from engaging in rebellion, and also allowed Cao Cao to use the Xiongnu as auxiliaries in his cavalry.
Later the Xiongnu aristocracy in Shanxi changed their surname from Luanti to Liu for prestige reasons, claiming that they were related to the Han imperial clan through the old intermarriage policy. After Huchuquan, the Southern Xiongnu were partitioned into five local tribes. Each local chief was under the "surveillance of a chinese resident", while the shanyu was in "semicaptivity at the imperial court."
Later Xiongnu states in northern China
The Southern Xiongnu that settled in northern China during the Eastern Han dynasty retained their tribal affiliation and political organization and played an active role in Chinese politics. During the Sixteen Kingdoms (304–439 CE), Southern Xiongnu leaders founded or ruled several kingdoms, including Liu Yuan's Han Zhao Kingdom (also known as Former Zhao), Helian Bobo's Xia, and Juqu Mengxun's Northern Liang.
Fang Xuanling's Book of Jin lists nineteen Xiongnu tribes: Tuge (屠各), Xianzhi (鮮支), Koutou (寇頭), Wutan (烏譚), Chile (赤勒), Hanzhi (捍蛭), Heilang (黑狼), Chisha (赤沙), Yugang (鬱鞞), Weisuo (萎莎), Tutong (禿童), Bomie (勃蔑), Qiangqu (羌渠), Helai (賀賴), Zhongqin (鐘跂), Dalou (大樓), Yongqu (雍屈), Zhenshu (真樹) and Lijie (力羯).
Former Zhao state (304–329)
- Han Zhao dynasty (304–318)
In 304, Liu Yuan became Chanyu of the Five Hordes. In 308, he declared himself emperor and founded the Han Zhao Dynasty. In 311, his son and successor Liu Cong captured Luoyang, and with it the Emperor Huai of Jin China.
In 316, the Emperor Min of Jin China was captured in Chang'an. Both emperors were humiliated as cupbearers in Linfen before being executed in 313 and 318.
- The reign of Liu Yao (318–329)
In 318, after suppressing a coup by a powerful minister in the Xiongnu-Han court (in which the emperor and a large proportion of the aristocracy were massacred), the Xiongnu prince Liu Yao moved the Xiongnu-Han capital from Pingyang to Chang'an and renamed the dynasty Zhao. Liu Yuan had named the empire Han to create a link with the Han dynasty, from which he claimed descent through a princess, but Liu Yao felt that it was time to end the association with Han and explicitly restore the link to the great Xiongnu chanyu Maodun, and therefore decided to change the name of the state. (This was not a break from Liu Yuan, however, as Liu Yao continued to honor Liu Yuan and Liu Cong posthumously; the state is hence known to historians collectively as Han Zhao.)
However, the eastern part of north China came under the control of a rebel Xiongnu-Han general of Jie ancestry named Shi Le. Liu Yao and Shi Le fought a long war until 329, when Liu Yao was captured in battle and executed. Chang'an fell to Shi Le soon after, and the Xiongnu dynasty was wiped out. North China was ruled by Shi Le's Later Zhao dynasty for the next 20 years.
However, the "Liu" Xiongnu remained active in the north for at least another century.
Tiefu and Xia (260–431)
The northern Tiefu branch of the Xiongnu gained control of the Inner Mongolian region in the 10 years between the conquest of the Tuoba Xianbei state of Dai by the Former Qin empire in 376, and its restoration in 386 as the Northern Wei. After 386, the Tiefu were gradually destroyed by or surrendered to the Tuoba, with the submitting Tiefu becoming known as the Dugu. Liu Bobo, a surviving prince of the Tiefu fled to the Ordos Loop, where he founded a state called the Xia (thus named because of the Xiongnu's supposed ancestry from the Xia dynasty) and changed his surname to Helian (赫連). The Helian-Xia state was conquered by the Northern Wei in 428–31, and the Xiongnu thenceforth effectively ceased to play a major role in Chinese history, assimilating into the Xianbei and Han ethnicities.
Tongwancheng (meaning "Unite All Nations") was the capital of the Xia (Sixteen Kingdoms), whose rulers claimed descent from Modu Chanyu.
The ruined city was discovered in 1996 and the State Council designated it as a cultural relic under top state protection. The repair of the Yong'an Platform, where Helian Bobo, emperor of the Da Xia regime, reviewed parading troops, has been finished and restoration on the 31-meter-tall turret follows.
Juqu and Northern Liang (401–460)
The Juqu were a branch of the Xiongnu. Their leader Juqu Mengxun took over the Northern Liang by overthrowing the former puppet ruler Duan Ye. By 439, the Juqu power was destroyed by the Northern Wei. Their remnants were then settled in the city of Gaochang before being destroyed by the Rouran.
The Xiongnu confederation was unusually long-lived for a steppe empire. The purpose of raiding China was not simply for goods, but to force the Chinese to pay regular tribute. The power of the Xiongnu ruler was based on his control of Chinese tribute which he used to reward his supporters. The Han and Xiongnu empires rose at the same time because the Xiongnu state depended on Chinese tribute. A major Xiongnu weakness was the custom of lateral succession. If a dead ruler's son was not old enough to take command, power passed to the late ruler's brother. This worked in the first generation but could lead to civil war in the second generation. The first time this happened, in 60 BC, the weaker party adopted what Barfield calls the 'inner frontier strategy.' They moved south and submitted to China and then used Chinese resources to defeat the Northern Xiongnu and re-establish the empire. The second time this happened, about 47 AD, the strategy failed. The southern ruler was unable to defeat the northern ruler and the Xiongnu remained divided.
Pronunciation of 匈:
- Preclassic Old Chinese: sŋoŋ
- Classic Old Chinese: ŋ̊oŋ
- Postclassic Old Chinese: hoŋ
The Chinese name for the Xiongnu was a pejorative term in itself, as the characters (匈奴) have the meaning of "fierce slave". (The Chinese characters are pronounced as Xiōngnú [ɕjʊ́ŋnǔ] in modern Mandarin Chinese.)
There are several theories on the ethnolinguistic identity of the Xiongnu.
The sound of the first Chinese character (匈) in the name has been reconstructed as /qʰoŋ/ in Old Chinese. This sound has a possible similarity to the name "Hun" in European languages. The second character (奴) means slave and it appears to have no parallel in Western terminology. Whether the similarity is evidence of kinship or a mere coincidence is hard to tell. It could lend credence to the theory that the Huns were in fact the descendants of the Northern Xiongnu who migrated westward, or it could lend credence to the theory that the Huns were using a name which they borrowed from the Northern Xiongnu, or it could lend credence to the theory that the Xiongnu made up a part of the Hun confederation.
The Xiongnu-Hun hypothesis was originally proposed by the 18th-century French historian Joseph de Guignes, who noticed that ancient Chinese scholars had referred to members of tribes which were associated with the Xiongnu by names which were similar to the name "Hun", albeit with varying Chinese characters. Étienne de la Vaissière has shown that, in the Sogdian script used in the so-called "Sogdian Ancient Letters", both the Xiongnu and the Huns were referred to as the γwn (xwn), which indicates that the two names were synonymous. Although the theory that the Xiongnu were the precursors of the Huns as they were later known in Europe is now accepted by many scholars, it has yet to become a consensus view. The identification with the Huns may either be incorrect or it may be an oversimplification (as would appear to be the case with a proto-Mongol people, the Rouran, who have sometimes been linked to the Avars of Central Europe).
Harold Walter Bailey proposed an Iranian origin of the Xiongnu, recognizing all of the earliest Xiongnu names of the 2nd century BC as being of the Iranian type. This theory is supported by turkologist Henryk Jankowski. Central Asian scholar Christopher I. Beckwith notes that the Xiongnu name could be a cognate of Scythian, Saka and Sogdia, corresponding to a name for Northern Iranians. According to Beckwith the Xiongnu could have contained a leading Iranian component when they started out, but more likely they had earlier been subjects of an Iranian people and learned the Iranian nomadic model from them.
In the 1994 UNESCO-published History of Civilizations of Central Asia, its editor János Harmatta claims that the royal tribes and kings of the Xiongnu bore Iranian names, that all Xiongnu words noted by the Chinese can be explained from a Scythian language, and that it is therefore clear that the majority of Xiongnu tribes spoke an Eastern Iranian language.
Mongolian and other scholars have suggested that the Xiongnu spoke a language related to the Mongolic languages. Mongolian archaeologists proposed that the Slab Grave Culture people were the ancestors of the Xiongnu, and some scholars have suggested that the Xiongnu may have been the ancestors of the Mongols. According to the "Book of Song", the Rourans, whom the Book of Wei identified as offspring of the Proto-Mongolic Donghu people, possessed the alternative name(s) 大檀 Dàtán "Tatar" and/or 檀檀 Tántán "Tartar", and according to the Book of Liang, "they also constituted a separate branch of the Xiongnu"; Nikita Bichurin considered the Xiongnu and Xianbei to be two subgroups (or dynasties) of one and the same ethnicity. However, Chinese chroniclers routinely ascribed Xiongnu origins to various nomadic groups: for example, Xiongnu ancestry was ascribed to the Turkic-speaking Göktürks and Tiele as well as the Para-Mongolic-speaking Kumo Xi and Khitans.
Genghis Khan referred to the time of Modu Chanyu as "the remote times of our Chanyu" in his letter to the Daoist Qiu Chuji. The sun and moon symbol of the Xiongnu discovered by archaeologists is similar to the Mongolian Soyombo symbol.
Proponents of a Turkic language theory include E.H. Parker, Jean-Pierre Abel-Rémusat, Julius Klaproth, Kurakichi Shiratori, Gustaf John Ramstedt, Annemarie von Gabain and Omeljan Pritsak. Some sources say the ruling class was proto-Turkic. Craig Benjamin sees the Xiongnu as either proto-Turks or proto-Mongols who possibly spoke a language related to the Dingling.
Lajos Ligeti was the first to suggest that the Xiongnu spoke a Yeniseian language. In the early 1960s Edwin Pulleyblank was the first to expand upon this idea with credible evidence. In 2000, Alexander Vovin reanalyzed Pulleyblank's argument and found further support for it by utilizing the most recent reconstruction of Old Chinese phonology by Starostin and Baxter and a single Chinese transcription of a sentence in the language of the Jie people, a member tribe of the Xiongnu Confederacy. Previous Turkic interpretations of that sentence do not match the Chinese translation as precisely as a reading using Yeniseian grammar. Pulleyblank and D. N. Keightley asserted that the Xiongnu titles "were originally Siberian words but were later borrowed by the Turkic and Mongolic peoples". The Xiongnu language gave the later Turkic and Mongolian empires a number of important culture words: Turkic tängri and Mongolian tenggeri, for instance, derive from the Xiongnu word for "heaven", chengli. Titles such as tarqan, tegin and kaghan were also inherited from the Xiongnu language and are probably of Yeniseian origin; the Xiongnu word for "heaven", for example, is theorized to come from Proto-Yeniseian tɨŋVr. According to Edwin G. Pulleyblank, the existence of initial r- and l- and of initial clusters in Xiongnu makes it unlikely that it was an Altaic language, and many Xiongnu words match Yeniseian languages; the simplest way to explain this is that the Xiongnu spoke a Yeniseian language and that the Turkic and Mongolic peoples inherited elements from them. Haplogroup Q, found among the Xiongnu, is also found in the Ket people at approximately 94% of the population. Many Xiongnu words seem to have cognates in Yeniseian languages, such as Xiongnu "sakdak" 'boot' and Ket "saagdi" 'boot', or Xiongnu kʷala 'son' and Ket qalek 'grandson'.
The word "ket" has also been compared to the Proto Yeniseian term "keʔt" 'person'
|dar 'north'||tɨl 'lower reachers of the Yenisei, north'||tɨr|
|qaa/gaa 'ruler'||qɨj 'ruler'||kij|
|qaʔ 'great'||qɛʔ 'big'||qɛʔ||χɛʔ|
According to Vovin, there was a horse name, "dajge", that seems to use the Yeniseian third-person possessive prefix d-.
According to Vovin, the etymology for "prince" has a problem with the vowel correspondence in the first syllable; however, this does not invalidate it completely.
Since the early 19th century, a number of Western scholars have proposed a connection between various language families or subfamilies and the language or languages of the Xiongnu. Albert Terrien de Lacouperie considered them to be multi-component groups. Many scholars believe the Xiongnu confederation was a mixture of different ethno-linguistic groups, and that their main language (as represented in the Chinese sources) and its relationships have not yet been satisfactorily determined. Kim rejects "old racial theories or even ethnic affiliations" in favour of the "historical reality of these extensive, multiethnic, polyglot steppe empires".
Chinese sources link the Tiele people and Ashina to the Xiongnu, not all Turkic peoples. According to the Book of Zhou and the History of the Northern Dynasties, the Ashina clan was a component of the Xiongnu confederation, but this connection is disputed, and according to the Book of Sui and the Tongdian, they were "mixed nomads" (traditional Chinese: 雜胡; simplified Chinese: 杂胡; pinyin: zá hú) from Pingliang. The Ashina and Tiele may have been separate ethnic groups who mixed with the Xiongnu. Indeed, Chinese sources link many nomadic peoples (hu; see Wu Hu) on their northern borders to the Xiongnu, just as Greco-Roman historiographers called Avars and Huns "Scythians". The Greek cognate of Tourkia (Greek: Τουρκία) was used by the Byzantine emperor and scholar Constantine VII Porphyrogenitus in his book De Administrando Imperio, though in his use, "Turks" always referred to Magyars. Such archaizing was a common literary topos, and implied similar geographic origins and nomadic lifestyle but not direct filiation.
Some Uyghurs claimed descent from the Xiongnu (according to Chinese history Weishu, the founder of the Uyghur Khaganate was descended from a Xiongnu ruler), but many contemporary scholars do not consider the modern Uyghurs to be of direct linear descent from the old Uyghur Khaganate because modern Uyghur language and Old Uyghur languages are different. Rather, they consider them to be descendants of a number of people, one of them the ancient Uyghurs.
In various ancient inscriptions on monuments of Munmu of Silla, it is recorded that King Munmu had Xiongnu ancestry. According to several historians, it is possible that some tribes within the Xiongnu confederation were of Koreanic origin. Some Korean researchers also point out that the grave goods of Silla and of the eastern Xiongnu are alike.
The original geographic location of the Xiongnu is disputed among steppe archaeologists. Since the 1960s, archaeologists have attempted to trace the geographic origin of the Xiongnu through analysis of Early Iron Age burial constructions. No region has been shown to have mortuary practices that clearly match those of the Xiongnu.
In the 1920s, Pyotr Kozlov's excavations of the royal tombs at the Noin-Ula burial site in northern Mongolia that date to around the first century CE provided a glimpse into the lost world of the Xiongnu. Other archaeological sites have been unearthed in Inner Mongolia. Those include the Ordos culture of Inner Mongolia, which has been identified as a Xiongnu culture. Sinologist Otto Maenchen-Helfen has said that depictions of the Xiongnu of Transbaikalia and the Ordos show individuals with "Europoid" features. Iaroslav Lebedynsky said that Europoid depictions in the Ordos region should be attributed to a "Scythian affinity".
Portraits found in the Noin-Ula excavations demonstrate other cultural evidences and influences, showing that Chinese and Xiongnu art have influenced each other mutually. Some of these embroidered portraits in the Noin-Ula kurgans also depict the Xiongnu with long braided hair with wide ribbons, which is seen to be identical with the Ashina clan hair-style. Well-preserved bodies in Xiongnu and pre-Xiongnu tombs in the Mongolian Republic and southern Siberia show both Mongoloid and Caucasian features.
Analysis of skeletal remains from some sites attributed to the Xiongnu provides an identification of dolichocephalic Mongoloid, ethnically distinct from neighboring populations in present-day Mongolia. Russian and Chinese anthropological and craniofacial studies show that the Xiongnu were physically very heterogenous, with six different population clusters showing different degrees of Mongoloid and Caucasoid physical traits.
Presently, there exist four fully excavated and well documented cemeteries: Ivolga, Dyrestui, Burkhan Tolgoi, and Daodunzi. Additionally thousands of tombs have been recorded in Transbaikalia and Mongolia. The Tamir 1 excavation site from a 2005 Silkroad Arkanghai Excavation Project is the only Xiongnu cemetery in Mongolia to be fully mapped in scale. Tamir 1 was located on Tamiryn Ulaan Khoshuu, a prominent granitic outcrop near other cemeteries of the Neolithic, Bronze Age and Mongol periods. Important finds at the site included a lacquer bowl, glass beads and three TLV mirrors. Archaeologists from this project believe that these artifacts paired with the general richness and size of the graves suggests that this cemetery was for more important or wealthy Xiongnu individuals.
The TLV mirrors are of particular interest. Three mirrors were acquired from three different graves at the site. The mirror found at feature 160 is believed to be a low-quality, local imitation of a Han mirror, while the whole mirror found at feature 100 and fragments of a mirror found at feature 109 are believed to belong to the classical TLV mirrors and date back to the Xin Dynasty or the early to middle Eastern Han period. The archaeologists have chosen to, for the most part, refrain from positing anything about Han-Xiongnu relations based on these particular mirrors. However, they were willing to mention the following:
"There is no clear indication of the ethnicity of this tomb occupant, but in a similar brick-chambered tomb of the late Eastern Han period at the same cemetery, archaeologists discovered a bronze seal with the official title that the Han government bestowed upon the leader of the Xiongnu. The excavators suggested that these brick chamber tombs all belong to the Xiongnu (Qinghai 1993)."
Classifications of these burial sites make a distinction between two prevailing types of burial: "(1) monumental ramped terrace tombs which are often flanked by smaller "satellite" burials and (2) 'circular' or 'ring' burials." Some scholars consider this a division between "elite" graves and "commoner" graves. Other scholars find this division too simplistic and not evocative of a true distinction, because it shows "ignorance of the nature of the mortuary investments and typically luxuriant burial assemblages [and does not account for] the discovery of other lesser interments that do not qualify as either of these types."
A genetic study published in the American Journal of Human Genetics in July 2003 examined the remains of 62 individuals buried between the 3rd century BC and the 2nd century AD at the Xiongnu necropolis at Egyin Gol in northern Mongolia. The examined individuals were found to be primarily of East Asian ancestry. A genetic study published in the American Journal of Physical Anthropology in October 2006 detected significant genetic continuity between the examined individuals at Egyin Gol and modern Mongols.
Lihongjie (2012) analyzed the Y-DNA of samples from a 2nd- or 1st-century BCE cemetery at Heigouliang in Xinjiang—which is believed to have been a summer palace for Xiongnu kings. The Y-DNA of 12 men excavated from the site belonged to haplogroup Q—either Q-MEH2 (Q1a) or Q-M378 (Q1b). The Q-M378 men among them were regarded as hosts of the tombs; half of the Q-MEH2 men appeared to be hosts and the other half as sacrificial victims. Likewise, L. L. Kang et al. (2013), found that three samples from a Xiongnu site in Barkol, Xinjiang belonged to Q-M3 (Q1a2a1a1).
A genetic study published in the American Journal of Physical Anthropology in July 2010 analyzed three individuals buried at an elite Xiongnu cemetery in Duurlig Nars of Northeast Mongolia around 0 AD. One male carried the paternal haplogroup C3 and the maternal haplogroup D4. The female also carried the maternal haplogroup D4. The third individual, a male, carried the paternal haplogroup R1a1 and the maternal haplogroup U2e1. C3 and D4 are both common in Northeast Asia, R1a1 is common in Eurasia while U2e1 is a West Eurasian lineage.
A genetic study published in Nature in May 2018 examined the remains of five Xiongnu. The four samples of Y-DNA extracted belonged to haplogroups R1, R1b, O3a and O3a3b2, while the five samples of mtDNA extracted belonged to haplogroups D4b2b4, N9a2a, G3a3, D4a6 and D4b2b2b. The examined Xiongnu were found to be of mixed East Asian and West Eurasian origin, and to have had a larger amount of East Asian ancestry than neighboring Sakas, Wusun and Kangju. The evidence suggested that the Huns emerged through westward migrations of East Asian nomads (especially Xiongnu tribal members) and subsequent admixture between them and Sakas.
A genetic study published in Scientific Reports in November 2019 examined the remains of three individuals buried at Hunnic cemeteries in the Carpathian Basin in the 5th century AD. The results from the study supported the theory that the Huns were descended from the Xiongnu.
A genetic study published in Human Genetics in July 2020, which examined the remains of 52 individuals excavated from the Tamir Ulaan Khoshuu cemetery in Mongolia, proposes that the ancestors of the Xiongnu were an admixture of Scythians and Siberians and supports the idea that the Huns are their descendants.
According to the Book of Han, the Xiongnu called Heaven (天) "Chēnglí" (撐犁), a Chinese transcription of Tengri.
Within Xiongnu culture more variety is visible from site to site than from "era" to "era," in terms of the Chinese chronology, yet all form a whole that is distinct from that of the Han and other peoples of the non-Chinese north. In some instances iconography cannot be used as the main cultural identifier, because art depicting animal predation is common among the steppe peoples. An example of animal predation associated with Xiongnu culture is a tiger carrying dead prey. A similar image appears in work from Maoqinggou, a site which is presumed to have been under Xiongnu political control but is still clearly non-Xiongnu; there, the prey is replaced by an extension of the tiger's foot. The Maoqinggou work also displays a lower level of execution, being done in a rounder, less detailed style. In its broadest sense, Xiongnu iconography of animal predation includes examples such as the gold headdress from Aluchaideng and gold earrings with a turquoise and jade inlay discovered in Xigouban, Inner Mongolia.
Xiongnu art is harder to distinguish from Saka or Scythian art. There was a similarity in stylistic execution, but Xiongnu art and Saka art often differed in terms of iconography. Saka art does not appear to have included predation scenes, especially with dead prey, or same-animal combat. Additionally, Saka art included elements not common to Xiongnu iconography, such as a winged, horned horse. The two cultures also used two different bird heads. Xiongnu depictions of birds tend to have a moderate eye and beak and to have ears, while Saka birds have a pronounced eye and beak and no ears (pp. 102–103). Some scholars claim these differences are indicative of cultural differences. Scholar Sophia-Karin Psarras claims that Xiongnu images of animal predation, specifically tiger plus prey, are spiritual, representative of death and rebirth, and that same-animal combat is representative of the acquisition or maintenance of power (pp. 102–103).
Rock art and writing
Excavations conducted between 1924 and 1925 in the Noin-Ula kurgans produced objects bearing over twenty carved characters, which were either identical or very similar to the runic letters of the Old Turkic alphabet discovered in the Orkhon Valley. From this, some scholars hold that the Xiongnu had a script similar to the Eurasian runiform scripts and that this alphabet itself served as the basis for the ancient Turkic writing.
- 2nd century BC – 2nd century AD characters of Hun-Xianbei script (Mongolia and Inner Mongolia), N. Ishjamts, "Nomads In Eastern Central Asia", in the History of civilizations of Central Asia, Volume 2, Fig 5, p. 166, UNESCO Publishing, 1996, ISBN 92-3-102846-4
Chinese sources indicate that the Xiongnu did not have an ideographic form of writing like Chinese, but in the 2nd century BC a renegade Chinese dignitary, Yue, "taught the Shanyu to write official letters to the Chinese court on a wooden tablet 31 cm long, and to use a seal and large-sized folder." The same sources relate that when the Xiongnu noted down something or transmitted a message, they made cuts on a piece of wood ('ke-mu'), and they also mention a "Hu script". At Noin-Ula and other Xiongnu burial sites in Mongolia and the region north of Lake Baikal, objects bearing over 20 carved characters were discovered. Most of these characters are either identical or very similar to letters of the Old Turkic alphabet of the Early Middle Ages found on the Eurasian steppes. From this, some specialists conclude that the Xiongnu used a script similar to the ancient Eurasian runiform, and that this alphabet was a basis for later Turkic writing.
The Xiongnu were a nomadic people. From their lifestyle of herding flocks and their horse trade with China, it can be concluded that their diet consisted mainly of mutton, horse meat and wild geese that were shot down.
- List of Xiongnu rulers (Chanyus)
- Rulers family tree
- Nomadic empire
- Ethnic groups in Chinese history
- History of the Han Dynasty
- Ban Yong
- List of largest empires
- Ordos culture
- Zheng Zhang (Chinese: 鄭張), Shang-fang (Chinese: 尚芳). 匈 – 字 – 上古音系 – 韻典網. ytenx.org [韻典網]. Rearranged by BYVoid.
- Zheng Zhang (Chinese: 鄭張), Shang-fang (Chinese: 尚芳). 奴 – 字 – 上古音系 – 韻典網. ytenx.org [韻典網]. Rearranged by BYVoid.
- "Xiongnu People". britannica.com. Encyclopædia Britannica. Retrieved 25 July 2015.
- di Cosmo 2004: 186
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. pp. 19, 26–27. ISBN 978-0-8135-1304-1.
- Beckwith 2009, pp. 51–52, 404–405
- Vaissière 2006
- Harmatta 1994, p. 488: "Their royal tribes and kings (shan-yü) bore Iranian names and all the Hsiung-nu words noted by the Chinese can be explained from an Iranian language of Saka type. It is therefore clear that the majority of Hsiung-nu tribes spoke an Eastern Iranian language."
- Bailey 1985, pp. 21–45
- Jankowski 2006, pp. 26–27
- Tumen D., "Anthropology of Archaeological Populations from Northeast Asia Archived 2013-07-29 at the Wayback Machine page 25, 27
- Hucker 1975: 136
- Pritsak 1959
- Henning 1948
- Sims-Williams 2004; Gyula Németh 2014
- Di Cosmo, 2004, pg 166
- Adas 2001: 88
- Vovin, Alexander. "Did the Xiongnu speak a Yeniseian language?". Central Asiatic Journal 44/1 (2000), pp. 87–104.
- 高晶一, Jingyi Gao (2017). 確定夏國及凱特人的語言為屬於漢語族和葉尼塞語系共同詞源 [Xia and Ket Identified by Sinitic and Yeniseian Shared Etymologies]. Central Asiatic Journal. 60 (1–2): 51–58. doi:10.13173/centasiaj.60.1-2.0051. JSTOR 10.13173/centasiaj.60.1-2.0051.
- Geng 2005
- "The Account of the Xiongnu,Records of the Grand Historian",Sima Qian.DOI: https://doi.org/10.1163/9789004216358_00
- Di Cosmo 2002, 2.
- Di Cosmo 2002, 129.
- Di Cosmo 2002, 107.
- Di Cosmo 1999, 892–893.
- Pulleyblank 2000, p. 20.
- Di Cosmo 1999, 892–893 & 964.
- Rawson, Jessica (2017). "China and the steppe: reception and resistance". Antiquity. 91 (356): 375–388. doi:10.15184/aqy.2016.276. ISSN 0003-598X. S2CID 165092308.
- Beckwith 2009, pp. 71–73
- Bentley, Jerry H., Old World Encounters, 1993, pg. 38
- Barfield 1989
- di Cosmo 1999: 885–966
- Jerry Bentley, Old World Encounters: Cross-Cultural Contacts and Exchanges in Pre-Modern Times (New York: Oxford University Press, 1993), 36.
- 又《漢書》:"使王烏等窺匈奴。法,漢使不去節,不以墨黥面,不得入穹盧。王烏等去節、黥面,得入穹盧,單於愛之。" from Miscellaneous Morsels from Youyang, Scroll 8 Translation from Reed, Carrie E. (2000). "Tattoo in Early China". Journal of the American Oriental Society. 120 (3): 360–376. doi:10.2307/606008. JSTOR 606008.
- Yü, Ying-shih (1986). "Han Foreign Relations". The Cambridge History of China, Volume 1: The Ch'in and Han Empires, 221 BC – AD 220. Cambridge: Cambridge University Press. p. 384. ISBN 978-0-521-24327-8.
- Barfield, Thomas J. (1981). "The Hsiung-nu imperial confederacy: Organization and foreign policy". The Journal of Asian Studies. 41 (1): 45–61. doi:10.2307/2055601. JSTOR 2055601.
- Grousset 1970
- Yap, page liii
- Grousset, page 20
- , p. 31.
- Qian Sima; Burton Watson (January 1993). Records of the Grand Historian: Han dynasty. Renditions-Columbia University Press. pp. 161–. ISBN 978-0-231-08166-5.
- Monumenta Serica. H. Vetch. 2004. p. 81.
- Frederic E. Wakeman (1985). The Great Enterprise: The Manchu Reconstruction of Imperial Order in Seventeenth-century China. University of California Press. pp. 41–. ISBN 978-0-520-04804-1.
- Lin Jianming (林剑鸣) (1992). 秦漢史 [History of Qin and Han]. Wunan Publishing. pp. 557–8. ISBN 978-957-11-0574-1.
- Hong, Yuan (2018). The Sinitic Civilization Book II: A Factual History Through the Lens of Archaeology, Bronzeware, Astronomy, Divination, Calendar and the Annals (abridged ed.). iUniversе. p. 419. ISBN 978-1532058301.
- China: Dawn of a Golden Age, 200–750 AD. Metropolitan Museum of Art. 2004. pp. 18–. ISBN 978-1-58839-126-1.
- James A. Millward (2007). Eurasian crossroads: a history of Xinjiang. Columbia University Press. p. 20. ISBN 978-0-231-13924-3. Retrieved 2011-04-17.
- Julia Lovell (2007). The Great Wall: China Against the World, 1000 BC – AD 2000. Grove Press. p. 73. ISBN 978-0-8021-4297-9. Retrieved 2011-04-17.
- Alfred J. Andrea; James H. Overfield (1998). The Human Record: To 1700. Houghton Mifflin. p. 165. ISBN 978-0-395-87087-7. Retrieved 2011-04-17.
- Yiping Zhang (2005). Story of the Silk Road. China Intercontinental Press. p. 22. ISBN 978-7-5085-0832-0. Retrieved 2011-04-17.
- Charles Higham (2004). Encyclopedia of ancient Asian civilizations. Infobase Publishing. p. 409. ISBN 978-0-8160-4640-9. Retrieved 2011-04-17.
- Indian Society for Prehistoric & Quaternary Studies (1998). Man and environment, Volume 23, Issue 1. Indian Society for Prehistoric and Quaternary Studies. p. 6. Retrieved 2011-04-17.
- Adrienne Mayor (22 September 2014). The Amazons: Lives and Legends of Warrior Women across the Ancient World. Princeton University Press. pp. 422–. ISBN 978-1-4008-6513-0.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. p. 34. ISBN 978-0-8135-1304-1.
- Loewe 1974, p. .
- Han Shu (Beijing: Zhonghua shuju ed) 94B, p. 3824.
- Bentley, Jerry H, "Old World Encounters", 1993, p. 37
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. pp. 42–47. ISBN 978-0-8135-1304-1.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. pp. 37–38. ISBN 978-0-8135-1304-1.
- Fairbank & Têng 1941.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. p. 39. ISBN 978-0-8135-1304-1.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. p. 53. ISBN 978-0-8135-1304-1.
- Book of Wei Vol. 102 (in Chinese)
- Gumilev L.N., "History of Hun People", Moscow, 'Science', Ch. 15 (In Russian)
- Hyun Jim Kim (2015). "2 The So-called 'Two-Hundred year Interlude'". The Huns. Routledge. ISBN 978-1317340904.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. p. 54. ISBN 978-0-8135-1304-1.
- Fang, Xuanling (1958). 晉書 [Book of Jin] (in Chinese). Beijing: Commercial Press. Vol. 97
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. pp. 56–57. ISBN 978-0-8135-1304-1.
- Grousset, Rene (1970). The Empire of the Steppes. Rutgers University Press. pp. 57–58. ISBN 978-0-8135-1304-1.
- Sand-covered Hun City Unearthed, CN: China
- National Geographic (online ed.)
- Obrusánszky 2006.
- Barfield, Thomas J (1989), The Perilous Frontier: Nomadic Empires and China, 221 BC to AD 1757
- Baxter-Sagart (2014).
- Vaissière, Étienne. "Xiongnu". Encyclopedia Iranica.
- Polosmak, Natalia V. (2010). "We Drank Soma, We Became Immortal…". SCIENCE First Hand. 26 (N2).
- Yatsenko, Sergey A. (2012). "Yuezhi on Bactrian Embroidery from Textiles Found at Noyon uul, Mongolia" (PDF). The Silk Road. 10.
- Polosmak, Natalia V. (2012). "History Embroidered in Wool". SCIENCE First Hand. 31 (N1).
- Beckwith 2009, p. 405: "Accordingly, the transcription now read as Hsiung- nu may have been pronounced * Soγdâ, * Soγlâ, * Sak(a)dâ, or even * Skla(C)da, etc."
- Ts. Baasansuren "The scholar who showed the true Mongolia to the world", Summer 2010 vol.6 (14) Mongolica, pp.40
- Sinor, Denis (1990). Aspects of Altaic Civilization III. p. .
- Pulleyblank, Edwin G. (2000). "Ji 姬 and Jiang 姜: The Role of Exogamic Clans in the Organization of the Zhou Polity", Early China. p. 20
- Wei Shou. Book of Wei. vol. 91 "蠕蠕,東胡之苗裔也,姓郁久閭氏" tr. "Rúrú, offsprings of Dōnghú, surnamed Yùjiŭlǘ"
- Liangshu Vol. 54 txt: "芮芮國,蓋匈奴別種。" tr: "Ruìruì state, possibly a Xiongnu's separate branch"
- Golden, Peter B. "Some Notes on the Avars and Rouran", in The Steppe Lands and the World beyond Them. Ed. Curta, Maleon. Iași (2013). pp. 54-55
- N.Bichurin "Collection of information on the peoples who inhabited Central Asia in ancient times", 1950, p. 227
- Lee, Joo-Yup (2016). "The Historical Meaning of the Term Turk and the Nature of the Turkic Identity of the Chinggisid and Timurid Elites in Post-Mongol Central Asia". Central Asiatic Journal. 59 (1–2): 105.
- Howorth, Henry H. (Henry Hoyle). History of the Mongols from the 9th to the 19th century. London : Longmans, Green – via Internet Archive.
- "Sun and Moon" (JPG). depts.washington.edu.
- "Xiongnu Archaeology". depts.washington.edu.
- Elite Xiongnu Burials at the Periphery (Miller et al. 2009)
- Wink 2002: 60–61
- Craig Benjamin (2007, 49), In: Hyun Jin Kim, The Huns, Rome and the Birth of Europe. Cambridge University Press. 2013. page 176.
- Linghu Defen et al., Book of Zhou, Vol. 50. (in Chinese)
- Li Yanshou (李延寿), History of the Northern Dynasties, Vol. 99. (in Chinese)
- Peter B. Golden (1992). "Chapter VI – The Uyğur Qağante (742–840)". An Introduction to the History of the Turkic Peoples: Ethnogenesis and State-Formation in Medieval and Early Modern Eurasia and the Middle East. p. 155. ISBN 978-3-447-03274-2.
- History of Northern Dynasties, vol. 99
- Book of Zhou, vol. 50
- Sims-Williams 2004
- Vovin 2000
- Nicola Di Cosmo (2004). Cambridge. page 164
- THE PEOPLES OF THE STEPPE FRONTIER IN EARLY CHINESE SOURCES, Edwin G. Pulleyblank, page 49
- "Archived copy" (PDF). Archived from the original (PDF) on 2020-08-02. Retrieved 2021-03-12.CS1 maint: archived copy as title (link)
- "ONCE AGAIN ON THE ETYMOLOGY OF THE TITLE qaγan" Alexander VOVIN (Honolulu) – Studia Etymologica Cracoviensia vol. 12 Kraków 2007 (http://ejournals.eu/sj/index.php/SEC/article/viewFile/1100/1096)
- Vovin, Alexander. Did the Xiongnu speak a Yeniseian language?.
- Di Cosmo 2004: 165
- Hyun Jin Kim, The Huns, Rome and the Birth of Europe. ISBN 978-1-107-00906-6. Cambridge University Press. 2013. page 31.
- Christian, p. 249
- Wei Zheng et al., Book of Sui, Vol. 84. (in Chinese)
- Du, You (1988). 辺防13 北狄4 突厥上. 《通典》 [Tongdian] (in Chinese). 197. Beijing: Zhonghua Book Company. p. 5401. ISBN 978-7-101-00258-4.
- "Об эт нической принадлежности Хунну". rudocs.exdat.com.
- Jenkins, Romilly James Heald (1967). De Administrando Imperio by Constantine VII Porphyrogenitus. Corpus fontium historiae Byzantinae (New, revised ed.). Washington, D.C.: Dumbarton Oaks Center for Byzantine Studies. p. 65. ISBN 978-0-88402-021-9. Retrieved 28 August 2013. According to Constantine Porphyrogenitus, writing in his De Administrando Imperio (ca. 950 AD) "Patzinakia, the Pecheneg realm, stretches west as far as the Siret River (or even the Eastern Carpathian Mountains), and is four days distant from Tourkia (i.e. Hungary)."
- Günter Prinzing; Maciej Salamon (1999). Byzanz und Ostmitteleuropa 950–1453: Beiträge zu einer table-ronde des XIX. International Congress of Byzantine Studies, Copenhagen 1996. Otto Harrassowitz Verlag. p. 46. ISBN 978-3-447-04146-1. Retrieved 9 February 2013.
- Henry Hoyle Howorth (2008). History of the Mongols from the 9th to the 19th Century: The So-called Tartars of Russia and Central Asia. Cosimo, Inc. p. 3. ISBN 978-1-60520-134-4. Retrieved 15 June 2013.
- Sinor (1990)
- Peter B. Golden (1992). "Chapter VI – The Uyğur Qağante (742–840)". An Introduction to the History of the Turkic Peoples: Ethnogenesis and State-Formation in Medieval and Early Modern Eurasia and the Middle East. p. 155. ISBN 978-3-447-03274-2.
- Nabijan Tursun. "The Formation of Modern Uyghur Historiography and Competing Perspectives toward Uyghur History". The China and Eurasia Forum Quarterly. 6 (3): 87–100.
- James A. Millward & Peter C. Perdue (2004). "Chapter 2: Political and Cultural History of the Xinjiang Region through the Late Nineteenth Century". In S. Frederick Starr (ed.). Xinjiang: China's Muslim Borderland. M. E. Sharpe. pp. 40–41. ISBN 978-0-7656-1318-9.
- Susan J. Henders (2006). Susan J. Henders (ed.). Democratization and Identity: Regimes and Ethnicity in East and Southeast Asia. Lexington Books. p. 135. ISBN 978-0-7391-0767-6. Retrieved 2011-09-09.
- Reed, J. Todd; Raschke, Diana (2010). The ETIM: China's Islamic Militants and the Global Terrorist Threat. ABC-CLIO. p. 7. ISBN 978-0-313-36540-9.
- Cho Gab-je. 騎馬흉노국가 新羅 연구 趙甲濟(月刊朝鮮 편집장)의 심층취재 내 몸속을 흐르는 흉노의 피 (in Korean). Monthly Chosun. Retrieved 2016-09-25.
- 김운회 (2005-08-30). 김운회의 '대쥬신을 찾아서' <23> 금관의 나라, 신라" (in Korean). 프레시안. Retrieved 2016-09-25.
- 경주 사천왕사(寺) 사천왕상(四天王像) 왜 4개가 아니라 3개일까 (in Korean). 조선일보. 2009-02-27. Archived from the original on 2014-12-30. Retrieved 2016-09-25.
- 김창호, 〈문무왕릉비에 보이는 신라인의 조상인식 – 태조성한의 첨보 -〉, 《한국사연구》, 한국사연구회, 1986년
- "자료검색>상세_기사 | 국립중앙도서관". www.nl.go.kr. Archived from the original on 2018-10-02. Retrieved 2019-04-15.
- Di Cosmo 2004: 164
- Maenchen-Helfen, Otto (1973). The World of the Huns: Studies of Their History and Culture. Berkeley, California: University of California Press. p. 371.
- Honeychurch, William. "Thinking Political Communities: The State and Social Stratification among Ancient Nomads of Mongolia". The Anthropological Study of Class and Consciousness: 47.
- Maenchen-Helfen, Otto (1 August 1973). The World of the Huns (1 ed.). UC Berkeley: University of California Press. pp. 370–371. ISBN 0520015967.
- Lebedynsky, Yaroslav (2007). Les nomades. Éditions Errance. p. 125. ISBN 9782877723466. "Europoid faces in some depictions of the Ordos, which should be attributed to a Scythian affinity"
- Camilla Trever, "Excavations in Northern Mongolia (1924–1925)", Leningrad: J. Fedorov Printing House, 1932
- The Great Empires of the Ancient World – Thomas Harrison – 2009 – page 288
- Fu ren da xue (Beijing, China), S.V.D. Research Institute, Society of the Divine Word – 2003
- A. V. Davydova, Ivolginskii arkheologicheskii kompleks II. Ivolginskii mogil'nik. Arkheologicheskie pamiatniki Siunnu 2 (Sankt-Peterburg 1996). А. В. Давыдова, Иволгинский археологи-ческий комплекс II. Иволгинский могильник. Археологические памятники Сюнну 2 (Санкт-Петербург 1996).
- S. S. Miniaev, Dyrestuiskii mogil'nik. Arkheologicheskie pamiatniki Siunnu 3 (Sankt-Peterburg 1998). С. С. Миняев, Дырестуйский могильник. Археологические памятники Сюнну 3 (Санкт-Петербург 1998).
- Ts. Törbat, Keramika khunnskogo mogil'nika Burkhan-Tolgoi. Erdem shinzhilgeenii bichig. Arkheologi, antropologi, ugsaatan sudlal 19,2003, 82–100. Ц. Тѳрбат, Керамика хуннского могильника Бурхан-Толгой. Эрдэм шинжилгээний бичиг. Археологи, антропологи, угсаатан судлал 19, 2003, 82–100.
- Ts. Törbat, Tamiryn Ulaan khoshuuny bulsh ba Khünnügiin ugsaatny büreldekhüünii asuudald. Tükhiin setgüül 4, 2003, 6–17. Ц. Төрбат, Тамирын Улаан хошууны булш ба Хүннүгийн угсаатны бүрэлдэхүүний асуудалд. Түүхийн сэтгүүл 4, 2003, 6–17.
- Ningxia Cultural Relics and Archaeology Research Institute (寧夏文物考古研究所); Chinese Academy of Social Sciences Archaeology Institute Ningxia Archaeology Group; Tongxin County Cultural Relics Administration (同心縣文物管理所) (1988). 寧夏同心倒墩子匈奴墓地. 考古學報 [Archaeology Journal] (3): 333–356.
- Miller, Bryan (2011). Jan Bemmann (ed.). Xiongnu Archaeology. Bonn: Vor- und Fruhgeschichtliche Archaeologie Rheinische Friedrich-Wilhelms-Universitat Bonn. ISBN 978-3-936490-14-5.
- Purcell, David. "Maps of the Xiongnu Cemetery at Tamiryn Ulaan Khoshuu, Ogii nuur, Arkhangai Aimag, Mongolia" (PDF). The Silk Road. 9: 143–145.
- Purcell, David; Kimberly Spurr. "Archaeological Investigations of Xiongnu Sites in the Tamir River Valley" (PDF). The Silk Road. 4 (1): 20–31.
- Lai, Guolong. "The Date of the TLV Mirrors from the Xiongnu Tombs" (PDF). The Silk Road. 4 (1): 34–43.
- Miller, Bryan (2011). Jan Bemmann (ed.). Xiongnu Archaeology. Bonn: Vor- und Fruhgeschichtliche Archaologie Rheinische Friedrich-Wilhelms-Universitat Bonn. p. 23. ISBN 978-3-936490-14-5.
- Miller, Bryan (2011). Jan Bemmann (ed.). Xiongnu Archaeology. Bonn: Vor- und Fruhgeschichtliche Archaologie Rheinische Friedrich-Wilhelms-Universitat Bonn. p. 24. ISBN 978-3-936490-14-5.
- Keyser-Tracqui et al. 2003, p. 247.
- Keyser-Tracqui et al. 2003, p. 258. "A majority (89%) of the Xiongnu sequences can be classified as belonging to an Asian haplogroup... and nearly 11% belong to European haplogroups (U2, U5a1a, and J1)."
- Keyser-Tracqui et al. 2006, p. 272.
- L. L. Kang et al., Y chromosomes of ancient Hunnu people and its implication on the phylogeny of East Asian linguistic families (2013)
- Knowing the Xiongnu Culture in Eastern Tianshan Mountain from Tomb Heigouliang and Dongheigou Site at the Beginning of Xihan Dynasty, RenMeng, WangJianXin, 2008
- Kim et al. 2010, p. 429. "The genetic evidence of U2e1 and R1a1 may help to clarify the migration patterns of Indo-Europeans and ancient East-West contacts of the Xiongnu Empire. Artifacts in the tombs suggested that the Xiongnu had a system of social stratification. The West Eurasian male might show the racial tolerance of the Xiongnu Empire and some insight into the Xiongnu society."
- Damgaard et al. 2018, Supplementary Table 2, Rows 28-32.
- Damgaard et al. 2018, Supplementary Table 9, Rows 20-23.
- Damgaard et al. 2018, Supplementary Table 8, Rows 87-88, 94-96.
- Damgaard et al. 2018, pp. 371–374. "Principal component analyses and D-statistics suggest that the Xiongnu individuals belong to two distinct groups, one being of East Asian origin and the other presenting considerable admixture levels with West Eurasian sources... Overall, our data show that the Xiongnu confederation was genetically heterogeneous, and that the Huns emerged following minor male-driven East Asian gene flow into the preceding Sakas that they invaded... As such our results support the contention that the disappearance of the Inner Asian Scythians and Sakas around two thousand years ago was a cultural transition that coincided with the westward migration of the Xiongnu. This Xiongnu invasion also led to the displacement of isolated remnant groups—related to Late Bronze Age pastoralists—that had remained on the south-eastern side of the Tian Shan mountains.
- Neparáczki et al. 2019, p. 1. "Haplogroups from the Hun-age are consistent with Xiongnu ancestry of European Huns."
- Keyser, C.; Zvénigorosky, V.; et al. (2020). "Genetic evidence suggests a sense of family, parity and conquest in the Xiongnu Iron Age nomads of Mongolia". Human Genetics. 140 (2): 349–359. doi:10.1007/s00439-020-02209-4. PMID 32734383. S2CID 220881540.
- Book of Han, Vol. 94-I, 匈奴謂天為「撐犁」,謂子為「孤塗」,單于者,廣大之貌也.
- Rawson, Jessica (1999). "Design Systems in Early Chinese Art". Orientations: 52. Archived from the original on 2020-10-18. Retrieved 2020-10-18.
- "Shaanxi History Museum notice". Shaanxi History Museum.
- Psarras, Sophia-Karin (2003). "Han and Xiongnu: A Reexamination of Cultural and Political Relations". Monumenta Serica. 51: 55–236. doi:10.1080/02549948.2003.11731391. JSTOR 40727370. S2CID 156676644.
- Demattè 2006
- Ishjamts 1996: 166
- N. Ishjamts, "Nomads In Eastern Central Asia", in the "History of civilizations of Central Asia", Volume 2, Fig 6, p. 166, UNESCO Publishing, 1996, ISBN 92-3-102846-4
- Primary sources
- Ban Gu et al., Book of Han, esp. vol. 94, part 1, part 2.
- Fan Ye et al., Book of the Later Han, esp. vol. 89.
- Sima Qian et al., Records of the Grand Historian, esp. vol. 110.
- Other sources consulted
- Adas, Michael. 2001. Agricultural and Pastoral Societies in Ancient and Classical History, American Historical Association/Temple University Press.
- Bailey, Harold W. (1985). Indo-Scythian Studies: being Khotanese Texts, VII. Cambridge University Press. JSTOR 312539. Retrieved 30 May 2015.
- Barfield, Thomas. 1989. The Perilous Frontier. Basil Blackwell.
- Beckwith, Christopher I. (16 March 2009). Empires of the Silk Road: A History of Central Eurasia from the Bronze Age to the Present. Princeton University Press. ISBN 978-0-691-13589-2. Retrieved 30 May 2015.
- Brosseder, Ursula, and Bryan Miller. Xiongnu Archaeology: Multidisciplinary Perspectives of the First Steppe Empire in Inner Asia. Bonn: Freiburger Graphische Betriebe- Freiburg, 2011.
- Csányi, B. et al. 2008. Y-Chromosome Analysis of Ancient Hungarian and Two Modern Hungarian-Speaking Populations from the Carpathian Basin. Annals of Human Genetics, 2008 March 27, 72(4): 519–534.
- Damgaard, P. B.; et al. (May 9, 2018). "137 ancient human genomes from across the Eurasian steppes". Nature. Nature Research. 557 (7705): 369–373. Bibcode:2018Natur.557..369D. doi:10.1038/s41586-018-0094-2. PMID 29743675. S2CID 13670282. Retrieved April 11, 2020.
- Demattè, Paola. 2006. Writing the Landscape: Petroglyphs of Inner Mongolia and Ningxia Province (China). In: Beyond the steppe and the sown: proceedings of the 2002 University of Chicago Conference on Eurasian Archaeology, edited by David L. Peterson et al. Brill. Colloquia Pontica: series on the archaeology and ancient history of the Black Sea area; 13. 300–313. (Proceedings of the First International Conference of Eurasian Archaeology, University of Chicago, May 3–4, 2002.)
- Davydova, Anthonina. The Ivolga archaeological complex. Part 1. The Ivolga fortress. In: Archaeological sites of the Xiongnu, vol. 1. St Petersburg, 1995.
- Davydova, Anthonina. The Ivolga archaeological complex. Part 2. The Ivolga cemetery. In: Archaeological sites of the Xiongnu, vol. 2. St Petersburg, 1996.
- (in Russian) Davydova, Anthonina & Minyaev Sergey. The complex of archaeological sites near Dureny village. In: Archaeological sites of the Xiongnu, vol. 5. St Petersburg, 2003.
- Davydova, Anthonina & Minyaev Sergey. The Xiongnu Decorative bronzes. In: Archaeological sites of the Xiongnu, vol. 6. St Petersburg, 2003.
- Di Cosmo, Nicola. 1999. The Northern Frontier in Pre-Imperial China. In: The Cambridge History of Ancient China, edited by Michael Loewe and Edward Shaughnessy. Cambridge University Press.
- Di Cosmo, Nicola. 2004. Ancient China and its Enemies: The Rise of Nomadic Power in East Asian History. Cambridge University Press. (First paperback edition; original edition 2002)
- Fairbank, J.K.; Têng, S.Y. (1941). "On the Ch'ing Tributary System". Harvard Journal of Asiatic Studies. 6 (2): 135–246. doi:10.2307/2718006. JSTOR 2718006.
- Geng, Shimin [耿世民] (2005). 阿尔泰共同语、匈奴语探讨 [On Altaic Common Language and Xiongnu Language]. Yu Yan Yu Fan Yi 语言与翻译(汉文版) [Language and Translation] (2). ISSN 1001-0823. OCLC 123501525. Archived from the original on 25 February 2012.
- Genome News Network. 2003 July 25. "Ancient DNA Tells Tales from the Grave"
- Grousset, René. 1970. The empire of the steppes: a history of central Asia. Rutgers University Press.
- (in Russian) Gumilev L. N. 1961. История народа Хунну (History of the Hunnu people).
- Hall, Mark & Minyaev, Sergey. Chemical Analyses of Xiong-nu Pottery: A Preliminary Study of Exchange and Trade on the Inner Asian Steppes. In: Journal of Archaeological Science (2002) 29, pp. 135–144
- Harmatta, János (1 January 1994). "Conclusion". In Harmatta, János (ed.). History of Civilizations of Central Asia: The Development of Sedentary and Nomadic Civilizations, 700 B. C. to A. D. 250. UNESCO. pp. 485–492. ISBN 978-9231028465. Retrieved 29 May 2015.
- (in Hungarian) Helimski, Eugen. "A szamojéd népek vázlatos története" (Short History of the Samoyedic peoples). In: The History of the Finno-Ugric and Samoyedic Peoples. 2000, Eötvös Loránd University, Budapest, Hungary.
- Henning W. B. 1948. The date of the Sogdian ancient letters. Bulletin of the School of Oriental and African Studies (BSOAS), 12(3–4): 601–615.
- Hill, John E. (2009) Through the Jade Gate to Rome: A Study of the Silk Routes during the Later Han Dynasty, 1st to 2nd Centuries CE. BookSurge, Charleston, South Carolina. ISBN 978-1-4392-2134-1. (Especially pp. 69–74)
- Hucker, Charles O. 1975. China's Imperial Past: An Introduction to Chinese History and Culture. Stanford University Press. ISBN 0-8047-2353-2
- N. Ishjamts. 1999. Nomads In Eastern Central Asia. In: History of civilizations of Central Asia. Volume 2: The Development of Sedentary and Nomadic Civilizations, 700 bc to ad 250; Edited by Janos Harmatta et al. UNESCO. ISBN 92-3-102846-4. 151–170.
- Jankowski, Henryk (2006). Historical-Etymological Dictionary of Pre-Russian Habitation Names of the Crimea. Handbuch der Orientalistik [HdO], 8: Central Asia; 15. Brill. ISBN 978-90-04-15433-9.
- Keyser-Tracqui, Christine; et al. (July 2003). "Nuclear and Mitochondrial DNA Analysis of a 2,000-Year-Old Necropolis in the Egyin Gol Valley of Mongolia". American Journal of Human Genetics. Cell Press. 73 (2): 247–260. doi:10.1086/377005. PMC 1180365. PMID 12858290.
- Keyser-Tracqui, Christine; et al. (October 2006). "Population origins in Mongolia: genetic structure analysis of ancient and modern DNA". American Journal of Physical Anthropology. American Association of Physical Anthropologists. 131 (2): 272–281. doi:10.1002/ajpa.20429. PMID 16596591.
- Kim, Kijeong; et al. (July 2010). "A western Eurasian male is found in 2000-year-old elite Xiongnu cemetery in Northeast Mongolia". American Journal of Physical Anthropology. American Association of Physical Anthropologists. 142 (3): 429–440. doi:10.1002/ajpa.21242. PMID 20091844.
- (in Russian) Kradin N.N., "Hun Empire". Acad. 2nd ed., updated and added., Moscow: Logos, 2002, ISBN 5-94010-124-0
- Kradin, Nikolay. 2005. Social and Economic Structure of the Xiongnu of the Trans-Baikal Region. Archaeology, Ethnology & Anthropology of Eurasia, No 1 (21), p. 79–86.
- Kradin, Nikolay. 2012. New Approaches and Challenges for the Xiongnu Studies. In: Xiongnu and its eastward Neighbours. Seoul, p. 35–51.
- (in Russian) Kiuner (Kjuner, Küner) [Кюнер], N.V. 1961. Китайские известия о народах Южной Сибири, Центральной Азии и Дальнего Востока (Chinese reports about peoples of Southern Siberia, Central Asia, and Far East). Moscow.
- (in Russian) Klyashtorny S.G. [Кляшторный С.Г.]. 1964. Древнетюркские рунические памятники как источник по истории Средней Азии. (Ancient Türkic runiform monuments as a source for the history of Central Asia). Moscow: Nauka.
- (in German) Liu Mau-tsai. 1958. Die chinesischen Nachrichten zur Geschichte der Ost-Türken (T'u-küe). Wiesbaden: Otto Harrassowitz.
- Loewe, Michael (1974). "The campaigns of Han Wu-ti". In Kierman, Jr., Frank A.; Fairbank, John K. (eds.). Chinese ways in warfare. Harvard Univ. Press.
- Maenchen-Helfen, Otto (1973). The World of the Huns: Studies in Their History and Culture. University of California Press. ISBN 978-0-520-01596-8. Retrieved February 18, 2015.
- Minyaev, Sergey. On the origin of the Xiongnu // Bulletin of International association for the study of the culture of Central Asia, UNESCO. Moscow, 1985, No. 9.
- Minyaev, Sergey. News of Xiongnu Archaeology // Das Altertum, vol. 35. Berlin, 1989.
- Miniaev, Sergey. "Niche Grave Burials of the Xiong-nu Period in Central Asia", Information Bulletin, Inter-national Association for the Cultures of Central Asia 17(1990): 91–99.
- Minyaev, Sergey. The excavation of Xiongnu Sites in the Buryatia Republic// Orientations, vol. 26, n. 10, Hong Kong, November 1995.
- Minyaev, Sergey. Les Xiongnu// Dossiers d' archaeologie, # 212. Paris 1996.
- Minyaev, Sergey. Archaeologie des Xiongnu en Russie: nouvelles decouvertes et quelques Problemes. In: Arts Asiatiques, tome 51, Paris, 1996.
- Minyaev, Sergey. The origins of the "Geometric Style" in Hsiungnu art // BAR International series 890. London, 2000.
- Minyaev, Sergey. Art and archeology of the Xiongnu: new discoveries in Russia. In: Circle of Inner Asia Art, Newsletter, Issue 14, December 2001, pp. 3–9.
- Minyaev, Sergey & Smolarsky Phillipe. Art of the Steppes. Brussels, Foundation Richard Liu, 2002.
- (in Russian) Minyaev, Sergey. Derestuj cemetery. In: Archaeological sites of the Xiongnu, vol. 3. St-Petersburg, 1998.
- Miniaev, Sergey & Sakharovskaja, Lidya. Investigation of a Xiongnu Royal Tomb in the Tsaraam valley, part 1. In: Newsletters of the Silk Road Foundation, vol. 4, no.1, 2006.
- Miniaev, Sergey & Sakharovskaja, Lidya. Investigation of a Xiongnu Royal Tomb in the Tsaraam valley, part 2. In: Newsletters of the Silk Road Foundation, vol. 5, no.1, 2007.
- (in Russian) Minyaev, Sergey. The Xiongnu cultural complex: location and chronology. In: Ancient and Middle Age History of Eastern Asia. Vladivostok, 2001, pp. 295–305.
- Neparáczki, Endre; et al. (November 12, 2019). "Y-chromosome haplogroups from Hun, Avar and conquering Hungarian period nomadic people of the Carpathian Basin". Scientific Reports. Nature Research. 9 (16569): 16569. Bibcode:2019NatSR...916569N. doi:10.1038/s41598-019-53105-5. PMC 6851379. PMID 31719606.
- Miniaev, Sergey & Elikhina, Julia. On the chronology of the Noyon Uul barrows. The Silk Road 7 (2009: 21–30).
- (in Hungarian) Obrusánszky, Borbála. 2006 October 10. Huns in China (Hunok Kínában) 3.
- (in Hungarian) Obrusánszky, Borbála. 2009. Tongwancheng, city of the southern Huns. Transoxiana, August 2009, 14. ISSN 1666-7050.
- (in French) Petkovski, Elizabet. 2006. Polymorphismes ponctuels de séquence et identification génétique: étude par spectrométrie de masse MALDI-TOF. Strasbourg: Université Louis Pasteur. Dissertation
- (in Russian) Potapov L.P. [Потапов, Л. П.] 1969. Этнический состав и происхождение алтайцев (Etnicheskii sostav i proiskhozhdenie altaitsev, Ethnic composition and origins of the Altaians). Leningrad: Nauka. Facsimile in Microsoft Word format.
- (in German) Pritsak O. 1959. XUN Der Volksname der Hsiung-nu. Central Asiatic Journal, 5: 27–34.
- Psarras, Sophia-Karin. "HAN AND XIONGNU: A REEXAMINATION OF CULTURAL AND POLITICAL RELATIONS (I)." Monumenta Serica. 51. (2003): 55–236. Web. 12 Dec. 2012. <https://www.jstor.org/stable/40727370>.
- Pulleyblank, Edwin G. (2000). "Ji 姬 and Jiang 姜: The Role of Exogamic Clans in the Organization of the Zhou Polity" (PDF). Early China. 25 (25): 1–27. doi:10.1017/S0362502800004259. S2CID 162159081. Archived from the original (PDF) on 2017-11-18. Retrieved 2017-12-01.
- Sims-Williams, Nicholas. 2004. The Sogdian ancient letters. Letters 1, 2, 3, and 5 translated into English.
- (in Russian) Talko-Gryntsevich, Julian. Paleo-Ethnology of Trans-Baikal area. In: Archaeological sites of the Xiongnu, vol. 4. St Petersburg, 1999.
- Taskin V.S. [Таскин В.С.]. 1984. Материалы по истории древних кочевых народов группы Дунху (Materials on the history of the ancient nomadic peoples of the Dunhu group). Moscow.
- Toh, Hoong Teik (2005). "The -yu Ending in Xiongnu, Xianbei, and Gaoju Onomastica" (PDF). Sino-Platonic Papers. 146.
- Vaissière (2005). "Huns et Xiongnu". Central Asiatic Journal (in French). 49 (1): 3–26.
- Vaissière, Étienne de la. 2006. Xiongnu. Encyclopædia Iranica online.
- Vovin, Alexander (2000). "Did the Xiongnu speak a Yeniseian language?". Central Asiatic Journal. 44 (1): 87–104.
- Wink, A. 2002. Al-Hind: making of the Indo-Islamic World. Brill. ISBN 0-391-04174-6
- Yap, Joseph P. (2009). "Wars with the Xiongnu: A translation from Zizhi tongjian". AuthorHouse. ISBN 978-1-4490-0604-4
- Zhang, Bibo (张碧波); Dong, Guoyao (董国尧) (2001). 中国古代北方民族文化史 [Cultural History of Ancient Northern Ethnic Groups in China]. Harbin: Heilongjiang People's Press. ISBN 978-7-207-03325-3.
- (in Russian) Потапов, Л. П. 1966. Этнионим Теле и Алтайцы. Тюркологический сборник, 1966: 233–240. Moscow: Nauka. (Potapov L.P., The ethnonym "Tele" and the Altaians. Turcologica 1966: 233–240).
- Houle, J. and L.G. Broderick 2011 "Settlement Patterns and Domestic Economy of the Xiongnu in Khanui Valley, Mongolia", 137–152. In Xiongnu Archaeology: Multidisciplinary Perspectives of the First Steppe Empire in Inner Asia.
- Miller, Bryan K. (2014). "Xiongnu "Kings" and the Political Order of the Steppe Empire". Journal of the Economic and Social History of the Orient. 57 (1): 1–43. doi:10.1163/15685209-12341340.
- Yap, Joseph P, (2019). The Western Regions, Xiongnu and Han, from the Shiji, Hanshu and Hou Hanshu. ISBN 978-1792829154
- Wikimedia Commons has media related to Xiongnu.
- Wikisource has the text of the 1911 Encyclopædia Britannica article Hiung-nu.
- Li, Chunxiang; Li, Hongjie; Cui, Yinqiu; Xie, Chengzhi; Cai, Dawei; Li, Wenying; Victor, Mair H.; Xu, Zhi; Zhang, Quanchao; Abuduresule, Idelisi; Jin, Li; Zhu, Hong; Zhou, Hui (2010). "Evidence that a West-East admixed population lived in the Tarim Basin as early as the early Bronze Age". BMC Biology. 8: 15. doi:10.1186/1741-7007-8-15. PMC 2838831. PMID 20163704.
- Material Culture presented by University of Washington
- Encyclopedic Archive on Xiongnu
- The Xiongnu Empire
- The Silk Road Volume 4 Number 1
- The Silk Road Volume 9
- Gold Headdress from Aluchaideng
- Belt buckle, Xiongnu type, 3rd–2nd century B.C.
- Videodocumentation: Xiongnu – the burial site of the Hun prince (Mongolia)
- The National Museum of Mongolian History :: Xiongnu | https://library.kiwix.org/wikipedia_en_top_maxi/A/Xiongnu | 21 |
44 | Aggregate demand and supply
This page introduces the concepts of aggregate demand and aggregate supply. Your students will need to understand that the AD of an economy is the sum of the collective individual demand curves. You should also emphasise that governments have considerable ability to control the level of AD in the economy, and that control of this variable is a crucial part of government economic policy. Similarly, aggregate supply is the combination, or aggregate, of all of the individual supply curves in the economy; like an individual supply curve, it slopes upwards from left to right.
What do aggregate demand and supply mean? What factors might lead to a change in either AD and / or AS?
Lesson time: 70 minutes
Distinguish between the microeconomic concept of demand for a product and the macroeconomic concept of aggregate demand.
Construct an aggregate demand curve, and explain why the AD curve has a negative slope.
Describe consumption, investment, government spending and net exports as the components of aggregate demand.
Define the term aggregate supply. Explain, using a diagram, why the short-run aggregate supply curve (SRAS curve) is upward sloping.
Explain, using a diagram, how the AS curve in the short run (SRAS) can shift due to factors including changes in resource prices, changes in business taxes and subsidies and supply shocks.
Aggregate demand - the total spending in an economy, consisting of consumption, investment, government expenditure and net exports. It is calculated by the formula AD = C + I + G + (X - M). (A short numerical sketch of this calculation follows the key terms below.)
Private consumption (C) - spending by households on domestic consumer goods and services over a period of time.
Government spending (G) - public sector spending whether by national or local governments. This includes spending on public services such as health, education, public transport, defence and infrastructure projects.
Investment (I) - expenditure by firms on capital equipment and is an injection into the economy.
Net exports (X-M) - the value of exports (i.e. export revenues) minus the value of imports (i.e. import expenditure).
Aggregate supply - also known as total output, is the total supply of goods and services produced within an economy at a given time and at an overall price level.
Supply shock - an unexpected event that impacts on the supply of a product or commodity, resulting in a sudden change in price. Supply shocks are generally negative, resulting in a sudden fall in supply but can also sometimes be positive, leading to increased supply.
Price level / average price level - the average of current prices across the entire spectrum of goods and services produced in the economy.
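A minimal numerical sketch of the AD calculation is given below. All figures are hypothetical and chosen only for illustration; they are not taken from this page or from any real economy. The example also shows the direct effect of a cut in government spending, the situation raised in question (h) below.

```python
# Illustrative only: hypothetical component values (in $bn) for an economy.
def aggregate_demand(C, I, G, X, M):
    """Aggregate demand: AD = C + I + G + (X - M)."""
    return C + I + G + (X - M)

# Baseline components (made-up numbers)
C, I, G, X, M = 850, 240, 310, 190, 220
baseline = aggregate_demand(C, I, G, X, M)
print(f"Baseline AD: {baseline}")                      # 1370

# Question (h): the government is forced to cut spending on services and public works.
cut = 40
after_cut = aggregate_demand(C, I, G - cut, X, M)
print(f"AD after a 40bn cut in G: {after_cut}")         # 1330
print(f"Change in AD: {after_cut - baseline}")          # -40, i.e. AD shifts left
```

At the initial price level the 40bn cut reduces AD by exactly 40bn, before any multiplier effects are considered; diagrammatically, the whole AD curve shifts to the left.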
The activities are available in PDF at: Aggregate demand and supply
Draw a normal demand and supply curve for a good or service. Now consider how this might look if you were drawing a demand and supply curve for the whole economy. Instead of price, write average price; instead of quantity, label the horizontal axis real GDP, real national output or real national income.
Now consider what determines the level of aggregate demand and supply within any economy?
Watch the following short video and then answer the questions that follow:
1. The diagram to the right illustrates the aggregate demand curve for a nation.
(a) What is the formula for calculating AD in the economy?
(b) Explain the inverse relationship between average price level and quantity demanded.
(c) Provide examples of durable and non-durable goods.
(d) Why are individuals 'investing' in the stock market or placing their savings in bank deposits not included under investment in AD calculations?
(e) What is net investment spending?
(f) How is the external balance (X-M) calculated?
(g) What is the opportunity cost of any investment decision?
(h) The government is forced to cut spending on services and public works. Outline the consequence of this for aggregate demand in the economy.
Activity 3: Aggregate supply
Begin with the following short video before completing the questions that follow:
1. The diagram to the left illustrates a short run AS curve.
(a) Explain the relationship between average price level and the aggregate supply of goods and services in the economy.
(b) Explain what happens to SRAS when real output in the economy moves from Y1 to Y2.
Activity 4: Shifts in the AS curve
An economy is in equilibrium at PL1 and Y1. Calculate the new equilibrium as a result of the following supply shocks (a minimal numerical model follows this list):
1. The government raises the level of minimum wage in the country, above the rate of productivity growth.
2. A sharp fall in oil prices, resulting from a glut in global supply.
3. A significant fall in the value of a nation's currency compared to its main trading partners.
4. A period of very low interest rates, over a sustained period of time.
5. A rise in the rate of sales tax.
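The sketch below makes the direction of these changes concrete using a deliberately simple linear model with made-up parameters; the names ad_intercept, ad_slope, as_intercept and as_slope are illustrative, not standard notation. Shock 1, the minimum-wage rise, is treated as a leftward shift of SRAS because it raises firms' unit costs.

```python
# A minimal linear AD/SRAS model with hypothetical parameters -- it shows only
# the direction of the change in the price level (P) and real output (Y).
def equilibrium(ad_intercept, ad_slope, as_intercept, as_slope):
    """Solve AD: Y = ad_intercept - ad_slope*P against SRAS: Y = as_intercept + as_slope*P."""
    P = (ad_intercept - as_intercept) / (ad_slope + as_slope)
    Y = ad_intercept - ad_slope * P
    return P, Y

# Initial equilibrium (PL1, Y1)
P1, Y1 = equilibrium(ad_intercept=500, ad_slope=2, as_intercept=200, as_slope=4)

# Shock 1: minimum wage rises faster than productivity, raising unit costs,
# so SRAS shifts left (a lower intercept at every price level).
P2, Y2 = equilibrium(ad_intercept=500, ad_slope=2, as_intercept=140, as_slope=4)

print(f"Before the shock: P = {P1:.1f}, Y = {Y1:.1f}")   # P = 50.0, Y = 400.0
print(f"After the shock:  P = {P2:.1f}, Y = {Y2:.1f}")   # P = 60.0, Y = 380.0
# The price level rises and real output falls.
```

The same function can be reused for the other items by adjusting whichever curve the shock plausibly affects: a fall in oil prices, for example, shifts SRAS to the right (a higher as_intercept), lowering the price level and raising real output, while a shock working mainly through spending would change the AD intercept instead.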
Activity 5: Practise questions
Illustrate the effect on either AS and / or AD of the following:
(a) A rise in income tax in the economy.
(b) A fall in oil prices.
(c) A rise in interest rates in the economy.
(d) A rise in minimum wage.
(e) A rise in the value of the currency relative to the country's main trading partners.
(f) A fall in corporation tax rates.
Watch the following short video and then answer the questions which follow:
1. Outline the factors currently contributing to the growth of aggregate demand in the UK economy.
2. Evaluate the outlook for overall demand in the UK economy described in the video.
Activity 7: Link to the paper one examination
Examples of typical paper one questions include:
(a) Explain how a rise in either business or consumer confidence can affect economic growth. [10 marks]
(b) Using real world examples, discuss the view that rises in economic growth will also lead to improved living standards in a country. [15 marks] | https://www.thinkib.net/economics/page/29933/aggregate-demand-and-supply | 21 |
15 | Dominion of New England
- Motto: Nunquam libertas gratior extat (Latin); English: "Nowhere does liberty appear in a greater form"
- Map caption: Map of the Dominion, represented in dark red, as of 1688. Names of the constituent and neighboring colonies also shown.
- Common languages: English, Dutch, French, Iroquoian, Algonquian
- Government: British direct-rule colonial government
- Monarchs: William III and Mary II
- Legislature: Council of New England
- Historical era: British colonization of the Americas; colonial history of the United States
- Key dates: April 18, 1689 (Boston revolt); May 31, 1689
- Today part of: United States
The Dominion of New England in America (1686–89) was an administrative union of English colonies covering New England and the Mid-Atlantic Colonies (except for Delaware Colony and the Province of Pennsylvania). Its political structure represented centralized control similar to the model used by the Spanish monarchy through the Viceroyalty of New Spain. The dominion was unacceptable to most colonists because they deeply resented being stripped of their rights and having their colonial charters revoked. Governor Sir Edmund Andros tried to make legal and structural changes, but most of these were undone and the Dominion was overthrown as soon as word was received that King James II had left the throne in England. One notable change was the introduction of the Church of England into Massachusetts, whose Puritan leaders had previously refused to allow it any sort of foothold.
The Dominion encompassed a very large area from the Delaware River in the south to Penobscot Bay in the north, composed of the Province of New Hampshire, Massachusetts Bay Colony, Plymouth Colony, Colony of Rhode Island and Providence Plantations, Connecticut Colony, Province of New York, and Province of New Jersey, plus a small portion of Maine. It was too large for a single governor to manage. Governor Andros was highly unpopular and was seen as a threat by most political factions. News of the Glorious Revolution in England reached Boston in 1689, and the Puritans launched the 1689 Boston revolt against Andros, arresting him and his officers.
Leisler's Rebellion in New York deposed the dominion's lieutenant governor Francis Nicholson. After these events, the colonies that had been assembled into the dominion reverted to their previous forms of government, although some governed formally without a charter. King William III of England and Queen Mary II eventually issued new charters.
A number of English colonies were established in North America and in the West Indies during the first half of the 17th century, with varying attributes. Some originated as commercial ventures, such as the Virginia Colony, while others were founded for religious reasons, such as Plymouth Colony and Massachusetts Bay Colony. The governments of the colonies also varied. Virginia became a crown colony, despite its corporate beginning, while Massachusetts and other New England colonies had corporate charters and a great deal of administrative freedom. Other areas were proprietary colonies, such as Maryland and Carolina, owned and operated by one or a few individuals.
Following the English Restoration in 1660, King Charles II sought to streamline the administration of these colonial territories. Charles and his government began a process that brought a number of the colonies under direct crown control. One reason for these actions was the cost of administration of individual colonies, but another significant reason was the regulation of trade. Throughout the 1660s, the English Parliament passed a number of laws to regulate the trade of the colonies, collectively called the Navigation Acts. The American colonists resisted these laws, particularly in the New England colonies which had established significant trading networks with other English colonies and with other European countries and their colonies, especially Spain and the Dutch Republic. The Navigation Acts also outlawed some existing New England practices, in effect turning merchants into smugglers while significantly increasing the cost of doing business.
Some of the New England colonies presented specific problems for the king, and combining those colonies into a single administrative entity was seen as a way to resolve those problems. Plymouth Colony had never been formally chartered, and the New Haven Colony had sheltered two of the regicides of Charles I, the king's father. The territory of Maine was disputed by competing grantees and by Massachusetts, and New Hampshire was a very small, recently established crown colony.
Massachusetts had a long history of virtually theocratic rule, in addition to their widespread resistance to the Navigation Acts, and they exhibited little tolerance for non-Puritans, including supporters of the Church of England (which was most important for the king). Charles II repeatedly sought to change the Massachusetts government, but they resisted all substantive attempts at reform. In 1683, legal proceedings began to vacate the Massachusetts charter; it was formally annulled in June 1684.
The primary motivation in London was not to attain efficiency in administration, but to guarantee that the colonies served their purpose of making England richer. The "Hull Mint" under John Hull was still illegally producing the pine tree shilling, thwarting the efforts of Charles II. For the Puritans, liberty mattered most; the Church of England, which mattered most to the king, did not. To the king, minting coin without royal authority was an act of high treason, punishable by hanging, drawing and quartering.
England's desire for colonies that produced agricultural staples worked well for the southern colonies, which produced tobacco, rice, and indigo, but not so well for New England due to the geology of the region. Lacking a suitable staple, the New Englanders engaged in trade and became successful competitors to English merchants. They were now starting to develop workshops that threatened to deprive England of its lucrative colonial market for manufactured articles, such as textiles, leather goods, and ironware. The plan, therefore, was to establish a uniform all-powerful government over the northern colonies so that the people would be diverted away from manufacturing and foreign trade.
Following the revocation of the Massachusetts charter, Charles II and the Lords of Trade moved forward with plans to establish a unified administration over at least some of the New England colonies. The specific objectives of the dominion included the regulation of trade, reformation of land title practices to conform more to English methods and practices, coordination on matters of defense, and a streamlining of the administration into fewer centers. The Dominion initially comprised the territories of the Massachusetts Bay Colony, the Plymouth Colony, the Province of New Hampshire, the Province of Maine, and the Narraganset Country (present-day Washington County, Rhode Island).
Charles II had chosen Colonel Percy Kirke to govern the dominion, but Charles died before the commission was approved. King James II approved Kirke's commission in 1685, but Kirke came under harsh criticism for his role in putting down Monmouth's Rebellion, and his commission was withdrawn. A provisional commission was issued on October 8, 1685 to Massachusetts Bay native Joseph Dudley as President of the Council of New England, due to delays in developing the commission for Kirke's intended successor Sir Edmund Andros.
Dudley's limited commission specified that he would rule with an appointed council and no representative legislature. The councillors named as members of this body included a cross-section of politically moderate men from the old colonial governments. Edward Randolph had served as the crown agent investigating affairs in New England, and he was appointed to the council, as well. Randolph was also commissioned with a long list of other posts, including secretary of the dominion, collector of customs, and deputy postmaster.
Dudley's charter arrived in Boston on May 14, 1686, and he formally took charge of Massachusetts on May 25. His rule did not begin auspiciously, since a number of the Massachusetts magistrates who had been named to his council refused to serve. According to Edward Randolph, the Puritan magistrates "were of opinion that God would never suffer me to land again in this country, and thereupon began in a most arbitrary manner to assert their power higher than at any time before." Elections of colonial military officers were also compromised when many of those elected refused to serve. Dudley made a number of judicial appointments, generally favoring the political moderates who had supported accommodation of the king's wishes in the battle over the old charter.
Dudley was significantly hampered by the inability to raise revenues in the dominion. His commission did not allow the introduction of new revenue laws, and the Massachusetts government had repealed all such laws in 1683, anticipating the loss of the charter. Furthermore, many refused to pay the few remaining taxes on the grounds that they had been enacted by the old government and were thus invalid. Dudley and Randolph were largely unsuccessful in their attempts to introduce the Church of England, owing to a lack of funding and to the perceived political danger of imposing on the existing churches for their use.
Dudley and Randolph enforced the Navigation Acts, although they did not adhere entirely to the laws. Some variations were overlooked, understanding that certain provisions of the acts were unfair (some resulted in the payments of multiple duties), and they suggested to the Lords of Trade that the laws be modified to ameliorate these conditions. However, the Massachusetts economy suffered, also negatively affected by external circumstances. A dispute eventually occurred between Dudley and Randolph over matters related to trade.
During Dudley's administration, the Lords of Trade decided on September 9, 1686 to include the colonies of Rhode Island and Connecticut in the dominion, based on a petition from Dudley's council. Andros's commission had been issued in June, and he was given an annex to his commission to incorporate them into the dominion.
Andros had previously been governor of New York; he arrived in Boston on December 20, 1686 and immediately assumed power. He took a hard-line position, claiming that the colonists had left behind all their rights as Englishmen when they left England. The Reverend John Wise rallied his parishioners in 1687 to protest and resist taxes; Andros had him arrested, convicted, and fined. An Andros official explained, "Mr. Wise, you have no more privileges Left you then not to be Sold for Slaves."
His commission called for governance by himself, again with a council. The initial composition of the council included representatives from each of the colonies which the dominion absorbed, but the council's quorums were dominated by representatives from Massachusetts and Plymouth because of the inconvenience of travel and the fact that travel costs were not reimbursed.
Church of England
Shortly after his arrival, Andros asked each of the Puritan churches in Boston if its meetinghouse could be used for services of the Church of England, but he was consistently rebuffed. He then demanded keys to Samuel Willard's Third Church in 1687, and services were held there under the auspices of Robert Ratcliff until 1688, when King's Chapel was built.
After Andros' arrival, the council began a long process of harmonizing laws throughout the dominion to conform more closely to English laws. This work was so time-consuming that Andros issued a proclamation in March 1687 stating that pre-existing laws would remain in effect until they were revised. Massachusetts had no pre-existing tax laws, so a scheme of taxation was developed that would apply to the entire dominion, developed by a committee of landowners. The first proposal derived its revenues from import duties, principally alcohol. After much debate, a different proposal was abruptly put forward and adopted, in essence reviving previous Massachusetts tax laws. These laws had been unpopular with farmers who felt that the taxes were too high on livestock. In order to bring in immediate revenue, Andros also received approval to increase the import duties on alcohol.
The first attempts to enforce the revenue laws were met by stiff resistance from a number of Massachusetts communities. Several towns refused to choose commissioners to assess the town population and estates, and officials from a number of them were consequently arrested and brought to Boston. Some were fined and released, while others were imprisoned until they promised to perform their duties. The leaders of Ipswich had been most vocal in their opposition to the law; they were tried and convicted of misdemeanor offenses.
The other provinces did not resist the imposition of the new law, even though the rates were higher than they had been under the previous colonial administration, at least in Rhode Island. Plymouth's relatively poor landowners were hard hit because of the high rates on livestock.
Town meeting laws
One consequence of the tax protest was that Andros sought to restrict town meetings, since these were where that protest had begun. He, therefore, introduced a law that limited meetings to a single annual meeting, solely for the purpose of electing officials, and explicitly banning meetings at other times for any reason. This loss of local power was widely hated. Many protests were made that the town meeting and tax laws were violations of the Magna Carta, which guaranteed taxation by representatives of the people.
Land titles and taxes
Andros dealt a major blow to the colonists by challenging their title to the land; unlike in England, the great majority of Americans were land-owners. Taylor says that, because they "regarded secure real estate as fundamental to their liberty, status, and prosperity, the colonists felt horrified by the sweeping and expensive challenge to their land titles." Andros had been instructed to bring colonial land title practices more in line with those in England, and to introduce quit-rents as a means of raising colonial revenues. Titles issued in Massachusetts, New Hampshire, and Maine under the colonial administration often suffered from defects of form (for example, lacking an imprint of the colonial seal), and most of them did not include a quit-rent payment. Land grants in colonial Connecticut and Rhode Island had been made before either colony had a charter, and there were conflicting claims in a number of areas.
The manner in which Andros approached the issue was doubly divisive, since it threatened any landowner whose title was in any way dubious. Some landowners went through the confirmation process, but many refused, since they did not want to face the possibility of losing their land, and they viewed the process as a thinly veiled land grab. The Puritans of Plymouth and Massachusetts Bay were among the latter, some of whom had extensive landholdings. All of the existing land titles in Massachusetts had been granted under the now-vacated colonial charter; in essence, Andros declared them to be void, and required landowners to recertify their ownership, paying fees to the dominion and becoming subject to the charge of a quit-rent.
Andros attempted to compel the certification of ownership by issuing writs of intrusion, but large landowners who owned many parcels contested these individually, rather than recertifying all of their lands. The number of new titles issued during the Andros regime was small; 200 applications were made, but only about 20 of those were approved.
Andros' commission included Connecticut, and he asked Connecticut Governor Robert Treat to surrender the colonial charter not long after his arrival in Boston. Connecticut officials formally acknowledged Andros' authority, unlike Rhode Island, whose officials acceded to the dominion but in fact did little to assist him. Connecticut continued to run their government according to the charter, holding quarterly meetings of the legislature and electing colony-wide officials, while Treat and Andros negotiated over the surrender of the charter. In October 1687, Andros finally decided to travel to Connecticut to personally see to the matter. He arrived in Hartford on October 31, accompanied by an honor guard, and met that evening with the colonial leadership. According to legend, the charter was laid out on the table for all to see during this meeting. The lights in the room unexpectedly went out and, when they were relit, the charter had disappeared. It was said to have been hidden in a nearby oak tree (referred to afterward as the Charter Oak) so that a search of nearby buildings could not locate the document.
Whatever the truth of the legend, Connecticut records show that its government formally surrendered its seals and ceased operation that day. Andros then traveled throughout the colony, making judicial and other appointments, before returning to Boston. On December 29, 1687, the dominion council formally extended its laws over Connecticut, completing the assimilation of the New England colonies.
Inclusion of New York and the Jerseys
On May 7, 1688, the provinces of New York, East Jersey, and West Jersey were added to the Dominion. They were remote from Boston where Andros had his seat, so New York and the Jerseys were run by Lieutenant Governor Francis Nicholson from New York City. Nicholson was an army captain and protégé of colonial secretary William Blathwayt who came to Boston in early 1687 as part of Andros' honor guard and had been promoted to his council. During the summer of 1688, Andros traveled first to New York and then to the Jerseys to establish his commission. Dominion governance of the Jerseys was complicated by the fact that the proprietors' charters had been revoked, yet they had retained their property and petitioned Andros for what were traditional manorial rights. The dominion period in the Jerseys was relatively uneventful because of their distance from the power centers and the unexpected end of the Dominion in 1689.
In 1687, governor of New France Jacques-René de Brisay de Denonville, Marquis de Denonville launched an attack against Seneca villages in what is now western New York. His objective was to disrupt trade between the English at Albany and the Iroquois confederation, to which the Seneca belonged, and to break the Covenant Chain, a peace that Andros had negotiated in 1677 while he was governor of New York. New York Governor Thomas Dongan appealed for help, and King James ordered Andros to render assistance. James also entered into negotiations with Louis XIV of France, which resulted in an easing of tensions on the northwestern frontier.
On New England's northeastern frontier, however, the Abenaki harbored grievances against English settlers, and they began an offensive in early 1688. Andros made an expedition into Maine early in the year, in which he raided a number of Indian settlements. He also raided the trading outpost and home of Jean-Vincent d'Abbadie de Saint-Castin on Penobscot Bay. His careful preservation of the Catholic Castin's chapel was a source of later accusations of "popery" against Andros.
Andros took over the administration of New York in August 1688, and he met with the Iroquois at Albany to renew the covenant. In this meeting, he annoyed the Iroquois by referring to them as "children" (that is, subservient to the English) rather than "brethren" (that is, equals). He returned to Boston amid further attacks on the New England frontier by Abenaki parties, who admitted that they were doing so in part because of French encouragement. The situation in Maine had also deteriorated again, with English colonists raiding Indian villages and shipping the captives to Boston. Andros castigated the Mainers for this unwarranted act and ordered the Indians released and returned to Maine, earning the hatred of the Maine settlers. He then returned to Maine with a significant force, and began the construction of additional fortifications to protect the settlers. Andros spent the winter in Maine, and returned to Boston in March upon hearing rumors of revolution in England and discontent in Boston.
Glorious Revolution and dissolution
The religious leaders of Massachusetts, led by Cotton and Increase Mather, were opposed to the rule of Andros and organized dissent aimed at influencing the court in London. After King James published the Declaration of Indulgence in May 1687, Increase Mather sent a letter to the king thanking him for the declaration, and then suggested to his peers that they also express gratitude to the king as a means to gain favor and influence. Ten pastors agreed to do so, and they decided to send Mather to England to press their case against Andros. Edward Randolph attempted to stop him; Mather was arrested, tried, and exonerated on one charge, but Randolph obtained a second arrest warrant with new charges. Mather was clandestinely spirited aboard a ship bound for England in April 1688. He and other Massachusetts agents were well received by James, who promised in October 1688 that the colony's concerns would be addressed. However, the events of the Glorious Revolution overtook these efforts, and by December James had been deposed by William III and Mary II.
The Massachusetts agents then petitioned the new monarchs and the Lords of Trade for restoration of the old Massachusetts charter. Mather furthermore convinced the Lords of Trade to delay notifying Andros of the revolution. He had already dispatched a letter to previous colonial governor Simon Bradstreet containing news that a report (prepared before the revolution) stated that the charter had been illegally annulled, and that the magistrates should "prepare the minds of the people for a change." News of the revolution apparently reached some individuals as early as late March, and Bradstreet is one of several possible organizers of the mob that formed in Boston on April 18, 1689. He and other pre-Dominion magistrates and some members of Andros' council addressed an open letter to Andros on that day calling for his surrender in order to quiet the mob. Andros, Randolph, Dudley, and other dominion supporters were arrested and imprisoned in Boston.
In effect, the dominion then collapsed, as local authorities in each colony seized dominion representatives and reasserted their earlier power. In Plymouth, dominion councilor Nathaniel Clark was arrested on April 22, and previous governor Thomas Hinckley was reinstated. Rhode Island authorities organized a resumption of its charter with elections on May 1, but previous governor Walter Clarke refused to serve, and the colony continued without one. In Connecticut, the earlier government was also rapidly readopted. New Hampshire was temporarily left without formal government, and came under de facto rule by Massachusetts Governor Simon Bradstreet.
News of the Boston revolt reached New York by April 26, but Lieutenant Governor Nicholson did not take any immediate action. Andros managed during his captivity to have a message sent to Nicholson. Nicholson received the request for assistance in mid-May, but he was unable to take any effective action due to rising tensions in New York, combined with the fact that most of Nicholson's troops had been sent to Maine. At the end of May, Nicholson was overthrown by local colonists supported by the militia in Leisler's Rebellion, and he fled to England. Leisler governed New York until 1691, when King William commissioned Colonel Henry Sloughter as its governor. Sloughter had Leisler tried on charges of high treason; he was convicted in a trial presided over by Joseph Dudley and then executed.
Massachusetts and Plymouth
The dissolution of the dominion presented legal problems for both Massachusetts and Plymouth. Plymouth never had a royal charter, and the charter of Massachusetts had been revoked. As a result, the restored governments lacked legal foundations for their existence, an issue that the leadership's political opponents made a point of raising. This was particularly problematic in Massachusetts, whose long frontier with New France lost its defenders in the aftermath of the revolt and was exposed to French and Indian raids after the outbreak of King William's War in 1689. The cost of colonial defense resulted in a heavy tax burden, and the war also made it difficult to rebuild the colony's trade.
Agents for both colonies worked in England to rectify the charter issues, with Increase Mather petitioning the Lords of Trade for a restoration of the old Massachusetts charter. King William was informed that this would result in a return of the Puritan government, and he wanted to prevent that from happening, so the Lords of Trade decided to solve the issue by combining the two colonies. The resulting Province of Massachusetts Bay combined the territories of Massachusetts and Plymouth along with Martha's Vineyard, Nantucket, and the Elizabeth Islands that had been part of Dukes County in the Province of New York.
This is a list of the chief administrators of the Dominion of New England in America from 1684 to 1689:
| Name | Title | Date of commission | Date office assumed | Date term ended |
| --- | --- | --- | --- | --- |
| Percy Kirke | Governor in Chief (designate) of the Dominion of New England | 1684 | Appointment withdrawn in 1685 | Not applicable |
| Joseph Dudley | President of the Council of New England | October 8, 1685 | May 25, 1686 | December 20, 1686 |
| Sir Edmund Andros | Governor in Chief of the Dominion of New England | June 3, 1686 | December 20, 1686 | April 18, 1689 |
- Adams, James Truslow (1921). The Founding of New England. Boston, MA: Atlantic Monthly Press.
- Barnes, Viola Florence (1923). The Dominion of New England: A Study in British Colonial Policy. ISBN 978-0-8044-1065-6. OCLC 395292.
- Dunn, Richard S. (1998). "The Glorious Revolution and America". In The Origins of Empire: British Overseas Enterprise to the Close of the Seventeenth Century (The Oxford History of the British Empire, vol. 1). pp. 445–66.
- Dunn, Randy (2007). "Patronage and Governance in Francis Nicholson's Empire". English Atlantics Revisited. Montreal: McGill-Queens Press. ISBN 978-0-7735-3219-9. OCLC 429487739.
- Hall, Michael Garibaldi (1960). Edward Randolph and the American Colonies. Chapel Hill, NC: University of North Carolina Press.
- Hall, Michael Garibaldi (1988). The Last American Puritan: The Life of Increase Mather, 1639–1723. Wesleyan University Press. ISBN 978-0-8195-5128-3. OCLC 16578800.
- Kimball, Everett (1911). The Public Life of Joseph Dudley. New York: Longmans, Green. OCLC 1876620.
- Lovejoy, David (1987). The Glorious Revolution in America. Middletown, CT: Wesleyan University Press. ISBN 978-0-8195-6177-0. OCLC 14212813.
- Lustig, Mary Lou (2002). The Imperial Executive in America: Sir Edmund Andros, 1637–1714. Fairleigh Dickinson University Press. ISBN 978-0-8386-3936-8. OCLC 470360764.
- Miller, Guy Howard (May 1968). "Rebellion in Zion: The Overthrow of the Dominion of New England". Historian. 30 (3): 439–459. doi:10.1111/j.1540-6563.1968.tb00328.x.
- Moore, Jacob Bailey (1851). Lives of the Governors of New Plymouth and Massachusetts Bay. Boston, MA: C. D. Strong. OCLC 11362972.
- Palfrey, John (1864). History of New England: History of New England During the Stuart Dynasty. Boston: Little, Brown. OCLC 1658888.
- Stanwood, Owen (2007). "The Protestant Moment: Antipopery, the Revolution of 1688–1689, and the Making of an Anglo-American Empire". Journal of British Studies. 46 (3): 481–508. doi:10.1086/515441. JSTOR 10.1086/515441.
- Steele, Ian K (March 1989). "Origins of Boston's Revolutionary Declaration of 18 April 1689". New England Quarterly. 62 (1): 75–81. JSTOR 366211.
- Taylor, Alan, American Colonies: the Settling of North America, Penguin Books, 2001.
- Tuttle, Charles Wesley (1880). New Hampshire Without Provincial Government, 1689–1690: an Historical Sketch. Cambridge, MA: J. Wilson and Son. OCLC 12783351.
- Webb, Stephen Saunders. Lord Churchill's coup: the Anglo-American empire and the Glorious Revolution reconsidered (Syracuse University Press, 1998)
- Hall, Michael G. (1979). "Origins in Massachusetts of the Constitutional Doctrine of Advice and Consent". Proceedings of the Massachusetts Historical Society. Third Series. Massachusetts Historical Society. 91: 5. JSTOR 25080845.
Randolph's efforts at reporting unfavorably on the autonomous and "democratical" government of Massachusetts brought about in 1684 the total annulment of the first charter and the imposition of a new, arbitrary, prerogative government.
- Curtis P. Nettels, The Roots of American Civilization: A History of American Colonial Life (1938) p. 297.
- Barnes, p. 45
- Barnes, pp. 47–48
- Barnes, p. 48
- Barnes, p. 49
- Barnes, p. 50
- Barnes, pp. 50, 54
- Barnes, p. 51
- Barnes, p. 53
- Barnes, p. 55
- Barnes, p. 56
- Barnes, p. 58
- Barnes, p. 59
- Barnes, p. 61
- Barnes, pp. 62–63
- Barnes, p. 68
- Lustig, p. 141
- Alan Taylor, American Colonies: The Settling of North America (2001) p. 277
- Lustig, p. 164
- Lustig, p. 165
- Barnes, p. 84
- Barnes, p. 85
- Lovejoy, p. 184
- Barnes, p. 97
- Taylor, p. 277
- Barnes, p. 176
- Barnes, pp. 182, 187
- Barnes, pp. 189–193
- Barnes, pp. 199–201
- Federal Writers Project (1940). Connecticut: A Guide to Its Roads, Lore and People. p. 170.
- Palfrey, pp. 545–546
- Palfrey, p. 548
- Dunn, p. 64
- Lovejoy, p. 211
- Lovejoy, pp. 212–213
- Lustig, p. 171
- Lustig, p. 173
- Lustig, p. 174
- Lustig, p. 176
- Lustig, pp. 177–179
- Hall (1988), pp. 207–210
- Hall (1988), p. 210
- Hall (1988), pp. 210–211
- Hall (1988), p. 217
- Barnes, p. 234
- Barnes, pp. 234–235
- Barnes, p. 238
- Steele, p. 77
- Steele, p. 78
- Lovejoy, p. 241
- Palfrey, p. 596
- Tuttle, pp. 1–12
- Lovejoy, p. 252
- Lustig, p. 199
- Lovejoy, pp. 255–256
- Lovejoy, pp. 326–338
- Lovejoy, pp. 355–357
- Kimball, pp. 61–63
- Barnes, p. 257
Are your kids tired of painstakingly learning math facts by rote? A better use of that time is a playful, hands-on math game that teaches the same skills while keeping kids engaged and having fun! This free printable number bonds game is a great way for kindergarten, first grade, and 2nd grade students to develop their number sense with an addition and subtraction practice activity!
Number Bonds Games
Help your kindergartners, grade 1, and grade 2 students practice seeing the relationship between numbers with this fun, hands on number bonds games! Simply print and play to create addition and subtraction problems. Whether you are a parent, teacher, or homeschooler – you will love this hands-on number bonds activity for children. Use it as extra practice, summer learning, at a math center in your classroom, or a supplement to your Singapore math curriculum.
This set of number bond matching cards will allow students to explore and discover connections between addition and subtraction.
Number bond games printable
Developing Number Sense
Though number sense is not something that can be taught, we can provide opportunities for kids to discover connections and form meaningful mathematical understandings. One of the ways we can do this is with visual models.
A number bond is a powerful visual model that helps kids see how to compose and decompose numbers. Seeing how to break apart numbers leads to a deeper understanding of how numbers work and relate to each other. It also provides an opportunity for kids to discover patterns.
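For example, here is the full "fact family" that comes from just one way of splitting a number (any pair of parts works the same way). Take the whole 10 and the parts 6 and 4:

- 6 + 4 = 10
- 4 + 6 = 10
- 10 - 6 = 4
- 10 - 4 = 6

Seeing that all four facts come from the same picture is exactly the kind of connection the number bond makes visible.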
Number bonds game
Included in this Number Bonds Games download is a set of 24 matching cards. One card includes a number bond with either a “part” or the “whole” missing, while the matching card contains the missing value.
The cards include numbers up to 20, but you could focus only on numbers up to 10 if you have younger ones.
Number bond games kindergarten
Also included are two equation mats: one for addition equations and one for subtraction equations.
To get started with these materials, I suggest you print and laminate all the cards and the equation mats. Then cut out the matching cards.
Laminating them will allow kids to write on them over and over again with a dry erase marker.
Number bonds first grade
There are dozens of ways you can use these cards, but here are some suggestions to get you started.
- First, simply use the matching cards as a matching game for kids to complete individually. This would be a super simple, low-prep math center activity, requiring little to no explanation.
- Another idea is to let partners use them as a memory game or "go fish" game. To set up a memory game, simply lay all the cards out face down on the table. Kids then take turns flipping over 2 cards. If the cards are a match, they keep the set. Otherwise, they turn them back over and it's their partner's turn. Once all the matches have been found, the player with the most correct matches wins.
- Go Fish is played the same way, except that each player starts with a hand of 5 cards, while the remaining cards are placed in a pile face down. Players then take turns asking the other for a match to one of their cards. For instance, they might ask, “Do you have a 6?” if the missing value on one of their number bonds is 6. If the partner has that card, they give it to them. Otherwise, the player draws a card from the pile and it’s their partner’s turn. Once all the matches have been found, the player with the most correct matches wins.
- An alternative to using these as matching cards is to simply use the cards with number bonds (without the matches).
Here are ideas for independent practice with number bonds
- Cut out the cards and place them on a key ring. Then allow kids to work through the stack with a dry erase marker, filling in the missing value.
- Then you can combine the answer cards with the equation mats for some hands-on math. To get started, you will need something to count such as beads, base ten blocks, counting bears, candy, etc. Then give kids the answer cards to place in the top of the number bond. They can then use their counting manipulatives to break the number apart in different ways and write equations below. This is a great exercise for kids because there will always be more than one right answer (see the example after this list), and it's important for them to see that.
HINT: When they’ve completed the number bond and equation for a given number, ask them to find another equation. Once they’re familiar and comfortable using the mat and writing equations, ask how many different equations they can come up with.
- You could even encourage them to move the answer card to a different part of the number bond, and see how that changes their equations.
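As a quick illustration of the "more than one right answer" point mentioned above, here is how the decompositions might look if the whole on the answer card is 8 (any other whole works the same way):

- 8 = 0 + 8
- 8 = 1 + 7
- 8 = 2 + 6
- 8 = 3 + 5
- 8 = 4 + 4

Each line fills the two "parts" of the number bond differently, and each one can also be turned into subtraction facts (for example, 8 - 5 = 3 and 8 - 3 = 5).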
However you decide to use this resource, I hope it opens up your kids' understanding of addition and subtraction, and allows for fun, open-ended discoveries.
Number Bond Printables
Looking for more number bond activities for kids? Check out these fun, free resources:
- Pumpkin Number Bonds to 10 Worksheet
- Glue Number Bonds to 10
- Flower Number Bonds to 20
- Acorn Number Bonds Worksheets
- Free Printable Number Bond Games
- Hands-on Apple Number Bonds
- Fun Ice Cream Number Bonds to 10 Activities
- Printable Apple Number Bond Activities
- Fishing Number Bonds to 10 Craft (free printable)
This is such a fun, hands on math game for kindergartners that helps achieve fluency while having FUN!
- Printable Addition Tic Tac Toe Math Game
- Gumball Math – Addition and Subtraction Practice
- Watermelon Addition Within 10 activity
- 100 Epic Addition Activities
- Fall Addition Worksheets for Kindergarten
- Sleeping Beauty Color by Addition Math Worksheets
- Crack the Code Worksheet – Addition Practice
- Hands-on Goldfish Addition Practice
- Pot of Gold Addition Practice
- Rubber Duckie Kindergarten Math activity
- Planting Seeds Free Addition Games
- Free Printable Pumpkin Sum Game
- Math Addition Worksheets Mini Book – Adding 0-12
- Deck of Cards Free Addition Worksheets
- Disney Princess Flashcards – Addition
- Free Addition within 10 Games Printable
- Turkey Addition Thanksgiving Math Worksheets
- Apple Addition Coloring Pages
- Free Frog Math – Addition Game for Kids
- St Patricks Day Addition Practice
- Gingerbread Addition Math Game
- Math Mystery Pack – Solve Addition & Subtraction Word Problems
Looking for more fun, creative ways you can begin homeschooling for free? See our history lesson plans, math games for kids, english worksheets, sight words activities, alphabet worksheets, and cvc word games for kids of all ages!
Download Number Bonds Games
Before you download your free pack you agree to the following:
- This set is for personal and classroom use only.
- This printable set may not be sold, hosted, reproduced, or stored on any other website or electronic retrieval system.
- Graphics Purchased and used with permission
- All downloadable material provided on this blog is copyright protected.
Balance of Payments:
The balance of payments accounts of a country record the payments and receipts of the residents of the country in their transactions with residents of other countries. If all transactions are included, the payments and receipts of each country are, and must be, equal. Any apparent inequality simply leaves one country acquiring assets in the others. For example, if Americans buy automobiles from Japan, and have no other transactions with Japan, the Japanese must end up holding dollars, which they may hold in the form of bank deposits in the United States or in some other U.S. investment. The payments of Americans to Japan for automobiles are balanced by the payments of Japanese to U.S. individuals and institutions, including banks, for the acquisition of dollar assets.
Inequalities can still appear when only part of the accounts is considered, however, and such a difference between payments and receipts is what we call a surplus or a deficit.
Balance of Trade:
The difference in value over a period of time between a country’s imports and exports of goods and services, usually expressed in the unit of currency of a particular country or economic union (e.g., dollars for the United States, pounds sterling for the United Kingdom, or euros for the European Union). The balance of trade is part of a larger economic unit, the BALANCE OF PAYMENTS (the sum total of all economic transactions between one country and its trading partners around the world), which includes capital movements (money flowing to a country paying high interest rates of return), loan repayment, expenditures by tourists, freight and insurance charges, and other payments.
If the exports of a country exceed its imports, the country is said to have a favorable balance of trade, or a trade surplus. Conversely, if the imports exceed exports, an unfavorable balance of trade, or a trade deficit, exists. According to the economic theory of mercantilism, which prevailed in Europe from the 16th to the 18th century, a favorable balance of trade was a necessary means of financing a country’s purchase of foreign goods and maintaining its export trade. This was to be achieved by establishing colonies that would buy the products of the mother country and would export raw materials (particularly precious metals), which were considered an indispensable source of a country’s wealth and power.
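To make the arithmetic concrete, consider a purely hypothetical example (the figures are invented for illustration, not actual trade statistics): if a country exports $500 billion worth of goods and services in a year and imports $450 billion, its balance of trade is 500 - 450 = +$50 billion, a trade surplus. If the figures were reversed, the balance would be -$50 billion, a trade deficit.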
Finally, it is with the help of these two measures that a country's foreign trade is assessed, and both are of great use in understanding the development of the Indian economy.
Cretaceous–Paleogene extinction event
The Cretaceous–Paleogene (K–Pg) extinction event (also known as the Cretaceous–Tertiary (K–T) extinction) was a sudden mass extinction of three-quarters of the plant and animal species on Earth, approximately 66 million years ago. With the exception of some ectothermic species such as sea turtles and crocodilians, no tetrapods weighing more than 25 kilograms (55 pounds) survived. It marked the end of the Cretaceous period, and with it the Mesozoic Era, while heralding the beginning of the Cenozoic Era, which continues to this day.
In the geologic record, the K–Pg event is marked by a thin layer of sediment called the K–Pg boundary, which can be found throughout the world in marine and terrestrial rocks. The boundary clay shows unusually high levels of the metal iridium, which is more common in asteroids than in the Earth's crust.
As originally proposed in 1980 by a team of scientists led by Luis Alvarez and his son Walter, it is now generally thought that the K–Pg extinction was caused by the impact of a massive comet or asteroid 10 to 15 km (6 to 9 mi) wide, 66 million years ago, which devastated the global environment, mainly through a lingering impact winter which halted photosynthesis in plants and plankton. The impact hypothesis, also known as the Alvarez hypothesis, was bolstered by the discovery of the 180 km (112 mi) Chicxulub crater in the Gulf of Mexico's Yucatán Peninsula in the early 1990s, which provided conclusive evidence that the K–Pg boundary clay represented debris from an asteroid impact. The fact that the extinctions occurred simultaneously provides strong evidence that they were caused by the asteroid. A 2016 drilling project into the Chicxulub peak ring confirmed that the peak ring comprised granite ejected within minutes from deep in the earth, but contained hardly any gypsum, the usual sulfate-containing sea floor rock in the region: the gypsum would have vaporized and dispersed as an aerosol into the atmosphere, causing longer-term effects on the climate and food chain. In October 2019, researchers reported that the event rapidly acidified the oceans, producing ecological collapse and, in this way as well, produced long-lasting effects on the climate, and accordingly was a key reason for the mass extinction at the end of the Cretaceous. In January 2020, scientists reported new evidence that the extinction event was mostly a result of the asteroid impact and not volcanism.
A wide range of species perished in the K–Pg extinction, the best-known being the non-avian dinosaurs. It also destroyed myriad other terrestrial organisms, including some mammals, birds, lizards, insects, plants, and all the pterosaurs. In the oceans, the K–Pg extinction killed off plesiosaurs and mosasaurs and devastated teleost fish, sharks, mollusks (especially ammonites, which became extinct), and many species of plankton. It is estimated that 75% or more of all species on Earth vanished. Yet the extinction also provided evolutionary opportunities: in its wake, many groups underwent remarkable adaptive radiation — sudden and prolific divergence into new forms and species within the disrupted and emptied ecological niches. Mammals in particular diversified in the Paleogene, evolving new forms such as horses, whales, bats, and primates. The surviving group of dinosaurs were avians, ground- and water-dwelling fowl that radiated into all modern species of bird. Teleost fish, and perhaps lizards, also radiated.
The event appears to have affected all continents at the same time. Non-avian dinosaurs, for example, are known from the Maastrichtian of North America, Europe, Asia, Africa, South America, and Antarctica, but are unknown from the Cenozoic anywhere in the world. Similarly, fossil pollen shows devastation of the plant communities in areas as far apart as New Mexico, Alaska, China, and New Zealand.
Despite the event's severity, there was significant variability in the rate of extinction between and within different clades. Species that depended on photosynthesis declined or became extinct as atmospheric particles blocked sunlight and reduced the solar energy reaching the ground. This plant extinction caused a major reshuffling of the dominant plant groups. Omnivores, insectivores, and carrion-eaters survived the extinction event, perhaps because of the increased availability of their food sources. No purely herbivorous or carnivorous mammals seem to have survived. Rather, the surviving mammals and birds fed on insects, worms, and snails, which in turn fed on detritus (dead plant and animal matter).
In stream communities, few animal groups became extinct, because such communities rely less directly on food from living plants, and more on detritus washed in from the land, protecting them from extinction. Similar, but more complex patterns have been found in the oceans. Extinction was more severe among animals living in the water column than among animals living on or in the sea floor. Animals in the water column are almost entirely dependent on primary production from living phytoplankton, while animals on the ocean floor always or sometimes feed on detritus. Coccolithophorids and mollusks (including ammonites, rudists, freshwater snails, and mussels), and those organisms whose food chain included these shell builders, became extinct or suffered heavy losses. For example, it is thought that ammonites were the principal food of mosasaurs, a group of giant marine reptiles that became extinct at the boundary. The largest air-breathing survivors of the event, crocodyliforms and champsosaurs, were semi-aquatic and had access to detritus. Modern crocodilians can live as scavengers and survive for months without food, and their young are small, grow slowly, and feed largely on invertebrates and dead organisms for their first few years. These characteristics have been linked to crocodilian survival at the end of the Cretaceous.
The K–Pg boundary represents one of the most dramatic turnovers in the fossil record for various calcareous nanoplankton that formed the calcium deposits for which the Cretaceous is named. The turnover in this group is clearly marked at the species level. Statistical analysis of marine losses at this time suggests that the decrease in diversity was caused more by a sharp increase in extinctions than by a decrease in speciation. The K–Pg boundary record of dinoflagellates is not so well understood, mainly because only microbial cysts provide a fossil record, and not all dinoflagellate species have cyst-forming stages, which likely causes diversity to be underestimated. Recent studies indicate that there were no major shifts in dinoflagellates through the boundary layer.
Radiolaria have left a geological record since at least the Ordovician times, and their mineral fossil skeletons can be tracked across the K–Pg boundary. There is no evidence of mass extinction of these organisms, and there is support for high productivity of these species in southern high latitudes as a result of cooling temperatures in the early Paleocene. Approximately 46% of diatom species survived the transition from the Cretaceous to the Upper Paleocene, a significant turnover in species but not a catastrophic extinction.
The occurrence of planktonic foraminifera across the K–Pg boundary has been studied since the 1930s. Research spurred by the possibility of an impact event at the K–Pg boundary resulted in numerous publications detailing planktonic foraminiferal extinction at the boundary; there is ongoing debate between groups which think the evidence indicates substantial extinction of these species at the K–Pg boundary, and those who think the evidence supports multiple extinctions and expansions through the boundary.
Numerous species of benthic foraminifera became extinct during the event, presumably because they depend on organic debris for nutrients, while biomass in the ocean is thought to have decreased. As the marine microbiota recovered, it is thought that increased speciation of benthic foraminifera resulted from the increase in food sources. Phytoplankton recovery in the early Paleocene provided the food source to support large benthic foraminiferal assemblages, which are mainly detritus-feeding. Ultimate recovery of the benthic populations occurred over several stages lasting several hundred thousand years into the early Paleocene.
There is significant variation in the fossil record as to the extinction rate of marine invertebrates across the K–Pg boundary. The apparent rate is influenced by a lack of fossil records, rather than extinctions.
Ostracods, a class of small crustaceans that were prevalent in the upper Maastrichtian, left fossil deposits in a variety of locations. A review of these fossils shows that ostracod diversity was lower in the Paleocene than any other time in the Cenozoic. Current research cannot ascertain whether the extinctions occurred prior to, or during, the boundary interval.
Approximately 60% of late-Cretaceous Scleractinia coral genera failed to cross the K–Pg boundary into the Paleocene. Further analysis of the coral extinctions shows that approximately 98% of colonial species, ones that inhabit warm, shallow tropical waters, became extinct. The solitary corals, which generally do not form reefs and inhabit colder and deeper (below the photic zone) areas of the ocean were less impacted by the K–Pg boundary. Colonial coral species rely upon symbiosis with photosynthetic algae, which collapsed due to the events surrounding the K–Pg boundary, but the use of data from coral fossils to support K–Pg extinction and subsequent Paleocene recovery, must be weighed against the changes that occurred in coral ecosystems through the K–Pg boundary.
The numbers of cephalopod, echinoderm, and bivalve genera exhibited significant diminution after the K–Pg boundary. Most species of brachiopods, a small phylum of marine invertebrates, survived the K–Pg extinction event and diversified during the early Paleocene.
Except for nautiloids (represented by the modern order Nautilida) and coleoids (which had already diverged into modern octopodes, squids, and cuttlefish) all other species of the molluscan class Cephalopoda became extinct at the K–Pg boundary. These included the ecologically significant belemnoids, as well as the ammonoids, a group of highly diverse, numerous, and widely distributed shelled cephalopods. Researchers have pointed out that the reproductive strategy of the surviving nautiloids, which rely upon few and larger eggs, played a role in outsurviving their ammonoid counterparts through the extinction event. The ammonoids utilized a planktonic strategy of reproduction (numerous eggs and planktonic larvae), which would have been devastated by the K–Pg extinction event. Additional research has shown that subsequent to this elimination of ammonoids from the global biota, nautiloids began an evolutionary radiation into shell shapes and complexities theretofore known only from ammonoids.
Approximately 35% of echinoderm genera became extinct at the K–Pg boundary, although taxa that thrived in low-latitude, shallow-water environments during the late Cretaceous had the highest extinction rate. Mid-latitude, deep-water echinoderms were much less affected at the K–Pg boundary. The pattern of extinction points to habitat loss, specifically the drowning of carbonate platforms, the shallow-water reefs in existence at that time, by the extinction event.
Other invertebrate groups, including rudists (reef-building clams) and inoceramids (giant relatives of modern scallops), also became extinct at the K–Pg boundary.
There are substantial fossil records of jawed fishes across the K–Pg boundary, which provide good evidence of extinction patterns of these classes of marine vertebrates. While the deep-sea realm was able to remain seemingly unaffected, there was an equal loss between the open marine apex predators and the durophagous demersal feeders on the continental shelf. Within cartilaginous fish, approximately 7 out of the 41 families of neoselachians (modern sharks, skates, and rays) disappeared after this event and batoids (skates and rays) lost nearly all the identifiable species, while more than 90% of teleost fish (bony fish) families survived.
In the Maastrichtian age, 28 shark families and 13 batoid families thrived, of which 25 and 9, respectively, survived the K–T boundary event. Forty-seven of all neoselachian genera crossed the K–T boundary, 85% of them being sharks; batoids, at 15%, showed a comparably low survival rate.
There is evidence of a mass extinction of bony fishes at a fossil site immediately above the K–Pg boundary layer on Seymour Island near Antarctica, apparently precipitated by the K–Pg extinction event; the marine and freshwater environments of fishes mitigated the environmental effects of the extinction event.
Insect damage to the fossilized leaves of flowering plants from fourteen sites in North America was used as a proxy for insect diversity across the K–Pg boundary and analyzed to determine the rate of extinction. Researchers found that Cretaceous sites, prior to the extinction event, had rich plant and insect-feeding diversity. During the early Paleocene, flora were relatively diverse with little predation from insects, even 1.7 million years after the extinction event.
There is overwhelming evidence of global disruption of plant communities at the K–Pg boundary. Extinctions are seen both in studies of fossil pollen, and fossil leaves. In North America, the data suggests massive devastation and mass extinction of plants at the K–Pg boundary sections, although there were substantial megafloral changes before the boundary. In North America, approximately 57% of plant species became extinct. In high southern hemisphere latitudes, such as New Zealand and Antarctica, the mass die-off of flora caused no significant turnover in species, but dramatic and short-term changes in the relative abundance of plant groups. In some regions, the Paleocene recovery of plants began with recolonizations by fern species, represented as a fern spike in the geologic record; this same pattern of fern recolonization was observed after the 1980 Mount St. Helens eruption.
Due to the wholesale destruction of plants at the K–Pg boundary, there was a proliferation of saprotrophic organisms, such as fungi, that do not require photosynthesis and use nutrients from decaying vegetation. The dominance of fungal species lasted only a few years while the atmosphere cleared and plenty of organic matter to feed on was present. Once the atmosphere cleared, photosynthetic organisms, initially ferns and other ground-level plants, returned. Just two species of fern appear to have dominated the landscape for centuries after the event.
Polyploidy appears to have enhanced the ability of flowering plants to survive the extinction, probably because the additional copies of the genome such plants possessed, allowed them to more readily adapt to the rapidly changing environmental conditions that followed the impact.
While it appears that many fungi were wiped out at the K-Pg boundary, it is worth noting that evidence has been found indicating that some fungal species thrived in the years after the extinction event. Microfossils from that period indicate a great increase in fungal spores, long before the resumption of plentiful fern spores in the recovery after the impact. Monoporisporites and hypha are almost exclusive microfossils for a short span during and after the iridium boundary. These saprophytes would not need sunlight, allowing them to survive during a period when the atmosphere was likely clogged with dust and sulfur aerosols.
The proliferation of fungi has occurred after several extinction events, including the Permian-Triassic extinction event, the largest known mass extinction in Earth's history, with up to 96% of all species suffering extinction.
There is limited evidence for extinction of amphibians at the K–Pg boundary. A study of fossil vertebrates across the K–Pg boundary in Montana concluded that no species of amphibian became extinct. Yet there are several species of Maastrichtian amphibian, not included as part of this study, which are unknown from the Paleocene. These include the frog Theatonius lancensis and the albanerpetontid Albanerpeton galaktion; therefore, some amphibians do seem to have become extinct at the boundary. The relatively low levels of extinction seen among amphibians probably reflect the low extinction rates seen in freshwater animals.
More than 80% of Cretaceous turtle species passed through the K–Pg boundary. All six turtle families in existence at the end of the Cretaceous survived into the Paleogene and are represented by living species.
The rhynchocephalians were a widespread and relatively successful group of lepidosaurians during the early Mesozoic, but began to decline by the mid-Cretaceous, although they were very successful in South America. They are represented today by a single genus (the Tuatara), located exclusively in New Zealand.
The order Squamata, which is represented today by lizards, snakes and amphisbaenians (worm lizards), radiated into various ecological niches during the Jurassic and was successful throughout the Cretaceous. They survived through the K–Pg boundary and are currently the most successful and diverse group of living reptiles, with more than 6,000 extant species. Many families of terrestrial squamates became extinct at the boundary, such as monstersaurians and polyglyphanodonts, and fossil evidence indicates they suffered very heavy losses in the K–T event, only recovering 10 million years after it.
Non-archosaurian marine reptiles
Giant non-archosaurian aquatic reptiles such as mosasaurs and plesiosaurs, which were the top marine predators of their time, became extinct by the end of the Cretaceous. The ichthyosaurs had disappeared from fossil records before the mass extinction occurred.
Ten families of crocodilians or their close relatives are represented in the Maastrichtian fossil records, of which five died out prior to the K–Pg boundary. Five families have both Maastrichtian and Paleocene fossil representatives. All of the surviving families of crocodyliforms inhabited freshwater and terrestrial environments—except for the Dyrosauridae, which lived in freshwater and marine locations. Approximately 50% of crocodyliform representatives survived across the K–Pg boundary, the only apparent trend being that no large crocodiles survived. Crocodyliform survivability across the boundary may have resulted from their aquatic niche and ability to burrow, which reduced susceptibility to negative environmental effects at the boundary. Jouve and colleagues suggested in 2008 that juvenile marine crocodyliforms lived in freshwater environments as do modern marine crocodile juveniles, which would have helped them survive where other marine reptiles became extinct; freshwater environments were not so strongly affected by the K–Pg extinction event as marine environments were.
One family of pterosaurs, Azhdarchidae, was definitely present in the Maastrichtian, and it likely became extinct at the K–Pg boundary. These large pterosaurs were the last representatives of a declining group that contained ten families during the mid-Cretaceous. Several other pterosaur lineages may have been present during the Maastrichtian, such as the ornithocheirids, pteranodontids, and nyctosaurids, as well as a possible tapejarid, though they are represented by fragmentary remains that are difficult to assign to any given group. While this was occurring, modern birds were undergoing diversification; traditionally it was thought that they replaced archaic birds and pterosaur groups, possibly due to direct competition, or that they simply filled empty niches, but there is no correlation between pterosaur and avian diversities conclusive enough to support a competition hypothesis, and small pterosaurs were present in the Late Cretaceous. At least some niches previously held by birds were reclaimed by pterosaurs prior to the K–Pg event.
Most paleontologists regard birds as the only surviving dinosaurs (see Origin of birds). It is thought that all non-avian theropods became extinct, including then-flourishing groups such as enantiornithines and hesperornithiforms. Several analyses of bird fossils show divergence of species prior to the K–Pg boundary, and that duck, chicken, and ratite bird relatives coexisted with non-avian dinosaurs. Large collections of bird fossils representing a range of different species provide definitive evidence for the persistence of archaic birds to within 300,000 years of the K–Pg boundary. The absence of these birds in the Paleogene is evidence that a mass extinction of archaic birds took place there.
The most successful and dominant group of avialans, enantiornithes, were wiped out. Only a small fraction of ground and water-dwelling Cretaceous bird species survived the impact, giving rise to today's birds. The only bird group known for certain to have survived the K–Pg boundary is the Aves. Avians may have been able to survive the extinction as a result of their abilities to dive, swim, or seek shelter in water and marshlands. Many species of avians can build burrows, or nest in tree holes, or termite nests, all of which provided shelter from the environmental effects at the K–Pg boundary. Long-term survival past the boundary was assured as a result of filling ecological niches left empty by extinction of non-avian dinosaurs. The open niche space and relative scarcity of predators following the K-Pg extinction allowed for adaptive radiation of various avian groups. Ratites, for example, rapidly diversified in the early Paleogene and are believed to have convergently developed flightlessness at least three to six times, often fulfilling the niche space for large herbivores once occupied by non-avian dinosaurs.
Excluding a few controversial claims, scientists agree that all non-avian dinosaurs became extinct at the K–Pg boundary. The dinosaur fossil record has been interpreted to show both a decline in diversity and no decline in diversity during the last few million years of the Cretaceous, and it may be that the quality of the dinosaur fossil record is simply not good enough to permit researchers to distinguish between the options. There is no evidence that late Maastrichtian non-avian dinosaurs could burrow, swim, or dive, which suggests they were unable to shelter themselves from the worst parts of any environmental stress that occurred at the K–Pg boundary. It is possible that small dinosaurs (other than birds) did survive, but they would have been deprived of food, as herbivorous dinosaurs would have found plant material scarce and carnivores would have quickly found prey in short supply.
The growing consensus about the endothermy of dinosaurs (see dinosaur physiology) helps to understand their full extinction in contrast with their close relatives, the crocodilians. Ectothermic ("cold-blooded") crocodiles have very limited needs for food (they can survive several months without eating), while endothermic ("warm-blooded") animals of similar size need much more food to sustain their faster metabolism. Thus, under the circumstances of food chain disruption previously mentioned, non-avian dinosaurs died out, while some crocodiles survived. In this context, the survival of other endothermic animals, such as some birds and mammals, could be due, among other reasons, to their smaller needs for food, related to their small size at the extinction epoch.
Whether the extinction occurred gradually or suddenly has been debated, as both views have support from the fossil record. A study of 29 fossil sites in the Catalan Pyrenees of Europe in 2010 supports the view that dinosaurs there had great diversity until the asteroid impact, with more than 100 living species. More recent research indicates that this figure is obscured by taphonomic biases and the sparsity of the continental fossil record. The results of this study, which were based on estimated real global biodiversity, showed that between 628 and 1,078 non-avian dinosaur species were alive at the end of the Cretaceous and underwent sudden extinction after the Cretaceous–Paleogene extinction event. Alternatively, an interpretation based on the fossil-bearing rocks along the Red Deer River in Alberta, Canada, supports the gradual extinction of non-avian dinosaurs; during the last 10 million years of the Cretaceous, the number of dinosaur species in those layers seems to have decreased from about 45 to approximately 12. Other scientists have made the same assessment following their research.
Several researchers support the existence of Paleocene non-avian dinosaurs. Evidence of this existence is based on the discovery of dinosaur remains in the Hell Creek Formation up to 1.3 m (4 ft 3.2 in) above and 40,000 years later than the K–Pg boundary. Pollen samples recovered near a fossilized hadrosaur femur found in the Ojo Alamo Sandstone at the San Juan River in Colorado indicate that the animal lived during the Cenozoic, approximately 64.5 Ma (about 1 million years after the K–Pg extinction event). If their existence past the K–Pg boundary can be confirmed, these hadrosaurids would be considered a dead clade walking. The scientific consensus is that these fossils were eroded from their original locations and then re-buried in much later sediments (also known as reworked fossils).
The choristoderes (semi-aquatic archosauromorphs) survived across the K–Pg boundary but would die out in the early Miocene. Studies on Champsosaurus' palatal teeth suggest that there were dietary changes among the various species across the K–Pg event.
All major Cretaceous mammalian lineages, including monotremes (egg-laying mammals), multituberculates, metatherians, eutherians, dryolestoideans, and gondwanatheres, survived the K–Pg extinction event, although they suffered losses. In particular, metatherians largely disappeared from North America, and the Asian deltatheroidans became extinct (aside from the lineage leading to Gurbanodelta). In the Hell Creek beds of North America, at least half of the ten known multituberculate species and all eleven metatherian species are not found above the boundary. Multituberculates in Europe and North America survived relatively unscathed and quickly bounced back in the Paleocene, but Asian forms were devastated, never again to represent a significant component of mammalian fauna. A recent study indicates that metatherians suffered the heaviest losses at the K–Pg event, followed by multituberculates, while eutherians recovered the quickest.
Mammalian species began diversifying approximately 30 million years prior to the K–Pg boundary. Diversification of mammals stalled across the boundary. Current research indicates that mammals did not explosively diversify across the K–Pg boundary, despite the ecological niches made available by the extinction of dinosaurs. Several mammalian orders have been interpreted as diversifying immediately after the K–Pg boundary, including Chiroptera (bats) and Cetartiodactyla (a diverse group that today includes whales and dolphins and even-toed ungulates), although recent research concludes that only marsupial orders diversified soon after the K–Pg boundary.
K–Pg boundary mammalian species were generally small, comparable in size to rats; this small size would have helped them find shelter in protected environments. It is postulated that some early monotremes, marsupials, and placentals were semiaquatic or burrowing, as there are multiple mammalian lineages with such habits today. Any burrowing or semiaquatic mammal would have had additional protection from K–Pg boundary environmental stresses.
North American fossils
In North American terrestrial sequences, the extinction event is best represented by the marked discrepancy between the rich and relatively abundant late-Maastrichtian pollen record and the post-boundary fern spike.
At present the most informative sequence of dinosaur-bearing rocks in the world from the K–Pg boundary is found in western North America, particularly the late Maastrichtian-age Hell Creek Formation of Montana. Comparison with the older Judith River Formation (Montana) and Dinosaur Park Formation (Alberta), which both date from approximately 75 Ma, provides information on the changes in dinosaur populations over the last 10 million years of the Cretaceous. These fossil beds are geographically limited, covering only part of one continent.
The middle–late Campanian formations show a greater diversity of dinosaurs than any other single group of rocks. The late Maastrichtian rocks contain the largest members of several major clades: Tyrannosaurus, Ankylosaurus, Pachycephalosaurus, Triceratops, and Torosaurus, which suggests food was plentiful immediately prior to the extinction.
In addition to rich dinosaur fossils, there are also plant fossils that illustrate the reduction in plant species across the K–Pg boundary. In the sediments below the K–Pg boundary the dominant plant remains are angiosperm pollen grains, but the boundary layer contains little pollen and is dominated by fern spores. More usual pollen levels gradually resume above the boundary layer. This is reminiscent of areas blighted by modern volcanic eruptions, where the recovery is led by ferns, which are later replaced by larger angiosperm plants.
The mass extinction of marine plankton appears to have been abrupt and right at the K–Pg boundary. Ammonite genera became extinct at or near the K–Pg boundary; there was a smaller and slower extinction of ammonite genera prior to the boundary associated with a late Cretaceous marine regression. The gradual extinction of most inoceramid bivalves began well before the K–Pg boundary, and a small, gradual reduction in ammonite diversity occurred throughout the very late Cretaceous.
Further analysis shows that several processes were in progress in the late Cretaceous seas and partially overlapped in time, then ended with the abrupt mass extinction. The diversity of marine life decreased when the climate near the K–Pg boundary increased in temperature. The temperature increased about three to four degrees very rapidly between 65.4 and 65.2 million years ago, which is very near the time of the extinction event. Not only did the climate temperature increase, but the water temperature decreased, causing a drastic decrease in marine diversity.
The scientific consensus is that the asteroid impact at the K–Pg boundary left megatsunami deposits and sediments around the area of the Caribbean Sea and Gulf of Mexico, from the colossal waves created by the impact. These deposits have been identified in the La Popa basin in northeastern Mexico, platform carbonates in northeastern Brazil, in Atlantic deep-sea sediments, and in the form of the thickest-known layer of graded sand deposits, around 100 m (330 ft), in the Chicxulub crater itself, directly above the shocked granite ejecta.
Fossils in sedimentary rocks deposited during the impact
Fossiliferous sedimentary rocks deposited during the K–Pg impact have been found in the Gulf of Mexico area, including tsunami wash deposits carrying remains of a mangrove-type ecosystem, evidence that after the impact water sloshed back and forth repeatedly in the Gulf of Mexico, and dead fish left in shallow water but not disturbed by scavengers.
The rapidity of the extinction is a controversial issue, because some theories about its causes imply a rapid extinction over a relatively short period (from a few years to a few thousand years), while others imply longer periods. The issue is difficult to resolve because of the Signor–Lipps effect, where the fossil record is so incomplete that most extinct species probably died out long after the most recent fossil that has been found. Scientists have also found very few continuous beds of fossil-bearing rock that cover a time range from several million years before the K–Pg extinction to several million years after it. The sedimentation rate and thickness of K–Pg clay from three sites suggest rapid extinction, perhaps over a period of less than 10,000 years. At one site in the Denver Basin of Colorado, after the K–Pg boundary layer was deposited, the fern spike lasted approximately 1,000 years, and no more than 71,000 years; at the same location, the earliest appearance of Cenozoic mammals occurred after approximately 185,000 years, and no more than 570,000 years, "indicating rapid rates of biotic extinction and initial recovery in the Denver Basin during this event."
Evidence for impact
In 1980, a team of researchers consisting of Nobel Prize-winning physicist Luis Alvarez, his son, geologist Walter Alvarez, and chemists Frank Asaro and Helen Michel discovered that sedimentary layers found all over the world at the Cretaceous–Paleogene boundary contain a concentration of iridium many times greater than normal (30, 160, and 20 times in three sections originally studied). Iridium is extremely rare in Earth's crust because it is a siderophile element which mostly sank along with iron into Earth's core during planetary differentiation. As iridium remains abundant in most asteroids and comets, the Alvarez team suggested that an asteroid struck the Earth at the time of the K–Pg boundary. There were earlier speculations on the possibility of an impact event, but this was the first hard evidence.
This hypothesis was viewed as radical when first proposed, but additional evidence soon emerged. The boundary clay was found to be full of minute spherules of rock, crystallized from droplets of molten rock formed by the impact. Shocked quartz and other minerals were also identified in the K–Pg boundary. The identification of giant tsunami beds along the Gulf Coast and the Caribbean provided more evidence, and suggested that the impact may have occurred nearby—as did the discovery that the K–Pg boundary became thicker in the southern United States, with meter-thick beds of debris occurring in northern New Mexico.
Further research identified the giant Chicxulub crater, buried under Chicxulub on the coast of Yucatán, as the source of the K–Pg boundary clay. Identified in 1990 based on work by geophysicist Glen Penfield in 1978, the crater is oval, with an average diameter of roughly 180 km (110 mi), about the size calculated by the Alvarez team. The discovery of the crater—a prediction of the impact hypothesis—provided conclusive evidence for a K–Pg impact, and strengthened the hypothesis that it caused the extinction.
In a 2013 paper, Paul Renne of the Berkeley Geochronology Center dated the impact at 66.043±0.011 million years ago, based on argon–argon dating. He further posits that the mass extinction occurred within 32,000 years of this date.
In 2007, it was proposed that the impactor belonged to the Baptistina family of asteroids. This link has been doubted, though not disproved, in part because of a lack of observations of the asteroid and its family. It was reported in 2009 that 298 Baptistina does not share the chemical signature of the K–Pg impactor. Further, a 2011 Wide-field Infrared Survey Explorer (WISE) study of reflected light from the asteroids of the family estimated their break-up at 80 Ma, giving them insufficient time to shift orbits and impact Earth by 66 Ma.
Additional evidence for the impact event is found at the Tanis site in southwestern North Dakota, United States. Tanis is part of the heavily studied Hell Creek Formation, a group of rocks spanning four states in North America renowned for many significant fossil discoveries from the Upper Cretaceous and lower Paleocene. Tanis is an extraordinary and unique site because it appears to record the events from the first minutes until a few hours after the impact of the giant Chicxulub asteroid in extreme detail. Amber from the site has been reported to contain microtektites matching those of the Chicxulub impact event. However, the finds have been met with skepticism by other geologists, who question its interpretation or who are skeptical of the team leader, Robert DePalma, who had not yet received his Ph.D. in geology at the time of the discovery and whose commercial activities have been regarded with suspicion.
Effects of impact
In March 2010, an international panel of 41 scientists reviewed 20 years of scientific literature and endorsed the asteroid hypothesis, specifically the Chicxulub impact, as the cause of the extinction, ruling out other theories such as massive volcanism. They determined that a 10-to-15-kilometer (6 to 9 mi) asteroid had hurtled into Earth at Chicxulub on Mexico's Yucatán Peninsula. The collision would have released the same energy as 100 teratonnes of TNT (420 zettajoules), more than a billion times the energy of the atomic bombings of Hiroshima and Nagasaki.
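A back-of-the-envelope kinetic-energy estimate shows how an impactor in this size range reaches the quoted energy scale. The diameter, density, and velocity below are typical assumed values for a stony asteroid rather than figures from the panel's report, and they land within a factor of a few of the 420-zettajoule estimate.

```python
import math

# Order-of-magnitude sketch; the impactor parameters are assumed typical
# values, not measurements of the Chicxulub body.
diameter_m = 12_000      # ~12 km, within the 10-15 km range quoted above
density = 2_600          # kg/m^3, typical stony asteroid
velocity = 20_000        # m/s, typical asteroid impact speed

radius = diameter_m / 2
mass = density * (4 / 3) * math.pi * radius ** 3     # kg
energy_j = 0.5 * mass * velocity ** 2                # joules

TNT_TON = 4.184e9                  # joules per ton of TNT
hiroshima_j = 15_000 * TNT_TON     # ~15-kiloton yield

print(f"Kinetic energy: {energy_j:.2e} J (~{energy_j / 1e21:.0f} ZJ)")
print(f"Equivalent to ~{energy_j / (TNT_TON * 1e12):.0f} teratonnes of TNT")
print(f"Roughly {energy_j / hiroshima_j:.1e} times the Hiroshima bomb's energy")
```

With these inputs the estimate comes out near 5 × 10^23 J, the same order of magnitude as the panel's figure; the exact value is sensitive to the assumed diameter because the energy scales with its cube.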
The Chicxulub impact caused a global catastrophe. Some of the phenomena were brief occurrences immediately following the impact, but there were also long-term geochemical and climatic disruptions that devastated the ecology.
The re-entry of ejecta into Earth's atmosphere would include a brief (hours-long) but intense pulse of infrared radiation, cooking exposed organisms. This is debated, with opponents arguing that local ferocious fires, probably limited to North America, fall short of global firestorms. This is the "Cretaceous–Paleogene firestorm debate". A paper in 2013 by a prominent modeler of nuclear winter suggested that, based on the amount of soot in the global debris layer, the entire terrestrial biosphere might have burned, implying a global soot-cloud blocking out the sun and creating an impact winter effect.
Aside from the hypothesized fire and/or impact winter effects, the impact would have created a dust cloud that blocked sunlight for up to a year, inhibiting photosynthesis. The asteroid hit an area of carbonate rock containing large amounts of combustible hydrocarbons and sulfur, much of which was vaporized. The resulting sulfuric acid aerosols injected into the stratosphere might have reduced the sunlight reaching the Earth's surface by more than 50% and would have caused acid rain. The consequent acidification of the oceans would have killed many organisms that grow shells of calcium carbonate. At the Brazos section in Texas, the sea surface temperature dropped by as much as 7 °C (13 °F) for decades after the impact. It would have taken at least ten years for such aerosols to dissipate, which would account for the extinction of plants and phytoplankton, and subsequently of herbivores and their predators. Creatures whose food chains were based on detritus would have had a reasonable chance of survival. Freezing temperatures probably lasted for at least three years.
If widespread fires occurred, they would have increased the CO2 content of the atmosphere and caused a temporary greenhouse effect once the dust clouds and aerosols settled; this would have exterminated the most vulnerable organisms that survived the period immediately after the impact.
Most paleontologists now agree that an asteroid did hit the Earth at approximately the end of the Cretaceous, but whether the impact was the sole cause of the extinctions remained disputed for some time. There is evidence that dinosaurs had been in decline for up to 50 million years already due to changing environmental factors.
Several studies in 2020, such as those by Hull et al. and Chiarenza et al., show quantitatively that the Cretaceous–Paleogene mass extinction about 66 million years ago was mostly a result of the meteorite impact (the Chicxulub impactor) and not a result of volcanism.
Beyond the extinctions themselves, the event also caused broader changes in flora and fauna, such as giving rise to neotropical rainforest biomes like Amazonia; the species composition and structure of local forests were reshaped over roughly six million years of recovery before plant diversity returned to former levels.
2016 Chicxulub crater drilling project
In 2016, a scientific drilling project obtained deep rock-core samples from the peak ring around the Chicxulub impact crater. The discoveries confirmed that the rock comprising the peak ring had been shocked by immense pressure and melted in just minutes from its usual state into its present form. Unlike sea-floor deposits, the peak ring was made of granite originating much deeper in the earth, which had been ejected to the surface by the impact. Gypsum is a sulfate-containing rock usually present in the shallow seabed of the region; it had been almost entirely removed, vaporized into the atmosphere. Further, the event was immediately followed by a megatsunami sufficient to lay down the largest known layer of sand separated by grain size directly above the peak ring.
These findings strongly support the impact's role in the extinction event. The impactor was large enough to create a 190-kilometer-wide (120 mi) peak ring, to melt, shock, and eject deep granite, to create colossal water movements, and to eject an immense quantity of vaporized rock and sulfates into the atmosphere, where they would have persisted for several years. This worldwide dispersal of dust and sulfates would have affected climate catastrophically, led to large temperature drops, and devastated the food chain.
Although the concurrence of the end-Cretaceous extinctions with the Chicxulub asteroid impact strongly supports the impact hypothesis, some scientists continue to support other contributing causes: volcanic eruptions, climate change, sea level change, and other impact events. The end-Cretaceous event is the only mass extinction known to be associated with an impact, and other large impacts, such as the Manicouagan Reservoir impact, do not coincide with any noticeable extinction events.
Before 2000, arguments that the Deccan Traps flood basalts caused the extinction were usually linked to the view that the extinction was gradual, as the flood basalt events were thought to have started around 68 Mya and lasted more than 2 million years. The most recent evidence shows that the traps erupted over a period of only 800,000 years spanning the K–Pg boundary, and therefore may be responsible for the extinction and the delayed biotic recovery thereafter.
The Deccan Traps could have caused extinction through several mechanisms, including the release of dust and sulfuric aerosols into the air, which might have blocked sunlight and thereby reduced photosynthesis in plants. In addition, Deccan Trap volcanism might have resulted in carbon dioxide emissions that increased the greenhouse effect when the dust and aerosols cleared from the atmosphere.
In the years when the Deccan Traps hypothesis was linked to a slower extinction, Luis Alvarez (d. 1988) replied that paleontologists were being misled by sparse data. While his assertion was not initially well-received, later intensive field studies of fossil beds lent weight to his claim. Eventually, most paleontologists began to accept the idea that the mass extinctions at the end of the Cretaceous were largely or at least partly due to a massive Earth impact. Even Walter Alvarez acknowledged that other major changes may have contributed to the extinctions.
Combining these theories, some geophysical models suggest that the impact contributed to the Deccan Traps. These models, combined with high-precision radiometric dating, suggest that the Chicxulub impact could have triggered some of the largest Deccan eruptions, as well as eruptions at active volcanoes anywhere on Earth.
Multiple impact event
Other crater-like topographic features have also been proposed as impact craters formed in connection with Cretaceous–Paleogene extinction. This suggests the possibility of near-simultaneous multiple impacts, perhaps from a fragmented asteroidal object similar to the Shoemaker–Levy 9 impact with Jupiter. In addition to the 180 km (110 mi) Chicxulub crater, there is the 24 km (15 mi) Boltysh crater in Ukraine (65.17±0.64 Ma), the 20 km (12 mi) Silverpit crater in the North Sea (59.5±14.5 Ma) possibly formed by bolide impact, and the controversial and much larger 600 km (370 mi) Shiva crater. Any other craters that might have formed in the Tethys Ocean would since have been obscured by the northward tectonic drift of Africa and India.
Maastrichtian sea-level regression
There is clear evidence that sea levels fell in the final stage of the Cretaceous by more than at any other time in the Mesozoic era. In some Maastrichtian stage rock layers from various parts of the world, the later layers are terrestrial; earlier layers represent shorelines and the earliest layers represent seabeds. These layers do not show the tilting and distortion associated with mountain building, therefore the likeliest explanation is a regression, a drop in sea level. There is no direct evidence for the cause of the regression, but the currently accepted explanation is that the mid-ocean ridges became less active and sank under their own weight.
A severe regression would have greatly reduced the continental shelf area, the most species-rich part of the sea, and therefore could have been enough to cause a marine mass extinction, but this change would not have caused the extinction of the ammonites. The regression would also have caused climate changes, partly by disrupting winds and ocean currents and partly by reducing the Earth's albedo and increasing global temperatures.
Marine regression also resulted in the loss of epeiric seas, such as the Western Interior Seaway of North America. The loss of these seas greatly altered habitats, removing coastal plains that ten million years before had been host to diverse communities such as are found in rocks of the Dinosaur Park Formation. Another consequence was an expansion of freshwater environments, since continental runoff now had longer distances to travel before reaching oceans. While this change was favorable to freshwater vertebrates, those that prefer marine environments, such as sharks, suffered.
Proponents of multiple causation view the suggested single causes as either too small to produce the vast scale of the extinction, or not likely to produce its observed taxonomic pattern. In a review article, J. David Archibald and David E. Fastovsky discussed a scenario combining three major postulated causes: volcanism, marine regression, and extraterrestrial impact. In this scenario, terrestrial and marine communities were stressed by the changes in, and loss of, habitats. Dinosaurs, as the largest vertebrates, were the first affected by environmental changes, and their diversity declined. At the same time, particulate materials from volcanism cooled and dried areas of the globe. Then an impact event occurred, causing collapses in photosynthesis-based food chains, both in the already-stressed terrestrial food chains and in the marine food chains.
Based on studies at Seymour Island in Antarctica, Sierra Petersen and colleagues argue that there were two separate extinction events near the Cretaceous–Paleogene boundary, one correlating with Deccan Traps volcanism and the other with the Chicxulub impact. The team analyzed combined extinction patterns using a new clumped isotope temperature record from a hiatus-free, expanded K–Pg boundary section. They documented a 7.8±3.3 °C warming synchronous with the onset of Deccan Traps volcanism and a second, smaller warming at the time of the meteorite impact. They suggest local warming may have been amplified by the simultaneous disappearance of continental or sea ice. Intra-shell variability indicates a possible reduction in seasonality after the Deccan eruptions began, continuing through the meteorite event. Species extinctions at Seymour Island occurred in two pulses that coincide with the two observed warming events, directly linking the end-Cretaceous extinction at this site to both the volcanic and meteorite events via climate change.
Recovery and diversification
The K–Pg extinction had a profound effect on the evolution of life on Earth. The elimination of dominant Cretaceous groups allowed other organisms to take their place, causing a remarkable amount of species diversification during the Paleogene period. The most striking example is the replacement of dinosaurs by mammals. After the K–Pg extinction, mammals evolved rapidly to fill the niches left vacant by the dinosaurs. Significantly, within mammalian genera, new species were approximately 9.1% larger after the K–Pg boundary.
Other groups also substantially diversified. Based on molecular sequencing and fossil dating, many species of birds (the Neoaves group in particular) appeared to radiate after the K–Pg boundary. They even produced giant, flightless forms, such as the herbivorous Gastornis and Dromornithidae, and the predatory Phorusrhacidae. The extinction of Cretaceous lizards and snakes may have led to the evolution of modern groups such as iguanas, monitor lizards, and boas. On land, giant boid and enormous madtsoiid snakes appeared, and in the seas, giant sea snakes evolved. Teleost fish diversified explosively, filling the niches left vacant by the extinction. Groups appearing in the Paleocene and Eocene epochs include billfish, tunas, eels, and flatfish. Major changes are also seen in Paleogene insect communities. Many groups of ants were present in the Cretaceous, but in the Eocene ants became dominant and diverse, with larger colonies. Butterflies diversified as well, perhaps to take the place of leaf-eating insects wiped out by the extinction. The advanced mound-building termites, Termitidae, also appear to have risen in importance.
- The abbreviation is derived from the juxtaposition of K, the common abbreviation for the Cretaceous, which in turn originates from the corresponding German term Kreide, and Pg, which is the abbreviation for the Paleogene.
- The former designation includes the term 'Tertiary' (abbreviated as T), which is now discouraged as a formal geochronological unit by the International Commission on Stratigraphy.
- Shocked minerals have their internal structure deformed, and are created by intense pressures as in nuclear blasts and meteorite impacts.
- A megatsunami is a massive movement of sea waters, which can reach inland tens or hundreds of kilometers.
- Ogg, James G.; Gradstein, F. M.; Gradstein, Felix M. (2004). A geologic time scale 2004. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-78142-8.
- "International Chronostratigraphic Chart". International Commission on Stratigraphy. 2015. Archived from the original on May 30, 2014. Retrieved 29 April 2015.
- Renne, Paul R.; Deino, Alan L.; Hilgen, Frederik J.; Kuiper, Klaudia F.; Mark, Darren F.; Mitchell, William S.; Morgan, Leah E.; Mundil, Roland; Smit, Jan (7 February 2013). "Time scales of critical events around the Cretaceous-Paleogene boundary" (PDF). Science. 339 (6120): 684–687. Bibcode:2013Sci...339..684R. doi:10.1126/science.1230492. PMID 23393261. S2CID 6112274. Archived (PDF) from the original on 7 February 2017. Retrieved 1 December 2017.
- Fortey, Richard (1999). Life: A natural history of the first four billion years of life on Earth. Vintage. pp. 238–260. ISBN 978-0-375-70261-7.
- Muench, David; Muench, Marc; Gilders, Michelle A. (2000). Primal Forces. Portland, Oregon: Graphic Arts Center Publishing. p. 20. ISBN 978-1-55868-522-2.
- Schulte, Peter (5 March 2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous-Paleogene Boundary" (PDF). Science. 327 (5970): 1214–1218. Bibcode:2010Sci...327.1214S. doi:10.1126/science.1177265. JSTOR 40544375. PMID 20203042. S2CID 2659741.
- Alvarez, Luis. "The Asteroid and the Dinosaur (Nova S08E08, 1981)". IMDB. PBS-WGBH/Nova. Retrieved 12 June 2020.
- Sleep, Norman H.; Lowe, Donald R. (9 April 2014). "Scientists reconstruct ancient impact that dwarfs dinosaur-extinction blast". American Geophysical Union. Archived from the original on 1 January 2017. Retrieved 30 December 2016.
- Amos, Jonathan (15 May 2017). "Dinosaur asteroid hit 'worst possible place'". BBC News Online. Archived from the original on 18 March 2018. Retrieved 16 March 2018.
- Alvarez, L W; Alvarez, W; Asaro, F; Michel, H V (1980). "Extraterrestrial cause for the Cretaceous–Tertiary extinction" (PDF). Science. 208 (4448): 1095–1108. Bibcode:1980Sci...208.1095A. doi:10.1126/science.208.4448.1095. PMID 17783054. S2CID 16017767. Archived from the original (PDF) on 2019-08-24.
- Vellekoop, J.; Sluijs, A.; Smit, J.; et al. (May 2014). "Rapid short-term cooling following the Chicxulub impact at the Cretaceous-Paleogene boundary". Proc. Natl. Acad. Sci. U.S.A. 111 (21): 7537–41. Bibcode:2014PNAS..111.7537V. doi:10.1073/pnas.1319253111. PMC 4040585. PMID 24821785.
- Hildebrand, A. R.; Penfield, G. T.; et al. (1991). "Chicxulub crater: a possible Cretaceous/Tertiary boundary impact crater on the Yucatán peninsula, Mexico". Geology. 19 (9): 867–871. Bibcode:1991Geo....19..867H. doi:10.1130/0091-7613(1991)019<0867:ccapct>2.3.co;2.
- Schulte, P.; et al. (5 March 2010). "The Chicxulub Asteroid Impact and Mass Extinction at the Cretaceous-Paleogene Boundary" (PDF). Science. 327 (5970): 1214–1218. Bibcode:2010Sci...327.1214S. doi:10.1126/science.1177265. PMID 20203042. S2CID 2659741.
- Joel, Lucas (21 October 2019). "The dinosaur-killing asteroid acidified the ocean in a flash: the Chicxulub event was as damaging to life in the oceans as it was to creatures on land, a study shows". The New York Times. Archived from the original on 24 October 2019. Retrieved 24 October 2019.
- Henehan, Michael J. (21 October 2019). "Rapid ocean acidification and protracted Earth system recovery followed the end-Cretaceous Chicxulub impact". Proceedings of the National Academy of Sciences of the United States of America. 116 (45): 22500–22504. Bibcode:2019PNAS..11622500H. doi:10.1073/pnas.1905989116. PMC 6842625. PMID 31636204.
- Joel, Lucas (16 January 2020). "Asteroid or Volcano? New Clues to the Dinosaurs' Demise". The New York Times. Retrieved 17 January 2020.
- Hull, Pincelli M.; Bornemann, André; Penman, Donald E. (17 January 2020). "On impact and volcanism across the Cretaceous-Paleogene boundary". Science. 367 (6475): 266–272. Bibcode:2020Sci...367..266H. doi:10.1126/science.aay5055. PMID 31949074. S2CID 210698721. Retrieved 17 January 2020.
- Chiarenza, Alfio Alessandro; Farnsworth, Alexander; Mannion, Philip D.; Lunt, Daniel J.; Valdes, Paul J.; Morgan, Joanna V.; Allison, Peter A. (2020-07-21). "Asteroid impact, not volcanism, caused the end-Cretaceous dinosaur extinction". Proceedings of the National Academy of Sciences. 117 (29): 17084–17093. doi:10.1073/pnas.2006087117. ISSN 0027-8424. PMID 32601204.
- Keller, Gerta (2012). "The Cretaceous–Tertiary mass extinction, Chicxulub impact, and Deccan volcanism. Earth and life". In Talent, John (ed.). Earth and Life: Global Biodiversity, Extinction Intervals and Biogeographic Perturbations Through Time. Springer. pp. 759–793. ISBN 978-90-481-3427-4.
- Bosker, Bianca (September 2018). "The nastiest feud in science: A Princeton geologist has endured decades of ridicule for arguing that the fifth extinction was caused not by an asteroid but by a series of colossal volcanic eruptions. But she's reopened that debate". The Atlantic Monthly. Archived from the original on February 21, 2019. Retrieved 2019-01-30.
- Longrich, Nicholas R.; Tokaryk, Tim; Field, Daniel J. (2011). "Mass extinction of birds at the Cretaceous–Paleogene (K–Pg) boundary". Proceedings of the National Academy of Sciences. 108 (37): 15253–15257. Bibcode:2011PNAS..10815253L. doi:10.1073/pnas.1110395108. PMC 3174646. PMID 21914849.
- Longrich, N. R.; Bhullar, B.-A. S.; Gauthier, J. A. (December 2012). "Mass extinction of lizards and snakes at the Cretaceous-Paleogene boundary". Proc. Natl. Acad. Sci. U.S.A. 109 (52): 21396–401. Bibcode:2012PNAS..10921396L. doi:10.1073/pnas.1211526110. PMC 3535637. PMID 23236177.
- Labandeira, C.C.; Johnson, K.R.; et al. (2002). "Preliminary assessment of insect herbivory across the Cretaceous-Tertiary boundary: Major extinction and minimum rebound". In Hartman, J.H.; Johnson, K.R.; Nichols, D.J. (eds.). The Hell Creek formation and the Cretaceous-Tertiary boundary in the northern Great Plains: An integrated continental record of the end of the Cretaceous. Geological Society of America. pp. 297–327. ISBN 978-0-8137-2361-7.
- Rehan, Sandra M.; Leys, Remko; Schwarz, Michael P. (2013). "First evidence for a massive extinction event affecting bees close to the K-T boundary". PLOS ONE. 8 (10): e76683. Bibcode:2013PLoSO...876683R. doi:10.1371/journal.pone.0076683. PMC 3806776. PMID 24194843.
- Nichols, D. J.; Johnson, K. R. (2008). Plants and the K–T Boundary. Cambridge, England: Cambridge University Press.
- Friedman M (2009). "Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction". Proceedings of the National Academy of Sciences. Washington, DC. 106 (13): 5218–5223. Bibcode:2009PNAS..106.5218F. doi:10.1073/pnas.0808468106. PMC 2664034. PMID 19276106.
- Jablonski, D.; Chaloner, W. G. (1994). "Extinctions in the fossil record (and discussion)". Philosophical Transactions of the Royal Society of London B. 344 (1307): 11–17. doi:10.1098/rstb.1994.0045.
- Alroy, John (1999). "The fossil record of North American Mammals: evidence for a Palaeocene evolutionary radiation". Systematic Biology. 48 (1): 107–118. doi:10.1080/106351599260472. PMID 12078635.
- Feduccia, Alan (1995). "Explosive evolution in Tertiary birds and mammals". Science. 267 (5198): 637–638. Bibcode:1995Sci...267..637F. doi:10.1126/science.267.5198.637. PMID 17745839. S2CID 42829066.
- Friedman, M. (2010). "Explosive morphological diversification of spiny-finned teleost fishes in the aftermath of the end-Cretaceous extinction". Proceedings of the Royal Society B. 277 (1688): 1675–1683. doi:10.1098/rspb.2009.2177. PMC 2871855. PMID 20133356.
- Weishampel, D. B.; Barrett, P. M. (2004). "Dinosaur distribution". In Weishampel, David B.; Dodson, Peter; Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley, CA: University of California Press. pp. 517–606. OCLC 441742117.
- Wilf, P.; Johnson, K.R. (2004). "Land plant extinction at the end of the Cretaceous: A quantitative analysis of the North Dakota megafloral record". Paleobiology. 30 (3): 347–368. doi:10.1666/0094-8373(2004)030<0347:LPEATE>2.0.CO;2.
- MacLeod, N.; Rawson, P.F.; Forey, P.L.; Banner, F.T.; Boudagher-Fadel, M.K.; Bown, P.R.; Burnett, J.A.; Chambers, P.; Culver, S.; Evans, S.E.; Jeffery, C.; Kaminski, M.A.; Lord, A.R.; Milner, A.C.; Milner, A.R.; Morris, N.; Owen, E.; Rosen, B.R.; Smith, A.B.; Taylor, P.D.; Urquhart, E.; Young, J.R. (1997). "The Cretaceous–Tertiary biotic transition". Journal of the Geological Society. 154 (2): 265–292. Bibcode:1997JGSoc.154..265M. doi:10.1144/gsjgs.154.2.0265. S2CID 129654916.
- Sheehan Peter M, Hansen Thor A (1986). "Detritus feeding as a buffer to extinction at the end of the Cretaceous" (PDF). Geology. 14 (10): 868–870. Bibcode:1986Geo....14..868S. doi:10.1130/0091-7613(1986)14<868:DFAABT>2.0.CO;2. S2CID 54860261. Archived from the original (PDF) on 2019-02-27.
- Aberhan, M.; Weidemeyer, S.; Kiessling, W.; Scasso, R.A.; Medina, F.A. (2007). "Faunal evidence for reduced productivity and uncoordinated recovery in Southern Hemisphere Cretaceous-Paleogene boundary sections". Geology. 35 (3): 227–230. Bibcode:2007Geo....35..227A. doi:10.1130/G23197A.1.
- Sheehan, Peter M.; Fastovsky, D.E. (1992). "Major extinctions of land-dwelling vertebrates at the Cretaceous-Tertiary boundary, eastern Montana". Geology. 20 (6): 556–560. Bibcode:1992Geo....20..556S. doi:10.1130/0091-7613(1992)020<0556:MEOLDV>2.3.CO;2.
- Kauffman, E. (2004). "Mosasaur predation on upper Cretaceous nautiloids and ammonites from the United States Pacific Coast". PALAIOS. 19 (1): 96–100. Bibcode:2004Palai..19...96K. doi:10.1669/0883-1351(2004)019<0096:MPOUCN>2.0.CO;2.
- Pospichal, J.J. (1996). "Calcareous nannofossils and clastic sediments at the Cretaceous–Tertiary boundary, northeastern Mexico". Geology. 24 (3): 255–258. Bibcode:1996Geo....24..255P. doi:10.1130/0091-7613(1996)024<0255:CNACSA>2.3.CO;2.
- Bown, P (2005). "Selective calcareous nannoplankton survivorship at the Cretaceous–Tertiary boundary". Geology. 33 (8): 653–656. Bibcode:2005Geo....33..653B. doi:10.1130/G21566.1.
- Bambach, R.K.; Knoll, A.H.; Wang, S.C. (2004). "Origination, extinction, and mass depletions of marine diversity" (PDF). Paleobiology. 30 (4): 522–542. doi:10.1666/0094-8373(2004)030<0522:OEAMDO>2.0.CO;2.
- Gedl, P. (2004). "Dinoflagellate cyst record of the deep-sea Cretaceous-Tertiary boundary at Uzgru, Carpathian Mountains, Czech Republic". Special Publications of the Geological Society of London. 230 (1): 257–273. Bibcode:2004GSLSP.230..257G. doi:10.1144/GSL.SP.2004.230.01.13. S2CID 128771186.
- MacLeod, N. (1998). "Impacts and marine invertebrate extinctions". Special Publications of the Geological Society of London. 140 (1): 217–246. Bibcode:1998GSLSP.140..217M. doi:10.1144/GSL.SP.1998.140.01.16. S2CID 129875020.
- Courtillot, V (1999). Evolutionary Catastrophes: The science of mass extinction. Cambridge, UK: Cambridge University Press. p. 2. ISBN 978-0-521-58392-3.
- Arenillas, I; Arz, J.A.; Molina, E.; Dupuis, C. (2000). "An independent test of planktic foraminiferal turnover across the Cretaceous/Paleogene (K/P) boundary at El Kef, Tunisia: Catastrophic mass extinction and possible survivorship". Micropaleontology. 46 (1): 31–49. JSTOR 1486024.
- MacLeod, N (1996). "Nature of the Cretaceous-Tertiary (K–T) planktonic foraminiferal record: Stratigraphic confidence intervals, Signor–Lipps effect, and patterns of survivorship". In MacLeod, N.; Keller, G. (eds.). Cretaceous–Tertiary Mass Extinctions: Biotic and environmental changes. W.W. Norton. pp. 85–138. ISBN 978-0-393-96657-2.
- Keller, G.; Adatte, T.; Stinnesbeck, W.; Rebolledo-Vieyra, _; Fucugauchi, J.U.; Kramar, U.; Stüben, D. (2004). "Chicxulub impact predates the K–T boundary mass extinction". Proceedings of the National Academy of Sciences. Washington, DC. 101 (11): 3753–3758. Bibcode:2004PNAS..101.3753K. doi:10.1073/pnas.0400396101. PMC 374316. PMID 15004276.
- Galeotti S, Bellagamba M, Kaminski MA, Montanari A (2002). "Deep-sea benthic foraminiferal recolonisation following a volcaniclastic event in the lower Campanian of the Scaglia Rossa Formation (Umbria-Marche Basin, central Italy)". Marine Micropaleontology. 44 (1–2): 57–76. Bibcode:2002MarMP..44...57G. doi:10.1016/s0377-8398(01)00037-8. Retrieved 2007-08-19.
- Kuhnt, W.; Collins, E.S. (1996). "8. Cretaceous to Paleogene benthic foraminifers from the Iberia abyssal plain". Proceedings of the Ocean Drilling Program, Scientific Results. Proceedings of the Ocean Drilling Program. 149: 203–216. doi:10.2973/odp.proc.sr.149.254.1996.
- Coles, G.P.; Ayress, M.A.; Whatley, R.C. (1990). "A comparison of North Atlantic and 20 Pacific deep-sea Ostracoda". In Whatley, R.C.; Maybury, C. (eds.). Ostracoda and Global Events. Chapman & Hall. pp. 287–305. ISBN 978-0-442-31167-4.
- Brouwers, E.M.; de Deckker, P. (1993). "Late Maastrichtian and Danian Ostracode Faunas from Northern Alaska: Reconstructions of Environment and Paleogeography". PALAIOS. 8 (2): 140–154. Bibcode:1993Palai...8..140B. doi:10.2307/3515168. JSTOR 3515168.
- Vescsei, A; Moussavian, E (1997). "Paleocene reefs on the Maiella Platform margin, Italy: An example of the effects of the cretaceous/tertiary boundary events on reefs and carbonate platforms". Facies. 36 (1): 123–139. doi:10.1007/BF02536880. S2CID 129296658.
- Rosen, B R; Turnšek, D (1989). Jell A; Pickett JW (eds.). "Extinction patterns and biogeography of scleractinian corals across the Cretaceous/Tertiary boundary". Memoir of the Association of Australasian Paleontology. Proceedings of the Fifth International Symposium on Fossil Cnidaria including Archaeocyatha and Spongiomorphs. Brisbane, Queensland (8): 355–370.
- Ward, P.D.; Kennedy, W.J.; MacLeod, K.G.; Mount, J.F. (1991). "Ammonite and inoceramid bivalve extinction patterns in Cretaceous/Tertiary boundary sections of the Biscay region (southwestern France, northern Spain)". Geology. 19 (12): 1181–1184. Bibcode:1991Geo....19.1181W. doi:10.1130/0091-7613(1991)019<1181:AAIBEP>2.3.CO;2.
- Harries PJ, Johnson KR, Cobban WA, Nichols DJ (2002). "Marine Cretaceous-Tertiary boundary section in southwestern South Dakota: Comment and reply". Geology. 30 (10): 954–955. Bibcode:2002Geo....30..954H. doi:10.1130/0091-7613(2002)030<0955:MCTBSI>2.0.CO;2.
- Neraudeau, Didier; Thierry, Jacques; Moreau, Pierre (1 January 1997). "Variation in echinoid biodiversity during the Cenomanian-early Turonian transgressive episode in Charentes (France)". Bulletin de la Société Géologique de France. 168 (1): 51–61.
- Raup DM, Jablonski D (1993). "Geography of end-Cretaceous marine bivalve extinctions". Science. 260 (5110): 971–973. Bibcode:1993Sci...260..971R. doi:10.1126/science.11537491. PMID 11537491.
- MacLeod KG (1994). "Extinction of Inoceramid Bivalves in Maastrichtian Strata of the Bay of Biscay Region of France and Spain". Journal of Paleontology. 68 (5): 1048–1066. doi:10.1017/S0022336000026652.
- Kriwet, Jürgen; Benton, Michael J. (2004). "Neoselachian (Chondrichthyes, Elasmobranchii) Diversity across the Cretaceous–Tertiary Boundary". Palaeogeography, Palaeoclimatology, Palaeoecology. 214 (3): 181–194. Bibcode:2004PPP...214..181K. doi:10.1016/j.palaeo.2004.02.049.
- Patterson, C. (1993). "Osteichthyes: Teleostei". In Benton, M.J. (ed.). The Fossil Record. 2. Springer. pp. 621–656. ISBN 978-0-412-39380-8.
- Noubhani, Abdelmajid (2010). "The Selachians' faunas of the Moroccan phosphate deposits and the K-T mass extinctions". Historical Biology. 22 (1–3): 71–77. doi:10.1080/08912961003707349. S2CID 129579498.
- Zinsmeister, W.J. (1 May 1998). "Discovery of fish mortality horizon at the K–T boundary on Seymour Island: Re-evaluation of events at the end of the Cretaceous". Journal of Paleontology. 72 (3): 556–571. doi:10.1017/S0022336000024331.
- Robertson, D.S.; McKenna, M.C.; Toon, O.B.; Hope, S.; Lillegraven, J.A. (2004). "Survival in the first hours of the Cenozoic" (PDF). GSA Bulletin. 116 (5–6): 760–768. Bibcode:2004GSAB..116..760R. doi:10.1130/B25402.1. S2CID 44010682. Archived from the original (PDF) on 2019-05-07.
- Labandeira, Conrad C.; Johnson, Kirk R.; Wilf, Peter (2002). "Impact of the terminal Cretaceous event on plant–insect associations". Proceedings of the National Academy of Sciences of the United States of America. 99 (4): 2061–2066. Bibcode:2002PNAS...99.2061L. doi:10.1073/pnas.042492999. PMC 122319. PMID 11854501.
- Wilf, P.; Labandeira, C.C.; Johnson, K.R.; Ellis, B. (2006). "Decoupled plant and insect diversity after the end-Cretaceous extinction". Science. 313 (5790): 1112–1115. Bibcode:2006Sci...313.1112W. doi:10.1126/science.1129569. PMID 16931760. S2CID 52801127.
- Vajda, Vivi; Raine, J. Ian; Hollis, Christopher J. (2001). "Indication of global deforestation at the Cretaceous–Tertiary boundary by New Zealand fern spike". Science. 294 (5547): 1700–1702. Bibcode:2001Sci...294.1700V. doi:10.1126/science.1064706. PMID 11721051. S2CID 40364945.
- Wilf, P.; Johnson, K. R. (2004). "Land plant extinction at the end of the Cretaceous: a quantitative analysis of the North Dakota megafloral record". Paleobiology. 30 (3): 347–368. doi:10.1666/0094-8373(2004)030<0347:lpeate>2.0.co;2.
- Johnson, K.R.; Hickey, L.J. (1991). "Megafloral change across the Cretaceous Tertiary boundary in the northern Great Plains and Rocky Mountains". In Sharpton, V.I.; Ward, P.D. (eds.). Global Catastrophes in Earth History: An interdisciplinary conference on impacts, volcanism, and mass mortality. Geological Society of America. ISBN 978-0-8137-2247-4.
- Askin, R.A.; Jacobson, S.R. (1996). "Palynological change across the Cretaceous–Tertiary boundary on Seymour Island, Antarctica: environmental and depositional factors". In Keller, G.; MacLeod, N. (eds.). Cretaceous–Tertiary Mass Extinctions: Biotic and Environmental Changes. W W Norton. ISBN 978-0-393-96657-2.
- Schultz, P.H.; d'Hondt, S. (1996). "Cretaceous–Tertiary (Chicxulub) impact angle and its consequences". Geology. 24 (11): 963–967. Bibcode:1996Geo....24..963S. doi:10.1130/0091-7613(1996)024<0963:CTCIAA>2.3.CO;2.
- Vajda, Vivi; McLoughlin, Stephen (5 March 2004). "Fungal Proliferation at the Cretaceous-Tertiary Boundary". Science. 303 (5663): 1489. doi:10.1126/science.1093807. PMID 15001770. S2CID 44720346.
- "How did dino-era birds survive the asteroid 'apocalypse'?". National Geographic News. 24 May 2018. Archived from the original on 6 October 2018. Retrieved 5 October 2018.
- Fawcett, J. A.; Maere, S.; Van de Peer, Y. (April 2009). "Plants with double genomes might have had a better chance to survive the Cretaceous-Tertiary extinction event". Proceedings of the National Academy of Sciences of the United States of America. 106 (14): 5737–5742. Bibcode:2009PNAS..106.5737F. doi:10.1073/pnas.0900906106. PMC 2667025. PMID 19325131.
- Visscher, H.; Brinkhuis, H.; Dilcher, D. L.; Elsik, W. C.; Eshet, Y.; Looy, C. V.; Rampino, M. R.; Traverse, A. (5 March 1996). "The terminal Paleozoic fungal event: evidence of terrestrial ecosystem destabilization and collapse". Proceedings of the National Academy of Sciences. 93 (5): 2155–2158. Bibcode:1996PNAS...93.2155V. doi:10.1073/pnas.93.5.2155. PMC 39926. PMID 11607638.
- Archibald, J.D.; Bryant, L.J. (1990). "Differential Cretaceous–Tertiary extinction of nonmarine vertebrates; evidence from northeastern Montana". In Sharpton, V.L.; Ward, P.D. (eds.). Global Catastrophes in Earth History: an Interdisciplinary Conference on Impacts, Volcanism, and Mass Mortality. Special Paper. 247. Geological Society of America. pp. 549–562. doi:10.1130/spe247-p549. ISBN 978-0-8137-2247-4.
- Estes, R. (1964). "Fossil vertebrates from the late Cretaceous Lance formation, eastern Wyoming". University of California Publications, Department of Geological Sciences. 49: 1–180.
- Gardner, J. D. (2000). "Albanerpetontid amphibians from the upper Cretaceous (Campanian and Maastrichtian) of North America". Geodiversitas. 22 (3): 349–388.
- Sheehan, P. M.; Fastovsky, D. E. (1992). "Major extinctions of land-dwelling vertebrates at the Cretaceous-Tertiary boundary, Eastern Montana". Geology. 20 (6): 556–560. Bibcode:1992Geo....20..556S. doi:10.1130/0091-7613(1992)020<0556:meoldv>2.3.co;2.
- Novacek, M J (1999). "100 million years of land vertebrate evolution: The Cretaceous-early Tertiary transition". Annals of the Missouri Botanical Garden. 86 (2): 230–258. doi:10.2307/2666178. JSTOR 2666178.
- Apesteguía, Sebastián; Novas, Fernando E (2003). "Large Cretaceous sphenodontian from Patagonia provides insight into lepidosaur evolution in Gondwana". Nature. 425 (6958): 609–612. Bibcode:2003Natur.425..609A. doi:10.1038/nature01995. PMID 14534584. S2CID 4425130.
- Lutz, D. (2005). Tuatara: A living fossil. DIMI Press. ISBN 978-0-931625-43-5.
- Longrich, Nicholas R.; Bhullar, Bhart-Anjan S.; Gauthier, Jacques A. (2012). "Mass extinction of lizards and snakes at the Cretaceous–Paleogene boundary". Proceedings of the National Academy of Sciences of the United States of America. 109 (52): 21396–21401. Bibcode:2012PNAS..10921396L. doi:10.1073/pnas.1211526110. PMC 3535637. PMID 23236177.
- Chatterjee, S.; Small, B.J. (1989). "New plesiosaurs from the Upper Cretaceous of Antarctica". Geological Society of London. Special Publications. 47 (1): 197–215. Bibcode:1989GSLSP..47..197C. doi:10.1144/GSL.SP.1989.047.01.15. S2CID 140639013.
- O'Keefe, F.R. (2001). "A cladistic analysis and taxonomic revision of the Plesiosauria (Reptilia: Sauropterygia)". Acta Zoologica Fennica. 213: 1–63.
- Fischer, Valentin; Bardet, Nathalie; Benson, Roger B. J.; Arkhangelsky, Maxim S.; Friedman, Matt (2016). "Extinction of fish-shaped marine reptiles associated with reduced evolutionary rates and global environmental volatility". Nature Communications. 7 (1): 10825. doi:10.1038/ncomms10825. PMC 4786747. PMID 26953824.
- "The Great Archosaur Lineage". University of California Museum of Paleontology. Archived from the original on 28 February 2015. Retrieved 18 December 2014.
- Brochu, C.A. (2004). "Calibration age and quartet divergence date estimation". Evolution. 58 (6): 1375–1382. doi:10.1554/03-509. PMID 15266985. S2CID 198156470.
- Jouve, S.; Bardet, N.; Jalil, N-E; Suberbiola, X P; Bouya, B.; Amaghzaz, M. (2008). "The oldest African crocodylian: phylogeny, paleobiogeography, and differential survivorship of marine reptiles through the Cretaceous-Tertiary boundary". Journal of Vertebrate Paleontology. 28 (2): 409–421. doi:10.1671/0272-4634(2008)28[409:TOACPP]2.0.CO;2.
- Company, J.; Ruiz-Omeñaca, J. I.; Pereda Suberbiola, X. (1999). "A long-necked pterosaur (Pterodactyloidea, Azhdarchidae) from the upper Cretaceous of Valencia, Spain". Geologie en Mijnbouw. 78 (3): 319–333. doi:10.1023/A:1003851316054. S2CID 73638590.
- Barrett, P. M.; Butler, R. J.; Edwards, N. P.; Milner, A. R. (2008). "Pterosaur distribution in time and space: an atlas" (PDF). Zitteliana. 28: 61–107. Archived (PDF) from the original on 2017-08-06. Retrieved 2015-08-31.
- Slack, K, E; Jones, C M; Ando, T; Harrison, G L; Fordyce, R E; Arnason, U; Penny, D (2006). "Early Penguin Fossils, Plus Mitochondrial Genomes, Calibrate Avian Evolution". Molecular Biology and Evolution. 23 (6): 1144–1155. doi:10.1093/molbev/msj124. PMID 16533822.
- Penny, D.; Phillips, M.J. (2004). "The rise of birds and mammals: Are microevolutionary processes sufficient for macroevolution?". Trends in Ecology and Evolution. 19 (10): 516–522. doi:10.1016/j.tree.2004.07.015. PMID 16701316.
- Butler, Richard J.; Barrett, Paul M.; Nowbath, Stephen; Upchurch, Paul (2009). "Estimating the effects of sampling biases on pterosaur diversity patterns: Implications for hypotheses of bird / pterosaur competitive replacement". Paleobiology. 35 (3): 432–446. doi:10.1666/0094-8373-35.3.432. S2CID 84324007.
- Prondvai, E.; Bodor, E. R.; Ösi, A. (2014). "Does morphology reflect osteohistology-based ontogeny? A case study of Late Cretaceous pterosaur jaw symphyses from Hungary reveals hidden taxonomic diversity" (PDF). Paleobiology. 40 (2): 288–321. doi:10.1666/13030. S2CID 85673254.
- Longrich, N. R.; Martill, D. M.; Andres, B. (2018). "Late Maastrichtian pterosaurs from North Africa and mass extinction of Pterosauria at the Cretaceous-Paleogene boundary". PLOS Biology. 16 (3): e2001663. doi:10.1371/journal.pbio.2001663. PMC 5849296. PMID 29534059.
- Hou, L.; Martin, M.; Zhou, Z.; Feduccia, A. (1996). "Early Adaptive Radiation of Birds: Evidence from Fossils from Northeastern China". Science. 274 (5290): 1164–1167. Bibcode:1996Sci...274.1164H. doi:10.1126/science.274.5290.1164. PMID 8895459. S2CID 30639866.
- Clarke, J.A.; Tambussi, C.P.; Noriega, J.I.; Erickson, G.M.; Ketcham, R.A. (2005). "Definitive fossil evidence for the extant avian radiation in the Cretaceous". Nature. 433 (7023): 305–308. Bibcode:2005Natur.433..305C. doi:10.1038/nature03150. PMID 15662422. S2CID 4354309.
- "Primitive birds shared dinosaurs' fate". Science Daily. 20 September 2011. Archived from the original on 24 September 2011. Retrieved 20 September 2011.
- Mitchell, K.J.; Llamas, B.; Soubrier, J.; Rawlence, N.J.; Worthy, T.H.; Wood, J.; Lee, M.S.Y.; Cooper, A. (2014). "Ancient DNA reveals elephant birds and kiwi are sister taxa and clarifies ratite bird evolution". Science. 344 (6186): 989–900. Bibcode:2014Sci...344..898M. doi:10.1126/science.1251981. hdl:2328/35953. PMID 24855267. S2CID 206555952 – via Web of Science.
- Yonezawa, Takahiro; Segawa, Takahiro; Mori, Hiroshi; Campos, Paula F.; Hongoh, Yuichi; Endo, Hideki; Akiyoshi, Ayumi; Kohno, Naoki; Nishida, Shin; Wu, Jiaqi; Jin, Haofei (2017). "Phylogenomics and Morphology of Extinct Paleognaths Reveal the Origin and Evolution of the Ratites". Current Biology. 27 (1): 68–77. doi:10.1016/j.cub.2016.10.029. PMID 27989673.
- David, Archibald; Fastovsky, David (2004). "Dinosaur extinction" (PDF). In Weishampel, David B.; Dodson, Peter; Osmólska, Halszka (eds.). The Dinosauria (2nd ed.). Berkeley: University of California Press. pp. 672–684. ISBN 978-0-520-24209-8.
- Riera, V.; Marmi, J.; Oms, O.; Gomez, B. (March 2010). "Orientated plant fragments revealing tidal palaeocurrents in the Fumanya mudflat (Maastrichtian, southern Pyrenees): Insights in palaeogeographic reconstructions". Palaeogeography, Palaeoclimatology, Palaeoecology. 288 (1–4): 82–92. Bibcode:2010PPP...288...82R. doi:10.1016/j.palaeo.2010.01.037.
- le Loeuff, J. (2012). "Paleobiogeography and biodiversity of Late Maastrichtian dinosaurs: How many dinosaur species became extinct at the Cretaceous-Tertiary boundary?". Bulletin de la Société Géologique de France. 183 (6): 547–559. doi:10.2113/gssgfbull.183.6.547.
- Ryan, M.J.; Russell, A.P.; Eberth, D.A.; Currie, P.J. (2001). "The taphonomy of a Centrosaurus (Ornithischia: Ceratopsidae) bone bed from the Dinosaur Park formation (Upper Campanian), Alberta, Canada, with comments on cranial ontogeny". PALAIOS. 16 (5): 482–506. Bibcode:2001Palai..16..482R. doi:10.1669/0883-1351(2001)016<0482:ttoaco>2.0.co;2.
- Sloan, R.E.; Rigby, K.; van Valen, L.M.; Gabriel, Diane (1986). "Gradual dinosaur extinction and simultaneous ungulate radiation in the Hell Creek formation". Science. 232 (4750): 629–633. Bibcode:1986Sci...232..629S. doi:10.1126/science.232.4750.629. PMID 17781415. S2CID 31638639.
- Fassett, J.E.; Lucas, S.G.; Zielinski, R.A.; Budahn, J.R. (2001). Compelling new evidence for Paleocene dinosaurs in the Ojo Alamo Sandstone San Juan Basin, New Mexico and Colorado, USA (PDF). International Conference on Catastrophic Events and Mass Extinctions: Impacts and Beyond, 9–12 July 2000. 1053. Vienna, Austria. pp. 45–46. Bibcode:2001caev.conf.3139F. Archived (PDF) from the original on 5 June 2011. Retrieved 18 May 2007.
- Sullivan, R.M. (2003). "No Paleocene dinosaurs in the San Juan Basin, New Mexico". Geological Society of America Abstracts with Programs. 35 (5): 15. Archived from the original on 8 April 2011. Retrieved 2 July 2007.
- Evans, Susan E.; Klembara, Jozef (2005). "A choristoderan reptile (Reptilia: Diapsida) from the Lower Miocene of northwest Bohemia (Czech Republic)". Journal of Vertebrate Paleontology. 25 (1): 171–184. doi:10.1671/0272-4634(2005)025[0171:ACRRDF]2.0.CO;2.
- Matsumoto, Ryoko; Evans, Susan E. (November 2015). "Morphology and function of the palatal dentition in Choristodera". Journal of Anatomy. 228 (3): 414–429. doi:10.1111/joa.12414. PMC 5341546. PMID 26573112.
- Gelfo, J.N.; Pascual, R. (2001). "Peligrotherium tropicalis (Mammalia, Dryolestida) from the early Paleocene of Patagonia, a survival from a Mesozoic Gondwanan radiation" (PDF). Geodiversitas. 23: 369–379. Archived from the original (PDF) on 12 February 2012.
- Goin, F.J.; Reguero, M.A.; Pascual, R.; von Koenigswald, W.; Woodburne, M.O.; Case, J.A.; Marenssi, S.A.; Vieytes, C.; Vizcaíno, S.F. (2006). "First gondwanatherian mammal from Antarctica". Geological Society, London. Special Publications. 258 (1): 135–144. Bibcode:2006GSLSP.258..135G. doi:10.1144/GSL.SP.2006.258.01.10. S2CID 129493664.
- McKenna, M.C.; Bell, S.K. (1997). Classification of mammals: Above the species level. Columbia University Press. ISBN 978-0-231-11012-9.
- Wood, D. Joseph (2010). The Extinction of the Multituberculates Outside North America: a Global Approach to Testing the Competition Model (M.S.). The Ohio State University. Archived from the original on 2015-04-08. Retrieved 2015-04-03.
- Pires, Mathias M.; Rankin, Brian D.; Silvestro, Daniele; Quental, Tiago B. (2018). "Diversification dynamics of mammalian clades during the K–Pg mass extinction". Biology Letters. 14 (9): 20180458. doi:10.1098/rsbl.2018.0458. PMC 6170748. PMID 30258031.
- Bininda-Emonds OR, Cardillo M, Jones KE, MacPhee RD, Beck RM, Grenyer R, Price SA, Vos RA, Gittleman JL, Purvis A (2007). "The delayed rise of present-day mammals". Nature. 446 (7135): 507–512. Bibcode:2007Natur.446..507B. doi:10.1038/nature05634. PMID 17392779. S2CID 4314965.
- Springer MS, Murphy WJ, Eizirik E, O'Brien SJ (2003). "Placental mammal diversification and the Cretaceous–Tertiary boundary". PNAS. 100 (3): 1056–1061. Bibcode:2003PNAS..100.1056S. doi:10.1073/pnas.0334222100. PMC 298725. PMID 12552136.
- Dodson, Peter (1996). The Horned Dinosaurs: A Natural History. Princeton, NJ: Princeton University Press. pp. 279–281. ISBN 978-0-691-05900-6.
- "Online guide to the continental Cretaceous–Tertiary boundary in the Raton basin, Colorado and New Mexico". U.S. Geological Survey. 2004. Archived from the original on 2006-09-25. Retrieved 2007-07-08.
- Smathers, G A; Mueller-Dombois D (1974). Invasion and Recovery of Vegetation after a Volcanic Eruption in Hawaii. Scientific Monograph. 5. United States National Park Service. Archived from the original on 3 April 2014. Retrieved 9 July 2007.
- Pope KO, d'Hondt SL, Marshall CR (1998). "Meteorite impact and the mass extinction of species at the Cretaceous/Tertiary boundary". PNAS. 95 (19): 11028–11029. Bibcode:1998PNAS...9511028P. doi:10.1073/pnas.95.19.11028. PMC 33889. PMID 9736679.
- Marshall CR, Ward PD (1996). "Sudden and Gradual Molluscan Extinctions in the Latest Cretaceous of Western European Tethys". Science. 274 (5291): 1360–1363. Bibcode:1996Sci...274.1360M. doi:10.1126/science.274.5291.1360. PMID 8910273. S2CID 1837900.
- Keller, Gerta (July 2001). "The end-cretaceous mass extinction in the marine realm: Year 2000 assessment". Planetary and Space Science. 49 (8): 817–830. Bibcode:2001P&SS...49..817K. doi:10.1016/S0032-0633(01)00032-0.
- Bourgeois, J. (2009). "Chapter 3. Geologic effects and records of tsunamis" (PDF). In Robinson, A.R.; Bernard, E.N. (eds.). The Sea (Ideas and Observations on Progress in the Study of the Seas). 15: Tsunamis. Boston, MA: Harvard University. ISBN 978-0-674-03173-9. Retrieved 29 March 2012.
- Lawton, T. F.; Shipley, K. W.; Aschoff, J. L.; Giles, K. A.; Vega, F. J. (2005). "Basinward transport of Chicxulub ejecta by tsunami-induced backflow, La Popa basin, northeastern Mexico, and its implications for distribution of impact-related deposits flanking the Gulf of Mexico". Geology. 33 (2): 81–84. Bibcode:2005Geo....33...81L. doi:10.1130/G21057.1.
- Albertão, G. A.; P. P. Martins Jr. (1996). "A possible tsunami deposit at the Cretaceous-Tertiary boundary in Pernambuco, northeastern Brazil". Sed. Geol. 104 (1–4): 189–201. Bibcode:1996SedG..104..189A. doi:10.1016/0037-0738(95)00128-X.
- Norris, R. D.; Firth, J.; Blusztajn, J. S. & Ravizza, G. (2000). "Mass failure of the North Atlantic margin triggered by the Cretaceous-Paleogene bolide impact". Geology. 28 (12): 1119–1122. Bibcode:2000Geo....28.1119N. doi:10.1130/0091-7613(2000)28<1119:MFOTNA>2.0.CO;2.
- Bryant, Edward (June 2014). Tsunami: The Underrated Hazard. Springer. p. 178. ISBN 9783319061337. Archived from the original on 2019-09-01. Retrieved 2017-08-30.
- Smit, Jan; Montanari, Alessandro; Swinburne, Nicola H.; Alvarez, Walter; Hildebrand, Alan R.; Margolis, Stanley V.; Claeys, Philippe; Lowrie, William; Asaro, Frank (1992). "Tektite-bearing, deep-water clastic unit at the Cretaceous-Tertiary boundary in northeastern Mexico". Geology. 20 (2): 99–103. Bibcode:1992Geo....20...99S. doi:10.1130/0091-7613(1992)020<0099:TBDWCU>2.3.CO;2. PMID 11537752.
- Field guide to Cretaceous-tertiary boundary sections in northeastern Mexico (PDF). Lunar and Planetary Institute. 1994. Archived (PDF) from the original on 2019-08-21. Retrieved 2019-06-25.
- Smit, Jan (1999). "The global stratigraphy of the Cretaceous-Tertiary boundary impact ejecta". Annual Reviews of Earth and Planetary Science. 27: 75–113. Bibcode:1999AREPS..27...75S. doi:10.1146/annurev.earth.27.1.75.
- Kring, David A. (2007). "The Chicxulub impact event and its environmental consequences at the Cretaceous-Tertiary boundary". Palaeogeography, Palaeoclimatology, Palaeoecology. 255 (1–2): 4–21. doi:10.1016/j.palaeo.2007.02.037.
- "Chicxulub impact event". www.lpi.usra.edu. Archived from the original on 2019-07-26. Retrieved 2019-06-25.
- Signor, Philip W., III; Lipps, Jere H. (1982). "Sampling bias, gradual extinction patterns, and catastrophes in the fossil record". In Silver, L.T.; Schultz, Peter H. (eds.). Geological implications of impacts of large asteroids and comets on the Earth. Special Publication 190. Boulder, Colorado: Geological Society of America. pp. 291–296. ISBN 978-0-8137-2190-3. OCLC 4434434112. Archived from the original on May 5, 2016. Retrieved October 25, 2015.
- Mukhopadhyay, Sujoy (2001). "A Short Duration of the Cretaceous-Tertiary Boundary Event: Evidence from Extraterrestrial Helium-3" (PDF). Science. 291 (5510): 1952–1955. Bibcode:2001Sci...291.1952M. doi:10.1126/science.291.5510.1952. PMID 11239153.
- Clyde, William C.; Ramezani, Jahandar; Johnson, Kirk R.; Bowring, Samuel A.; Jones, Matthew M. (15 October 2016). "Direct high-precision U–Pb geochronology of the end-Cretaceous extinction and calibration of Paleocene astronomical timescales". Earth and Planetary Science Letters. 452: 272–280. Bibcode:2016E&PSL.452..272C. doi:10.1016/j.epsl.2016.07.041.
- de Laubenfels, M W (1956). "Dinosaur extinction: One more hypothesis". Journal of Paleontology. 30 (1): 207–218. JSTOR 1300393.
- Smit J.; Klaver, J. (1981). "Sanidine spherules at the Cretaceous-Tertiary boundary indicate a large impact event". Nature. 292 (5818): 47–49. Bibcode:1981Natur.292...47S. doi:10.1038/292047a0. S2CID 4331801.
- Bohor, B. F.; Foord, E. E.; Modreski, P. J.; Triplehorn, D. M. (1984). "Mineralogic evidence for an impact event at the Cretaceous-Tertiary boundary". Science. 224 (4651): 867–9. Bibcode:1984Sci...224..867B. doi:10.1126/science.224.4651.867. PMID 17743194. S2CID 25887801.
- Bohor, B. F.; Modreski, P. J.; Foord, E. E. (1987). "Shocked quartz in the Cretaceous-Tertiary boundary clays: Evidence for a global distribution". Science. 236 (4802): 705–709. Bibcode:1987Sci...236..705B. doi:10.1126/science.236.4802.705. PMID 17748309. S2CID 31383614.
- Bourgeois, J.; Hansen, T. A.; Wiberg, P. A.; Kauffman, E. G. (1988). "A tsunami deposit at the Cretaceous-Tertiary boundary in Texas". Science. 241 (4865): 567–570. Bibcode:1988Sci...241..567B. doi:10.1126/science.241.4865.567. PMID 17774578. S2CID 7447635.
- Pope, K.O.; Ocampo, A.C.; Kinsland, G.L.; Smith, R. (1996). "Surface expression of the Chicxulub crater". Geology. 24 (6): 527–530. Bibcode:1996Geo....24..527P. doi:10.1130/0091-7613(1996)024<0527:SEOTCC>2.3.CO;2. PMID 11539331.
- Perlman, David. "Dinosaur extinction battle flares". sfgate.com. Archived from the original on 2013-02-08. Retrieved 2013-02-08.
- Bottke, W.F.; Vokrouhlický, D.; Nesvorný, D. (September 2007). "An asteroid breakup 160 Myr ago as the probable source of the K/T impactor". Nature. 449 (7158): 48–53. Bibcode:2007Natur.449...48B. doi:10.1038/nature06070. PMID 17805288. S2CID 4322622.
- Majaess DJ, Higgins D, Molnar LA, Haegert MJ, Lane DJ, Turner DG, Nielsen I (February 2009). "New constraints on the asteroid 298 Baptistina, the alleged family member of the K/T impactor". The Journal of the Royal Astronomical Society of Canada. 103 (1): 7–10. arXiv:0811.0171. Bibcode:2009JRASC.103....7M.
- Reddy, V.; Emery, J.P.; Gaffey, M.J.; Bottke, W.F.; Cramer, A.; Kelley, M.S. (December 2009). "Composition of 298 Baptistina: Implications for the K/T impactor link". Meteoritics & Planetary Science. 44 (12): 1917–1927. Bibcode:2009M&PS...44.1917R. CiteSeerX 10.1.1.712.8165. doi:10.1111/j.1945-5100.2009.tb02001.x.
- "NASA's WISE raises doubt about asteroid family believed responsible for dinosaur extinction". ScienceDaily. 20 September 2011. Archived from the original on 23 September 2011. Retrieved 21 September 2011.
- DePalma, Robert A.; Smit, Jan; Burnham, David A.; Kuiper, Klaudia; Manning, Phillip L.; Oleinik, Anton; Larson, Peter; Maurrasse, Florentin J.; Vellekoop, Johan; Richards, Mark A.; Gurche, Loren; Alvarez, Walter (2019). "A seismically induced onshore surge deposit at the KPg boundary, North Dakota". Proceedings of the National Academy of Sciences. 116 (17): 8190–8199. Bibcode:2019PNAS..116.8190D. doi:10.1073/pnas.1817407116. PMC 6486721. PMID 30936306.
- "National Natural Landmarks – National Natural Landmarks (U.S. National Park Service)". www.nps.gov. Retrieved 2019-03-22.
Year designated: 1966
- Smit, J., et al. (2017) Tanis, a mixed marine-continental event deposit at the KPG Boundary in North Dakota caused by a seiche triggered by seismic waves of the Chicxulub Impact Paper No. 113-15, presented 23 October 2017 at the GSA Annual Meeting, Seattle, Washington, USA.
- DePalma, R. et al. (2017) Life after impact: A remarkable mammal burrow from the Chicxulub aftermath in the Hell Creek Formation, North Dakota Paper No. 113-16, presented 23 October 2017 at the GSA Annual Meeting, Seattle, Washington, USA.
- Kaskes, P.; Goderis, S.; Belza, J.; Tack, P.; DePalma, R. A.; Smit, J.; Vincze, Laszlo; Vanhaecke, F.; Claeys, P. (2019). "Caught in amber: Geochemistry and petrography of uniquely preserved Chicxulub microtektites from the Tanis K-Pg site from North Dakota (USA)". Large Meteorite Impacts VI 2019 (LPI Contrib. No. 2136) (PDF). 6. Houston, TX: Lunar and Planetary Institute. pp. 1–2. Retrieved 11 April 2021.
- Barras, Colin (5 April 2019). "Does fossil site record dino-killing impact?". Science. 364 (6435): 10–11. doi:10.1126/science.364.6435.10. PMID 30948530.
- Robertson, D.S.; Lewis, W.M.; Sheehan, P.M.; Toon, O.B. (2013). "K/Pg extinction: Re-evaluation of the heat/fire hypothesis". Journal of Geophysical Research: Biogeosciences. 118 (1): 329–336. Bibcode:2013JGRG..118..329R. doi:10.1002/jgrg.20018.
- Kaiho, Kunio; Oshima, Naga (2017). "Site of asteroid impact changed the history of life on Earth: The low probability of mass extinction". Scientific Reports. 7 (1). Article number 14855. Bibcode:2017NatSR...714855K. doi:10.1038/s41598-017-14199-x. PMC 5680197. PMID 29123110.
- Ohno, S.; et al. (2014). "Production of sulphate-rich vapour during the Chicxulub impact and implications for ocean acidification". Nature Geoscience. 7 (4): 279–282. Bibcode:2014NatGe...7..279O. doi:10.1038/ngeo2095.
- Vellekoop, J.; et al. (2013). "Rapid short-term cooling following the Chicxulub impact at the Cretaceous–Paleogene boundary". Proceedings of the National Academy of Sciences. 111 (21): 7537–7541. Bibcode:2014PNAS..111.7537V. doi:10.1073/pnas.1319253111. PMC 4040585. PMID 24821785.
- Brugger, Julia; Feulner, Georg; Petri, Stefan (2016). "Baby, it's cold outside: Climate model simulations of the effects of the asteroid impact at the end of the Cretaceous". Geophysical Research Letters. 44 (1): 419–427. Bibcode:2017GeoRL..44..419B. doi:10.1002/2016GL072241.
- Pope, K.O.; Baines, K.H.; Ocampo, A.C.; Ivanov, B.A. (1997). "Energy, volatile production, and climatic effects of the Chicxulub Cretaceous/Tertiary impact". Journal of Geophysical Research. 102 (E9): 21645–21664. Bibcode:1997JGR...10221645P. doi:10.1029/97JE01743. PMID 11541145.
- Morgan J; Lana, C.; Kersley, A.; Coles, B.; Belcher, C.; Montanari, S.; Diaz-Martinez, E.; Barbosa, A.; Neumann, V. (2006). "Analyses of shocked quartz at the global K-P boundary indicate an origin from a single, high-angle, oblique impact at Chicxulub". Earth and Planetary Science Letters. 251 (3–4): 264–279. Bibcode:2006E&PSL.251..264M. doi:10.1016/j.epsl.2006.09.009. hdl:10044/1/1208.
- Hull, Pincelli M.; et al. (17 January 2020). "On impact and volcanism across the Cretaceous-Paleogene boundary" (PDF). Science. 367 (6475): 266–272. Bibcode:2020Sci...367..266H. doi:10.1126/science.aay5055. PMID 31949074. S2CID 210698721.
- "Asteroid impact, not volcanoes, made the Earth uninhabitable for dinosaurs". phys.org. Retrieved 6 July 2020.
- Chiarenza, Alfio Alessandro; Farnsworth, Alexander; Mannion, Philip D.; Lunt, Daniel J.; Valdes, Paul J.; Morgan, Joanna V.; Allison, Peter A. (24 June 2020). "Asteroid impact, not volcanism, caused the end-Cretaceous dinosaur extinction". Proceedings of the National Academy of Sciences. 117 (29): 17084–17093. Bibcode:2020PNAS..11717084C. doi:10.1073/pnas.2006087117. PMC 7382232. PMID 32601204.
- "Dinosaur-killing asteroid strike gave rise to Amazon rainforest". BBC News. 2 April 2021. Retrieved 9 May 2021.
- Carvalho, Mónica R.; Jaramillo, Carlos; Parra, Felipe de la; Caballero-Rodríguez, Dayenari; Herrera, Fabiany; Wing, Scott; Turner, Benjamin L.; D’Apolito, Carlos; Romero-Báez, Millerlandy; Narváez, Paula; Martínez, Camila; Gutierrez, Mauricio; Labandeira, Conrad; Bayona, German; Rueda, Milton; Paez-Reyes, Manuel; Cárdenas, Dairon; Duque, Álvaro; Crowley, James L.; Santos, Carlos; Silvestro, Daniele (2 April 2021). "Extinction at the end-Cretaceous and the origin of modern Neotropical rainforests". Science. 372 (6537): 63–68. doi:10.1126/science.abf1969. ISSN 0036-8075. Retrieved 9 May 2021.
- Hand, Eric (17 November 2016). "Updated: Drilling of dinosaur-killing impact crater explains buried circular hills". Science. doi:10.1126/science.aaf5684.
- "Chicxulub crater dinosaur extinction". The New York Times. New York, NY. 18 November 2016. Archived from the original on 9 November 2017. Retrieved 14 October 2017.
- Brannen, Peter (2017). The Ends of the World: Volcanic Apocalypses, Lethal Oceans, and Our Quest to Understand Earth's Past Mass Extinctions. Harper Collins. p. 336. ISBN 9780062364807.
- Keller G, Adatte T, Gardin S, Bartolini A, Bajpai S (2008). "Main Deccan volcanism phase ends near the K–T boundary: Evidence from the Krishna-Godavari Basin, SE India". Earth and Planetary Science Letters. 268 (3–4): 293–311. Bibcode:2008E&PSL.268..293K. doi:10.1016/j.epsl.2008.01.015.
- Duncan RA, Pyle DG (1988). "Rapid eruption of the Deccan flood basalts at the Cretaceous/Tertiary boundary". Nature. 333 (6176): 841–843. Bibcode:1988Natur.333..841D. doi:10.1038/333841a0. S2CID 4351454.
- Courtillot, Vincent (1990). "A volcanic eruption". Scientific American. 263 (4): 85–92. Bibcode:1990SciAm.263d..85C. doi:10.1038/scientificamerican1090-85. PMID 11536474.
- Alvarez, W (1997). T. rex and the Crater of Doom. Princeton University Press. pp. 130–146. ISBN 978-0-691-01630-6.
- Renne, P. R.; et al. (2015). "State shift in Deccan volcanism at the Cretaceous-Paleogene boundary, possibly induced by impact". Science. 350 (6256): 76–78. Bibcode:2015Sci...350...76R. doi:10.1126/science.aac7549. PMID 26430116.
- Richards, M. A.; et al. (2015). "Triggering of the largest Deccan eruptions by the Chicxulub impact" (PDF). Geological Society of America Bulletin. 127 (11–12): 1507–1520. Bibcode:2015GSAB..127.1507R. doi:10.1130/B31167.1.
- Mullen L (October 13, 2004). "Debating the Dinosaur Extinction". Astrobiology Magazine. Archived from the original on June 25, 2012. Retrieved 2012-03-29.
- Mullen L (October 20, 2004). "Multiple impacts". Astrobiology Magazine. Archived from the original on April 6, 2012. Retrieved 2012-03-29.
- Mullen L (November 3, 2004). "Shiva: Another K–T impact?". Astrobiology Magazine. Archived from the original on December 11, 2011. Retrieved 2012-03-29.
- Chatterjee, Sankar (August 1997). "Multiple Impacts at the KT Boundary and the Death of the Dinosaurs". 30th International Geological Congress. 26. pp. 31–54. ISBN 978-90-6764-254-5.
- Li, Liangquan; Keller, Gerta (1998). "Abrupt deep-sea warming at the end of the Cretaceous" (PDF). Geology. 26 (11): 995–998. Bibcode:1998Geo....26..995L. doi:10.1130/0091-7613(1998)026<0995:ADSWAT>2.3.CO;2. S2CID 115136793.
- Petersen, Sierra V.; Dutton, Andrea; Lohmann, Kyger C. (2016). "End-Cretaceous extinction in Antarctica linked to both Deccan volcanism and meteorite impact via climate change". Nature Communications. 7: 12079. Bibcode:2016NatCo...712079P. doi:10.1038/ncomms12079. PMC 4935969. PMID 27377632.
- Alroy J (May 1998). "Cope's rule and the dynamics of body mass evolution in North American fossil mammals" (PDF). Science. 280 (5364): 731–4. Bibcode:1998Sci...280..731A. doi:10.1126/science.280.5364.731. PMID 9563948.
- Ericson, P G; Anderson, C L; Britton, T; et al. (December 2006). "Diversification of Neoaves: integration of molecular sequence data and fossils". Biol. Lett. 2 (4): 543–7. doi:10.1098/rsbl.2006.0523. PMC 1834003. PMID 17148284.
- Grimaldi, David A. (2007). Evolution of the Insects. Cambridge Univ Pr (E). ISBN 978-0-511-12388-7.
- Fortey, Richard (2005). Earth: An Intimate History. New York: Vintage Books. ISBN 978-0-375-70620-2. OCLC 54537112.
- Preston, Douglas (8 April 2019). "The day the dinosaurs died". The New Yorker. pp. 52–65.
|Wikimedia Commons has media related to K/T Event.|
- "The Great Chicxulub Debate 2004". Geological Society of London. 2004. Retrieved 2007-08-02.
- Kring, D.A. (2005). "Chicxulub impact event: Understanding the K–T boundary". NASA Space Imagery Center. Archived from the original on 29 June 2007. Retrieved 2 August 2007.
- Cowen, R. (2000). "The K–T extinction". University of California Museum of Paleontology. Retrieved 2 August 2007.
- "What killed the dinosaurs?". University of California Museum of Paleontology. 1995. Retrieved 2 August 2007.
- "Papers and presentations resulting from the 2016 Chicxulub drilling project".
- DePalma, Robert A.; et al. (1 April 2019). "A seismically induced onshore surge deposit at the KPg boundary, North Dakota". PNAS. 116 (17): 8190–8199. Bibcode:2019PNAS..116.8190D. doi:10.1073/pnas.1817407116. PMC 6486721. PMID 30936306. | https://library.kiwix.org/wikipedia_en_top_maxi/A/Cretaceous%E2%80%93Paleogene_extinction_event | 21 |
What Are Bank Reserves?
Bank reserves are the cash minimums that financial institutions must have on hand in order to meet central bank requirements. This is real paper money that must be kept by the bank in a vault on-site or held in its account at the central bank.
Cash reserve requirements are intended to ensure that every bank can meet any large and unexpected demand for withdrawals.
In the U.S., the Federal Reserve dictates how much cash each bank must maintain, expressed as a percentage of deposits known as the reserve ratio. Historically, this ratio has ranged from zero to 10% of bank deposits.
- Bank reserves are the minimal amounts of cash that banks are required to keep on hand in case of unexpected demand.
- Excess reserves are the additional cash that a bank keeps on hand and declines to loan out.
- Bank reserves are kept in order to prevent the panic that can arise if customers discover that a bank doesn't have enough cash on hand to meet immediate demands.
- Bank reserves may be kept in a vault on-site or sent to a bigger bank or a regional Federal Reserve bank facility.
- Historically, the reserve rate for American banks has been set at zero to 10%.
How Bank Reserves Work
Bank reserves are primarily an antidote to panic. The Federal Reserve obliges banks to hold a certain amount of cash in reserve so that they never run short and have to refuse a customer's withdrawal, possibly triggering a bank run.
A central bank may also use bank reserve levels as a weapon in monetary policy. It can lower the reserve requirement so that banks are free to make a number of new loans and increase economic activity. Or, it can demand that the banks increase their reserves to slow down economic growth.
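To see why this lever matters, the textbook money-multiplier simplification says that under a reserve ratio r, each dollar of reserves can support roughly 1/r dollars of deposits across the banking system. The Python sketch below, with hypothetical figures, illustrates that simplification only; it is not how the Federal Reserve actually models lending.

```python
def max_deposit_expansion(new_reserves: float, reserve_ratio: float) -> float:
    """Textbook money-multiplier approximation: each dollar of reserves
    can support up to 1 / reserve_ratio dollars of deposits."""
    if reserve_ratio <= 0:
        raise ValueError("the simple multiplier is undefined at a zero reserve ratio")
    return new_reserves / reserve_ratio


# Hypothetical figures: $1 million of fresh reserves under a 10% requirement.
print(max_deposit_expansion(1_000_000, 0.10))   # 10000000.0 -> up to $10 million in deposits
```

Lowering the requirement raises the multiplier, which is consistent with the cut-to-stimulate, raise-to-cool pattern described above.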
In recent years, the U.S. Federal Reserve and the central banks of other developed economies have turned to other tactics such as quantitative easing in order to achieve the same goals. The central banks in emerging nations such as China continue to rely on raising or lowering bank reserve levels to cool or heat up their economies.
The Federal Reserve cut the cash reserve minimum to zero effective March 26, 2020, as one part of its response to the economic downturn caused by the COVID-19 pandemic.
Required and Excess Bank Reserves
Bank reserves are termed either required reserves or excess reserves. The required reserve is the minimum cash the bank must keep on hand. The excess reserve is any cash over the required minimum that the bank is holding in its vault rather than lending out to businesses and consumers.
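As a quick illustration of the distinction, with purely hypothetical figures, the excess reserve is simply whatever cash remains once the required minimum has been set aside:

```python
# Hypothetical figures, in dollars.
cash_on_hand = 70_000_000       # vault cash plus the bank's account at the central bank
required_reserve = 50_000_000   # minimum dictated by the central bank

excess_reserve = max(cash_on_hand - required_reserve, 0)
print(f"Excess reserve: ${excess_reserve:,}")   # Excess reserve: $20,000,000
```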
Banks have little incentive to maintain excess reserves because cash earns no return and may even lose value over time due to inflation. Thus, banks normally minimize their excess reserves, lending out the money to clients rather than holding it in their vaults.
Still, bank reserves decrease during periods of economic expansion and increase during recessions. In good times, businesses and consumers borrow more and spend more. During recessions, they can't or won't take on additional debt. In downtimes, the banks may also toughen their lending requirements to avoid defaults.
History of Bank Reserves
Despite the determined efforts of Alexander Hamilton, among others, the United States had no lasting national banking system, apart from a couple of short-lived experiments, until 1913, when the Federal Reserve System was created. (By 1863, the country at least had a national currency and a national bank chartering system.)
Until then, banks were chartered and regulated by states, with varying results. Bank collapses and "runs" on banks were common until a full-blown financial panic in 1907 led to calls for reform. The Federal Reserve System was created to oversee the nation's money supply.
Its role was significantly expanded in 1977 when, during a period of double-digit inflation, Congress defined price stability as a national policy goal and established the Federal Open Market Committee (FOMC) within the Fed to carry it out.
The required bank reserve follows a formula set by Federal Reserve Board regulations. The formula is based on the total amount deposited in the bank's net transaction accounts.
The figure includes demand deposits, automatic transfer accounts, and share draft accounts. Net transactions are calculated as the total amount in transaction accounts minus funds due from other banks, and minus cash that is in the process of being collected.
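A minimal sketch of that calculation, assuming the simplified flat-ratio formula described above (the actual regulation distinguishes further account categories and applies tiered ratios):

```python
def net_transaction_accounts(transaction_accounts: float,
                             due_from_other_banks: float,
                             cash_in_collection: float) -> float:
    """Net transactions: total transaction-account balances minus funds due
    from other banks and minus cash still in the process of collection."""
    return transaction_accounts - due_from_other_banks - cash_in_collection


def required_reserve(net_transactions: float, reserve_ratio: float) -> float:
    """Required reserve as a flat ratio applied to net transactions."""
    return net_transactions * reserve_ratio


# Hypothetical figures, in millions of dollars.
net = net_transaction_accounts(transaction_accounts=800.0,
                               due_from_other_banks=120.0,
                               cash_in_collection=30.0)
print(required_reserve(net, reserve_ratio=0.10))   # 65.0 -> a $65 million requirement
```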
The required reserve ratio can also be used by a central bank as a tool to implement monetary policies. Through this ratio, a central bank can influence the amount of money available for borrowing.
Required bank reserves are determined by the Federal Reserve for each bank based on its net transactions.
Impact of the '08 Crisis
Until the financial crisis of 2008-2009, banks earned no interest for the cash reserves they held. That changed on Oct. 1, 2008. As part of the Emergency Economic Stabilization Act of 2008, the Federal Reserve began paying banks interest on their reserves.
At the same time, the Fed cut interest rates in order to boost demand for loans and get the economy moving again.
The result defied the conventional wisdom that banks would rather lend money out than keep it in the vault. The banks took the cash injected by the Federal Reserve and kept it as excess reserves rather than lending it out. They preferred to earn a small but risk-free interest rate rather than lend the money out for a slightly higher but riskier return.
For this reason, the total amount of excess reserves spiked after 2008 despite an unchanged required reserve ratio.
Impact of the COVID-19 Crisis
As part of its wide-ranging response to the economic downturn caused by the COVID-19 pandemic, the Federal Reserve cut its reserve requirement on banks to zero. The action was intended to increase lending to consumers and businesses that had been hurt by the pandemic.
According to an analysis by the Brookings Institution, the move was largely irrelevant since banks by then held far more cash on hand than the minimum required.
Bank Reserves FAQs
Here are the answers to some commonly asked questions about bank reserves.
What Are Reserves in a Bank?
Reserves are, literally, cash in the bank, either on a bank's premises or in a Federal Reserve facility. Reserves cannot be lent to customers or otherwise spent. They are there in case there is an unprecedented demand from customers for withdrawals of their deposits. A bank that runs out of cash does not inspire customer confidence.
There are two types of reserves: minimum reserves and excess reserves. The minimum is set by the Federal Reserve. The excess is additional cash that the bank has in its vault.
How Much Money Do Banks Need to Keep in Reserve?
The reserve amount has historically ranged from zero to 10%. Since March 26, 2020, it has been zero as part of Federal Reserve policy to encourage bank lending to consumers and businesses hurt by the COVID-19 pandemic.
Are Bank Reserves Assets or Liabilities?
A bank's reserves are considered part of its assets and are listed as such in its accounts and its annual reports.
How Are Bank Reserves Calculated?
A bank's required reserve is calculated by multiplying its total deposits by the reserve ratio. For example, if a bank's deposits total $500 million and the required reserve ratio is 10%, multiply $500 million by 0.10. The bank's required minimum reserve is $50 million.
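The same arithmetic as a one-line check, using the hypothetical figures from the example above:

```python
deposits = 500_000_000   # total deposits, in dollars (hypothetical)
reserve_ratio = 0.10     # 10% requirement

print(f"Required minimum reserve: ${deposits * reserve_ratio:,.0f}")
# Required minimum reserve: $50,000,000
```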
Where Do Banks Keep Their Reserves?
Some of it is stashed in a vault at the bank. Reserves also may be kept in the bank's account at one of the 12 regional Federal Reserve Banks. Some small banks keep part of their reserves at larger banks and tap into them at need.
This flow of cash between vaults peaks at certain times, like during holiday seasons when consumers take out extra cash. Once the demand subsides, the banks ship off some of their excess cash to the nearest Federal Reserve Bank.
The Bottom Line
The old banking system that existed in the U.S. before regulation became centralized seems a bit Wild West by today's standards. Each state could charter banks, and small banks popped up and went under regularly. "Runs" on banks were common.
That changed with the creation of the Federal Reserve System, and among the changes was a requirement that banks hold a minimum amount of cash in reserve to meet demand. Today's reserve minimum is zero, for now, suggesting that the Federal Reserve is comfortable with the level of cash kept voluntarily by the nation's banks.
The Polish–Lithuanian Commonwealth, formally known as the Crown of the Kingdom of Poland and the Grand Duchy of Lithuania and, after 1791, the Commonwealth of Poland, was a country and bi-federation of Poland and Lithuania ruled by a common monarch in real union, who was both King of Poland and Grand Duke of Lithuania. It was one of the largest and most populous countries of 16th to 17th-century Europe. At its largest territorial extent, in the early 17th century, the Commonwealth covered almost 1,000,000 square kilometres (400,000 sq mi) and as of 1618 sustained a multi-ethnic population of almost 12 million. Polish and Latin were the two co-official languages.
Images: Royal Banner (c. 1605); Royal Coat of arms
Map: The Polish–Lithuanian Commonwealth (green) with vassal states (light green) at their peak in 1619
|Official languages||Polish and Latin|
|King / Grand Duke||Sigismund II Augustus (first); Stanisław August Poniatowski (last)|
|Historical era||Early modern period|
|Union of Lublin||1 July 1569|
|1st Partition||5 August 1772|
|Constitution of 3 May||3 May 1791|
|2nd Partition||23 January 1793|
|3rd Partition||24 October 1795|
|Area (1582)||815,000 km2 (315,000 sq mi)|
|Area (1618)||1,000,000 km2 (390,000 sq mi)|
The Commonwealth was established by the Union of Lublin in July 1569, but the Crown of the Kingdom of Poland and the Grand Duchy of Lithuania had been in a de facto personal union since 1386 with the marriage of the Polish queen Hedwig and Lithuania's Grand Duke Jogaila, who was crowned King jure uxoris Władysław II Jagiełło of Poland. The First Partition in 1772 and the Second Partition in 1793 greatly reduced the state's size and the Commonwealth was partitioned out of existence with the Third Partition in 1795.
The Union possessed many features unique among contemporary states. Its political system was characterized by strict checks upon monarchical power. These checks were enacted by a legislature (sejm) controlled by the nobility (szlachta). This idiosyncratic system was a precursor to modern concepts of democracy, constitutional monarchy (from 1791), and federation. Although the two component states of the Commonwealth were formally equal, Poland was the dominant partner in the union.
The Polish–Lithuanian Commonwealth was marked by high levels of ethnic diversity and by relative religious tolerance, guaranteed by the Warsaw Confederation Act 1573; however, the degree of religious freedom varied over time. The Constitution of 1791 acknowledged Catholicism as the "dominant religion", unlike the Warsaw Confederation, but freedom of religion was still granted with it.
After several decades of prosperity, it entered a period of protracted political, military and economic decline. Its growing weakness led to its partitioning among its neighbors (Austria, Prussia and Russia) during the late 18th century. Shortly before its demise, the Commonwealth adopted a massive reform effort and enacted the 3 May Constitution – the first codified constitution in modern European history and the second in modern world history (after the United States Constitution).
The official name of the state was the Kingdom of Poland and the Grand Duchy of Lithuania (Polish: Królestwo Polskie i Wielkie Księstwo Litewskie, Lithuanian: Lenkijos Karalystė ir Lietuvos Didžioji Kunigaikštystė, Latin: Regnum Poloniae Magnusque Ducatus Lithuaniae) and the Latin term was usually used in international treaties and diplomacy.
In the 17th century and later it was also known as the 'Most Serene Commonwealth of Poland' (Polish: Najjaśniejsza Rzeczpospolita Polska, Latin: Serenissima Res Publica Poloniae), the Commonwealth of the Polish Kingdom, or the Commonwealth of Poland.
Western Europeans often simplified the name to 'Poland' and in most past and modern sources it is referred to as the Kingdom of Poland, or just Poland. The terms 'Commonwealth of Poland' and 'Commonwealth of Two Nations' (Polish: Rzeczpospolita Obojga Narodów, Latin: Res Publica Utriusque Nationis) were used in the Reciprocal Guarantee of Two Nations. The English term Polish–Lithuanian Commonwealth and German Polen-Litauen are seen as renderings of the 'Commonwealth of Two Nations' variant.
Other informal names include the 'Republic of Nobles' (Polish: Rzeczpospolita szlachecka) and the 'First Commonwealth' (Polish: I Rzeczpospolita), the latter relatively common in historiography to distinguish it from the Second Polish Republic.
The Kingdom of Poland and the Grand Duchy of Lithuania underwent an alternating series of wars and alliances across the 13th and 14th centuries. The relations between the two states differed at times as each strived and competed for political, economic or military dominance of the region. In turn, Poland had remained a staunch ally of its southern neighbour, Hungary. The last Polish monarch from the native Piast dynasty, Casimir the Great, died on 5 November 1370 without fathering a legitimate male heir. Consequently, the crown passed on to his Hungarian nephew, Louis of Anjou, who ruled the Kingdom of Hungary in a personal union with Poland. A fundamental step in developing extensive ties with Lithuania was a succession crisis arising in the 1380s. Louis died on 10 September 1382, and, like his uncle, did not produce a son to succeed him. His two daughters – Mary and Jadwiga – held claims to the vast dual realm. The Polish lords renounced Mary, then betrothed to Sigismund of Luxembourg, in favour of her younger sister Jadwiga. The future queen regnant was destined to wed young William Habsburg, but certain factions of the nobility remained apprehensive, believing that the Austrian would not secure domestic interests. Instead, they turned to Jogaila, the Grand Duke of Lithuania. Jogaila was a lifelong pagan and vowed to convert and adopt Catholicism upon the marriage by signing the Union of Krewo on 14 August 1385. The Act imposed Christianity in Lithuania and transformed Poland into a diarchy, a kingdom ruled over by two sovereigns; their descendants and successive monarchs held the titles of king and grand duke respectively. The ultimate clause dictated that Lithuania was to be perpetually merged (perpetuo applicare) with the Polish Kingdom; however, this did not take effect until 1569.
Union of Lublin (1569)
Several minor agreements were struck prior to unification, notably the Union of Kraków and Vilnius, the Union of Vilnius and Radom and the Union of Grodno. Lithuania's vulnerable position and rising tensions on its eastern flank persuaded the nobles to seek a closer bond with Poland. The idea of a federation presented better economic opportunities, whilst securing Lithuania's borders from hostile states to the north, south and east. Lesser Lithuanian nobility was eager to share the personal privileges and political liberties enjoyed by the Polish szlachta, but did not accept Polish demands for the incorporation of the Grand Duchy into Poland as a mere province, with no sense of autonomy. Mikołaj "the Red" Radziwiłł (Radvila Rudasis) and his cousin Mikołaj "the Black" Radziwiłł, two prominent nobles and military commanders in Lithuania, vocally opposed the union.
A fierce proponent of a single unified Commonwealth was Sigismund II Augustus, who was childless and ailing. According to historians, it was his active involvement which hastened the process and made the union possible. A parliament (sejm) convened on 10 January 1569 in the city of Lublin, attended by envoys from both nations. It was agreed that the merger would take place that same year and that both parliaments would be fused into a joint assembly. No independent parliamentary convocation or diet was henceforth permitted. Subjects of the Polish Crown were no longer restricted in purchasing land on Lithuanian territory and a single currency was established. Whilst the military remained separate, a unified foreign policy meant that Lithuanian troops were obliged to fight even in conflicts that were not to their advantage. As a result, several Lithuanian magnates deplored the accords and left the assembly in protest. Sigismund II used his authority as grand duke and enforced the Act of Union in contumaciam. In fear, the absent nobles promptly returned to the negotiations. The Union of Lublin was passed by the gathered deputies and signed by attendees on 1 July, thus creating the Polish–Lithuanian Commonwealth.
Sigismund's death in 1572 was followed by a three-year interregnum during which adjustments were made to the constitutional system; these adjustments significantly increased the power of the Polish nobility and established a truly elective monarchy.
Apex of the Golden Age
On 11 May 1573, Henry de Valois, son of Henry II of France and Catherine de' Medici, was proclaimed King of Poland and Grand Duke of Lithuania in the first royal election outside Warsaw. Approximately 40,000 notables cast a vote in what was to become a centuries-long tradition of a nobles' democracy (Golden Liberty). Henry already posed as a candidate before Sigismund's death and received widespread support from the pro-French factions. The choice was a political move aimed at curtailing Habsburg hegemony, ending skirmishes with the French-allied Ottomans, and profiting from the lucrative trade with France. Upon ascending the throne, Henry signed the contractual agreement known as the Pacta conventa and approbated the Henrician Articles. The Act stated the fundamental principles of governance and constitutional law in the Polish–Lithuanian Commonwealth. In June 1574, Henry abandoned Poland and headed back to claim the French crown following the death of his brother and predecessor, Charles IX. The throne was subsequently declared vacant.
The interregnum concluded on 12 December 1575 when primate Jakub Uchański declared Maximilian II, Holy Roman Emperor, as the next king. The decision was condemned by the anti-Habsburg coalition, which demanded a "native" candidate. As a compromise, on 13 December 1575 Anna Jagiellon – sister of Sigismund Augustus and a member of the Jagiellonian dynasty – became the new monarch. The nobles simultaneously elected Stephen Báthory as co-regent, who ruled jure uxoris. Báthory's election proved controversial – Lithuania and Ducal Prussia initially refused to recognize the Transylvanian as their ruler. The wealthy port city of Gdańsk (Danzig) staged a revolt, and, with the help of Denmark, blockaded maritime trade to neutral Elbląg (Elbing). Báthory, unable to penetrate the city's extensive fortifications, succumbed to the demands for greater privileges and freedoms. However, his successful Livonian campaign ended in the annexation of Livonia and the Duchy of Courland and Semigallia (modern-day Latvia and southern Estonia), thus expanding the Commonwealth's influence into the Baltics. Most importantly, Poland gained the Hanseatic city of Riga on the Baltic Sea.
In 1587, Sigismund Vasa – the son of John III of Sweden and Catherine Jagiellon – won the election, but his claim was overtly contested by Maximilian III of Austria, who launched a military expedition to challenge the new king. His defeat in 1588 at the hands of Jan Zamoyski sealed Sigismund's right to the throne of Poland and Sweden. Sigismund's long reign marked an end to the Polish Golden Age and the beginning of the Silver Age. A devout Catholic with despotic tendencies, he hoped to restore absolutism and imposed Roman Catholicism during the height of the Counter-Reformation. His hatred towards the Protestants in Sweden sparked a war of independence, which ended the Polish–Swedish union. As a consequence, he was deposed in Sweden by his uncle Charles IX Vasa. In Poland, the Zebrzydowski rebellion was brutally suppressed. Sigismund III then initiated a policy of expansionism, and invaded Russia in 1609 when that country was plagued by a civil war known as the Time of Troubles. In July 1610, the outnumbered Polish force comprising winged hussars defeated the Russians at the Battle of Klushino, which enabled the Poles to take and occupy Moscow for the next two years. The disgraced Vasili IV of Russia was transported in a cage to Warsaw where he paid a tribute to Sigismund; Vasili was later murdered in captivity. The Commonwealth forces were eventually driven out in 1612. The war concluded with a truce that granted Poland–Lithuania extensive territories in the east and marked its largest territorial expansion. At least five million Russians died between 1598 and 1613, the result of continuous conflict, famine and Sigismund's invasion.
The Polish–Ottoman War (1620–21) forced Poland to withdraw from Moldavia in southeastern Europe, but Sigismund's victory over the Turks at Khotyn diminished the supremacy of the Sultanate and eventually led to the murder of Osman II. This secured the Turkish frontier for the duration of Sigismund's rule. In spite of the victories in the Polish–Swedish War (1626–1629), the exhausted Commonwealth army signed the Treaty of Altmark which ceded much of Livonia to Sweden under Gustavus Adolphus. At the same time, the country's powerful parliament was dominated by nobles (Pic. 2) who were reluctant to get involved in the Thirty Years' War; this neutrality spared the country from the ravages of a political-religious conflict that devastated most of contemporary Europe.
During this period, Poland was experiencing a cultural awakening and extensive developments in arts and architecture; the first Vasa king openly sponsored foreign painters, craftsmen, musicians and engineers, who settled in the Commonwealth at his request.
Sigismund's eldest son, Ladislaus succeeded him as Władysław IV in 1632 with no major opposition. A skilled tactician, he invested in artillery, modernised the army and fiercely defended the Commonwealth's eastern borders. Under the Treaty of Stuhmsdorf, he reclaimed regions of Livonia and the Baltics which were lost during the Polish-Swedish wars. Unlike his father who worshipped the Habsburgs, Władysław sought closer ties with France and married Marie Louise Gonzaga, daughter of Charles I Gonzaga, Duke of Mantua, in 1646.
Decline and the Enlightenment
The Commonwealth's power and stability began waning after a series of blows during the following decades. Władysław's brother, John II Casimir, proved to be a weak and ineffectual ruler. The multicultural and highly diverse federation was already suffering from domestic problems. As persecution of religious and ethnic minorities intensified, several groups began to rebel.
In 1648, the self-governing Ukrainian Cossacks inhabiting the south-eastern borderlands of the Commonwealth rose in a major rebellion against Polish and Catholic oppression of Orthodox Ukraine, in what came to be known as the Khmelnytsky Uprising. It resulted in a Ukrainian request, under the terms of the Treaty of Pereyaslav, for protection by the Russian Tsar. In 1651, in the face of a growing threat from Poland, and forsaken by his Tatar allies, Khmelnytsky asked the Tsar to incorporate Ukraine as an autonomous duchy under Russian protection. Russian annexation of Zaporizhian Ukraine gradually supplanted Polish influence in that part of Europe. In the years following, Polish settlers, nobles, Catholics and Jews became the victims of retaliation massacres instigated by the Cossacks.
The other blow to the Commonwealth was a Swedish invasion in 1655, known as the Deluge, which was supported by troops of Transylvanian Duke George II Rákóczi and Frederick William, Elector of Brandenburg. Under the Treaty of Bromberg in 1657, Catholic Poland was forced to renounce suzerainty over Protestant Prussia; in 1701 the once-insignificant duchy was transformed into the Kingdom of Prussia, a major European power and Poland's most enduring foe.
In the late 17th century, the king of the weakened Commonwealth, John III Sobieski, allied with Holy Roman Emperor Leopold I to deal crushing defeats to the Ottoman Empire. In 1683, the Battle of Vienna marked the final turning point in the 250-year struggle between the forces of Christian Europe and the Islamic Ottomans. For its centuries-long opposition to Muslim advances, the Commonwealth would gain the name of Antemurale Christianitatis (bulwark of Christianity). During the next 16 years, the Great Turkish War would drive the Turks permanently south of the Danube River, never again to threaten central Europe.
By the 18th century, destabilization of its political system brought Poland to the brink of civil war. The Commonwealth was facing many internal problems and was vulnerable to foreign influences. An outright war between the King and the nobility broke out in 1715, and Tsar Peter the Great's mediation put him in a position to further weaken the state. The Russian army was present at the Silent Sejm of 1717, which limited the size of the armed forces to 24,000 and specified its funding, reaffirmed the destabilizing practice of liberum veto, and banished the king's Saxon army; the Tsar was to serve as guarantor of the agreement. Western Europe's increasing exploitation of resources in the Americas rendered the Commonwealth's supplies less crucial.
In 1764, nobleman Stanisław August Poniatowski was elected monarch with the connivance and support of his former lover Catherine the Great, Empress of Russia. By 1768, the Polish–Lithuanian Commonwealth started to be considered by Russians as the protectorate of the Russian Empire (despite the fact that it was officially still an independent state). A majority of control over Poland was central to Catherine's diplomatic and military strategies. Attempts at reform, such as the Four-Year Sejm's May Constitution, came too late. The country was partitioned in three stages by the neighbouring Russian Empire, the German Kingdom of Prussia, and the Habsburg Monarchy. By 1795, the Polish–Lithuanian Commonwealth had been completely erased from the map of Europe. Poland and Lithuania were not re-established as independent countries until 1918.
State organization and politics
The political doctrine of the Commonwealth was our state is a republic under the presidency of the King. Chancellor Jan Zamoyski summed up this doctrine when he said that Rex regnat et non-gubernat ("The King reigns but [lit. 'and'] does not govern"). The Commonwealth had a parliament, the Sejm, as well as a Senat and an elected king (Pic. 1). The king was obliged to respect citizens' rights specified in King Henry's Articles as well as in pacta conventa, negotiated at the time of his election.
The monarch's power was limited in favour of a sizable noble class. Each new king had to pledge to uphold the Henrician Articles, which were the basis of Poland's political system (and included near-unprecedented guarantees of religious tolerance). Over time, the Henrician Articles were merged with the pacta conventa, specific pledges agreed to by the king-elect. From that point onwards, the king was effectively a partner with the noble class and was constantly supervised by a group of senators. The Sejm could veto the king on important matters, including legislation (the adoption of new laws), foreign affairs, declaration of war, and taxation (changes of existing taxes or the levying of new ones).
The foundation of the Commonwealth's political system, the "Golden Liberty" (Latin: Aurea Libertas or Polish: Złota Wolność, a term used from 1573 on), included:
- election of the king by all nobles wishing to participate, known as wolna elekcja (free election);
- Sejm, the Commonwealth parliament which the king was required to hold every two years;
- pacta conventa (Latin), "agreed-to agreements" negotiated with the king-elect, including a bill of rights, binding on the king, derived from the earlier Henrician Articles.
- religious freedom guaranteed by Warsaw Confederation Act 1573,
- rokosz (insurrection), the right of szlachta to form a legal rebellion against a king who violated their guaranteed freedoms;
- liberum veto (Latin), the right of an individual Sejm deputy to oppose a decision by the majority in a Sejm session; the voicing of such a "free veto" nullified all the legislation that had been passed at that session; during the crisis of the second half of the 17th century, Polish nobles could also use the liberum veto in provincial sejmiks;
- konfederacja (from the Latin confederatio), the right to form an organization to force through a common political aim.
The three regions (see below) of the Commonwealth enjoyed a degree of autonomy. Each voivodship had its own parliament (sejmik), which exercised serious political power, including choice of poseł (deputy) to the national Sejm and charging of the deputy with specific voting instructions. The Grand Duchy of Lithuania had its own separate army, treasury and most other official institutions.
Golden Liberty created a state that was unusual for its time, although somewhat similar political systems existed in the contemporary city-states like the Republic of Venice. Both states were styled "Serenissima Respublica" or the "Most Serene Republic". At a time when most European countries were headed toward centralization, absolute monarchy and religious and dynastic warfare, the Commonwealth experimented with decentralization, confederation and federation, democracy and religious tolerance.
This political system unusual for its time stemmed from the ascendance of the szlachta noble class over other social classes and over the political system of monarchy. In time, the szlachta accumulated enough privileges (such as those established by the Nihil novi Act of 1505) that no monarch could hope to break the szlachta's grip on power. The Commonwealth's political system is difficult to fit into a simple category, but it can be tentatively described as a mixture of:
- confederation and federation, with regard to the broad autonomy of its regions. It is, however, difficult to decisively call the Commonwealth either confederation or federation, as it had some qualities of both;
- oligarchy, as only the szlachta (nobility) – around 15% of the population – had political rights;
- democracy, since all the szlachta were equal in rights and privileges, and the Sejm could veto the king on important matters, including legislation (the adoption of new laws), foreign affairs, declaration of war, and taxation (changes of existing taxes or the levying of new ones). Also, the 15% of Commonwealth population who enjoyed those political rights (the szlachta) was a substantially larger percentage than in most European countries even in the nineteenth century; note that in 1820 in France only about 1.5% of the male adult population had the right to vote, and in 1840 in Belgium, only about 5%.
- elective monarchy, since the monarch, elected by the szlachta, was Head of State;
- constitutional monarchy, since the monarch was bound by pacta conventa and other laws, and the szlachta could disobey any king's decrees they deemed illegal.
The end of the Jagiellonian dynasty in 1572 – after nearly two centuries – disrupted the fragile equilibrium of the Commonwealth's government. Power increasingly slipped away from the central government to the nobility.
When presented with periodic opportunities to fill the throne, the szlachta exhibited a preference for foreign candidates who would not establish a strong and long-lasting dynasty. This policy often produced monarchs who were either totally ineffective or in constant debilitating conflict with the nobility. Furthermore, aside from notable exceptions such as the able Stefan Batory from Transylvania (1576–86), the kings of foreign origin were inclined to subordinate the interests of the Commonwealth to those of their own country and ruling house. This was especially visible in the policies and actions of the first two elected kings from the Swedish House of Vasa, whose politics brought the Commonwealth into conflict with Sweden, culminating in the war known as the Deluge (1655), one of the events that mark the end of the Commonwealth's Golden Age and the beginning of the Commonwealth's decline.
The Zebrzydowski Rebellion (1606–1607) marked a substantial increase in the power of the Polish magnates, and the transformation of szlachta democracy into magnate oligarchy. The Commonwealth's political system was vulnerable to outside interference, as Sejm deputies bribed by foreign powers might use their liberum veto to block attempted reforms. This sapped the Commonwealth and plunged it into political paralysis and anarchy for over a century, from the mid-17th century to the end of the 18th, while its neighbours stabilized their internal affairs and increased their military might.
The Commonwealth did eventually make a serious effort to reform its political system, adopting in 1791 the Constitution of 3 May 1791, which historian Norman Davies calls the first of its kind in Europe. The revolutionary Constitution recast the erstwhile Polish–Lithuanian Commonwealth as a Polish–Lithuanian federal state with a hereditary monarchy and abolished many of the deleterious features of the old system.
The new constitution:
- abolished the liberum veto and banned the szlachta's confederations;
- provided for a separation of powers among legislative, executive and judicial branches of government;
- established "popular sovereignty" and extended political rights to include not only the nobility but the bourgeoisie;
- increased the rights of the peasantry;
- preserved religious tolerance (but with a condemnation of apostasy from the Catholic faith).
These reforms came too late, however, as the Commonwealth was immediately invaded from all sides by its neighbors, which had been content to leave the Commonwealth alone as a weak buffer state, but reacted strongly to attempts by king Stanislaus Augustus and other reformers to strengthen the country. Russia feared the revolutionary implications of the 3 May Constitution's political reforms and the prospect of the Commonwealth regaining its position as a European power. Catherine the Great regarded the May constitution as fatal to her influence and declared the Polish constitution Jacobinical. Grigori Aleksandrovich Potemkin drafted the act for the Targowica Confederation, referring to the constitution as the "contagion of democratic ideas". Meanwhile, Prussia and Austria used it as a pretext for further territorial expansion. Prussian minister Ewald Friedrich von Hertzberg called the constitution "a blow to the Prussian monarchy", fearing that a strengthened Poland would once again dominate Prussia. In the end, the 3 May Constitution was never fully implemented, and the Commonwealth entirely ceased to exist only four years after its adoption.
The economy of the Commonwealth was predominantly based on agricultural output and trade, though there was an abundance of artisan workshops and manufactories — notably paper mills, leather tanneries, ironworks, glassworks and brickyards. Some major cities were home to craftsmen, jewellers and clockmakers. The majority of industries and trades were concentrated in the Kingdom of Poland; the Grand Duchy of Lithuania was more rural and its economy was driven by farming and clothmaking. Mining developed in the south-west region of Poland which was rich in natural resources such as lead, coal, copper and salt. The currency used in Poland–Lithuania was the złoty (meaning "the golden") and its subunit, the grosz. Foreign coins in the form of ducats, thalers and shillings were widely accepted and exchanged. The city of Gdańsk (Danzig) had the privilege of minting its own coinage. In 1794, Tadeusz Kościuszko began issuing the first Polish banknotes.
The country played a significant role in the supply of Western Europe by the export of grain (rye), cattle (oxen), furs, timber, linen, cannabis, ash, tar, carminic acid and amber. Cereals, cattle and fur amounted to nearly 90% of the country's exports to European markets by overland and maritime trade in the 16th century. From Gdańsk, ships carried cargo to the major ports of the Low Countries, such as Antwerp and Amsterdam. The land routes, mostly to the German provinces of the Holy Roman Empire such as the cities of Leipzig and Nuremberg, were used for the export of live cattle (herds of around 50,000 head) hides, salt, tobacco, hemp and cotton from the Greater Poland region. In turn, the Commonwealth imported wine, beer, fruit, exotic spices, luxury goods (e.g. tapestries, Pic. 5), furniture, fabrics as well as industrial products like steel and tools.
The agricultural sector was dominated by feudalism based on the plantation system (serfs). Slavery was forbidden in Poland in the 15th century, and formally abolished in Lithuania in 1588, replaced by the second enserfment. Typically a nobleman's landholding comprised a folwark, a large farmstead worked by serfs to produce surpluses for internal and external trade. This economic arrangement worked well for the ruling classes and nobles in the early years of the Commonwealth, which was one of the most prosperous eras of the grain trade. The economic strength of Commonwealth grain trade waned from the late 17th century on. Trade relationships were disrupted by the wars, and the Commonwealth proved unable to improve its transport infrastructure or its agricultural practices. Serfs in the region were increasingly tempted to flee. The Commonwealth's major attempts at countering this problem and improving productivity consisted of increasing serfs' workload and further restricting their freedoms in a process known as export-led serfdom.
The owner of a folwark usually signed a contract with merchants of Gdańsk, who controlled 80% of this inland trade, to ship the grain north to that seaport on the Baltic Sea. Countless rivers and waterways in the Commonwealth were used for shipping purposes, including the Vistula, Pilica, Bug, San, Nida, Wieprz, Neman. The rivers had relatively developed infrastructure, with river ports and granaries. Most of the river shipping moved north, southward transport being less profitable, and barges and rafts were often sold off in Gdańsk for lumber. Grodno became an important site after the formation of a customs post at Augustów in 1569, which became a checkpoint for merchants travelling to the Crown lands from the Grand Duchy.
Urban population of the Commonwealth was low compared to Western Europe. Exact numbers depend on calculation methods. According to one source, the urban population of the Commonwealth was about 20% of the total in the 17th century, compared to approximately 50% in the Netherlands and Italy (Pic. 7). Another source suggests much lower figures: 4–8% urban population in Poland, 34–39% in the Netherlands and 22–23% in Italy. The Commonwealth's preoccupation with agriculture, coupled with the nobles' privileged position when compared to the bourgeoisie, resulted in a fairly slow process of urbanization and thus a rather slow development of industries. The nobility could also regulate the price of grain for their advantage, thus acquiring much wealth. Some of the largest trade fairs in the Commonwealth were held at Lublin.
Several ancient trading routes such as the Amber Road (Pic. 4) extended across Poland–Lithuania, which was situated in the heart of Europe and attracted foreign merchants or settlers. Countless goods and cultural artefacts continued to pass from one region to another via the Commonwealth, particularly that the country was a link between the Middle East, the Ottoman Empire and Western Europe. For instance, Isfahan rugs imported from Persia to the Commonwealth were incorrectly known as "Polish rugs" (French: Polonaise) in Western Europe.
The military in the Polish–Lithuanian Commonwealth evolved from the merger of the armies from the Polish Kingdom and from the Grand Lithuanian Duchy, though each state maintained its own division. The united armed forces comprised the Crown Army (armia koronna), recruited in Poland, and the Lithuanian Army (armia litewska) in the Grand Duchy. The military was headed by the Hetman, a rank equivalent to that of a general or supreme commander in other countries. Monarchs could not declare war or summon an army without the consent of the Sejm parliament or the Senate. The Polish–Lithuanian Commonwealth Navy never played a major role in the military structure from the mid-17th century onwards.
The most prestigious formation of the Polish army was its 16th- and 17th-century heavy cavalry in the form of Winged Hussars (husaria), whereas the Royal Foot Guards (Regiment Gwardii Pieszej Koronnej) were the elite of the infantry; the regiment protected the king and his family. In 1788, the Great Sejm approved sweeping reforms and defined future structures of the military; the Crown Army was to be split into four divisions, with seventeen field infantry regiments and eight cavalry brigades excluding special units; the Lithuanian Army was to be subdivided into two divisions, eight field regiments and two cavalry brigades excluding special units. If implemented, the reforms would have produced an army of almost 100,000 men.
The armies of those states differed from the organization common in other parts of Europe; according to Bardach, the mercenary formations (wojsko najemne), common in Western Europe, never gained widespread popularity in Poland. Brzezinski, however, notes that foreign mercenaries did form a significant portion of the more elite infantry units, at least until the early 17th century. In 16th-century Poland, several other formations formed the core of the military. There was a small standing army, obrona potoczna ("continuous defense") about 1,500–3,000 strong, paid for by the king, and primarily stationed at the troubled southern and eastern borders. It was supplemented by two formations mobilized in case of war — the pospolite ruszenie (Polish for levée en masse – feudal levy of mostly noble knights-landholders), and the wojsko zaciężne, recruited by the Polish commanders for the conflict. It differed from other European mercenary formations in that it was commanded by Polish officers, and dissolved after the conflict had ended.
Several years before the Union of Lublin, the Polish obrona potoczna was reformed, as the Sejm (national parliament of Poland) legislated in 1562–1563 the creation of wojsko kwarciane, named after kwarta tax levied on the royal lands for the purpose of maintaining this formation. This formation was also paid for by the king, and in the peacetime, numbered about 3,500–4,000 men according to Bardach; Brzezinski gives the range of 3,000–5,000. It was composed mostly of the light cavalry units manned by nobility (szlachta) and commanded by hetmans. Often, in wartime, the Sejm would legislate a temporary increase in the size of the wojsko kwarciane.
Science and literature
The Commonwealth was an important European center for the development of modern social and political ideas. It was famous for its rare quasi-democratic political system, praised by philosophers, and during the Counter-Reformation was known for near-unparalleled religious tolerance, with peacefully coexisting Roman Catholic, Jewish, Orthodox Christian, Protestant and Muslim (Sufi) communities. In the 18th century, the French Catholic Rulhiere wrote of 16th century Poland: "This country, which in our day we have seen divided on the pretext of religion, is the first state in Europe that exemplified tolerance. In this state, mosques arose between churches and synagogues." The Commonwealth gave rise to the famous Christian sect of the Polish Brethren, antecedents of British and American Unitarianism.
With its political system, the Commonwealth gave birth to political philosophers such as Andrzej Frycz Modrzewski (1503–1572) (Pic. 9), Wawrzyniec Grzymała Goślicki (1530–1607) and Piotr Skarga (1536–1612). Later, works by Stanisław Staszic (1755–1826) and Hugo Kołłątaj (1750–1812) helped pave the way for the Constitution of 3 May 1791, which Norman Davies calls the first of its kind in Europe.
Kraków's Jagiellonian University is one of the oldest universities in the world (established in 1364), together with the Jesuit Academy of Wilno (established in 1579) they were the major scholarly and scientific centers in the Commonwealth. The Komisja Edukacji Narodowej, Polish for Commission for National Education, formed in 1773, was the world's first national Ministry of Education. Commonwealth scientists included: Martin Kromer (1512–1589), historian and cartographer; Michael Sendivogius (1566–1636), alchemist and chemist; Jan Brożek (Ioannes Broscius in Latin) (1585–1652), polymath: a mathematician, physician and astronomer; Krzysztof Arciszewski (Crestofle d'Artischau Arciszewski in Portuguese) (1592–1656), engineer, ethnographer, general and admiral of the Dutch West Indies Company army in the war with the Spanish Empire for control of Brazil; Kazimierz Siemienowicz (1600–1651), military engineer, artillery specialist and a founder of rocketry; Johannes Hevelius (1611–1687), astronomer, founder of lunar topography; Michał Boym (1612–1659), orientalist, cartographer, naturalist and diplomat in Ming Dynasty's service (Pic. 11); Adam Adamandy Kochański (1631–1700), mathematician and engineer; Baal Shem Tov (הבעל שם טוב in Hebrew) (1698–1760), considered to be the founder of Hasidic Judaism; Marcin Odlanicki Poczobutt (1728–1810), astronomer and mathematician (Pic. 12); Jan Krzysztof Kluk (1739–1796), naturalist, agronomist and entomologist, John Jonston (1603–1675) scholar and physician, descended from Scottish nobility. In 1628 the Czech teacher, scientist, educator, and writer John Amos Comenius took refuge in the Commonwealth, when the Protestants were persecuted under the Counter Reformation.
The works of many Commonwealth authors are considered classics, including those of Jan Kochanowski (Pic. 10), Wacław Potocki, Ignacy Krasicki, and Julian Ursyn Niemcewicz. Many szlachta members wrote memoirs and diaries. Perhaps the most famous are the Memoirs of Polish History by Albrycht Stanisław Radziwiłł (1595–1656) and the Memoirs of Jan Chryzostom Pasek (ca. 1636–ca. 1701). Jakub Sobieski (1590–1646) (father of John III Sobieski) wrote notable diaries. During the Khotyn expedition in 1621 he wrote a diary called Commentariorum chotinensis belli libri tres (Diary of the Chocim War), which was published in 1646 in Gdańsk. It was used by Wacław Potocki as a basis for his epic poem, Transakcja wojny chocimskiej (The Progress of the War of Chocim). He also authored instructions for the journey of his sons to Kraków (1640) and France (1645), a good example of liberal education of the era.
Art and music
The art and music of the Commonwealth was largely shaped by prevailing European trends, though the country's minorities, foreigners as well as native folk cultures also contributed to its versatile nature. Coffin portraits (portrety trumienne), used at funerals and other important ceremonies, were a common art form of the Sarmatian period. As a rule, such portraits were painted on sheet metal, six- or eight-sided in shape, and fixed to the front of a coffin placed on a high, ornate catafalque. These were a unique and distinguishable feature of the Commonwealth's high culture, not found elsewhere in Europe. A similar tradition was only practiced in Roman Egypt. Polish monarchs and nobles frequently invited and sponsored foreign painters and artisans, notably from the Low Countries (the Netherlands, Flanders and Belgium), Germany or Italy. The interiors of upper-class residences, palaces and manors were adorned by wall tapestries (arrasy or tapiseria) imported from Western Europe; the most renowned are the Jagiellonian tapestries exhibited at Wawel Royal Castle in Kraków.
The economic, cultural and political ties between France and the Polish–Lithuanian Commonwealth gave rise to the term à la polonaise, French for "Polish-styled". With the marriage of Marie Leszczyńska to Louis XV of France in 1725, Polish culture began to flourish at the Palace of Versailles. Polish beds (lit à la polonaise) draped with baldachins became a centrepiece of Louis XV furniture in French chateaus. Folk flower motifs as well as Polish fashion were popularized in the form of a back-draped polonaise dress (robe à la polonaise) worn by aristocrats at Versailles.
The religious cultures of Poland–Lithuania coexisted and penetrated each other for the entirety of the Commonwealth's history – the Jews adopted elements of the national dress, loanwords and calques became commonplace and Roman Catholic churches in regions with significant Protestant populations were much simpler in décor than those in other parts of Poland–Lithuania. Mutual influence was further reflected in the great popularity of Byzantine icons (Pic. 13) and the icons resembling effigies of Mary in the predominantly Latin territories of today's Poland (Black Madonna) and Lithuania (Our Lady of the Gate of Dawn). Conversely, Latin infiltration into Ruthenian Orthodox and Protestant art was also conventional (Pic. 3).
Music was a common feature of religious and secular events. To that end many noblemen founded church and school choirs, and employed their own ensembles of musicians. Some, like Stanisław Lubomirski built their own opera houses (in Nowy Wiśnicz). Others, like Janusz Skumin Tyszkiewicz and Krzysztof Radziwiłł were known for their sponsorship of arts which manifested itself in their permanently retained orchestras, at their courts in Wilno (Vilnius). Musical life further flourished under the House of Vasa. Both foreign and domestic composers were active in the Commonwealth. Sigismund III brought in Italian composers and conductors, such as Luca Marenzio, Annibale Stabile, Asprilio Pacelli, Marco Scacchi and Diomedes Cato for the royal orchestra. Notable home grown musicians, who also composed and played for the King's court, included Bartłomiej Pękiel, Jacek Różycki, Adam Jarzębski, Marcin Mielczewski, Stanisław Sylwester Szarzyński, Damian Stachowicz, Mikołaj Zieleński and Grzegorz Gorczycki.
The architecture of the cities in the Polish–Lithuanian Commonwealth reflected a combination of Polish, German and Italian trends. Italian Mannerism or the Late Renaissance had a profound impact on traditional burgher architecture which can be observed to this day – castles and tenements were fitted with central Italianate courtyards composed of arched loggias, colonnades, bay windows, balconies, portals and ornamental balustrades. Ceiling frescos, sgraffito, plafonds and coffering (patterned ceilings; Polish kaseton; from Italian cassettone) were widespread. Rooftops were generally covered with terracotta rooftiles. The most distinguishable feature of Polish Mannerism are decorative "attics" above the cornice on the façade. Cities in northern Poland–Lithuania and in Livonia adopted the Hanseatic (or "Dutch") style as their primary form of architectural expression, comparable to that of the Netherlands, Belgium, northern Germany and Scandinavia.
The introduction of Baroque architecture was marked by construction of several Jesuit and Roman Catholic churches across Poland and Lithuania, notably the Peter and Paul Church in Kraków, the Corpus Christi Church in Nesvizh, Lublin Cathedral and UNESCO-enlisted sanctuary at Kalwaria Zebrzydowska. Fine examples of decorative Baroque and Rococo include Saint Anne's in Kraków and the Fara Church in Poznań. Another characteristic is the common usage of black marble. Altars, fonts, portals, balustrades, columns, monuments, tombstones, headstones and whole rooms (e.g. Marble Room at the Royal Castle in Warsaw, St. Casimir Chapel of the Vilnius Cathedral and Vasa Chapel at Wawel Cathedral) were extensively decorated with black marble, which became popular after the mid-17th century.
Magnates often undertook construction projects as monuments to themselves: churches, cathedrals, monasteries (Pic. 14), and palaces like the present-day Presidential Palace in Warsaw and Pidhirtsi Castle built by Grand Hetman Stanisław Koniecpolski. The largest projects involved entire towns, although in time many of them would lapse into obscurity or be abandoned. These towns were generally named after the sponsoring magnate. Among the most prominent is Zamość, founded by Jan Zamoyski and designed by the Italian architect Bernardo Morando as an ideal city. The magnates throughout Poland competed with the kings. The monumental castle Krzyżtopór, built in the palazzo in fortezza style between 1627 and 1644, had several courtyards surrounded by fortifications. Similar fortified complexes include castles in Łańcut and Krasiczyn.
The fascination with the culture and art of the Orient in the late Baroque period is reflected in Queen Marie's Chinese Palace in Zolochiv (Złoczów). 18th-century magnate palaces represent the characteristic type of Baroque suburban residence built entre cour et jardin (between the entrance court and the garden). Their architecture – a merger of European art with old Commonwealth building traditions – is visible in Wilanów Palace in Warsaw (Pic. 15), Branicki Palace in Białystok, Potocki Palace in Radzyń Podlaski, Raczyński Palace in Rogalin, Nieborów Palace and Kozłówka Palace near Lubartów. The lesser nobility resided in country manor houses known as dworek. Neoclassicism replaced Baroque in the second half of the 18th century – the last ruler of Poland–Lithuania, Stanislaus II Augustus, greatly admired the classical architecture of Ancient Rome and promoted it as a symbol of the Polish Enlightenment. The Palace on the Isle and the exterior of St. Anne's Church in Warsaw are part of the neoclassical legacy of the former Commonwealth.
Szlachta and Sarmatism
The prevalent ideology of the szlachta became "Sarmatism", named after the Sarmatians, alleged ancestors of the Poles. This belief system was an important part of szlachta culture, penetrating all aspects of its life. Sarmatism enshrined equality among szlachta, horseback riding, tradition, provincial quaint life in manor houses, peace and pacifism; championed oriental-inspired souvenirs or attire for men (żupan, kontusz, sukmana, pas kontuszowy, delia, szabla); favoured European Baroque architecture; endorsed Latin as a language of thought or expression; and served to integrate the multi-ethnic nobility by creating an almost nationalistic sense of unity and of pride in Golden Liberty.
In its early, idealistic form, Sarmatism represented a positive cultural movement: it supported religious belief, honesty, national pride, courage, equality and freedom. In time, however, it became distorted. Late extreme Sarmatism turned belief into bigotry, honesty into political naïveté, pride into arrogance, courage into stubbornness and freedom into anarchy. The faults of Sarmatism were blamed for the demise of the country from the late 18th century onwards. Criticism, often one-sided and exaggerated, was used by the Polish reformists to push for radical changes. This self-deprecation was accompanied by works of German, Russian and Austrian historians, who tried to prove that it was Poland itself that was to blame for its fall.
The Polish–Lithuanian Commonwealth was immensely multicultural throughout its existence — it comprised countless religious identities and ethnic minorities inhabiting the country's vast territory. The precise number of minority groups and their populations can only be hypothesized. Statistically, the most prominent groups were the Poles, Lithuanians, Germans, Ruthenians and Jews. There were also considerable numbers of Czechs, Hungarians, Livonians, Romanis, Vlachs, Armenians, Italians, Scots and the Dutch (Olędrzy), who were either categorized as merchants, settlers or refugees fleeing religious persecution.
Prior to the union with Lithuania, the Kingdom of Poland was much more homogenous; approximately 70% of the population was Polish and Roman Catholic. With the creation of the Commonwealth, the number of Poles in comparison to the total population decreased to 50%. In 1569, the population stood at 7 million, with roughly 4.5 million Poles, 750,000 Lithuanians, 700,000 Jews and 2 million Ruthenians. Historians Michał Kopczyński and Wojciech Tygielski suggest that with the territorial expansion after the Truce of Deulino in 1618, the Commonwealth's population reached 12 million people, of which Poles constituted only 40%. At that time the nobility made up 10% of the entire population and the burghers around 15%. The average population density per square kilometer was: 24 in Mazovia, 23 in Lesser Poland, 19 in Greater Poland, 12 in Lublin palatinate, 10 in the Lwów area, 7 in Podolia and Volhynia, and 3 in the Kiev Voivodeship. There was a tendency for the people from the more densely inhabited western territories to migrate eastwards.
A sudden change in the country's demographics occurred in the mid-17th century. The Second Northern War and the Deluge, followed by famine in the period from 1648 to 1657, accounted for at least 4 million deaths. Coupled with further territorial losses, by 1717 the population had fallen to 9 million. The population slowly recovered throughout the 18th century; just before the first partition of Poland in 1772, the Commonwealth's population was 14 million, including around 1 million nobles. In 1792, the population of Poland was around 11 million and included 750,000 nobles.
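The population figures quoted in the two paragraphs above are rounded estimates drawn from different sources, so it can help to tabulate them and see how they relate to one another. The short Python sketch below does only that arithmetic; every number comes from the text itself, and the fact that the 1569 group figures slightly overshoot the stated 7 million total is simply a reminder that these are approximations, not a correction of the sources.

```python
# A small sanity check of the population estimates quoted above. Every number
# is taken directly from the text; all of them are rounded approximations, so
# the 1569 group figures do not add up exactly to the stated 7 million total.

stated_total_1569 = 7_000_000
groups_1569 = {
    "Poles": 4_500_000,
    "Lithuanians": 750_000,
    "Jews": 700_000,
    "Ruthenians": 2_000_000,
}

group_sum = sum(groups_1569.values())
print(f"Sum of listed groups in 1569: {group_sum:,} (stated total: {stated_total_1569:,})")
for name, count in groups_1569.items():
    print(f"  {name}: about {count / group_sum:.0%} of the listed groups")

# Later totals quoted in the text: the mid-17th-century collapse (wars and
# famine of 1648-1657) and the partial 18th-century recovery.
snapshots = {1618: 12_000_000, 1717: 9_000_000, 1772: 14_000_000, 1792: 11_000_000}
for year, pop in snapshots.items():
    print(f"{year}: roughly {pop / 1_000_000:.0f} million inhabitants")
```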
The most multicultural and robust city in the country was Gdańsk (Danzig), a major Hanseatic seaport on the Baltic and Poland's wealthiest region. Gdańsk at the time was inhabited by a German-speaking majority and further hosted large numbers of foreign merchants, particularly of Scottish, Dutch or Scandinavian extraction. Historically, the Grand Duchy of Lithuania was more diverse than the Kingdom of Poland, and was deemed a melting pot of many cultures and religions. Hence, the inhabitants of the Grand Duchy were collectively known as Litvins regardless of their nationality, with the exception of Jews residing in Lithuania who were called Litvaks.
Despite guaranteed religious tolerance, gradual Polonization and Counter-Reformation sought to minimize the Commonwealth's diversity; the aim was to root out some minorities by imposing the Polish language, Latin, Polish culture and the Roman Catholic religion where possible. By the late 18th century, the Lithuanian language, culture and identity became vulnerable; the country's name was changed to "Commonwealth of Poland" in 1791.
The Warsaw Confederation signed on 28 January 1573 secured the rights of minorities and religions; it allowed all persons to worship any faith freely, though religious tolerance varied at times. As outlined by Norman Davies, "the wording and substance of the declaration of the Confederation of Warsaw of 28 January 1573 were extraordinary with regards to prevailing conditions elsewhere in Europe; and they governed the principles of religious life in the Republic for over two hundred years."
Poland retained religious freedom laws during an era when religious persecution was an everyday occurrence in the rest of Europe. The Polish–Lithuanian Commonwealth was a place where the most radical religious sects, trying to escape persecution in other countries of the Christian world, sought refuge. In 1561 Giovanni Bernardino Bonifacio d’Oria, a religious exile living in Poland, wrote of his adopted country's virtues to a colleague back in Italy: "You could live here in accordance with your ideas and preferences, in great, even the greatest freedoms, including writing and publishing. No one is a censor here." Others, particularly the leaders of the Roman Catholic church, the Jesuits and papal legates, were less optimistic about Poland's religious frivolity.
To be Polish, in remote and multi-ethnic parts of the Commonwealth, was then much less an index of ethnicity than of religion and rank; it was a designation largely reserved for the landed noble class (szlachta), which included Poles, but also many members of non-Polish origin who converted to Catholicism in increasing numbers with each following generation. For the non-Polish noble such conversion meant a final step of Polonization that followed the adoption of the Polish language and culture. Poland, as the culturally most advanced part of the Commonwealth, with the royal court, the capital, the largest cities, the second-oldest university in Central Europe (after Prague), and the more liberal and democratic social institutions had proven an irresistible magnet for the non-Polish nobility in the Commonwealth. Many referred to themselves as "gente Ruthenus, natione Polonus" (Ruthenian by blood, Polish by nationality) since the 16th century onwards.
As a result, in the eastern territories a Polish (or Polonized) aristocracy dominated a peasantry whose great majority was neither Polish nor Catholic. Moreover, the decades of peace brought huge colonization efforts to the eastern territories (nowadays roughly western and central Ukraine), heightening the tensions among nobles, Jews, Cossacks (traditionally Orthodox), Polish and Ruthenian peasants. The latter, deprived of their native protectors among the Ruthenian nobility, turned for protection to the Cossacks, which facilitated the violence that in the end broke the Commonwealth. The tensions were aggravated by conflicts between Eastern Orthodoxy and the Greek Catholic Church following the Union of Brest, the general discrimination against the Orthodox by the dominant Catholic Church, and several Cossack uprisings. In the west and north, many cities had sizable German minorities, often belonging to Lutheran or Reformed churches. The Commonwealth also had one of the largest Jewish diasporas in the world – by the mid-16th century 80% of the world's Jews lived in Poland (Pic. 16).
Until the Reformation, the szlachta were mostly Catholics (Pic. 13). However, many noble families quickly adopted the Reformed religion. After the Counter-Reformation, when the Catholic Church regained power in Poland, the szlachta became almost exclusively Catholic.
The Crown had about double the population of Lithuania and five times the income of the latter's treasury. As with other countries, the borders, area and population of the Commonwealth varied over time. After the Peace of Jam Zapolski (1582), the Commonwealth had approximately 815,000 km2 area and a population of 7.5 million. After the Truce of Deulino (1618), the Commonwealth had an area of some 990,000 km2 and a population of 11–12 million (including some 4 million Poles and close to a million Lithuanians).
- Polish – officially recognized; dominant language, used by most of the Commonwealth's nobility and by the peasantry in the Crown province; official language in the Crown chancellery and, from 1697, in the Grand Duchy chancellery; dominant language in the towns.
- Latin – officially recognized; commonly used in foreign relations and popular as a second language among some of the nobility.
- French – not officially recognized; replaced Latin at the royal court in Warsaw at the beginning of the 18th century as a language used in foreign relations and as a genuinely spoken language. It was commonly used as a language of science and literature and as a second language among some of the nobility.
- Ruthenian – also known as Chancellery Slavonic; officially recognized; official language in the Grand Duchy chancellery until 1697 (when replaced by Polish) and in the Bratslav, Chernihiv, Kiev and Volhynian voivodeships until 1673; used in some foreign relations. Its dialects (modern Belarusian and Ukrainian) were widely used in the Grand Duchy and the eastern parts of the Crown as spoken languages.
- Lithuanian – not officially recognized; used in some official documents in the Grand Duchy and, mostly, as a spoken language in the northernmost part of the country (Lithuania Proper) and the northern part of Ducal Prussia (a Polish fief).
- German – officially recognized; used in some foreign relations, in Ducal Prussia and by minorities in the cities, especially in Royal Prussia.
- Hebrew – officially recognized; used, together with Aramaic, by Jews for religious, scholarly, and legal matters.
- Yiddish – not officially recognized; used by Jews in their daily life.
- Italian – not officially recognized; used in some foreign relations and by Italian minorities in the cities.
- Armenian – officially recognized; used by the Armenian minority.
- Arabic – not officially recognized; used in some foreign relations and by the Tatars in their religious matters; they also wrote Ruthenian in the Arabic script.
The Duchy of Warsaw, established in 1807 by Napoleon Bonaparte, traced its origins to the Commonwealth. Other revival movements appeared during the November Uprising (1830–31), the January Uprising (1863–64) and in the 1920s, with Józef Piłsudski's failed attempt to create a Polish-led Intermarium (Międzymorze) federation that, at its largest extent, would span from Finland in the north to the Balkans in the south. The contemporary Republic of Poland considers itself a successor to the Commonwealth, whereas the Republic of Lithuania, re-established at the end of World War I, saw the participation of the Lithuanian state in the old Polish–Lithuanian Commonwealth mostly in a negative light at the early stages of regaining its independence, although this attitude has been changing in recent years.
While the term "Poland" was also commonly used to denote this whole polity, Poland was in fact only part of a greater whole – the Polish–Lithuanian Commonwealth, which comprised primarily two parts:
- the Crown of the Polish Kingdom (Poland proper), colloquially "the Crown"
- the Grand Duchy of Lithuania, colloquially "Lithuania"
The Commonwealth was further divided into smaller administrative units known as voivodeships (województwa). Each voivodeship was governed by a Voivode (wojewoda, governor). Voivodeships were further divided into starostwa, each starostwo being governed by a starosta. Cities were governed by castellans. There were frequent exceptions to these rules, often involving the ziemia subunit of administration.
The lands that once belonged to the Commonwealth are now largely distributed among several Central and East European countries: Poland, Ukraine, Moldova (Transnistria), Belarus, Russia, Lithuania, Latvia, and Estonia. Also some small towns in Upper Hungary (today mostly Slovakia), became a part of Poland in the Treaty of Lubowla (Spiš towns).
Other notable parts of the Commonwealth, without respect to region or voivodeship divisions, include:
- Lesser Poland Province (Polish: Małopolska), southern Poland, with its two largest cities: the capital, Kraków, and Lublin in the north-east;
- Greater Poland Province (Polish: Wielkopolska), west–central Poland around Poznań and the Warta River system;
- Mazovia (Polish: Mazowsze), central Poland, with its capital at Warsaw;
- Lithuania Proper (Lithuanian: Didžioji Lietuva), northwest Grand Duchy, its most Catholic and ethnically Lithuanian part, capital Vilnius;
- Duchy of Samogitia (Lithuanian: Žemaitija; Polish: Żmudź), westernmost and most autonomous part of Grand Duchy of Lithuania, also the western part of Lithuania Proper, capital Raseiniai;
- Royal Prussia (Polish: Prusy Królewskie), at the southern shore of the Baltic Sea, was an autonomous area since the Second Peace of Thorn (1466), incorporated into the Crown in 1569 with the Commonwealth's formation;
- Ruthenia (Polish: Ruś), the eastern Commonwealth, adjoining Russia;
- Duchy of Livonia (Inflanty), a joint domain of the Crown and the Grand Duchy of Lithuania. Parts lost to Sweden in the 1620s and in 1660;
- Duchy of Courland and Semigallia (Lithuanian: Kuršas ir Žiemgala; Polish: Kurlandii i Semigalii), a northern fief of the Commonwealth. It established a colony in Tobago in 1637 and on St. Andrews Island at the Gambia River in 1651 (see Couronian colonization);
- Silesia (Polish: Śląsk) was not within the Commonwealth, but small parts belonged to various Commonwealth kings; in particular, the Vasa kings were dukes of Opole (Oppeln) and Racibórz (Ratibor) from 1645 to 1666.
Commonwealth borders shifted with wars and treaties, sometimes several times in a decade, especially in the eastern and southern parts. After the Peace of Jam Zapolski (1582), the Commonwealth had approximately 815,000 km2 area and a population of 7.5 million. After the Truce of Deulino (1618), the Commonwealth had an area of some 1 million km2 (990,000 km2) and a population of about 11 million.
In the 16th century, the Polish bishop and cartographer Martin Kromer, who studied in Bologna, published a Latin atlas, entitled Poland: about Its Location, People, Culture, Offices and the Polish Commonwealth, which was regarded as one of the most comprehensive guides to the country.
Kromer's works and other contemporary maps, such as those of Gerardus Mercator, show the Commonwealth as mostly plains. The Commonwealth's southeastern part, the Kresy, was famous for its steppes. The Carpathian Mountains formed part of the southern border, with the Tatra Mountain chain the highest, and the Baltic Sea formed the Commonwealth's northern border. As with most European countries at the time, the Commonwealth had extensive forest cover, especially in the east. Today, what remains of the Białowieża Forest constitutes the last largely intact primeval forest in Europe.
See also
- History of the Polish–Lithuanian Commonwealth (1569–1648)
- History of the Polish–Lithuanian Commonwealth (1648–1764)
- History of the Polish–Lithuanian Commonwealth (1764–1795)
- List of medieval great powers
- Armorial of Polish nobility
- List of szlachta
- Polish heraldry
- Lithuanian nobility
- History of the Germans in Poland
- History of the Jews in Poland
- History of Poland
- History of Lithuania
- Pro Fide, Lege et Rege was the motto since the 18th century.
a. ^ Name in native and official languages:
- Latin: Regnum Poloniae Magnusque Ducatus Lithuaniae / Serenissima Res Publica Poloniae
- French: Royaume de Pologne et Grand-duché de Lituanie / Sérénissime République de Pologne et Grand-duché de Lituanie
- Polish: Królestwo Polskie i Wielkie Księstwo Litewskie
- Lithuanian: Lenkijos Karalystė ir Lietuvos Didžioji Kunigaikštystė
- Belarusian: Каралеўства Польскае і Вялікае Княства Літоўскае (Karaleŭstva Polskaje і Vialikaje Kniastva Litoŭskaje)
- Ukrainian: Королівство Польське і Велике князівство Литовське
- German: Königreich Polen und Großfürstentum Litauen
b. ^ Some historians date the change of the Polish capital from Kraków to Warsaw between 1595 and 1611, although Warsaw was not officially designated capital until 1793. The Commonwealth Sejm began meeting in Warsaw soon after the Union of Lublin and its rulers generally maintained their courts there, although coronations continued to take place in Kraków. The modern concept of a single capital city was to some extent inapplicable in the feudal and decentralized Commonwealth. Warsaw is described by some historians as the capital of the entire Commonwealth. Wilno, the capital of the Grand Duchy, is sometimes called the second capital of the entity.
- This quality of the Commonwealth was recognized by its contemporaries. Robert Burton, in his The Anatomy of Melancholy, first published in 1621, writes of Poland: "Poland is a receptacle of all religions, where Samosetans, Socinians, Photinians ..., Arians, Anabaptists are to be found"; "In Europe, Poland and Amsterdam are the common sanctuaries [for Jews]".
- Partitions of Poland at the Encyclopædia Britannica
- Jagiellonian University Centre for European studies, "A Very Short History of Kraków", see: "1596 administrative capital, the tiny village of Warsaw". Archived from the original on 12 March 2009. Retrieved 29 November 2012.
- Janusz Sykała: Od Polan mieszkających w lasach – historia Polski – aż do króla Stasia, Gdansk, 2010.
- Georg Ziaja: Lexikon des polnischen Adels im Goldenen Zeitalter 1500–1600, p. 9.
- Panstwowe Przedsiebiorstwo Wydawnictw Kartograficznych: Atlas Historyczny Polski, wydanie X, 1990, p. 14, ISBN 83-7000-016-9.
- Bertram Benedict (1919): A history of the great war. Bureau of national literature, inc. p. 21.
- According to Panstwowe Przedsiebiorstwo Wydawnictw Kartograficznych: Atlas Historyczny Polski, wydanie X, 1990, p. 16, ~ 990.000 km2
- Zbigniew Pucek: Państwo i społeczeństwo 2012/1, Krakow, 2012, p. 17.
- Norman Davies, Europe: A History, Pimlico 1997, p. 554: "Poland–Lithuania was another country which experienced its 'Golden Age' during the sixteenth and early seventeenth centuries. The realm of the last Jagiellons was absolutely the largest state in Europe"
- Piotr Wandycz (2001). The price of freedom (p.66). p. 66. ISBN 978-0-415-25491-5. Retrieved 13 August 2011.
- Bertram Benedict (1919). A history of the great war. Bureau of national literature, inc. p. 21. Retrieved 13 August 2011.
- According to Panstwowe Przedsiebiorstwo Wydawnictw Kartograficznych: Atlas Historyczny Polski, wydanie X, 1990, p. 16, 990.000 km2
- Based on 1618 population map Archived 17 February 2013 at the Wayback Machine (p. 115), 1618 languages map (p119), 1657–67 losses map (p. 128) and 1717 map Archived 17 February 2013 at the Wayback Machine (p. 141) from Iwo Cyprian Pogonowski, Poland a Historical Atlas, Hippocrene Books, 1987, ISBN 0-88029-394-2
- According to Panstwowe Przedsiebiorstwo Wydawnictw Kartograficznych: Atlas Historyczny Polski, wydanie X, 1990, p. 16, just over 9 million in 1618.
- Maciej Janowski, Polish Liberal Thought, Central European University Press, 2001, ISBN 963-9241-18-0, Google Print: p. 3, p. 12
- Paul W. Schroeder, The Transformation of European Politics 1763–1848, Oxford University Press, 1996, ISBN 0-19-820654-2, Google print p. 84
- Rett R. Ludwikowski, Constitution-Making in the Region of Former Soviet Dominance, Duke University Press, 1997, ISBN 0-8223-1802-4, Google Print, p. 34
- George Sanford, Democratic Government in Poland: Constitutional Politics Since 1989, Palgrave, 2002, ISBN 0-333-77475-2, Google print p. 11 – constitutional monarchy, p. 3 – anarchy
- Aleksander Gella, Development of Class Structure in Eastern Europe: Poland and Her Southern Neighbors, SUNY Press, 1998, ISBN 0-88706-833-2, Google Print, p. 13
- "Formally, Poland and Lithuania were to be distinct, equal components of the federation ... But Poland, which retained possession of the Lithuanian lands it had seized, had greater representation in the diet and became the dominant partner.""Lublin, Union of". Encyclopædia Britannica. 2006.
- Norman Davies, God's Playground. A History of Poland, Vol. 1: The Origins to 1795, Vol. 2: 1795 to the Present. Oxford: Oxford University Press. ISBN 0-19-925339-0 / ISBN 0-19-925340-4
- Halina Stephan, Living in Translation: Polish Writers in America, Rodopi, 2003, ISBN 90-420-1016-9, Google Print p. 373. Quoting from Sarmatian Review academic journal mission statement: "Polish–Lithuanian Commonwealth was ... characterized by religious tolerance unusual in premodern Europe"
- Feliks Gross, https://books.google.com/books?ie=UTF-8&vid=ISBN0313309329&id=I6wM4X9UQ8QC&pg=PA122&lpg=PA122&dq=Polish-Lithuanian+Commonwealth+religious+tolerance Citizenship and Ethnicity: The Growth and Development of a Democratic Multiethnic Institution, Greenwood Press, 1999, ISBN 0-313-30932-9, p. 122 (notes)
- "In the mid-1500s, united Poland was the largest state in Europe and perhaps the continent's most powerful state politically and militarily". "Poland". Encyclopædia Britannica. 2009. Encyclopædia Britannica Online. Retrieved 26 June 2009.
- Francis Dvornik (1992). The Slavs in European History and Civilization. Rutgers University Press. p. 300. ISBN 0-8135-0799-5.
- Martin Van Gelderen, Quentin Skinner, Republicanism: A Shared European Heritage, Cambridge University Press, 2002, ISBN 0-521-80756-5 p. 54.
- "The Causes of Slavery or Serfdom: A Hypothesis" Archived 15 December 2007 at the Wayback Machine (discussion and full online text) of Evsey Domar (1970). Economic History Review 30:1 (March), pp. 18–32.
- Poland's 1997 Constitution in Its Historical Context; Daniel H. Cole, Indiana University School of Law, 22 September 1998 http://indylaw.indiana.edu/instructors/cole/web%20page/polconst.pdf
- Blaustein, Albert (1993). Constitutions of the World. Fred B. Rothman & Company. ISBN 9780837703626.
- Isaac Kramnick, Introduction, Madison, James (1987). The Federalist Papers. Penguin Classics. p. 13. ISBN 0-14-044495-5.
The introduction describes the Constitution of 3 May as the second oldest constitution.
- John Markoff describes the advent of modern codified national constitutions as one of the milestones of democracy, and states that "The first European country to follow the U.S. example was Poland in 1791." John Markoff, Waves of Democracy, 1996, ISBN 0-8039-9019-7, p. 121.
- Davies, Norman (1996). Europe: A History. Oxford University Press. p. 699. ISBN 0-19-820171-0.
- "Regnum Poloniae Magnusque Ducatus Lithuaniae – definicja, synonimy, przykłady użycia". sjp.pwn.pl. Retrieved 27 October 2016.
- Ex quo serenissima respublica Poloniae in corpore ad exemplum omnium aliarum potentiarum, titulum regium Borussiae recognoscere decrevit (...)
Antoine-François-Claude Ferrand (1820). "Volume 1". Histoire des trois démembremens de la Pologne: pour faire suite à l'histoire de l'Anarchie de Pologne par Rulhière (in French). Deterville. p. 182.
- the name given by Marcin Kromer in his work Polonia sive de situ, populis, moribus, magistratibus et re publica regni Polonici libri duo, 1577.
- the term used, for instance, in Zbior Deklaracyi, Not I Czynnosci Głownieyszych, Ktore Poprzedziły I Zaszły Pod Czas Seymu Pod Węzłem Konfederacyi Odprawuiącego Się Od Dnia 18. Wrzesnia 1772. Do 14 Maia 1773
- Name used for the common state, Henryk Rutkowski, Terytorium, w: Encyklopedia historii gospodarczej Polski do 1945 roku, t. II, Warszawa 1981, s. 398.
- Richard Buterwick. The Polish Revolution and the Catholic Church, 1788–1792: A Political History. Oxford University Press. 2012. pp. 5, xvii.
- 1791 document signed by the King Stanislaw August "Zareczenie wzaiemne Oboyga Narodow" pp. 1, 5
- Jasienica, Paweł (1997). Polska Jagiellonów. Polska: Prószyński i Spółka. pp. 30–32. ISBN 9788381238816.
- Jasienica 1997, pp. 30–32
- Halecki 1991, p. 52
- Halecki 1991, p. 52
- Halecki 1991, p. 71
- Halecki 1991, p. 52
- Engel, Pál (2001). The Realm of St Stephen: A History of Medieval Hungary, 895–1526. I.B. Tauris Publishers. p. 170. ISBN 1-86064-061-3.
- Halecki, Oscar (1991). Jadwiga of Anjou and the Rise of East Central Europe. Polish Institute of Arts and Sciences of America. pp. 116–117. ISBN 0-88033-206-9.
- Jasienica 1997, p. 63
- Halecki 1991, p. 155
- Manikowska, Halina (2005). Historia dla Maturzysty. Warszawa: Wydawnictwo Szkolne PWN. p. 141. ISBN 83-7195-853-6.
- Butterwick 2021, pp. 12–14
- Butterwick 2021
- Butterwick, Richard (2021). The Polish-Lithuanian Commonwealth, 1733-1795. Yale University Press. p. 14. ISBN 9780300252200.
- Borucki, Marek (2009). Historia Polski do 2009 roku. Polska: Mada. p. 57. ISBN 9788389624598.
- Gierowski, Józef (1986a). Historia Polski 1505–1764. Warsaw: PWN. pp. 92–109. ISBN 83-01-03732-6.
- Butterwick 2021, p. 21
- Butterwick 2021, p. 21
- Pernal, Andrew Boleslaw (2010). Rzeczpospolita Obojga Narodów a Ukraina. Polska: Księgarnia Akademicka. p. 10. ISBN 9788371889738.
- Pernal 2010, p. 10
- Maniecky & Szajnocha 1869, p. 504
- Maniecky, Wojciech; Szajnocha, Karol (1869). Dziennik Literacki (in Polish). Ossoliński. p. 504.
- Maniecky & Szajnocha 1869, p. 504
- The death of Sigismund II Augustus in 1572 was followed by a three-year Interregnum during which adjustments were made in the constitutional system. The lower nobility was now included in the selection process, and the power of the monarch was further circumscribed in favor of the expanded noble class. From that point, the king was effectively a partner with the noble class and constantly supervised by a group of senators.
"The Elective Monarchy". Poland – The Historical Setting. Federal Research Division of the Library of Congress. 1992. Archived from the original on June 4, 2011. Retrieved July 15, 2011.
- Bardach, Juliusz (1987). Historia państwa i prawa polskiego (in Polish). Warszawa: PWN. pp. 216–217.
- Bardach 1987, pp. 216–217
- Stone, Daniel (2001). The Polish-Lithuanian state, 1386–1795 [A History of East Central Europe, Volume IV.] Seattle: University of Washington Press. p. 118. ISBN 0-295-98093-1.
- Besala, Jerzy; Biedrzycka, Agnieszka (2005). Stefan Batory: Polski Słownik Biograficzny (in Polish). Volume XLIII. p. 116.
- Besala & Biedrzycka 2005, p. 116
- Besala & Biedrzycka 2005, p. 117
- Besala & Biedrzycka 2005, p. 116
- Besala & Biedrzycka 2005, pp. 116–117
- Besala & Biedrzycka 2005, pp. 118–119
- Besala & Biedrzycka 2005, pp. 118–119
- Besala & Biedrzycka 2005, pp. 121
- Szujski, Józef (1894). Dzieła Józefa Szujskiego. Dzieje Polski (in Polish). 3. Kraków: Szujski-Kluczycki. p. 139. Retrieved 9 January 2021.
- pisze, Przemek (3 July 2013). "Bitwa pod Byczyną. Zamoyski upokarza Habsburgów i gwarantuje tron Zygmuntowi III - HISTORIA.org.pl - historia, kultura, muzea, matura, rekonstrukcje i recenzje historyczne". Retrieved 16 November 2016.
- Kizwalter, Tomasz (1987). Kryzys Oświecenia a początki konserwatyzmu polskiego (in Polish). Warszawa (Warsaw): Uniwersytet Warszawski. p. 21. Retrieved 3 May 2021.
- Szujski 1894, p. 161
- Peterson, Gary Dean (2014). Warrior Kings of Sweden. The Rise of an Empire in the Sixteenth and Seventeenth Centuries. McFarland, Incorporated, Publishers. ISBN 9781476604114. Retrieved 14 January 2021.
- Peterson 2014, p. 107
- Jędruch, Jacek (1982). Constitutions, Elections, and Legislatures of Poland, 1493-1977. University Press of America. p. 89. ISBN 9780819125095. Retrieved 1 February 2021.
- Dabrowski, Patrice M. (2014). Poland. The First Thousand Years. US: Cornell University Press. p. 168. ISBN 9781501757402. Retrieved 18 February 2021.
- Shubin, Daniel H. (2009). Tsars and Imposters. Russia's Time of Troubles. New York: Algora. p. 201. ISBN 9780875866871. Retrieved 1 February 2021.
- Cooper, J. P. (1979). The New Cambridge Modern History: Volume 4, The Decline of Spain and the Thirty Years War, 1609-48/49. CUP Archive. ISBN 9780521297134. Retrieved 11 April 2019.
- Gillespie, Alexander (2017). The Causes of War. Volume III: 1400 CE to 1650 CE. Portland: Bloomsbury Publishing. p. 194. ISBN 9781509917662. Retrieved 18 February 2021.
- Dyer, Thomas Henry (1861). The History of Modern Europe. From the Fall of Constantinople, in 1453, to the War in the Crimea, in 1857. Volume 2. London: J. Murray. p. 504. Retrieved 20 February 2021.
- Podhorodecki, Leszek (1985). Rapier i koncerz: z dziejów wojen polsko-szwedzkich. Warsaw: Książka i Wiedza. p. 191–200. ISBN 83-05-11452-X.
- Gillespie 2017, p. 141
- Miłobędzki, Adam (1980). Dzieje sztuki polskiej: Architektura polska XVII wieku (in Polish). Polska: Panstwowe Wydawnictwo Naukowe. p. 115. Retrieved 8 January 2021.
- Czapliński, Władysław (1976). Władysław IV i jego czasy [Władysław IV and His Times] (in Polish). Warsaw: PW "Wiedza Poweszechna". pp. 102–118.
- Czapliński 1976, p. 170
- Czapliński 1976, p. 202
- Czapliński 1976, pp. 353–356
- Poland, the knight among nations, Louis Edwin Van Norman, New York: 1907, p. 18.
- William J. Duiker, Jackson J. Spielvogel (2006). The Essential World History: Volume II: Since 1500. Cengage Learning. p. 336. ISBN 0-495-09766-7.
- Norman Davies (1998). Europe: A History. HarperCollins. pp. 657–660. ISBN 978-0-06-097468-8.
- Rey Koslowski (2000). Migrants and citizens: demographic change in the European state system. Cornell University Press. p. 51. ISBN 978-0-8014-3714-4.
- Bartłomiej Szyndler (2009). Racławice 1794. Bellona Publishing. pp. 64–65. ISBN 9788311116061. Retrieved 26 September 2014.
- Sužiedėlis 2011, p. xxv.
- Andrzej Jezierski, Cecylia Leszczyńska, Historia gospodarcza Polski, 2003, s. 68.
- Russia's Rise as a European Power, 1650–1750, Jeremy Black, History Today, Vol. 36 Issue: 8, August 1986.
- Roman, Wanda Krystyna (2003). Działalność niepodległościowa żołnierzy polskich na Litwie i Wileńszczyźnie. Polska: Naukowe Wydawn. Piotrkowskie. p. 23. ISBN 9788388865084. Retrieved 13 February 2021.
- Jan Zamoyski's speech in the Parliament, 1605 Harbottle Thomas Benfield (2009). Dictionary of Quotations (Classical). BiblioBazaar, LLC. p. 254. ISBN 978-1-113-14791-2.
- Bardach 1987, pp. 216–217
- Bardach 1987, pp. 216–217
- Pacy, James S.; James T. McHugh (2001). Diplomats without a Country: Baltic Diplomacy, International Law, and the Cold War (1st ed.). Post Road West, Westport, Connecticut: Greenwood Press. doi:10.1336/0313318786. ISBN 0-313-31878-6. Retrieved 3 September 2006.
- Josef Macha (1974). Ecclesiastical Unification. Pont. Institutum Orientalium Studiorum. p. 154.
- Andrej Kotljarchuk (2006). In the Shadows of Poland and Russia: The Grand Duchy of Lithuania and Sweden in the European Crisis of the Mid-17th Century. Stockholm University. pp. 37, 87. ISBN 978-91-89-31563-1.
- Joanna Olkiewicz, Najaśniejsza Republika Wenecka (Most Serene Republic of Venice), Książka i Wiedza, 1972, Warszawa
- Joseph Conrad, Notes on Life and Letters: Notes on Life and Letters, Cambridge University Press, 2004, ISBN 0-521-56163-9, Google Print, p. 422 (notes)
- Frost, Robert I. The Northern Wars: War, State and Society in northeastern Europe, 1558–1721. Harlow, England; New York: Longman's. 2000. Especially pp. 9–11, 114, 181, 323.
- David Sneath (2007). The headless state: aristocratic orders, kinship society, & misrepresentations of nomadic inner Asia. Columbia University Press. p. 188. ISBN 978-0-231-14054-6.
- M. L. Bush (1988). Rich noble, poor noble. Manchester University Press ND. pp. 8–9. ISBN 0-7190-2381-5.
- Bardach 1987, pp. 216–217
- Frost, Robert I. (2004). After the Deluge; Poland-Lithuania and the Second Northern War, 1655-1660. Cambridge: University Press. ISBN 9780521544023.
- William Christian Bullitt, Jr., The Great Globe Itself: A Preface to World Affairs, Transaction Publishers, 2005, ISBN 1-4128-0490-6, Google Print, pp. 42–43
- John Adams, The Political Writings of John Adams, Regnery Gateway, 2001, ISBN 0-89526-292-4, Google Print, p. 242
- Henry Eldridge Bourne, The Revolutionary Period in Europe 1763 to 1815, Kessinger Publishing, 2005, ISBN 1-4179-3418-2, Google Print p. 161
- Wolfgang Menzel, Germany from the Earliest Period Vol. 4, Kessinger Publishing, 2004, ISBN 1-4191-2171-5, Google Print, p. 33
- Isabel de Madariaga, Russia in the Age of Catherine the Great, Sterling Publishing Company, Inc., 2002, ISBN 1-84212-511-7, Google Print p. 431
- Carl L. Bucki, The Constitution of May 3, 1791 Archived 5 December 2008 at the Wayback Machine, Text of a presentation made at the Polish Arts Club of Buffalo on the occasion of the celebrations of Poland's Constitution Day on 3 May 1996. Retrieved 20 March 2006.
- Piotr Stefan Wandycz. The Price of Freedom: A History of East Central Europe from the Middle Ages to the Present, Routledge (UK), 2001, ISBN 0-415-25491-4, Google Print p. 131.
- Niepodległość. 6. Polska: Fundacja "Polonia Restituta,". 1991. Retrieved 13 February 2021.
- Sobiech, Marcin (2018). "Jak powstawała i co zawiera mapa Rzeczpospolitej Obojga Narodów". Exgeo (in Polish). Marcin Sobiech. Retrieved 16 February 2021.
- Kucharczuk 2011, p. 64
- "shillings - Polish translation – Linguee". Linguee.com. Retrieved 27 April 2018.
- Flisowski, Zbigniew (1985). Bastion u wrót Gdańska. Polska: Nasza Księgarnia. p. 11. ISBN 9788310087799.
- "Pierwsze polskie banknoty". Skarbnica Narodowa (in Polish). Retrieved 16 February 2021.
- Gdańskie Towarzystwo Naukowe (1991). Seria popularno-naukowa "Pomorze Gdańskie" (in Polish). 19. Gdańsk: Towarzystwo Naukowe. p. 149. Retrieved 16 February 2021.
- Kucharczuk, Katarzyna (2011). Polska samorządna; ilustrowane dzieje administracji i samorządu terytorialnego na tle historii Polski. Carta Blanca. p. 66. ISBN 9788377051207. Retrieved 16 February 2021.
- Zsigmond Pál Pach, Zs. P. Pach (1970). The role of East-Central Europe in international trade, 16th and 17th centuries. Akadémiai Kiadó. p. 220.
- Institute of History (Polish Academy of Sciences) (1991). "Volumes 63–66". Acta Poloniae historica. National Ossoliński Institute. p. 42. ISBN 0-88033-186-0.
- Krzysztof Olszewski (2007). The Rise and Decline of the Polish–Lithuanian Commonwealth due to Grain Trade. pp. 6–7.
- Maciej Kobyliński. "Rzeczpospolita spichlerzem Europy". www.polinow.pl (in Polish). Retrieved 28 December 2009.
- Nicholas L. Chirovsky (1984). The Lithuanian-Rus'commonwealth, the Polish domination, and the Cossack-Hetman state. Philosophical Library. p. 367. ISBN 0-8022-2407-5.
- Sven-Olof Lindquist, Birgitta Radhe (1989). Economy and culture in the Baltic, 1650–1700: papers of the VIIIth Visby Symposium held at Gotland's Historical Museum, Visby, August 18th–22th [sic], 1986. Gotlands Fornsal. p. 367. ISBN 91-971048-8-4.
- Sowa, Jan (2015). Inna Rzeczpospolita jest możliwa! Widma przeszłości, wizje przyszłości (in Polish). Polska: WAB. ISBN 9788328022034. Retrieved 16 February 2021.
- "Welcome to Encyclopædia Britannica's Guide to History". Britannica.com. 31 January 1910. Retrieved 1 February 2009.
- Perry Anderson (1979). Lineages of the absolutist state. Verso. p. 285. ISBN 0-86091-710-X.
- Robert Bideleux, Ian Jeffries (2007). A history of Eastern Europe: crisis and change. Taylor & Francis. p. 189. ISBN 978-0-415-36627-4.
- Yves-Marie Bercé (1987). Revolt and revolution in early modern Europe: an essay on the history of political violence. Manchester University Press. p. 151.
- Krzysztof Olszewski (2007). The Rise and Decline of the Polish–Lithuanian Commonwealth due to Grain Trade (PDF). p. 7. Retrieved 22 April 2009.
- Jarmo Kotilaine (2005). Russia's foreign trade and economic expansion in the seventeenth century: windows on the world. BRILL. p. 47. ISBN 90-04-13896-X.
- Allen, Robert. "Economic Structure and agricultural productivity in Europe, 1300–1800" (PDF). Retrieved 5 May 2015.
- kurkowski, Jan (2010). "Jarmarki w województwie lubelskim w XVI w." Pasaż Wiedzy. Muzeum Pałacu Króla Jana III w Wilanowie. Retrieved 16 February 2021.
- Billock, Jennifer (2019). "Follow the Ancient Amber Road". Smithsonian Magazine. Smithsonian. Retrieved 16 February 2021.
- "Drogi handlowe w dawnej Polsce". PWN. Encyklopedia PWN. Retrieved 16 February 2021.
- Polskie Towarzystwo Historyczne (1989). Kwartalnik historyczny. 3–4. Polskie Towarzystwo Historyczne. p. 214. Retrieved 16 February 2021.
- ""Polonaise" carpet". www.museu.gulbenkian.pt. Archived from the original on February 28, 2003. Retrieved May 18, 2009.
- Stachowicz 1894, p. 279
- Tomaszewska, A. "Wolna elekcja i zasady jej funkcjonowania" (PDF). Tomaszewska. Retrieved 16 February 2021.
- Juliusz Bardach, Zdzisław Kaczmarczyk, Bogusław Leśnodorski (1957). Historia państwa i prawa Polski do roku 1795 (in Polish). 2. Polska: Państwowe Wydawn. Naukowe. pp. 306–308. Retrieved 16 February 2021.
- Richard Brzezinski (1988). Polish Armies 1569-1696 (2). Osprey Publishing. p. 11. ISBN 978-0-85045-744-5.
- Stachowicz, Michał (1894). Wojsko polskie Kościuszki w roku 1794 (in Polish). Poznań: Księgarnia Katolicka. pp. 23–25. Retrieved 14 February 2021.
- Stachowicz 1894, pp. 23–25
- Juliusz Bardach, Boguslaw Lesnodorski, and Michal Pietrzak, Historia panstwa i prawa polskiego (Warsaw: Paristwowe Wydawnictwo Naukowe), 1987, p. 229.
- Brzezinski (1988), p. 6.
- Bardach et al. (1987), pp. 229–230.
- Brzezinski (1987), p. 10.
- Bardach et al. (1987), pp. 227–228.
- Zwoliński, Stefan (1995). Naczelni wodzowie i wyżsi dowódcy Polskich Sił Zbrojnych na Zachodzie (in Polish). Polska: Wojskowy Instytut Historyczny. p. 12. ISBN 9788386268276. Retrieved 13 February 2021.
- J. K. Fedorowicz; Maria Bogucka; Henryk Samsonowicz (1982). A Republic of nobles: studies in Polish history to 1864. CUP Archive. p. 209. ISBN 0-521-24093-X.
- Jacek F. Gieras (1994). "Volume 30 of Monographs in electrical and electronic engineering, Oxford science publications". Linear induction drives. Oxford University Press. p. V. ISBN 0-19-859381-3.
- Norman Davies (2005). God's Playground: A History of Poland. Columbia University Press. p. 167. ISBN 0-231-12819-3.
- "Setting Sail". www.warsawvoice.pl. 29 May 2003. Retrieved 21 May 2009.
- Paul Peucker. "Jan Amos Comenius (1592–1670)" (PDF). www.moravian.org. Archived from the original (PDF) on September 2, 2009. Retrieved May 18, 2009.
- Jacek Jędruch (1982). Constitutions, Elections, and Legislatures of Poland, 1493-1977: A Guide to Their History. University Press of America. p. 125. ISBN 978-08-19-12509-5.
- "Portraits collection". www.muzeum.leszno.pl. Retrieved 18 May 2009.
- Mariusz Karpowicz (1991). Baroque in Poland. Arkady. p. 68. ISBN 83-213-3412-1.
- Łyczak, Bartłomiej (1 January 2011). "The Coffin Portrait and Celebration of Death in Polish–Lithuanian Commonwealth in the Modern Period". IKON. 4: 233–242. doi:10.1484/J.IKON.5.100699.
- Szablowski, Jerzy (1975). Arrasy flamandzkie w zamku królewskim na Wawelu (in Polish). Polska: Arkady. p. 15. Retrieved 13 February 2021.
- Szablowski, Jerzy (1975). Arrasy flamandzkie w zamku królewskim na Wawelu (in Polish). Polska: Arkady. Retrieved 13 February 2021.
- Orlińska-Mianowska, Ewa (2008). Fashion world of the 18th and early 19th century. Polska: Bosz. ISBN 9788387730727. Retrieved 13 February 2021.
- Singleton, Esther (12 December 2019). "French and English furniture distinctive styles and periods described and illustrated". Good Press – via Google Books.
- Dialog. Miesiȩcznik poświȩcony dramaturgii współczesnej, teatralnej, filmowej, radiowej, telewizyjnej. 11. Polska: RSW "Prasa". 1966. p. 6. Retrieved 13 February 2021.
- Waugh, Norah (1968). The Cut of Women's Clothes. London: Routledge. pp. 72–73. ISBN 0-87830-026-0.
- Lubliner, Ludwig (1858). Obrona Żydów zamieszkałych w krajach polskich od niesłusznych zarzutów i fałszywych oskarzeń. Brussels: C. Vanderauwer. p. 7. Retrieved 13 February 2021.
- Muthesius, Stefan (1994). Polska; art, architecture, design 966-1990. Langewiesche Köster. p. 34. ISBN 9783784576121. Retrieved 13 February 2021.
- Państwowy Instytut Badania Sztuki Ludowej (1974). "Volumes 28–29". Polska sztuka ludowa (Polish Folk Art). Państwowy Instytut Sztuki. p. 259.
- Paul Robert Magocsi (1996). A history of Ukraine. University of Toronto Press. pp. 286–287. ISBN 0-8020-7820-6.
- Michael J. Mikoś. "Baroque". www.staropolska.pl. Retrieved 13 May 2009.
- Rolska-Boruch, Irena (2003). "Domy pańskie" na Lubelszczyźnie od późnego gotyku do wczesnego baroku. Polska: Wydawnictwo KUL. ISBN 9788373630291. Retrieved 13 February 2021.
- Kowalczyk, Jerzy (1973). Sebastiano Serlio a sztuka polska. Polska: Zakład Narodowy im. Ossolińskich. p. 119. Retrieved 13 February 2021.
- Miłobędzki, Adam (1994). The architecture of Poland: a chapter of the European heritage. Kraków: International Cultural Centre. p. 110. ISBN 9788385739142.
- Zdzisław Klimczuk, Józef Garliński (1996). Most Holandia - Polska (in Polish). Polska: Bis Press. p. 32. Retrieved 13 February 2021.
- Karpowicz, Mariusz (1994). Sztuki polskiej drogi dziwne (in Polish). Excalibur. p. 47. ISBN 9788390015286. Retrieved 13 February 2021.
- Feliks Gryglewicz, Romuald Łukaszyk, Wincenty Granat, Zygmunt Sułowski (1973). Encyklopedia katolicka: Kinszasa-Krzymuska. Lublin: Tow. Nauk. Katolickiego Uniwersytetu Lubelskiego. p. 1189. Retrieved 13 February 2021.
- "Palaces and Castles in a Lion Country". www.lvivtoday.com.ua. 2 June 2008. Retrieved 19 May 2009.
- Snopek, Jerzy (1999). Oświecenie. Polska: Wydawnictwo Naukowe PWN. p. 134. ISBN 9788301129170. Retrieved 13 February 2021.
- Kazimierz Maliszewski (1990). Obraz świata i Rzeczypospolitej w polskich gazetach rękopiśmiennych z okresu późnego baroku: studium z dziejów kształtowania się i rozpowszechniania sarmackich stereotypów wiedzy i informacji o "theatrum mundi" (in Polish). Schr. p. 79. ISBN 83-231-0239-2. W każdym razie "królowa bez korony i pierwsza dama Rzeczypospolitej", jak współcześni określali Sieniawską, zasługuje na biografię naukową.
- Andrzej Wasko, Sarmatism or the Enlightenment: The Dilemma of Polish Culture, Sarmatian Review XVII:2, online
- Dziejochciejstwo, dziejokrętactwo, Janusz Tazbir, Polityka 6 (2591) 10 February 2007 (in Polish)
- Paradowski, Ryszard (2005). Unia Europejska a społeczeństwo obywatelskie (in Polish). Poznań: Wydawn. Nauk. Instytutu Nauk Politycznych i Dziennikarstwa Uniwersytetu im. Adama Mickiewicza. p. 168. ISBN 9788387704940.
- Kopczyński, Michał; Tygielski, Wojciech (2010). Pod wspólnym niebem. Narody dawnej Rzeczypospolitej (in Polish). Warszawa: Bellona. ISBN 9788311117242.
- Kopczyński & Tygielski 2010
- Kopczyński & Tygielski 2010, p. 236
- Kopczyński & Tygielski 2010, p. 237
- Total and Jewish population based on Frazee; others are estimations from Pogonowski (see the following reference). Charles A. Frazee, World History the Easy Way, Barron's Educational Series, ISBN 0-8120-9766-1, Google Print, 50
- R. B. Wernham, The new Cambridge modern history: The Counter-Reformation and price revolution, 1559–1610, 1968, Cambridge University Press, Google print p. 377
- Matthew P. Romaniello, Charles Lipp. Contested Spaces of Nobility in Early Modern Europe. Ashgate Publishing, Ltd. 2011. p. 233.
- Polish Sociological Review (in Polish). Polish Sociological Association. 2007. p. 96.
- Kopczyński & Tygielski 2010, p. 201
- Kopczyński & Tygielski 2010, pp. 25–83
- Kopczyński & Tygielski 2010, pp. 29–38
- Stone, Daniel, The Polish-Lithuanian State, 1386–1795, Seattle and London: University of Washington Press, 2001.
- Norman Davies, God's Playground. A History of Poland, Vol. 1: The Origins to 1795, Vol. 2: 1795 to the Present. Oxford: Oxford University Press. Page 126. ISBN 0-19-925339-0 / ISBN 0-19-925340-4
- Piekarski, Adam (1979). Freedom of Conscience and Religion in Poland. Interpress Publishers. p. 31.
- "Memory of the World Register Nomination Form". portal.unesco.org. Retrieved 2 August 2011.
- Linda Gordon, Cossack Rebellions: Social Turmoil in the Sixteenth Century Ukraine, SUNY Press, 1983, ISBN 0-87395-654-0, Google Print, p. 51
- Serhii Plokhy (2006). The origins of the Slavic nations: premodern identities in Russia, Ukraine, and Belarus. Cambridge University Press. p. 169. ISBN 0-521-86403-8.
- "Lemberg". Catholic Encyclopedia. Retrieved 3 September 2010.
- Peter Kardash, Brett Lockwood (1988). Ukraine and Ukrainians. Fortuna. p. 134. ISBN 9780731675036.
- Magocsi, Paul R. (2010). A History of Ukraine: The Land and Its Peoples. University of Toronto Press. p. 190. ISBN 978-1442610217.
- "Poland, history of", Encyclopædia Britannica from Encyclopædia Britannica Premium Service. . Retrieved 10 February 2006 and "Ukraine", Encyclopædia Britannica from Encyclopædia Britannica Premium Service. . Retrieved 14 February 2006.
- "European Jewish Congress – Poland". Eurojewcong.org. Archived from the original on 11 December 2008. Retrieved 1 February 2009.
- Thus, at the time of the first partition in 1772, the Polish–Lithuanian Commonwealth consisted of 43 per cent Latin Catholics, 33 per cent Greek Catholics, 10 per cent Christian Orthodox, 9 per cent Jews and 4 per cent Protestant Willfried Spohn, Anna Triandafyllidou (2003). Europeanisation, national identities, and migration: changes in boundary constructions between Western and Eastern Europe. Routledge. p. 127. ISBN 0-415-29667-6.
- Artūras Tereškinas (2005). Imperfect communities: identity, discourse and nation in the seventeenth-century Grand Duchy of Lithuania. Lietuvių literatūros ir tautosakos institutas. p. 31. ISBN 9955-475-94-3.
- Aleksander Gieysztor, ed. (1988). Rzeczpospolita w dobie Jana III (Commonwealth during the reign of John III). Royal Castle in Warsaw. p. 45.
- Anatol Lieven, The Baltic Revolution: Estonia, Latvia, Lithuania and the Path to Independence, Yale University Press, 1994, ISBN 0-300-06078-5, Google Print, p. 48
- Stephen Barbour, Cathie Carmichael, Language and Nationalism in Europe, Oxford University Press, 2000, ISBN 0-19-925085-5, Google Print p. 184
- Östen Dahl, Maria Koptjevskaja-Tamm, The Circum-Baltic Languages: Typology and Contact, John Benjamins Publishing Company, 2001, ISBN 90-272-3057-9, Google Print, p. 45
- Glanville Price, Encyclopedia of the Languages of Europe, Blackwell Publishing, 1998, ISBN 0-631-22039-9, Google Print, p. 30
- Mikulas Teich & Roy Porter, The National Question in Europe in Historical Context, Cambridge University Press, 1993, ISBN 0-521-36713-1, Google Print, p. 295
- Kevin O'Connor, Culture And Customs of the Baltic States, Greenwood Press, 2006, ISBN 0-313-33125-1, Google Print, p. 115
- Daniel. Z Stone, A History of East Central Europe, p. 46.
- Karin Friedrich et al., The Other Prussia: Royal Prussia, Poland and Liberty, 1569–1772, Cambridge University Press, 2000, ISBN 0-521-58335-7, Google Print, p. 88
- Tomasz Kamusella (2008). The Politics of Language and Nationalism in Modern Central Europe. Palgrave Macmillan. p. 115. ISBN 978-0-230-55070-4.
- L'union personnelle polono-saxonne contribua davantage à faire connaître en Pologne le français que l'allemand. Cette fonction de la langue française, devenue l'instrument de communication entre les groupes dirigeants des deux pays. Polish Academy of Sciences Institute of History (1970). "Volume 22". Acta Poloniae historica (in French). National Ossoliński Institute. p. 79.
- They were the first Catholic schools in which one of the main languages of instruction was Polish. [...] Although he followed Locke in attaching weight to the native language, in general Latin lost ground to French rather than Polish. Richard Butterwick (1998). Poland's last king and English culture: Stanisław August Poniatowski, 1732–1798. Oxford University Press. p. 70. ISBN 0-19-820701-8.
- Руська (Волинська) метрика
- Although still sometimes in use by the end of the 17th century, and despite the lack of an official decree like the one issued for the Grand Duchy chancellery, there was no separate Ruthenian Metrica after 1673.
- Piotr Eberhardt, Jan Owsinski, Ethnic Groups and Population Changes in Twentieth-century Central-Eastern Europe: History, Data, Analysis, M.E. Sharpe, 2003, ISBN 0-7656-0665-8, Google Print, p. 177
- Östen Dahl, Maria Koptjevskaja-Tamm, The Circum-Baltic Languages: Typology and Contact, John Benjamins Publishing Company, 2001, ISBN 90-272-3057-9, Google Print, p. 41
- Zinkevičius, Z. (1993). Rytų Lietuva praeityje ir dabar. Vilnius: Mokslo ir enciklopedijų leidykla. p. 70. ISBN 5-420-01085-2.
Official usage of Lithuanian language in the 16th century Lithuania's cities proves magistrate's decree of Wilno city, which was sealed by Žygimantas Augustas' in 1552...//Courts juratory were written in Lithuanian language. In fact, such [courts juratory written in Lithuanian] survived from the 17th century...
- ""Mes Wladislaus..." a letter from Wladyslaw Vasa issued in 1639 written in Lithuanian language". Retrieved 3 September 2006.
- Ališauskas, V.; L. Jovaiša; M. Paknys; R. Petrauskas; E. Raila; et al. (2001). Lietuvos Didžiosios Kunigaikštijos kultūra. Tyrinėjimai ir vaizdai. Vilnius. p. 500. ISBN 9955-445-26-2.
In 1794 the Government's declarations were also issued in Lithuanian.
- Daniel. Z Stone, A History of East Central Europe, p. 4.
- Czesław Miłosz, The History of Polish Literature, University of California Press, 1983, ISBN 0-520-04477-0, Google Print, p. 108
- Jan K. Ostrowski, Land of the Winged Horsemen: Art in Poland, 1572–1764, Yale University Press, 1999, ISBN 0-300-07918-4, Google Print, p. 27
- Joanna B. Michlic (2006). Poland's threatening other: the image of the Jew from 1880 to the present. U of Nebraska Press. p. 42. ISBN 0-8032-3240-3.
- Karol Zierhoffer, Zofia Zierhoffer (2000). Nazwy zachodnioeuropejskie w języku polskim a związki Polski z kulturą Europy (in Polish). Wydawnictwo Poznańskiego Towarzystwa Przyjaciół Nauk. p. 79. ISBN 83-7063-286-6. Podobną opinię przekazał nieco późnej, w 1577 r. Marcin Kromer "Za naszej pamięci weszli [...] do głównych miast Polski kupcy i rzemieślnicy włoscy, a język ich jest także częściowo w użyciu, mianowicie wśród wytworniejszych Polaków, którzy chętnie podróżują do Włoch".
- Rosemary A. Chorzempa (1993). Polish roots. Genealogical Pub. ISBN 0-8063-1378-1.
- Jan K. Ostrowski, ed. (1999). Art in Poland, 1572–1764: land of the winged horsemen. Art Services International. p. 32. ISBN 0-88397-131-3. In 1600 the son of the chancellor of Poland was learning four languages: Latin, Greek, Turkish, and Polish. By the time he had completed his studies, he was fluent not only in Turkish but also in Tatar and Arabic.
- Lola Romanucci-Ross; George A. De Vos; Takeyuki Tsuda (2006). Ethnic identity: problems and prospects for the twenty-first century. Rowman Altamira. p. 84. ISBN 0-7591-0973-7.
- Barile, Davide (2019). Historic Power Europe; A Post-Hegelian Interpretation of European Integration. New York: Taylor & Francis. ISBN 9781000731132.
- As stated, for instance, by the preamble of the Constitution of the Republic of Poland of 1997.
- Alfonsas Eidintas, Vytautas Zalys, Lithuania in European Politics: The Years of the First Republic, 1918–1940, Palgrave, 1999, ISBN 0-312-22458-3. Print, p. 78
- ""Zobaczyć Kresy". Grzegorz Górny. Rzeczpospolita 23 August 2008 (in Polish)" (in Polish). Rp.pl. 23 August 2008. Retrieved 1 February 2009.
- Sarah Johnstone (2008). Ukraine. Lonely Planet. p. 27. ISBN 978-1-74104-481-2.
- Stephen K. Batalden, Sandra L. Batalden (1997). The newly independent states of Eurasia: handbook of former Soviet republics. Greenwood Publishing Group. p. 45. ISBN 0-89774-940-5.
- Richard M. Golden (2006). "Volume 4". Encyclopedia of witchcraft: the Western tradition. ABC-CLIO. p. 1039. ISBN 1-57607-243-6.
- Girolamo Imbruglia; Rolando Minuti; Luisa Simonutti (2007). Traduzioni e circolazione delle idee nella cultura europea tra '500 e '700 (in Italian). Bibliopolis. p. 76. ISBN 978-88-70-88537-8.
- Daniel H. Cole (2002). Pollution and property: comparing ownership institutions for environmental protection. Cambridge University Press. p. 106. ISBN 0-521-00109-9.
- (in English) Gordon Campbell (2006). The Grove encyclopedia of decorative arts. Oxford University Press US. p. 13. ISBN 01-95189-48-5.
- Gwei-Djen Lu; Joseph Needham; Vivienne Lo (2002). Celestial lancets: a history and rationale of acupuncture and moxa. Routledge. p. 284. ISBN 07-00714-58-8.
- (in English) Ian Ridpath. "Taurus Poniatovii - Poniatowski's bull". www.ianridpath.com. Retrieved 18 May 2009.
- "Old City of Zamość". UNESCO World Heritage Centre. 23 September 2009. Retrieved 15 September 2011.
- After a fire had destroyed the wooden synagogue in 1733, Stanislaw Lubomirski decided to found a new brick synagogue building. (in English) Polin Travel. "Lancut". www.jewish-guide.pl. Retrieved 2 September 2010.
- Guillaume de Lamberty (1735). "Volume 3". Mémoires pour servir à l'histoire du XVIIIe siècle, contenant les négociations, traitez, résolutions et autres documents authentiques concernant les affaires d'état: avec le supplément aux années MDCXCVI-MDCCIII (in French). p. 343.
Généreux et Magnifiques Seigneurs les Sénateurs et autres Ordres de la Sérénissime République de Pologne et du grand Duché de Lithuanie
- Francis W. Carter (1994). Trade and urban development in Poland: an economic geography of Cracow, from its origins to 1795 – Volume 20 of Cambridge studies in historical geography. Cambridge University Press. pp. 186, 187. ISBN 978-0-521-41239-1.
- Daniel Stone (2001). The Polish–Lithuanian state, 1386–1795. University of Washington Press. p. 221. ISBN 978-0-295-98093-5.
- Robert Bideleux, Ian Jeffries (1998). A history of eastern Europe: crisis and change. Routledge. p. 126. ISBN 978-0-415-16111-4.
- Politics and reformations: communities, polities, nations, and empires. 2007 p. 206.
- Zeitschrift für Ostmitteleuropa-Forschung. 2006, Vol. 55; p. 2.
- Thomas A. Brady, Christopher Ocker; entry by David Frick (2007). Politics and reformations: communities, polities, nations, and empires : essays in honor of Thomas A. Brady, Jr. Brill Publishers. p. 206. ISBN 978-90-04-16173-3.
- Marcel Cornis-Pope, John Neubauer; essay by Tomas Venclova (2004). History of the literary cultures of East-Central Europe: junctures and disjunctures in the 19th and 20th centuries (Volume 2). John Benjamins Publishing Company. p. 11. ISBN 978-90-272-3453-7.CS1 maint: multiple names: authors list (link)
- Bardach, Juliusz; Lesnodorski, Boguslaw; Pietrzak, Michal (1987). Historia panstwa i prawa polskiego. Warsaw: Paristwowe Wydawnictwo Naukowe.
- Brzezinski, Richard (1987). Polish Armies (1): 1569–1696. Men-At-Arms Series. 184. Osprey Publishing. ISBN 0-85045-736-X.
- Brzezinski, Richard (1988). Polish Armies (2): 1569–1696. Men-At-Arms Series. 188. Osprey Publishing. ISBN 0-85045-744-0.
- Frost, Robert (2015). The Oxford History of Poland–Lithuania. I: The Making of the Polish–Lithuanian Union, 1385–1569. Oxford University Press. ISBN 978-0198208693.
- Litwin, Henryk (October 2016). "Central European Superpower". BUM Magazine.
- Norkus, Zenonas (2017). An Unproclaimed Empire: The Grand Duchy of Lithuania: From the Viewpoint of Comparative Historical Sociology of Empires. Routledge. ISBN 978-1138281547.
- Rowell, S. C. (2014). Lithuania Ascending: A Pagan Empire within East-Central Europe, 1295–1345. Cambridge Studies in Medieval Life and Thought: Fourth Series. Cambridge University Press. ISBN 978-1107658769.
- Rowell, S. C.; Baronas, Darius (2015). The Conversion of Lithuania. From Pagan Barbarians to Late Medieval Christians. Vilnius: Institute of Lithuanian Literature and Folklore. ISBN 978-6094251528.
- Stone, Daniel Z. (2014). The Polish–Lithuanian State, 1386–1795. University of Washington Press. ISBN 978-0295803623.
- Sužiedėlis, Saulius A. (2011). Historical Dictionary of Lithuania (2 ed.). Scarecrow Press. ISBN 978-0810875364.
|Wikimedia Commons has media related to Polish-Lithuanian Commonwealth.|
- (in Polish and English) Commonwealth of Diverse Cultures: Poland's Heritage
- (in Polish) Knowledge passage
https://library.kiwix.org/wikipedia_en_top_maxi/A/Polish%E2%80%93Lithuanian_Commonwealth | 21
20 | Chemical structure and bonding
This chapter provides a review of material covered in a standard freshman general-chemistry course. Organic chemistry studies the properties and reactions of organic compounds.
1: Structure and Bonding
Chemical bonding describes a variety of interactions that hold atoms together in chemical compounds. Chemical bonds are the connections between atoms in a molecule. These bonds include strong intramolecular interactions, such as covalent and ionic bonds; they are related to, but distinct from, weaker intermolecular forces, such as dipole-dipole interactions, London dispersion forces, and hydrogen bonding. The weaker forces will be discussed in a later concept. Chemical bonds: This picture shows examples of chemical bonding using Lewis dot notation.
Hydrogen and carbon are not bonded, while in water there is a single bond between each hydrogen and oxygen. Bonds, especially covalent bonds, are often represented as lines between bonded atoms. Acetylene has a triple bond, a special type of covalent bond that will be discussed later. Chemical bonds are the forces of attraction that tie atoms together.
The nature of the interaction between the atoms depends on their relative electronegativity. Atoms with equal or similar electronegativity form covalent bonds, in which the valence electron density is shared between the two atoms. The electron density resides between the atoms and is attracted to both nuclei.
This type of bond forms most frequently between two non-metals. When there is a greater electronegativity difference than between covalently bonded atoms, the pair of atoms usually forms a polar covalent bond. The electrons are still shared between the atoms, but the electrons are not equally attracted to both elements. As a result, the electrons tend to be found near one particular atom most of the time.
Again, polar covalent bonds tend to occur between non-metals. Finally, for atoms with the largest electronegativity differences (such as metals bonding with nonmetals), the bonding interaction is called ionic, and the valence electrons are typically represented as being transferred from the metal atom to the nonmetal.
Once the electrons have been transferred to the non-metal, both the metal and the non-metal are considered to be ions. The two oppositely charged ions attract each other to form an ionic compound. Covalent interactions are directional and depend on orbital overlap, while ionic interactions have no particular directionality. Each of these interactions allows the atoms involved to gain eight electrons in their valence shell, satisfying the octet rule and making the atoms more stable.
These atomic properties help describe the macroscopic properties of compounds. For example, smaller covalent compounds that are held together by weaker bonds are frequently soft and malleable. On the other hand, longer-range covalent interactions can be quite strong, making their compounds very durable. Ionic compounds, though composed of strong bonding interactions, tend to form brittle crystalline lattices.
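To make the qualitative electronegativity trend above easier to apply, here is a minimal illustrative sketch that is not part of the original material: it classifies a bond from the difference in Pauling electronegativities of the two atoms. The 0.4 and 1.7 cutoffs are common textbook rules of thumb rather than sharp physical boundaries, and the small electronegativity table and function name are assumptions made only for this example.

```python
# Rough bond-type classifier based on the difference in Pauling electronegativities.
# The 0.4 and 1.7 cutoffs are textbook rules of thumb, not exact physical limits.

PAULING_EN = {"H": 2.20, "C": 2.55, "N": 3.04, "O": 3.44,
              "F": 3.98, "Na": 0.93, "Mg": 1.31, "Cl": 3.16}  # approximate values

def bond_type(element_a: str, element_b: str) -> str:
    """Classify an A-B bond as nonpolar covalent, polar covalent, or ionic."""
    diff = abs(PAULING_EN[element_a] - PAULING_EN[element_b])
    if diff < 0.4:
        return "nonpolar covalent"
    if diff < 1.7:
        return "polar covalent"
    return "ionic"

print(bond_type("C", "H"))    # nonpolar covalent (difference ~0.35)
print(bond_type("H", "O"))    # polar covalent   (difference ~1.24)
print(bond_type("Na", "Cl"))  # ionic            (difference ~2.23)
```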
Ionic bonds are a subset of chemical bonds that result from the transfer of one or more valence electrons from one atom, typically a metal, to another, typically a nonmetal. This electron exchange results in an electrostatic attraction between the two atoms called an ionic bond. An atom that loses one or more valence electrons to become a positively charged ion is known as a cation, while an atom that gains electrons and becomes negatively charged is known as an anion.
This exchange of valence electrons allows ions to achieve electron configurations that mimic those of the noble gases, satisfying the octet rule. The octet rule states that an atom is most stable when there are eight electrons in its valence shell. The smallest atoms, such as hydrogen, helium, and lithium, tend to satisfy the duet rule instead, being most stable with two electrons in their valence shell. By satisfying the duet rule or the octet rule, ions are more stable. An anion is indicated by a negative superscript charge written to the right of the atom's symbol.
For example, a sodium atom that loses an electron becomes the sodium cation, Na+. Similarly, if a chlorine atom gains an extra electron, it becomes the chloride ion, Cl−. Both ions form because the ion is more stable than the atom due to the octet rule. Once the oppositely charged ions form, they are attracted by their positive and negative charges and form an ionic compound. Ionic bonds are also formed when there is a large electronegativity difference between two atoms. This difference causes an unequal sharing of electrons such that one atom completely loses one or more electrons and the other atom gains one or more electrons, such as in the creation of an ionic bond between a metal atom (sodium) and a nonmetal (fluorine).
Formation of sodium fluoride: The transfer of electrons and subsequent attraction of oppositely charged ions. To determine the chemical formulas of ionic compounds, two conditions must be satisfied: each ion must obey the octet rule, and the ions must combine in a ratio that leaves the compound electrically neutral. For example, magnesium most commonly forms the Mg2+ ion. This is because Mg has two valence electrons and it would like to get rid of those two electrons to obey the octet rule.
Fluorine has seven valence electrons and usually forms the F− ion because it gains one electron to satisfy the octet rule. Two F− ions are needed to balance the 2+ charge on magnesium, so the formula of the compound is MgF2. The subscript two indicates that there are two fluorines that are ionically bonded to magnesium.
Covalent bonds are a class of chemical bonds where valence electrons are shared between two atoms, typically two nonmetals. The formation of a covalent bond allows the nonmetals to obey the octet rule and thus become more stable. Covalent bonding requires a specific orientation between atoms in order to achieve the overlap between bonding orbitals.
Sigma bonds are the strongest type of covalent interaction and are formed via the overlap of atomic orbitals along the orbital axis. The overlapped orbitals allow the shared electrons to move freely between atoms. Pi bonds are a weaker type of covalent interaction and result from the overlap of two lobes of the interacting atomic orbitals above and below the orbital axis. Unlike an ionic bond, a covalent bond is stronger between two atoms with similar electronegativity.
In non-polar covalent bonds, the electrons are equally shared between the two atoms. For atoms with differing electronegativity, the bond will be a polar covalent interaction, where the electrons will not be shared equally. Ionic solids are generally characterized by high melting and boiling points along with brittle, crystalline structures. Covalent compounds, on the other hand, have lower melting and boiling points. Unlike ionic compounds, they are often not soluble in water and do not conduct electricity when solubilized.
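As a quick reference for the sigma and pi description above, the following sketch (added for illustration and not taken from the source) maps a bond order to the usual composition of a simple covalent bond: a single bond is one sigma bond, a double bond is one sigma plus one pi, and a triple bond is one sigma plus two pi, sharing 2, 4, or 6 electrons respectively. The function name is invented for the example.

```python
def covalent_bond_composition(bond_order: int) -> dict:
    """Sigma/pi make-up and shared-electron count for a simple covalent bond."""
    if bond_order not in (1, 2, 3):
        raise ValueError("bond order must be 1 (single), 2 (double), or 3 (triple)")
    return {
        "sigma": 1,                      # every simple covalent bond contains one sigma bond
        "pi": bond_order - 1,            # any additional bonds are pi bonds
        "shared_electrons": 2 * bond_order,
    }

print(covalent_bond_composition(3))  # {'sigma': 1, 'pi': 2, 'shared_electrons': 6}, e.g. acetylene
```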
Key Takeaways / Key Points: Chemical bonds are forces that hold atoms together to make compounds or molecules. Chemical bonds include covalent, polar covalent, and ionic bonds. Atoms with relatively similar electronegativities share electrons between them and are connected by covalent bonds.
Atoms with large differences in electronegativity transfer electrons to form ions. The ions then are attracted to each other. This attraction is known as an ionic bond. Key Terms: bond: A link or force between neighboring atoms in a molecule or compound.
ionic bond: an attraction that usually forms between a metal and a non-metal. covalent bond: an interaction that typically forms between two non-metals.
Ionic Bonds: Ionic bonds are a subset of chemical bonds that result from the transfer of valence electrons, typically between a metal and a nonmetal. Learning Objectives: Summarize the characteristic features of ionic bonds. Key Takeaways / Key Points: Ionic bonds are formed through the exchange of valence electrons between atoms, typically a metal and a nonmetal. The loss or gain of valence electrons allows ions to obey the octet rule and become more stable.
Ionic compounds are typically neutral. Therefore, ions combine in ways that neutralize their charges. Key Terms: valence electrons: The electrons of an atom that can participate in the formation of chemical bonds with other atoms. They are the furthest electrons from the nucleus. Covalent Bonds: Covalent bonding involves two atoms, typically nonmetals, sharing valence electrons. Learning Objectives: Differentiate between covalent and ionic bonds.
Key Takeaways / Key Points: Covalent bonds involve two atoms, typically nonmetals, that share electron density to form strong bonding interactions.
Covalent bonds include single, double, and triple bonds and are composed of sigma and pi bonding interactions where 2, 4, or 6 electrons are shared respectively. Covalent compounds typically have lower melting and boiling points than ionic compounds. Key Terms: electronegativity: The tendency of an atom or molecule to attract electrons and thus form bonds.
Structure and Bonding
A chemical bond is a lasting attraction between atoms, ions or molecules that enables the formation of chemical compounds. The bond may result from the electrostatic force of attraction between oppositely charged ions as in ionic bonds or through the sharing of electrons as in covalent bonds. The strength of chemical bonds varies considerably; there are "strong bonds" or "primary bonds" such as covalent, ionic and metallic bonds, and "weak bonds" or "secondary bonds" such as dipole-dipole interactions, the London dispersion force and hydrogen bonding. Since opposite charges attract via a simple electromagnetic force, the negatively charged electrons that are orbiting the nucleus and the positively charged protons in the nucleus attract each other. | https://sicm1.org/and-pdf/256-chemical-structure-and-bonding-pdf-357-727.php | 21
15 | Care work has long been considered the work that makes all other work possible.
The term “care work” encompasses both paid and unpaid work. The International Labor Organization includes two overlapping activities in their definition of care work: direct, personal and relational activities, like caring for children or nursing someone who is ill, as well as indirect care, like cooking and cleaning.
Most of the care work done around the world is unpaid and done by women and girls, often from marginalized groups. The amount of time women spend doing unpaid caregiving in comparison to men has profound impacts on economic inequality across gender.
And with debate raging on Capitol Hill about providing assistance to caregivers in the new infrastructure bill, and with the aid prescribed in President Joe Biden's American Jobs Plan, understanding what care workers go through in their daily lives is vital to providing them the assistance they deserve.
The undervalued nature of care work also has ramifications for paid care workers. Care work has already been one of the fastest-growing sectors of the American economy, the Institute for Women’s Policy Research finds. The number of these jobs, which tend to pay less than the median annual wage across all sectors, is expected to expand further as the elderly population in the United States grows.
Domestic workers are one particular category of care workers. Domestic workers, whether they are hired by an individual or through an agency, do a wide range of work, from cleaning to personal care. The common denominator is that they work in private homes. Like all care work, domestic work is heavily gendered, as analysis from the Economic Policy Institute shows.
Domestic work is also heavily racialized. Domestic workers are more likely than all other workers to be immigrants, and undocumented workers in the sector face additional vulnerabilities.
In the United States, domestic work is deeply entwined with the legacy of slavery. This legacy is why domestic workers, along with agricultural workers, were left out of the labor protections granted in the 1930s, including the collective bargaining protections of the National Labor Relations Act. This exclusion continued in various subsequent labor protections.
Domestic work is also borne out of the history of settler colonialism in the United States. Indigenous people worked during the colonial period as domestic servants, both as enslaved or waged laborers. Policies created by the Bureau of Indian Affairs institutionalized the practice. This was a part of the U.S. policy of assimilation – Indigenous girls were placed in boarding schools to learn about maintaining a household, and then placed in the homes of white settler colonial families as domestic workers.
The current realities of domestic workers reflect these racist histories. Domestic workers continue to be excluded from a variety of labor protections to this day. Working in private residences leaves domestic workers particularly vulnerable. Surveys done by the Institute for Policy Studies and National Domestic Workers Alliance have highlighted the lack of worker protections and potential for sexual harassment and abuse. This lack of protection goes hand in hand with the devaluation of domestic work. As the Economic Policy Institute shows, domestic workers face high poverty rates.
Groups like the National Domestic Worker Alliance have long organized and advocated to enshrine rights and benefits tailored to the unique challenges domestic workers face. Several cities and states have passed domestic worker bills of rights, and Rep. Pramila Jayapal and Sen. Kamala Harris have introduced a National Domestic Worker Bill of Rights in Congress.
Implementing worker protections and benefits is one crucial aspect of reducing inequalities in the care economy. So too is investment in care. A first of its kind study from the UCLA Labor Center sheds light on the California households that employ domestic workers, which total as many as 2 million, 44 percent of which are low-income.
Poor pay is also prevalent in California, UCLA found. Four in ten employees are paid a low wage, defined as less than two-thirds the full-time median wage, which at the time of the study was $13.83 an hour. Seventeen percent were paid below the minimum wage. One in five moderate and high-income households paid a low wage despite being able to pay more, while a third of low-income households paid higher wages.
Some states have implemented programs to begin to offset the costs of providing care. In Hawaii, the Kupuna Caregivers Program provides financial assistance to employed caregivers to offset the cost of care so they can remain in the workforce. Washington has created a social insurance program to help cover the costs of elder care. The National Academy of Social Insurance has laid out a menu of options for states building towards universal family care. Modeling from the International Trade Union Confederation also shows that investment in the care economy is a more gender-equitable way to stimulate employment and economic growth.
Care work is critical to the functioning of our society at any time. During the Covid-19 pandemic, this workforce, which is overwhelmingly female and disproportionately people of color, has become even more essential. The term "care work" encompasses both paid and unpaid work, including direct activities, like caring for children or nursing someone who is ill, as well as indirect care, like cooking and cleaning.
Domestic workers are one particular category of care workers. Whether hired by an individual or through an agency, this workforce performs a wide range of tasks, from cleaning to personal care, in private homes. Already a vulnerable category of workers, domestic workers are under immense stress as they serve on the frontlines of the Covid-19 pandemic. According to an April 2020 survey by the National Domestic Workers Alliance, 84 percent of domestic workers reported experiencing food insecurity, 77 percent were the primary breadwinners for their families, 72 percent reported having lost their livelihoods, and half reported lacking access to medical care during the pandemic.
As a joint survey from the Institute for Policy Studies and the National Domestic Workers Alliance shows, Black immigrant domestic workers are even more vulnerable during this crisis. More than 800 respondents in three communities — New York, Boston, and Miami-Dade County in Florida — show the scale of this crisis. As of June 2020, 65 percent reported being at risk of eviction or utility shut off in the next three months, 49 percent were fearful of seeking out government aid due to their immigration status, 45 percent had lost their jobs, and a quarter reported having their hours reduced.
Because women tend to bear more responsibility for family caregiving, they were more likely than men to drop out of the labor force, particularly in the first phase of the pandemic, to take care of children who had to stay home from school or day care or to help infected family members. According to Bureau of Labor Statistics data, between January and September 2020, women’s labor force participation rate dropped by 2.4 percentage points, compared to a drop of 1.9 points for men. Rand Corporation research reveals that the participation gap between women and men with children was even larger during this period. The steepest decline in labor force participation was among women with two children, at 3.82 points, compared to a 1.39 point drop for men with two children.
Among U.S. women who’ve stopped looking for work during the pandemic, the steepest drops have been among women of color. Between February and December 2020, the drop in labor force participation was 4.3 points for Black women and 3.8 points for Latinx women, compared to 1.6 for white women. Several factors may have contributed to women of color becoming discouraged from seeking work. On top of their greater caregiving responsibilities, women of color face racial discrimination in hiring and layoffs and they are disproportionately concentrated in service and care sector jobs with high risks of Covid exposure.
MORE INSTITUTE FOR POLICY STUDIES RESOURCES ON THE CARE ECONOMY
Video: Black Immigrant Domestic Workers in the Time of Covid-19, Co-produced with the National Domestic Workers Alliance
I Am Not a Mop Bucket for Wealthy Families: Domestic Workers Like Me Need Covid-19 Relief, Regardless of Immigration Status | https://inequality.org/facts/inequality-care-economy/ | 21 |
23 | Spanish missions in California
The Spanish missions in California ( Spanish: Misiones españolas en California) comprise a series of 21 religious outposts or missions established between 1769 and 1833 in what is now the U.S. state of California. Founded by Catholic priests of the Franciscan order to evangelize the Native Americans, the missions led to the creation of the New Spain province of Alta California and were part of the expansion of the Spanish Empire into the most northern and western parts of Spanish North America.
Following long-term secular and religious policy of Spain in Spanish America, the missionaries forced the native Californians to live in settlements called reductions, disrupting their traditional way of life. The missionaries introduced European fruits, vegetables, cattle, horses, ranching, and technology. Significant reductions in Native American population occurred mostly through introduction of European diseases. In the end, the missions had mixed results in their objectives: to convert, educate, develop and transform the native peoples into Spanish subjects.
By 1810, Spain's king had been imprisoned by the French, and financing for military payroll and missions in California ceased. In 1821, Mexico achieved independence from Spain, although Mexico did not send a governor to California until 1824, and only a portion of payroll was ever reinstated (ibid.). The 21,000 Mission Indians produced hide, tallow, wool, and textiles at this time, and the leather products were exported to Boston, South America, and Asia. This trading system sustained the colonial economy from 1810 until 1830. The missions began to lose control over land in the 1820s, as unpaid military men unofficially encroached, but officially missions maintained authority over native neophytes and control of land holdings until the 1830s. At the peak of its development in 1832, the coastal mission system controlled an area equal to approximately one-sixth of Alta California. The Alta California government secularized the missions after the passage of the Mexican secularization act of 1833. This divided the mission lands into land grants, in effect legitimizing and completing the transfer of Indian congregation lands to military commanders and their most loyal men; these became many of the Ranchos of California.
The surviving mission buildings are the state's oldest structures and its most-visited historic monuments. They have become a symbol of California, appearing in many movies and television shows, and are an inspiration for Mission Revival architecture. The oldest cities of California formed around or near Spanish missions, including the four largest: Los Angeles, San Diego, San Jose, and San Francisco.
Prior to 1754, grants of mission lands were made directly by the Spanish Crown. But, given the remote locations and the inherent difficulties in communicating with the territorial governments, power was transferred to the viceroys of New Spain to grant lands and establish missions in North America. Plans for the Alta California missions were laid out under the reign of King Charles III, and came at least in part as a response to recent sightings of Russian fur traders along the California coast in the mid 1700s. The missions were to be interconnected by an overland route which later became known as the Camino Real. The detailed planning and direction of the missions was to be carried out by Friar Junípero Serra, O.F.M. (who, in 1767, along with his fellow priests, had taken control over a group of missions in Baja California Peninsula previously administered by the Jesuits).
The Rev. Fermín Francisco de Lasuén took up Serra's work and established nine more mission sites, from 1786 through 1798; others established the last three compounds, along with at least five asistencias (mission assistance outposts).
Work on the coastal mission chain was concluded in 1823, completed after Serra's death in 1784. Plans to build a twenty-second mission in Santa Rosa in 1827 were canceled. [notes 1]
The Rev. Pedro Estévan Tápis proposed establishing a mission on one of the Channel Islands in the Pacific Ocean off San Pedro Harbor in 1804, with either Santa Catalina or Santa Cruz (known as Limú to the Tongva residents) being the most likely locations, the reasoning being that an offshore mission might have attracted potential converts who were not living on the mainland, and could have been an effective measure to restrict smuggling operations. Governor José Joaquín de Arrillaga approved the plan the following year; however, an outbreak of sarampión (measles) killing some 200 Tongva people, coupled with a scarcity of land for agriculture and potable water, left the success of such a venture in doubt, so no effort to found an island mission was ever made.
In September 1821, the Rev. Mariano Payeras, "Comisario Prefecto" of the California missions, visited Cañada de Santa Ysabel east of Mission San Diego de Alcalá as part of a plan to establish an entire chain of inland missions. The Santa Ysabel Asistencia had been founded in 1818 as a "mother" mission; however, the plan's expansion beyond that never came to fruition.
In addition to the presidio (royal fort) and pueblo (town), the misión was one of the three major agencies employed by the Spanish sovereign to extend its borders and consolidate its colonial territories. Asistencias ("satellite" or "sub" missions, sometimes referred to as "contributing chapels") were small-scale missions that regularly conducted Mass on days of obligation but lacked a resident priest; as with the missions, these settlements were typically established in areas with high concentrations of potential native converts. The Spanish Californians had never strayed from the coast when establishing their settlements; Mission Nuestra Señora de la Soledad was located farthest inland, being only some thirty miles (48 kilometers) from the shore. Each frontier station was forced to be self-supporting, as existing means of supply were inadequate to maintain a colony of any size. California was months away from the nearest base in colonized Mexico, and the cargo ships of the day were too small to carry more than a few months’ rations in their holds. To sustain a mission, the padres required converted Native Americans, called neophytes, to cultivate crops and tend livestock in the volume needed to support a fair-sized establishment. The scarcity of imported materials, together with a lack of skilled laborers, compelled the missionaries to employ simple building materials and methods in the construction of mission structures.
Although the missions were considered temporary ventures by the Spanish hierarchy, the development of an individual settlement was not simply a matter of "priestly whim." The founding of a mission followed longstanding rules and procedures; the paperwork involved required months, sometimes years of correspondence, and demanded the attention of virtually every level of the bureaucracy. Once empowered to erect a mission in a given area, the men assigned to it chose a specific site that featured a good water supply, plenty of wood for fires and building materials, and ample fields for grazing herds and raising crops. The padres blessed the site, and with the aid of their military escort fashioned temporary shelters out of tree limbs or driven stakes, roofed with thatch or reeds (cañas). It was these simple huts that ultimately gave way to the stone and adobe buildings that exist to the present.
The first priority when beginning a settlement was the location and construction of the church (iglesia). The majority of mission sanctuaries were oriented on a roughly east–west axis to take the best advantage of the sun's position for interior illumination; the exact alignment depended on the geographic features of the particular site. Once the spot for the church had been selected, its position was marked and the remainder of the mission complex was laid out. The workshops, kitchens, living quarters, storerooms, and other ancillary chambers were usually grouped in the form of a quadrangle, inside which religious celebrations and other festive events often took place. The cuadrángulo was rarely a perfect square because the missionaries had no surveying instruments at their disposal and simply measured off all dimensions by foot. Some fanciful accounts regarding the construction of the missions claimed that tunnels were incorporated in the design, to be used as a means of emergency egress in the event of attack; however, no historical evidence (written or physical) has ever been uncovered to support these assertions. [notes 2]
The Alta California missions, known as reductions (reducciones) or congregations (congregaciones), were settlements founded by the Spanish colonizers of the New World with the purpose of totally assimilating indigenous populations into European culture and the Catholic religion. It was a doctrine established in 1531, which based the Spanish state's right over the land and persons of the Indies on the Papal charge to evangelize them. It was employed wherever the indigenous populations were not already concentrated in native pueblos. Indians were congregated around the mission proper through forced resettlement, in which the Spanish "reduced" them from what they perceived to be a free "undisciplined'" state with the ambition of converting them into "civilized" members of colonial society. The civilized and disciplined culture of the natives, developed over 8,000 years, was not considered. A total of 146 Friars Minor, mostly Spaniards by birth, were ordained as priests and served in California between 1769 and 1845. Sixty-seven missionaries died at their posts (two as martyrs: Padres Luis Jayme and Andrés Quintana), while the remainder returned to Europe due to illness, or upon completing their ten-year service commitment. As the rules of the Franciscan Order forbade friars to live alone, two missionaries were assigned to each settlement, sequestered in the mission's convento. To these the governor assigned a guard of five or six soldiers under the command of a corporal, who generally acted as steward of the mission's temporal affairs, subject to the priests' direction.
Indians were initially attracted into the mission compounds by gifts of food, colored beads, bits of bright cloth, and trinkets. Once a Native American " gentile" was baptized, they were labeled a neophyte, or new believer. This happened only after a brief period during which the initiates were instructed in the most basic aspects of the Catholic faith. But, while many natives were lured to join the missions out of curiosity and sincere desire to participate and engage in trade, many found themselves trapped once they were baptized. On the other hand, Indians staffed the militias at each mission and had a role in mission governance.
To the padres, a baptized Indian person was no longer free to move about the country, but had to labor and worship at the mission under the strict observance of the priests and overseers, who herded them to daily masses and labors. If an Indian did not report for their duties for a period of a few days, they were searched for, and if it was discovered that they had left without permission, they were considered runaways. Large-scale military expeditions were organized to round up the escaped neophytes. Sometimes, the Franciscans allowed neophytes to escape the missions, or they would allow them to visit their home village. However, the Franciscans would only allow this so that they could secretly follow the neophytes. Upon arriving at the village and capturing the runaways, they would take the Indians back to the missions, sometimes as many as 200 to 300 at a time.
"On one occasion," writes Hugo Reid, "they went as far as the present Rancho del Chino, where they tied and whipped every man, woman and child in the lodge, and drove part of them back.... On the road they did the same with those of the lodge at San Jose. On arriving home the men were instructed to throw their bows and arrows at the feet of the priest, and make due submission. The infants were then baptized, as were also all children under eight years of age; the former were left with their mothers, but the latter kept apart from all communication with their parents. The consequence was, first, the women consented to the rite and received it, for the love they bore their children; and finally the males gave way for the purpose of enjoying once more the society of wife and family. Marriage was then performed, and so this contaminated race, in their own sight and that of their kindred, became followers of Christ."
A total of 20,355 natives were "attached" to the California missions in 1806 (the highest figure recorded during the Mission Period); under Mexican rule the number rose to 21,066 (in 1824, the record year during the entire era of the Franciscan missions). [notes 5] During the entire period of Mission rule, from 1769 to 1834, the Franciscans baptized 53,600 adult Indians and buried 37,000. Dr. Cook estimates that 15,250 or 45% of the population decrease was caused by disease. Two epidemics of measles, one in 1806 and the other in 1828, caused many deaths. The mortality rates were so high that the missions were constantly dependent upon new conversions.
Young native women were required to reside in the monjerío (or "nunnery") under the supervision of a trusted Indian matron who bore the responsibility for their welfare and education. Women only left the convent after they had been "won" by an Indian suitor and were deemed ready for marriage. Following Spanish custom, courtship took place on either side of a barred window. After the marriage ceremony the woman moved out of the mission compound and into one of the family huts. These "nunneries" were considered a necessity by the priests, who felt the women needed to be protected from the men, both Indian and de razón ("instructed men", i.e. Europeans). The cramped and unsanitary conditions the girls lived in contributed to the fast spread of disease and population decline. So many died at times that many of the Indian residents of the missions urged the priests to raid new villages to supply them with more women. As of December 31, 1832 (the peak of the mission system's development) the mission padres had performed a combined total of 87,787 baptisms and 24,529 marriages, and recorded 63,789 deaths.
The neophytes were kept in well-guarded mission compounds. The policy of the Franciscans was to keep them constantly occupied.
Bells were vitally important to daily life at any mission. The bells were rung at mealtimes, to call the Mission residents to work and to religious services, during births and funerals, to signal the approach of a ship or returning missionary, and at other times; novices were instructed in the intricate rituals associated with ringing the mission bells. The daily routine began with sunrise Mass and morning prayers, followed by instruction of the natives in the teachings of the Roman Catholic faith. After a generous (by era standards) breakfast of atole, the able-bodied men and women were assigned their tasks for the day. The women were committed to dressmaking, knitting, weaving, embroidering, laundering, and cooking, while some of the stronger girls ground flour or carried adobe bricks (weighing 55 lb, or 25 kg each) to the men engaged in building. The men worked a variety of jobs, having learned from the missionaries how to plow, sow, irrigate, cultivate, reap, thresh, and glean. In addition, they were taught to build adobe houses, tan leather hides, shear sheep, weave rugs and clothing from wool, make ropes, soap, and paint, and to perform other useful duties.
The work day was six hours, interrupted by dinner (lunch) around 11:00 a.m. and a two-hour siesta, and ended with evening prayers and the rosary, supper, and social activities. About 90 days out of each year were designated as religious or civil holidays, free from manual labor. The labor organization of the missions resembled a slave plantation in many respects. [notes 6] Foreigners who visited the missions remarked at how the priests' control over the Indians appeared excessive, but necessary given the white men's isolation and numeric disadvantage. [notes 7] Indians were not paid wages as they were not considered free laborers and, as a result, the missions were able to profit from the goods produced by the Mission Indians to the detriment of the other Spanish and Mexican settlers of the time who could not compete economically with the advantage of the mission system.
The Franciscans began to send neophytes to work as servants of Spanish soldiers in the presidios. Each presidio was provided with land, el rancho del rey, which served as a pasture for the presidio livestock and as a source of food for the soldiers. Theoretically the soldiers were supposed to work on this land themselves but within a few years the neophytes were doing all the work on the presidio farm and, in addition, were serving as domestics for the soldiers. While the fiction prevailed that neophytes were to receive wages for their work, no attempt was made to collect the wages for these services after 1790. It is recorded that the neophytes performed the work "under unmitigated compulsion."
In recent years, much debate has arisen about the priests' treatment of the Indians during the Mission period, and many believe that the California mission system is directly responsible for the decline of the native cultures. [notes 8] From the perspective of the Spanish priest, their efforts were a well-meaning attempt to improve the lives of the heathen natives. [notes 9] [notes 10]
The missionaries of California were by-and-large well-meaning, devoted men...[whose] attitudes toward the Indians ranged from genuine (if paternalistic) affection to wrathful disgust. They were ill-equipped—nor did most truly desire—to understand complex and radically different Native American customs. Using European standards, they condemned the Indians for living in a "wilderness," for worshipping false gods or no God at all, and for having no written laws, standing armies, forts, or churches.
The goal of the missions was, above all, to become self-sufficient in relatively short order. Farming, therefore, was the most important industry of any mission. Barley, maize, and wheat were among the most common crops grown. Cereal grains were dried and ground by stone into flour. Even today, California is well known for the abundance and many varieties of fruit trees that are cultivated throughout the state. The only fruits indigenous to the region, however, consisted of wild berries or grew on small bushes. Spanish missionaries brought fruit seeds over from Europe, many of which had been introduced from Asia following earlier expeditions to the continent; orange, grape, apple, peach, pear, and fig seeds were among the most prolific of the imports. Grapes were also grown and fermented into wine for sacramental use and again, for trading. The specific variety, called the Criolla or Mission grape, was first planted at Mission San Juan Capistrano in 1779; in 1783, the first wine produced in Alta California emerged from the mission's winery. Ranching also became an important mission industry as cattle and sheep herds were raised.
Mission San Gabriel Arcángel unknowingly witnessed the origin of the California citrus industry with the planting of the region's first significant orchard in 1804, though the commercial potential of citrus was not realized until 1841. Olives (first cultivated at Mission San Diego de Alcalá) were grown, cured, and pressed under large stone wheels to extract their oil, both for use at the mission and to trade for other goods. The Rev. Serra set aside a portion of the Mission Carmel gardens in 1774 for tobacco plants, a practice that soon spread throughout the mission system. [notes 11]
It was also the missions' responsibility to provide the Spanish forts, or presidios, with the necessary foodstuffs and manufactured goods to sustain operations. It was a constant point of contention between missionaries and the soldiers as to how many fanegas of barley, or how many shirts or blankets, the mission had to provide the garrisons in any given year. At times these requirements were hard to meet, especially during years of drought, or when the much anticipated shipments from the port of San Blas failed to arrive. The Spaniards kept meticulous records of mission activities, and each year reports were submitted to the Father-Presidente summarizing both the material and spiritual status at each of the settlements.
Livestock was raised, not only for the purpose of obtaining meat, but also for wool, leather, and tallow, and for cultivating the land. In 1832, at the height of their prosperity, the missions collectively owned:
- 151,180 head of cattle;
- 137,969 sheep;
- 14,522 horses;
- 1,575 mules or burros;
- 1,711 goats; and
- 1,164 swine.
All these grazing animals were originally brought up from Mexico. A great many Indians were required to guard the herds and flocks on the mission ranches, which created the need for "...a class of horsemen scarcely surpassed anywhere." These animals multiplied beyond the settler's expectations, often overrunning pastures and extending well-beyond the domains of the missions. The giant herds of horses and cows took well to the climate and the extensive pastures of the Coastal California region, but at a heavy price for the California Native American people. The uncontrolled spread of these new herds, and associated invasive exotic plant species, quickly exhausted the native plants in the grasslands, and the chaparral and woodlands that the Indians depended on for their seed, foliage, and bulb harvests. The grazing-overgrazing problems were also recognized by the Spaniards, who periodically had extermination parties cull and kill thousands of excess livestock, when herd populations grew beyond their control or the land's capacity. Years with a severe drought did this also.
Mission kitchens and bakeries prepared and served thousands of meals each day. Candles, soap, grease, and ointments were all made from tallow ( rendered animal fat) in large vats located just outside the west wing. Also situated in this general area were vats for dyeing wool and tanning leather, and primitive looms for weaving. Large bodegas (warehouses) provided long-term storage for preserved foodstuffs and other treated materials.
Each mission had to fabricate virtually all of its construction materials from local materials. Workers in the carpintería ( carpentry shop) used crude methods to shape beams, lintels, and other structural elements; more skilled artisans carved doors, furniture, and wooden implements. For certain applications bricks (ladrillos) were fired in ovens ( kilns) to strengthen them and make them more resistant to the elements; when tejas (roof tiles) eventually replaced the conventional jacal roofing (densely packed reeds) they were placed in the kilns to harden them as well. Glazed ceramic pots, dishes, and canisters were also made in mission kilns.
Prior to the establishment of the missions, the native peoples knew only how to utilize bone, seashells, stone, and wood for building, tool making, weapons, and so forth. The missionaries established manual training in European skills and methods; in agriculture, mechanical arts, and the raising and care of livestock. Everything consumed and otherwise utilized by the natives was produced at the missions under the supervision of the padres; thus, the neophytes not only supported themselves, but after 1811 sustained the entire military and civil government of California. The foundry at Mission San Juan Capistrano was the first to introduce the Indians to the Iron Age. The blacksmith used the mission's forges (California's first) to smelt and fashion iron into everything from basic tools and hardware (such as nails) to crosses, gates, hinges, even cannon for mission defense. Iron in particular was a commodity that the mission acquired solely through trade, as the missionaries had neither the know-how nor technology to mine and process metal ores.
No study of the missions is complete without mention of their extensive water supply systems. Stone zanjas (aqueducts), sometimes spanning miles, brought fresh water from a nearby river or spring to the mission site. Open or covered lined ditches and/or baked clay pipes, joined together with lime mortar or bitumen, gravity-fed the water into large cisterns and fountains, and emptied into waterways where the force of the water was used to turn grinding wheels and other simple machinery, or dispensed for use in cleaning. Water used for drinking and cooking was allowed to trickle through alternate layers of sand and charcoal to remove the impurities. One of the best-preserved mission water systems is at Mission Santa Barbara.
Beginning in 1492 with the voyages of Christopher Columbus, the Kingdom of Spain sought to establish missions to convert indigenous people in Nueva España (New Spain), which consisted of the Caribbean, Mexico, and most of what is now the Southwestern United States, to Roman Catholicism. This would facilitate colonization of these lands awarded to Spain by the Catholic Church, including that region later known as Alta California. [notes 12] [notes 13] [notes 14]
Only 48 years after Columbus discovered the Americas for Europe, Francisco Vázquez de Coronado set out from Compostela, New Spain on February 23, 1540, at the head of a large expedition. Accompanied by 400 European men-at-arms (mostly Spaniards), 1,300 to 2,000 Mexican Indian allies, several Indian and African slaves, and four Franciscan friars, he traveled from Mexico through parts of the southwestern United States to present-day Kansas between 1540 and 1542. Two years later on 27 June 1542, Juan Rodriguez Cabrillo set out from Navidad, Mexico and sailed up the coast of Baja California and into the region of Alta California.
Unknown to Spain, Sir Francis Drake, an English privateer who raided Spanish treasure ships and colonial settlements, claimed the Alta California region as Nova Albion for the English Crown in 1579, a full generation before the first English landing in Jamestown in 1607. During his circumnavigation of the world, Drake anchored in a harbor just north of present-day San Francisco, California, establishing friendly relations with the Coastal Miwok and claiming the territory for Queen Elizabeth I. However, Drake sailed back to England and England (and later Britain) never pressed for any sort of claim regarding the region.
However, it wasn't until 1741 that the Spanish monarchy of King Philip V was stimulated to consider how to protect his claims to Alta California. Philip was spurred on when the territorial ambitions of Tsarist Russia were expressed in the Vitus Bering expedition along the western coast on the North American continent. [notes 15] [notes 16]
California represents the "high-water mark" of Spanish expansion in North America as the last and northernmost colony on the continent. The mission system arose in part from the need to control Spain's ever-expanding holdings in the New World. Realizing that the colonies required a literate population base that the mother country could not supply, the Spanish government (with the cooperation of the Church) established a network of missions to convert the indigenous population to Christianity. They aimed to make converts and tax-paying citizens of those they conquered. [notes 17] To make them into Spanish citizens and productive inhabitants, the Spanish government and the Church required the indigenous people to learn Spanish language and vocational skills along with Christian teachings.
Estimates for the pre-contact indigenous population in California are based on a number of different sources and vary substantially, from 133,000, to 225,000, to as high as 705,000 from more than 100 separate tribes or nations. [notes 18] [notes 19]
On January 29, 1767, Spain's King Charles III ordered the new governor Gaspar de Portolá to forcibly expel the Jesuits, who operated under the authority of the Pope and had established a chain of fifteen missions on the Baja California Peninsula. [notes 20] Visitador General José de Gálvez engaged the Franciscans, under the leadership of Friar Junípero Serra, to take charge of those outposts on March 12, 1768. The padres closed or consolidated several of the existing settlements, and also founded Misión San Fernando Rey de España de Velicatá (the only Franciscan mission in all of Baja California) and the nearby Visita de la Presentación in 1769. This plan, however, changed within a few months after Gálvez received the following orders: "Occupy and fortify San Diego and Monterey for God and the King of Spain." The Church ordered the priests of the Dominican Order to take charge of the Baja California missions so the Franciscans could concentrate on founding new missions in Alta California.
On July 14, 1769 Gálvez sent the Portolá expedition out from Loreto to explore lands to the north. Leader Gaspar de Portolá was accompanied by a group of Franciscans led by Junípero Serra. Serra's plan was to extend the string of missions north from the Baja California peninsula, connected by an established road and spaced a day's travel apart. The first Alta California mission and presidio were founded at San Diego, the second at Monterey.
En route to Monterey, the Rev. Francisco Gómez and the Rev. Juan Crespí came across a Native settlement wherein two young girls were dying: one, a baby, said to be "dying at its mother's breast," the other a small girl suffering from burns. On July 22, Gómez baptized the baby, naming her Maria Magdalena, while Crespí baptized the older child, naming her Margarita. These were the first recorded baptisms in Alta California. Crespi dubbed the spot Los Cristianos. [notes 21] The group continued northward but missed Monterey Harbor and returned to San Diego on January 24, 1770. Near the end of 1769 the Portolá expedition had reached its most northerly point at present-day San Francisco. In the following years, the Spanish Crown sent a number of follow-up expeditions to explore more of Alta California.
Each mission was to be turned over to a secular clergy and all the common mission lands distributed amongst the native population within ten years after its founding, a policy that was based upon Spain's experience with the more advanced tribes in Mexico, Central America, and Peru. In time, it became apparent to the Rev. Serra and his associates that the natives on the northern frontier in Alta California required a much longer period of acclimatization. None of the California missions ever attained complete self-sufficiency, and required continued (albeit modest) financial support from mother Spain. Mission development was therefore financed out of El Fondo Piadoso de las Californias (The Pious Fund of the Californias, which originated in 1697 and consisted of voluntary donations from individuals and religious bodies in Mexico to members of the Society of Jesus) to enable the missionaries to propagate the Catholic Faith in the area then known as California. Starting with the onset of the Mexican War of Independence in 1810, this support largely disappeared, and missions and converts were left on their own. As of 1800, native labor had made up the backbone of the colonial economy.
Arguably "the worst epidemic of the Spanish Era in California" was known to be the measles epidemic of 1806, wherein one-quarter of the mission Native American population of the San Francisco Bay Area died of the measles or related complications between March and May of that year. In 1811, the Spanish Viceroy in Mexico sent an interrogatorio (questionnaire) to all of the missions in Alta California regarding the customs, disposition, and condition of the Mission Indians. The replies, which varied greatly in the length, spirit, and even the value of the information contained therein, were collected and prefaced by the Father-Presidente with a short general statement or abstract; the compilation was thereupon forwarded to the viceregal government. [notes 22] The contemporary nature of the responses, no matter how incomplete or biased some may be, are nonetheless of considerable value to modern ethnologists.
Russian colonization of the Americas reached its southernmost point with the 1812 establishment of Fort Ross (krepost' rus), an agricultural, scientific, and fur-trading settlement located in present-day Sonoma County, California. In November and December 1818, several of the missions were attacked by Hipólito Bouchard, "California's only pirate." [notes 23] A French privateer sailing under the flag of Argentina, Pirata Buchar (as Bouchard was known to the locals) worked his way down the California coast, conducting raids on the installations at Monterey, Santa Barbara, and San Juan Capistrano, with limited success. Upon hearing of the attacks, many mission priests (along with a few government officials) sought refuge at Mission Nuestra Señora de la Soledad, the mission chain's most isolated outpost. Ironically, Mission Santa Cruz (though ultimately ignored by the marauders) was ignominiously sacked and vandalized by local residents who were entrusted with securing the church's valuables.
By 1819, Spain decided to limit its "reach" in the New World to Northern California due to the costs involved in sustaining these remote outposts; the northernmost settlement therefore is Mission San Francisco Solano, founded in Sonoma in 1823. [notes 24] The Chumash people revolted against the Spanish presence in 1824. The Chumash planned a coordinated rebellion at three missions. Due to an incident with a soldier at Mission Santa Inés, the rebellion began on Saturday, February 21. The Chumash withdrew from Mission Santa Inés upon the arrival of military reinforcements, then attacked Mission La Purisima from inside, forced the garrison to surrender, and allowed the garrison, their families, and the mission priest to depart for Santa Inés. The next day, the Chumash of Mission Santa Barbara captured the mission from within without bloodshed, repelled a military attack on the mission, and then retreated from the mission to the hills. The Chumash continued to occupy Mission La Purisima until a Mexican military unit attacked people on March 16 and forced them to surrender. Two military expeditions were sent after the Chumash in the hills; the first did not find them and the second negotiated with the Chumash and convinced a majority to return to the missions by June 28.
An attempt to found a twenty-second mission in Santa Rosa in 1827 was aborted. [notes 25] [notes 26] [notes 27] In 1833 the final group of missionaries arrived in Alta California. These were Mexican-born (rather than Spaniards), and had been trained at the Apostolic College of Our Lady of Guadalupe in Zacatecas. Among these friars was Francisco García Diego y Moreno, who would become the first bishop of the Diocese of Both Californias. These friars would bear the brunt of the changes brought on by secularization and the U.S. occupation, and many would be marked by allegations of corruption.
José María de Echeandía, the first native Mexican elected Governor of Alta California, issued a "Proclamation of Emancipation" (or "Prevenciónes de Emancipacion") on July 25, 1826. All Indians within the military districts of San Diego, Santa Barbara, and Monterey who were found qualified were freed from missionary rule and made eligible to become Mexican citizens. Those who wished to remain under mission tutelage were exempted from most forms of corporal punishment. [notes 29] By 1830 even the neophyte populations themselves appeared confident in their own abilities to operate the mission ranches and farms independently; the padres, however, doubted the capabilities of their charges in this regard.
Accelerating immigration, both Mexican and foreign, increased pressure on the Alta California government to seize the mission properties and dispossess the natives in accordance with Echeandía's directive. [notes 30] Although Echeandía's emancipation plan met with little encouragement from the neophytes who populated the southern missions, he was nonetheless determined to test the scheme on a large scale at Mission San Juan Capistrano. To that end, he appointed a number of comisionados (commissioners) to oversee the emancipation of the Indians. The Mexican government passed legislation on December 20, 1827 that mandated the expulsion of all Spaniards younger than sixty years of age from Mexican territories; Governor Echeandía nevertheless intervened on behalf of some of the missionaries to prevent their deportation once the law took effect in California.
Governor José Figueroa (who took office in 1833) initially attempted to keep the mission system intact, but the Mexican Congress passed An Act for the Secularization of the Missions of California on August 17, 1833 when liberal Valentín Gómez Farías was in office. [notes 31]
The Act also provided for the colonization of both Alta and Baja California, the expenses of this latter move to be borne by the proceeds gained from the sale of the mission property to private interests.
Mission San Juan Capistrano was the very first to feel the effects of secularization when, on August 9, 1834, Governor Figueroa issued his "Decree of Confiscation." Nine other settlements quickly followed, with six more in 1835; San Buenaventura and San Francisco de Asís were among the last to succumb, in June and December 1836, respectively. The Franciscans soon thereafter abandoned most of the missions, taking with them almost everything of value, after which the locals typically plundered the mission buildings for construction materials. Former mission pasture lands were divided into large land grants called ranchos, greatly increasing the number of private land holdings in Alta California.
In spite of this neglect, the Indian towns at San Juan Capistrano, San Dieguito, and Las Flores did continue on for some time under a provision in Gobernador Echeandía's 1826 Proclamation that allowed for the partial conversion of missions to pueblos. According to one estimate, the native population in and around the missions proper was approximately 80,000 at the time of the confiscation; others claim that the statewide population had dwindled to approximately 100,000 by the early 1840s, due in no small part to the natives' exposure to European diseases and to the Franciscan practice of cloistering women in the convento and controlling sexuality during the child-bearing age. (Baja California Territory experienced a similar reduction in native population resulting from Spanish colonization efforts there.)
Pío de Jesús Pico, the last Mexican Governor of Alta California, found upon taking office that there were few funds available to carry on the affairs of the province. He prevailed upon the assembly to pass a decree authorizing the renting or the sale of all mission property, reserving only the church, a curate's house, and a building for a courthouse. The expenses of conducting the services of the church were to be provided from the proceeds, but there was no disposition made as to what should be done to secure the funds for that purpose. After secularization, Father-Presidente Narciso Durán transferred the missions' headquarters to Santa Bárbara, thereby making Mission Santa Bárbara the repository of some 3,000 original documents that had been scattered through the California missions. The Mission archive is the oldest library in the State of California that still remains in the hands of its founders, the Franciscans (it is the only mission where they have maintained an uninterrupted presence). Beginning with the writings of Hubert Howe Bancroft, the library has served as a center for historical study of the missions for more than a century. In 1895 journalist and historian Charles Fletcher Lummis criticized the Act and its results, saying:
Disestablishment—a polite term for robbery—by Mexico (rather than by native Californians misrepresenting the Mexican government) in 1834, was the death blow of the mission system. The lands were confiscated; the buildings were sold for beggarly sums, and often for beggarly purposes. The Indian converts were scattered and starved out; the noble buildings were pillaged for their tiles and adobes...
Precise figures relating to the population decline of California indigenes are not available. One writer, Gregory Orfalea, estimates that the pre-contact population was reduced by 33 percent during Spanish and Mexican rule, mostly through the introduction of European diseases, and that the decline became far steeper after the United States takeover in 1848. By 1870, the loss of indigenous lives had become catastrophic: up to 80 percent died, leaving a population of about 30,000. Orfalea claims that nearly half of the native deaths after 1848 were murders.
In 1837–38, a major smallpox epidemic devastated native tribes north of San Francisco Bay, in the jurisdiction of Mission San Francisco Solano. General Mariano Vallejo estimated that 70,000 died from the disease. Vallejo's ally, chief Sem-Yeto, was one of the few natives to be vaccinated, and one of the few to survive.
When the mission properties were secularized between 1834 and 1838, the approximately 15,000 resident neophytes lost whatever protection the mission system afforded them. While under the secularization laws the natives were to receive up to one-half of the mission properties, this never happened. The natives lost whatever stock and movable property they may have accumulated. When California became a U.S. state, California law stripped them of legal title to the land. In the Act of September 30, 1850, Congress appropriated funds to allow the President to appoint three Commissioners, O. M. Wozencraft, Redick McKee and George W. Barbour, to study the California situation and "...negotiate treaties with the various Indian tribes of California." Treaty negotiations ensued during the period between March 19, 1851 and January 7, 1852, during which the Commission interacted with 402 Indian chiefs and headmen (representing approximately one-third to one-half of the California tribes) and entered into eighteen treaties.
California Senator William M. Gwin's Act of March 3, 1851 created the Public Land Commission, whose purpose was to determine the validity of Spanish and Mexican land grants in California. On February 19, 1853 Archbishop J.S. Alemany filed petitions for the return of all former mission lands in the state. Ownership of 1,051.44 acres (4.2550 km2) (essentially exact area of land occupied by the original mission buildings, cemeteries, and gardens) was subsequently conveyed to the Church, along with the Cañada de los Pinos (or College Rancho) in Santa Barbara County comprising 35,499.73 acres (143.6623 km2), and La Laguna in San Luis Obispo County, consisting of 4,157.02 acres (16.8229 km2). As the result of a U.S. government investigation in 1873, a number of Indian reservations were assigned by executive proclamation in 1875. The commissioner of Indian affairs reported in 1879 that the number of Mission Indians in the state was down to around 3,000.
There is controversy over the California Department of Education's treatment of the missions in its elementary curriculum; critics allege that the curriculum "waters down" the harsh treatment of Native Americans at the missions. Modern anthropologists cite a cultural bias on the part of the missionaries that blinded them to the natives' plight and led them to develop strong negative opinions of the California Indians. [notes 32] European diseases such as influenza, measles, tuberculosis, gonorrhea, and dysentery caused a significant population reduction from the first encounter through the 19th century, as California Native Americans had no immunity to them.
The impact that the original Spanish system of colonization had on modern-day California cannot be overstated. Although the close cooperation between Church and State that was part and parcel of the original California mission system was soon discarded by the Mexican government, it nonetheless provided a foundation upon which later forms of government would be established. The early missions and their sub-missions formed the nuclei of what would later become the major metropolitan areas of San Francisco and Los Angeles, as well as many other smaller municipalities. In addition to clearing the way for Spanish, Mexican, and later American settlers, the early Spanish mission system established the viability of the early Western economies of cattle and agriculture, which survive in modern form in the state to this day. The Spanish mission system acted to "settle and Westernize" California, but it did so largely at the expense of the Native American cultures that preceded it.
- The Rev. Junípero Serra (1769–1784)
- The Rev. Francisco Palóu (presidente pro tempore) (1784–1785)
- The Rev. Fermín Francisco de Lasuén (1785–1803)
- The Rev. Pedro Estévan Tápis (1803–1812)
- The Rev. José Francisco de Paula Señan (1812–1815)
- The Rev. Mariano Payéras (1815–1820)
- The Rev. José Francisco de Paula Señan (1820–1823)
- The Rev. Vicente Francisco de Sarría (1823–1824)
- The Rev. Narciso Durán (1824–1827)
- The Rev. José Bernardo Sánchez (1827–1831)
- The Rev. Narciso Durán (1831–1838)
- The Rev. José Joaquin Jimeno (1838–1844)
- The Rev. Narciso Durán (1844–1846)
The "Father-Presidente" was the head of the Catholic missions in Alta and Baja California. He was appointed by the College of San Fernando de Mexico until 1812, when the position became known as the "Commissary Prefect" who was appointed by the Commissary General of the Indies (a Franciscan residing in Spain). Beginning in 1831, separate individuals were elected to oversee Upper and Lower California.
- Mission San Diego de Alcalá (1769–1771)
- Mission San Carlos Borromeo de Carmelo (1771–1815)
- Mission La Purísima Concepción*(1815–1819)
- Mission San Carlos Borromeo de Carmelo (1819–1824)
- Mission San José*(1824–1827)
- Mission San Carlos Borromeo de Carmelo (1827–1830)
- Mission San José*(1830–1833)
- Mission Santa Barbara (1833–1846)
* The Rev. Payeras and the Rev. Durán remained at their resident missions during their terms as Father-Presidente; those settlements therefore became the de facto headquarters (until 1833, when all mission records were permanently relocated to Santa Barbara). [notes 33]
There were 21 missions accompanied by military outposts in Alta California, from San Diego to Sonoma. To facilitate travel between them on horse and foot, the mission settlements were situated approximately 30 miles (48 kilometers) apart, about one day's journey on horseback or three days on foot. The entire trail eventually became the 600-mile (966-kilometer) "California Mission Trail." Heavy freight movement was practical only via water. Tradition has it that the padres sprinkled mustard seeds along the trail to mark it with bright yellow flowers.
During the Mission Period, Alta California was divided into four military districts (comandancias), each garrisoned by a presidio strategically placed along the California coast to protect the missions and other Spanish settlements in Upper California. Each presidio functioned as a base of military operations for a specific region. They were independent of one another and were organized from south to north as follows:
- El Presidio Real de San Diego founded on July 16, 1769 – responsible for the defense of all installations located within the First Military District (the missions at San Diego, San Luis Rey, San Juan Capistrano, and San Gabriel);
- El Presidio Real de Santa Bárbara founded on April 12, 1782 – responsible for the defense of all installations located within the Second Military District (the missions at San Fernando, San Buenaventura, Santa Barbara, Santa Inés, and La Purísima, along with El Pueblo de Nuestra Señora la Reina de los Ángeles del Río de Porciúncula [Los Angeles]);
- El Presidio Real de San Carlos de Monterey (El Castillo) founded on June 3, 1770 – responsible for the defense of all installations located within the Third Military District (the missions at San Luis Obispo, San Miguel, San Antonio, Soledad, San Carlos, and San Juan Bautista, along with Villa Branciforte [Santa Cruz]); and
- El Presidio Real de San Francisco founded on December 17, 1776 – responsible for the defense of all installations located within the Fourth Military District (the missions at Santa Cruz, San José, Santa Clara, San Francisco, San Rafael, and Solano, along with El Pueblo de San José de Guadalupe [San Jose]).
- El Presidio de Sonoma, or "Sonoma Barracks" (a collection of guardhouses, storerooms, living quarters, and an observation tower) was established in 1836 by Mariano Guadalupe Vallejo (the "Commandante-General of the Northern Frontier of Alta California") as a part of Mexico's strategy to halt Russian incursions into the region. The Sonoma Presidio became the new headquarters of the Mexican Army in California, while the remaining presidios were essentially abandoned and, in time, fell into ruins.
An ongoing power struggle between church and state grew increasingly heated and lasted for decades. Originating as a feud between the Rev. Serra and Pedro Fages (the military governor of Alta California from 1770 to 1774, who regarded the Spanish installations in California as military institutions first and religious outposts second), the uneasy relationship persisted for more than sixty years. [notes 34] Dependent upon one another for their very survival, military leaders and mission padres nevertheless adopted conflicting stances regarding everything from land rights, the allocation of supplies, protection of the missions, the criminal propensities of the soldiers, and (in particular) the status of the native populations. [notes 35]
California is home to the greatest number of well-preserved missions found in any U.S. state. [notes 36] The missions are collectively the best-known historic element of the coastal regions of California:
- Most of the missions are still owned and operated by some entity within the Catholic Church.
- Three of the missions are still run under the auspices of the Franciscan Order (Santa Barbara, San Miguel Arcángel, and San Luis Rey de Francia)
- Four of the missions (San Diego de Alcalá, San Carlos Borromeo de Carmelo, San Francisco de Asís, and San Juan Capistrano) have been designated minor basilicas by the Holy See due to their cultural, historic, architectural, and religious importance.
- Mission La Purísima Concepción, Mission San Francisco Solano, and the one remaining mission-era structure of Mission Santa Cruz are owned and operated by the California Department of Parks and Recreation as State Historic Parks;
- Seven mission sites are designated National Historic Landmarks, fourteen are listed in the National Register of Historic Places, and all are designated as California Historical Landmarks for their historic, architectural, and archaeological significance.
Because virtually all of the artwork at the missions served either a devotional or didactic purpose, there was no underlying reason for the mission residents to record their surroundings graphically; visitors, however, found them to be objects of curiosity. During the 1850s a number of artists found gainful employment as draftsmen attached to expeditions sent to map the Pacific coastline and the border between California and Mexico (as well as plot practical railroad routes); many of the drawings were reproduced as lithographs in the expedition reports.
In 1875 American illustrator Henry Chapman Ford began visiting each of the twenty-one mission sites, where he created a historically important portfolio of watercolors, oils, and etchings. His depictions of the missions were (in part) responsible for the revival of interest in the state's Spanish heritage, and indirectly for the restoration of the missions. The 1880s saw the appearance of a number of articles on the missions in national publications and the first books on the subject; as a result, a large number of artists did one or more mission paintings, though few attempted a series.
The popularity of the missions also stemmed largely from Helen Hunt Jackson's 1884 novel Ramona and the subsequent efforts of Charles Fletcher Lummis, William Randolph Hearst, and other members of the "Landmarks Club of Southern California" to restore three of the southern missions in the early 20th century (San Juan Capistrano, San Diego de Alcalá, and San Fernando; the Pala Asistencia was also restored by this effort). [notes 37] Lummis wrote in 1895,
In ten years from now—unless our intelligence shall awaken at once—there will remain of these noble piles nothing but a few indeterminable heaps of adobe. We shall deserve and shall have the contempt of all thoughtful people if we suffer our noble missions to fall.
In acknowledgement of the magnitude of the restoration efforts required and the urgent need to have acted quickly to prevent further or even total degradation, Lummis went on to state,
It is no exaggeration to say that human power could not have restored these four missions had there been a five-year delay in the attempt.
In 1911 author John Steven McGroarty penned The Mission Play, a three-hour pageant describing the California missions from their founding in 1769 through secularization in 1834, and ending with their "final ruin" in 1847.
Today, the missions exist in varying degrees of architectural integrity and structural soundness. The most common extant features at the mission grounds include the church building and an ancillary convento (convent) wing. In some cases (in San Rafael, Santa Cruz, and Soledad, for example), the current buildings are replicas constructed on or near the original site. Other mission compounds remain relatively intact and true to their original, Mission Era construction.
A notable example of an intact complex is the now-threatened Mission San Miguel Arcángel: its chapel retains the original interior murals created by Salinan Indians under the direction of Esteban Munras, a Spanish artist and last Spanish diplomat to California. This structure was closed to the public from 2003 to 2009 due to severe damage from the San Simeon earthquake. Many missions have preserved (or in some cases reconstructed) historic features in addition to chapel buildings.
The missions have earned a prominent place in California's historic consciousness, and a steady stream of tourists from all over the world visit them. In recognition of that fact, on November 30, 2004 President George W. Bush signed HR 1446, the California Mission Preservation Act, into law. The measure provided $10 million over a five-year period to the California Missions Foundation for projects related to the physical preservation of the missions, including structural rehabilitation, stabilization, and conservation of mission art and artifacts. The California Missions Foundation, a volunteer, tax-exempt organization, was founded in 1998 by Richard Ameil, an eighth generation Californian. A change to the California Constitution has also been proposed that would allow the use of State funds in restoration efforts.
On California Missions:
- List of Spanish missions in California
- San Antonio de Pala Asistencia, not a full mission, but still serving the Pala reservation
On California history:
- Juan Bautista de Anza National Historic Trail
- History of California through 1899
- History of the west coast of North America
- Mission Vieja
On general missionary history:
- Catholic Church and the Age of Discovery
- History of Christian Missions
- List of the oldest churches in Mexico
On colonial Spanish American history:
- Spanish colonization of the Americas
- California mission clash of cultures
- Indian Reductions
- California Genocide
- Native Americans in the United States
- "By that time, it was found that the Russian colonies were not such undesirable neighbors as in 1817 it was thought they might become... the Russian scare, for the time being at least was over; and as for the old enthusiasm for new spiritual conquests, there was none left."
- Engelhardt: One such hypothesis was put forth by author Prent Duel in his 1919 work Mission Architecture as Exemplified in San Xavier Del Bac: "Most missions of early date possessed secret passages as a means of escape in case they were besieged. It is difficult to locate any of them now as they are well concealed."
- Chapman: "Latter-day historians have been altogether too prone to regard the hostility to the Spaniards on the part of the California Indians as a matter of small consequence, since no disaster in fact ever happened...On the other hand the San Diego plot involved untold thousands of Indians, being virtually a national uprising, and owing to the distance from New Spain to and the extreme difficulty of maintaining communications a victory for the Indians would have ended Spanish settlement in Alta California." As it turned out, "...the position of the Spaniards was strengthened by the San Diego outbreak, for the Indians felt from that time forth that it was impossible to throw out their conquerors." See also Mission Puerto de Purísima Concepción and Mission San Pedro y San Pablo de Bicuñer regarding the Yuma 'massacres' of 1781.
- Engelhardt: Not all of the native cultures responded with hostility to the Spaniards' presence; Engelhardt portrayed the natives at Mission San Juan Capistrano (dubbed the " Juaneño" by the missionaries), where there was never any instance of unrest, as being "uncommonly friendly and docile." The Rev. Juan Crespí, who accompanied the 1769 expedition, described the first encounter with the area's inhabitants: "They came unarmed and with a gentleness which has no name they brought their poor seeds to us as gifts...The locality itself and the docility of the Indians invited the establishment of a Mission for them."
- Chapman: "Over the hills of the Coast Range, in the valleys of the Sacramento and San Joaquin, north of San Francisco Bay, and in the Sierra Nevadas of the south there were untold thousands whom the mission system never reached...they were as if in a world apart from the narrow strip of coast which was all there was of the Spanish California."
- Bennett: "The system had singularly failed in its purposes. It was the design of the Spanish government to have the missions educate, elevate, civilize, the Indians into citizens. When this was done, citizenship should be extended them and the missions should be dissolved as having served their purpose...[instead] the priests returned them projects of conversion, schemes of faith, which they never comprehended...He [the Indian] became a slave; the mission was a plantation; the friar was a taskmaster."
- Bennett: "In 1825 Governor Argüello wrote that the slavery of the Indians at the missions was bestial... Governor Figueroa declared that the missions were 'entrenchments of monastic despotism'..."
- Bennett: "It cannot be said that the mission system made the Indians more able to sustain themselves in civilization than it had found them...Upon the whole it may be said that this mission experiment was a failure."
- Lippy: "A matter of debate in reflecting on the role of Spanish missions concerns the degree to which the Spanish colonial regimes regarded the work of the priests as a legitimate religious enterprise and the degree to which it was viewed as a 'frontier institution,' part of a colonial defense program. That is, were Spanish motives based on a desire to promote conversion or on a desire to have religious missions serve as a buffer to protect the main colonial settlements and an aid in controlling the Indians?"
- Bennett: The missions in effect served as "...the citadels of the theocracy which was planted in California by Spain, under which its wild inhabitants were subjected, which stood as their guardians, civil and religious, and whose duty it was to elevate them and make them acceptable as citizens and Spanish subjects...it remained for the Spanish priests to undertake to preserve the Indian and seek to make his existence compatible with higher civilization."
- Bean: "Serra's decision to plant tobacco at the missions was prompted by the fact that from San Diego to Monterey the natives invariably begged him for Spanish tobacco."
- The Spanish claim to the Pacific Northwest dated back to a 1493 papal bull ( Inter caetera) and rights contained in the 1494 Treaty of Tordesillas; in these two formal acts, Spain gave itself the exclusive right to colonize all of the Western Hemisphere (excluding Brazil), including all of the west coast of North America.
- The term Alta California as applies to the mission chain founded by Serra refers specifically to the modern-day United States State of California.
- Leffingwell: The Rev. Antonio de la Ascensión, a Carmelite who visited San Diego with Vizcaíno's 1602 expedition, "surveyed the area and concluded that the land was fertile, the fish plentiful, and gold abundant." Ascensión was convinced that California's potential wealth and strategic location merited colonization, and in 1620 recommended in a letter to Madrid that missions be established in the region, a venture that would involve military as well as religious personnel.
- Chapman: "It is usually stated that the Spanish court at Madrid received reports about Russian aggression in the Pacific northwest, and sent orders to meet them by the occupation of Alta California, wherefore the expeditions of 1769 were made. This view contains only a smattering of the truth. It is evident from [José de] Gálvez's correspondence of 1768 that he and [Carlos Francisco de] Croix had discussed the advisability of an immediate expedition to Monterey, long before any word came from Spain about the Russian activities."
- Bennett: California had been visited a number of times since Cabrillo's discovery in 1542, including notable expeditions led by Englishmen Francis Drake in 1579 and Thomas Cavendish in 1587, and later by Woodes Rogers (1710), George Shelvocke (1719), James Cook (1778), and finally George Vancouver in 1792. Spanish explorer Sebastián Vizcaíno made landfall in San Diego Bay in 1602, and the famed conquistador Hernán Cortés explored the California Gulf Coast in 1535.
- Bennett: "Other pioneers have blazed the way for civilization by the torch and the bullet, and the red man has disappeared before them; but it remained for the Spanish priests to undertake to preserve the Indian and seek to make his existence compatible with a higher civilization."
- Kroeber: "In the matter of population, too, the effect of Caucasian contact cannot be wholly slighted, since all statistics date from a late period. The disintegration of Native numbers and Native culture have proceeded hand in hand, but in very different rations according to locality. The determination of population strength before the arrival of whites is, on the other hand, of considerable significance toward the understanding of Indian culture, on account of the close relations which are manifest between type of culture and density of population."
- Chapman, p. 383: "...there may have been about 133,000 [Native inhabitants] in what is now the state as a whole, and 70,000 in or near the conquered area. The missions included only the Indians of given localities, though it is true that they were situated on the best lands and in the most populous centres. Even in the vicinity of the missions, there were some unconverted groups, however." See Population of Native California.
- Bennett: Due to the isolation of the Baja California missions, the decree for expulsion did not arrive in June 1767, as it did in the rest of New Spain, but was delayed until the new governor, Portolà, arrived with the news on November 30. Jesuits from the operating missions gathered in Loreto, whereupon they left for exile on February 3, 1768.
- Engelhardt: Today, the site (located at Marine Corps Base Camp Pendleton in San Diego County) is in Los Christianitos ("The Little Christians") Canyon, and is designated as La Christiana, California Historical Landmark #562.
- Kroeber: "Some of the missionaries evidently regarded compliance with the instructions of the questionnaire as an official requirement which was perfunctorily performed. In many cases no answers were given various questions at certain of the missions."
- There is a great contrast between the legacy of Bouchard in Argentina versus his reputation in the United States. In Buenos Aires, Bouchard is honored as a brave patriot, while in California he is most often remembered as a pirate, and not a privateer. See Hippolyte Bouchard.
- Hittell: "...it [Mission San Francisco Solano] was quite frequently known as the mission of Sonoma. From the beginning it was rather a military than a religious establishment—a sort of outpost or barrier, first against the Russians and afterwards against the Americans; but still a large adobe church was built and Indians were baptized."
- Hittell: "By that time, it was found that the Russians were not such undesirable neighbors as in 1817 it was thought they might become...the Russian scare, for the time being at least was over; and as for the old enthusiasm for new spiritual conquests, there was none left."
- Bennett 1897b, p. 154: "Up to 1817 the 'spiritual conquest' of California had been confined to the territory south of San Francisco Bay. And this, it might be said, was as far as possible under the mission system. There had been a few years prior to that time certain alarming incursions of the Russians, which distressed Spain, and it was ordered that missions be started across the bay."
- Chapman: "...the Russians and the English were by no means the only foreign peoples who threatened Spain's domination of the Pacific coast. The Indians and the Chinese had their opportunity before Spain appeared upon the scene. The Japanese were at one time a potential concern, and the Portuguese and Dutch voyagers occasionally gave Spain concern. The French for many years were the most dangerous enemy of all, but with their disappearance from North America in 1763, as a result of their defeat in the Seven Years' War, they were no longer a menace. The people of the United States were eventually to become the most powerful outstanding element."
- Robinson: The cortes (legislature) of New Spain issued a decree in 1813 for at least partial secularization that affected all missions in America and was to apply to all outposts that had operated for ten years or more; however, the decree was never enforced in California.
- Catholic historian Zephyrin Engelhardt referred to Echeandía as "...an avowed enemy of the religious orders."
- Settlers made numerous false claims to diminish the natives' abilities: "The Indians are by nature slovenly and indolent," stated one newcomer. "They have unfeelingly appropriated the region," claimed another.
- Yenne: In 1833, Figueroa replaced the Spanish-born Franciscan padres at all of the settlements north of Mission San Antonio de Padua with Mexican-born Franciscan priests from the College of Guadalupe de Zacatecas. In response, Father-Presidente Narciso Durán transferred the headquarters of the Alta California Mission System to Mission Santa Bárbara, where it remained until 1846.
- Hittell: "Boscana himself and his brother missionaries were men of narrow range of thought, continually seeking among the superstitions of the natives for resemblances of the true faith and ever ready to catch at the slightest hints and magnify them into complicated dogmas corresponding afar of those which they themselves taught."
- Bennett: "...Junípero had in California insisted that the military should be subservient to the priests, that the conquest was spiritual, not temporal..."
- Engelhardt: "Recruited from the scum of society in Mexico, frequently convicts and jailbirds, it is not surprising that the mission guards, leather-jacket soldiers, as they were called, should be guilty of...crimes at nearly all the Missions...In truth, the guards counted among the worst obstacles to missionary progress. The wonder is, that the missionaries nevertheless succeeded so well in attracting converts."
- Morrison: That the buildings in the California mission chain are in large part intact is due in no small measure to their relatively recent construction; Mission San Diego de Alcalá was founded more than two centuries after the establishment of the Mission of Nombre de Dios in St. Augustine, Florida in 1565 and 170 years following the founding of Mission San Gabriel del Yunque in present-day Santa Fe, New Mexico in 1598.
- Thompson: In the words of Charles Lummis, the historic structures "...were falling to ruin with frightful rapidity, their roofs being breached or gone, the adobe walls melting under the winter rains."
- Saunders and Chase, p. 65
- Kelsey, p. 18
- "The Jesuit Republic of South America | VQR Online". www.vqronline.org. Retrieved 2020-07-10.
- Duggan, Marie Christine (2016). "With and Without an Empire: Financing for California Missions Before and After 1810". Pacific Historical Review. 85 (1): 23–71. doi: 10.1525/phr.2016.85.1.23.
- Robinson, p. 25
- Capron, p. 3
- Early California ... Russian Presence Archived 2016-10-13 at the Wayback Machine Oakland Museum of California website, downloaded Sept. 10, 2016
- Young, p. 17
- Bancroft, pp. 33–34
- Ruscin, p. 61
- Chapman, p. 418: Chapman does not consider the sub-missions (asistencias) that make up the inland chain in this regard.
- Engelhardt 1920, pp. 350–351
- Ruscin, p. 12
- Paddison, p. 48
- Chapman, pp. 310–311
- Engelhardt 1922, p. 12
- Rawls, pp. 14–16
- Leffingwell, pp. 19, 132
- Bennett 1897a, p. 20: Priests were paid an annual salary of $400.
- Engelhardt 1908, pp. 3–18
- Carey McWilliams. Southern California: An Island on the Land. Archived 2015-10-11 at the Wayback Machine.
- Duggan, Marie Christine. "Beyond Slavery: The Institutional Status of Mission Indians". In Burns and Johnson (eds.), Franciscans and American Indians in Pan-Borderlands Perspective: Adaptation, Negotiation, and Resistance. Oceanside, CA: AAFH, 2017. Archived from the original on 2018-04-27. Retrieved 2018-03-05.
- McWilliams, Carey. "The Indian in the Closet". Archived from the original on 25 May 2017. Retrieved 7 March 2017.
- Chapman, p. 383
- Paddison, p. 130
- Newcomb, p. viii
- Krell, p. 316
- Engelhardt 1922, p. 30
- Bennett 1897b, p. 156
- Bennett 1897b, p. 158
- Bennett 1897b, p. 160: "The fathers claimed all the land in California in trust for the Indians, yet the Indians received no visible benefit from the trust."
- Lippy, p. 47
- Bennett 1897a, p. 10
- Paddison, p. xiv
- A. Thompson, p. 341
- Bean and Lawson, p. 37
- A fanega is equal to 100 pounds.
- Krell, p. 316: As of December 31, 1832.
- "California Native Grasslands Association – Home". Archived from the original on 2009-08-28.
- Engelhardt 1922, p. 211
- "Santa Barbara – Mission Historical Park". Archived from the original on 2017-09-05.
- Leffingwell, p. 10
- Winship. pp. 32–4, 37
- Flint, R. (Winter 2005). "What They Never Told You about the Coronado Expedition". Kiva. 71 (2): 203–217. JSTOR 30246725.
- Kelsey, Harry (1986). Juan Rodríguez Cabrillo. San Marino: The Huntington Library.
- Morrison, p. 214
- "Drake Claims California for England". History.com. Archived from the original on 24 September 2015. Retrieved 11 December 2015.
- Kelsey, Harry. "The Queen's Pirate". The New York Times. Archived from the original on 25 March 2016. Retrieved 11 December 2015.
- Bancroft, Hubert H.; History of California Vol. XXII 1846–1848, p. 201, The History Company Publishers, San Francisco, 1882 (Google eBook)
- Frost, Orcutt William, ed. (2003), Bering: The Russian Discovery of America, New Haven, Connecticut: Yale University Press, ISBN 978-0-300-10059-4
- Chapman, p. 216
- Bennett 1897a, pp. 11–12
- Rawls, p. 3
- "Old Mission Santa Inés:" Clerical historian Maynard Geiger, "This was to be a cooperative effort, imperial in origin, protective in purpose, but primarily spiritual in execution."
- Chapman, Charles E. PhD (1921). A History of California; The Spanish Period. New York: The MacMillan Company. ISBN 978-1148507927.
- Orfalea, Gregory. "Hungry for Souls Was Junípero Serra a Saint?". Commonweal magazine. Archived from the original on 22 December 2015. Retrieved 11 December 2015.
- Rawls, p. 6
- Kroeber 1925, p. vi.
- Bennett, p. 15
- Bennett 1897a, p. 16
- James, p. 11
- Engelhardt 1922, p. 258
- Yenne, p. 10
- Leffingwell, p. 25
- "History". COUNTY OF LOS ANGELES. 2016-12-02. Retrieved 2020-10-12.
- Engelhardt 1920, p. 76
- Robinson, p. 28
- Bennett 1897a, p. 13
- Rawls, p. 106
- Milliken, pp. 172–173, 193
- Kroeber, p. 1
- Kroeber, p. 2
- Kelsey, p. 4
- Nordlander, p. 10
- Jones, p. 170
- Young, p. 102
- Hittell, p. 499
- Beebe, Rose; Senkewicz, Robert (2001). Lands of Promise and Despair: Chronicles of Early California, 1535-1846. Santa Clara: Santa Clara University. ISBN 1-890771-48-1.
- Chapman, pp. 254–255
- Bacich, Damian. "The Zacatecan Franciscans in Alta California: A Misunderstood Legacy." Boletín: Journal of the California Mission Studies Association Archived 2015-02-22 at the Wayback Machine, Vol. 28, Nos. 1&2, 2011–12
- Robinson, p. 29
- Engelhardt 1922, p. 80
- Bancroft, vol. i, pp. 100–101: The motives behind the issuance of Echeandía's premature decree may have had more to do with his desire to appease "...some prominent Californians who had already had their eyes on the mission lands..." than with concern for the welfare of the natives.
- Stern and Miller, pp. 51–52
- Forbes, p. 201: In 1831, the number of Indians under missionary control in all of Upper California stood at 18,683; garrison soldiers, free settlers, and "other classes" totaled 4,342.
- Kelsey, p. 21
- Bancroft, vol. iii, pp. 322; 626
- Engelhardt 1922, p. 223
- Yenne, pp. 18–19
- Engelhardt 1922, p. 114
- Yenne, pp. 83, 93
- Robinson, p. 42
- Cook, p. 200
- James, p. 215
- Engelhardt 1922, p. 248
- Bancroft, H. H. (1886). The Works of Hubert Howe Bancroft: History of California, vol. IV, 1840–1845, pp. 73–74. San Francisco, Calif.: A.L. Bancroft
- Robinson, p. 14
- Robinson, p. 100
- Robinson, pp. 31–32: The area shown is that stated in the Corrected Reports of Spanish and Mexican Grants in California Complete to February 25, 1886 as a supplement to the Official Report of 1883–1884. Patents for each mission were issued to Archbishop J.S. Alemany based on his claim filed with the Public Land Commission on February 19, 1853.
- Rawls, pp. 112–113
- McKanna, p. 15; also, per Hittell, p. 753
- McCormack, Brian T. "Conjugal Violence, Sex, Sin and Murder in the Mission Communities of Alta California." Journal of the History OF Sexuality 16.3 (July, 2007): 391–415. Project MUSE [Johns Hopkins UP]. Web. 12 Feb. 2017.
- Henderson, "Church and State: 1821–1910", p. 254.
- Urbanism and Empire in the Far West, 1840–1890. By Eugene P. Moehring. 2004. University of Nevada Press. Pg. 3.
- Indians, Franciscans, and Spanish Colonization: The Impact of the Mission System on California Indians. by Robert H. Jackson. 1996. University of NM Press.
- A Place in Time: The Story of the Mission de la Purisima Conceptión Archived 2016-06-29 at the Wayback Machine. California Parks Service. Vimeo video presentation.
- Ruscin, p. 196
- Yenne, p. 186
- Engelhardt 1920, p. 228
- Leffingwell, p. 22
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 6,465; garrison soldiers totaled 796.
- Leffingwell, p. 68
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 3,292; garrison soldiers totaled 613; the population of El Pueblo de los Ángeles numbered 1,388.
- Leffingwell, p. 119
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 3,305; garrison soldiers totaled 708; the population of Villa Branciforte numbered 130.
- Leffingwell, p. 154
- Forbes, p. 202: In 1831, the number of Indians under missionary control stood at 5,433; garrison soldiers totaled 371; the population of El Pueblo de San José numbered 524.
- Leffingwell, p. 170
- Paddison, p. 23
- Bennett 1897a, p. 20
- Engelhardt 1922, pp. 8–10
- Young, p. 18
- Stern and Miller, p. 85
- Stern and Neuerburg, p. 95
- Thompson, Mark, pp. 185–186
- "Past Campaigns"
- Stern and Miller, p. 60
- "California Missions Preservation Act" (PDF). gpo.gov. Archived (PDF) from the original on 26 February 2005. Retrieved 27 April 2018.
- Coronado and Ignatin
- Bancroft, Hubert Howe (1886). History of California, Volume II (1801–1894). The History Company, San Francisco, California.
- Bean, Lowell John & Harry Lawton (1976). Native Californians: A Theoretical Perspective. Ballena Press, Banning, California.
- Bennett, John E. (January 1897a). "Should the California Missions Be Preserved? – Part I". Overland Monthly. XXIX (169): 9–24.
- Bennett, John E. (February 1897b). "Should the California Missions Be Preserved? – Part II". Overland Monthly. XXIX (170): 150–161.
- Capron, E.S. (1854). History of California from its Discovery to the Present Time. John P. Jewett & Company, Cleveland, Ohio.
- Chapman, Charles E. (1921). A History of California; The Spanish Period. The MacMillan Company, New York.
- Cook, Sherburne F., PhD (1976). The Population of the California Indians, 1769–1970. University of California Press, Berkeley, California. ISBN 978-0-520-02923-1.
- Coronado, Michael; Heather Ignatin (June 5, 2006). "Plan would open Prop. 40 funds to missions". The Orange County Register. Retrieved 2008-03-08.
- Engelhardt, Zephyrin, O.F.M. (1908). The Missions and Missionaries of California, Volume One. The James H. Barry Co., San Francisco, California.
- Engelhardt, Zephyrin, O.F.M. (1920). San Diego Mission. James H. Barry Company, San Francisco, California.
- Engelhardt, Zephyrin, O.F.M. (1922). San Juan Capistrano Mission. Standard Printing Co., Los Angeles, California.
- Forbes, Alexander (1839). California: A History of Upper and Lower California. Smith, Elder and Co., Cornhill, London. ISBN 978-0-405-04972-9.
- Geiger, Maynard J., O.F.M., PhD (1969). Franciscan Missionaries in Hispanic California, 1769–1848: A Biographical Dictionary. Huntington Library, San Marino, California.
- Harley, R. Bruce (1997–2003). "The San Bernardino Asistencias". California Mission Studies Association. Archived from the original on 2006-06-13. Retrieved 2006-11-21.
- Hittell, Theodore H. (1898). History of California, Volume I. N.J. Stone & Company, San Francisco, California.
- James, George Wharton (1913). The Old Franciscan Missions of California. Little, Brown, and Co. Inc., Boston, Massachusetts. ISBN 978-0-89341-321-7.
- Jones, Roger W. (1997). California from the Conquistadores to the Legends of Laguna. Rockledge Enterprises, Laguna Hills, California.
- Jones, Terry L.; Kathryn A. Klar (2005). "Linguistic Evidence for a Prehistoric Polynesia-Southern California Contact Event". Anthropological Linguistics (47): 369–400.
- Jones, Terry L. and Kathryn A. Klar (eds.) (2007). California Prehistory: Colonization, Culture, and Complexity. AltaMira Press, Lanham, Maryland. ISBN 978-0-7591-0872-1.
- Kelsey, H. (1993). Mission San Juan Capistrano: A Pocket History. Interdisciplinary Research, Inc., Altadena, California. ISBN 978-0-9785881-0-6.
- Krell, Dorothy (ed.) (1979). The California Missions: A Pictorial History. Sunset Publishing Corporation, Menlo Park, California. ISBN 978-0-376-05172-1.
- Kroeber, Alfred L. (1908). "A Mission Record of the California Indians". University of California Publications in American Archaeology and Ethnology. 8 (1): 1–27.
- Kroeber, Alfred L. (1925). Handbook of the Indians of California. Dover Publications, Inc., New York. ISBN 978-0-486-23368-0.
- Leffingwell, Randy (2005). California Missions and Presidios: The History & Beauty of the Spanish Missions. Voyageur Press, Inc., Stillwater, Minnesota. ISBN 978-0-89658-492-1.
- Lippy, Charles H. (1985). Bibliography of Religion in the South. Mercer University Press, Macon, Georgia. ISBN 978-0-86554-161-0.
- Markham, Edwin (1914). California the Wonderful: Her Romantic History, Her Picturesque People, Her Wild Shores... Hearst's International Library Company, Inc., New York.
- Margolin, Malcolm (1993). The Way We Lived: California Indian Stories, Songs & Remembrances. Heyday Books, Berkeley, California. ISBN 978-0-930588-55-7.
- McKanna, Clare Vernon (2002). Race and Homicide in Nineteenth-Century California. University of Nevada Press, Reno, Nevada. ISBN 978-0-87417-515-8.
- Milliken, Randall (1995). A Time of Little Choice: The Disintegration of Tribal Culture in the San Francisco Bay Area 1769–1910. Ballena Press, Menlo Park, California. ISBN 978-0-87919-132-0.
- Morrison, Hugh (1987). Early American Architecture: From the First Colonial Settlements to the National Period. Dover Publications, New York. ISBN 978-0-486-25492-0.
- Newcomb, Rexford (1973). The Franciscan Mission Architecture of Alta California. Dover Publications, Inc., New York. ISBN 978-0-486-21740-6.
- Nordlander, David J. (1994). For God & Tsar: A Brief History of Russian America 1741–1867. Alaska Natural History Association, Anchorage, AK. ISBN 978-0-930931-15-5.
- Oakley, Kenneth P. (September 1963). "Relative Dating of Arlington Springs Man". Science. 141 (3586): 1172. Bibcode: 1963Sci...141.1172O. doi: 10.1126/science.141.3586.1172. PMID 14043359.
- Paddison, Joshua (ed.) (1999). A World Transformed: Firsthand Accounts of California Before the Gold Rush. Heyday Books, Berkeley, California. ISBN 978-1-890771-13-3.
- "Past Campaigns". California Mission Studies Association. 2000. Archived from the original on August 13, 2007. Retrieved 2007-07-08.
- "The Pious Fund of the Californias". Catholic Encyclopedia. 1911. Archived from the original on June 30, 2007. Retrieved 2007-07-08.
- "Pre-Mission History". Old Mission Santa Inés. 2007. Archived from the original on August 26, 2007. Retrieved 2007-08-26.
- Rawls, James J. (1984). Indians of California: The Changing Image. University of Oklahoma Press, Norman, Oklahoma. ISBN 978-0-8061-2020-1.
- Riesenberg, Felix (1962). The Golden Road: The Story of California's Spanish Mission Trail. McGraw-Hill, New York. ISBN 978-0-07-052740-9.
- Robinson, W.W. (1948). Land in California. University of California Press, Berkeley and Los Angeles, California. ISBN 978-0-520-03875-2.
- Ruscin, Terry (1999). Mission Memoirs. Sunbelt Publications, San Diego, California. ISBN 978-0-932653-30-7.
- Saunders, Charles Francis and J. Smeaton Chase (1915). The California Padres and Their Missions. Houghton Mifflin, Boston and New York. ISBN 978-0-910118-53-8.
- Stern, Jean & Gerald J. Miller (1995). Romance of the Bells: The California Missions in Art. The Irvine Museum, Irvine, California. ISBN 978-0-9635468-5-2.
- Thompson, Anthony W., Robert J. Church, and Bruce H. Jones (2000). Pacific Fruit Express. Signature Press, Wilton, California. ISBN 978-1-930013-03-2.
- Thompson, Mark (2001). American Character: The Curious Life of Charles Fletcher Lummis and the Rediscovery of the Southwest. Arcade Publishing, New York. ISBN 978-1-55970-550-9.
- Vancouver, George (1801). A Voyage of Discovery to the North Pacific Ocean and Round the World, Volume III. Printed for John Stockdale, Piccadilly, London.
- Yenne, Bill (2004). The Missions of California. Advantage Publishers Group, San Diego, California. ISBN 978-1-59223-319-9.
- Young, S. & Levick, M. (1988). The Missions of California. Chronicle Books LLC, San Francisco, California. ISBN 978-0-8118-1938-1.
- Baer, Kurt (1958). Architecture of the California Missions. University of California Press, Los Angeles, California.
- Berger, John A. (1941). The Franciscan Missions of California. G.P. Putnam's Sons, New York.
- Carillo, J. M., O.F.M. (1967). The Story of Mission San Antonio de Padua. Paisano Press, Inc., Balboa Island, California.
- Camphouse, M. (1974). Guidebook to the Missions of California. Anderson, Ritchie & Simon, Los Angeles, California. ISBN 978-0-378-03792-1.
- Costo, Rupert and Jeannette Henry Costo (1987). The Missions of California: A Legacy of Genocide. Indian Historian Press. OCLC 851338670.
- Crespí, Juan: A Description of Distant Roads: Original Journals of the First Expedition into California, 1796–1770, edited and translated by Alan K. Brown, San Diego State University Press, 2001, ISBN 978-1-879691-64-3
- Crump, S. (1975). California's Spanish Missions: Their Yesterdays and Todays. Trans-Anglo Books, Del Mar, California. ISBN 978-0-87046-028-9.
- Drager, K. & Fracchia, C. (1997). The Golden Dream: California from Gold Rush to Statehood. Graphic Arts Center Publishing Company, Portland, Oregon. ISBN 978-1-55868-312-9.
- Johnson, P., ed. (1964). The California Missions. Lane Book Company, Menlo Park, California.
- Moorhead, Max L. (1991). The Presidio: Bastion Of The Spanish Borderlands. University of Oklahoma Press, Norman, Oklahoma. ISBN 978-0-8061-2317-2.
- Rawls, J. & Bean, W. (1997). California: An Interpretive History. McGraw-Hill, New York. ISBN 978-0-07-052411-8.
- Robinson, W.W. (1953). Panorama: A Picture History of Southern California. Anderson, Ritchie & Simon, Los Angeles, California.
- Weitze, Karen J. (1984). California's Mission Revival. Hennessy & Ingalls, Inc., Los Angeles, California. ISBN 978-0-912158-89-1.
- Wright, Ralph B., Ed. (1984). California's Missions. Lowman Publishing Company, Arroyo Grande, California.
- Early California Population Project (ECPP) The Huntington Library, 2006. Provides public access to all the information contained in California's historic mission registers.
- California Missions article at the Catholic Encyclopedia
- The California Missions, 2001.
- Matrimonial Investigation records of the San Gabriel Mission Claremont Colleges Digital Library, 2008, 169 records digitized and searchable by priest name or by the names of the couple requesting marriage.
- Junipero Serra, the Vatican, & Enslavement Theology Preview of Fogel, Daniel. ISM Press Books. Offers a critical perspective on the missions' impact on California's Indians.
- MissionTour Tom Simondi, 2001–2005.
- The Old Franciscan Missions of California James, George Wharton, 1913. eText at Project Gutenberg.
- The San Diego Founders Trail 2001–2008 website.
- Trails and Roads: El Camino Real Faigin, Daniel P. California Highways, 1996–2004
- Almanac: California Missions GAzis-SAx, Joel, 1999.
Wikimedia Commons has media related to California missions.
- The California Frontier Project: Dedicated to early California, including the Spanish missions
- California Mission Studies Association
- California's Spanish Missions
- The California Missions Trail, California Department of Parks and Recreation
- Library of Congress: American Memory Project: Early California History, The Missions
- Tricia Anne Weber: The Spanish Missions of California
- Album of Views of the Missions of California, Souvenir Publishing Company, San Francisco, Los Angeles, 1890s.
- The Missions of California, by Eugene Leslie Smyth, Chicago: Alexander Belford & Co., 1899.
- California Historical Society
- California Mission Visitors Guide
- California Missions: A Journey Along the El Camino Real (exhibit at The California Museum)
- National Register of Historic Places: Early History of the California Coast: List of Sites
- California Mission Sketches by Henry Miller, 1856 and Finding Aid to the Documents relating to Missions of the Californias : typescript, 1768–1802 at The Bancroft Library
- Howser, Huell (December 8, 2000). "Art of the Missions (110)". California Missions. Chapman University Huell Howser Archive. | https://earthspot.org/geo/?search=California_missions | 21 |
The long and short scales are two of several naming systems for integer powers of ten which use some of the same terms for different magnitudes.
For whole numbers smaller than 1,000,000,000 (10^9), such as one thousand or one million, the two scales are identical. For larger numbers, starting with 10^9, the two systems differ. For identical names, the long scale proceeds by powers of one million, whereas the short scale proceeds by powers of one thousand. For example, "one billion" means one thousand millions in the short scale, while it means one million millions in the long scale. The long scale system introduces new terms for the intervening values, typically replacing the word ending -ion with -iard.
In both short and long scale naming, names are given at each multiplication step for increments of three in the base-10 exponent, i.e. for each integer n in the sequence of multipliers 10^(3n). For certain multipliers, including those for all numbers smaller than 10^9, both systems use the same names. The differences arise from the assignment of identical names to specific values of n for numbers starting with 10^9, for which n = 3. In the short scale system, the identical names fall at n = 3, 4, 5, ..., while the long scale places them at n = 4, 6, 8, etc.
In the long scale, billion means one million millions (10^12) and trillion means one million billions (10^18), and so on. Therefore, a long scale n-illion equals 10^(6n). In some languages, the long scale introduces new names for the interleaving multipliers, replacing the ending -ion with -iard; for example, the next multiplier after million is milliard, and after billion it is billiard. Hence, an n-iard equals 10^(6n+3).
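The two naming rules can be stated compactly as formulas. The short Python sketch below is illustrative only (the function names are invented for this example); it computes the value denoted by the n-th Latin prefix, where n = 1 corresponds to million, n = 2 to billion, n = 3 to trillion, and so on:

```python
def short_scale_illion(n: int) -> int:
    """Short scale: the n-th '-illion' equals 10**(3n + 3), e.g. billion (n=2) = 10**9."""
    return 10 ** (3 * n + 3)


def long_scale_illion(n: int) -> int:
    """Long scale: the n-th '-illion' equals 10**(6n), e.g. billion (n=2) = 10**12."""
    return 10 ** (6 * n)


def long_scale_illiard(n: int) -> int:
    """Long scale: the n-th '-iard' equals 10**(6n + 3), e.g. billiard (n=2) = 10**15."""
    return 10 ** (6 * n + 3)


for n, name in enumerate(["million", "billion", "trillion", "quadrillion"], start=1):
    print(f"{name:>12}: short scale 10^{3 * n + 3}, long scale 10^{6 * n}")
```

For n = 2 the short scale yields 10^9 and the long scale 10^12, which is precisely the "one billion" discrepancy described above.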
Countries that use the long scale include most countries in continental Europe and most French-speaking, German-speaking, Spanish-speaking, and Portuguese-speaking countries (except Brazil).
Number names are rendered in the language of the country, but are similar due to shared etymology. Some languages, particularly in East Asia and South Asia, have large number naming systems that are different from both the long and short scales, for example the Indian numbering system.
For most of the 19th and 20th centuries, the United Kingdom largely used the long scale, whereas the United States used the short scale, so that the two systems were often referred to as British and American in the English language. After several decades of increasing informal British usage of the short scale, in 1974 the government of the UK adopted it, and it is used for all official purposes. With very few exceptions, the British usage and American usage are now identical.
To avoid confusion resulting from the coexistence of short and long scale terms in any language, the International System of Units (SI) recommends using the metric prefix to indicate orders of magnitude, but it is only relevant to scientific applications, and not (for example) to finance. Unlike words like billion and million, metric prefixes keep the same meaning regardless of the country and the language.
The relationship between the numeric values and the corresponding names in the two scales can be described as:
| Value (scientific notation) | Metric prefix | Value in positional notation | Short scale name | Short scale logic | Long scale name | Long scale logic |
|---|---|---|---|---|---|---|
| 10^6 | mega (M) | 1,000,000 | million | 1,000 × 1,000^1 | million | 1,000,000^1 |
| 10^9 | giga (G) | 1,000,000,000 | billion | 1,000 × 1,000^2 | milliard (thousand million) | 1,000 × 1,000,000^1 |
| 10^12 | tera (T) | 1,000,000,000,000 | trillion | 1,000 × 1,000^3 | billion | 1,000,000^2 |
| 10^15 | peta (P) | 1,000,000,000,000,000 | quadrillion | 1,000 × 1,000^4 | billiard (thousand billion) | 1,000 × 1,000,000^2 |
| 10^18 | exa (E) | 1,000,000,000,000,000,000 | quintillion | 1,000 × 1,000^5 | trillion | 1,000,000^3 |
| 10^21 | zetta (Z) | 1,000,000,000,000,000,000,000 | sextillion | 1,000 × 1,000^6 | trilliard (thousand trillion) | 1,000 × 1,000,000^3 |
| 10^24 | yotta (Y) | 1,000,000,000,000,000,000,000,000 | septillion | 1,000 × 1,000^7 | quadrillion | 1,000,000^4 |
The relationship between the names and the corresponding numeric values in the two scales can be described as:
| Name | Short scale value | Short scale prefix | Short scale logic | Long scale value | Long scale prefix | Long scale logic |
|---|---|---|---|---|---|---|
| million | 10^6 | mega (M) | 1,000 × 1,000^1 | 10^6 | mega (M) | 1,000,000^1 |
| billion | 10^9 | giga (G) | 1,000 × 1,000^2 | 10^12 | tera (T) | 1,000,000^2 |
| trillion | 10^12 | tera (T) | 1,000 × 1,000^3 | 10^18 | exa (E) | 1,000,000^3 |
| quadrillion | 10^15 | peta (P) | 1,000 × 1,000^4 | 10^24 | yotta (Y) | 1,000,000^4 |
| etc. | To next named order of magnitude: multiply by 1,000 | | | To next named order of magnitude: multiply by 1,000,000 | | |
The root mil in million does not refer to the numeral, 1. The word, million, derives from the Old French, milion, from the earlier Old Italian, milione, an intensification of the Latin word, mille, a thousand. That is, a million is a big thousand, much as a great gross is a dozen gross or 12 × 144 = 1728.
The word milliard, or its translation, is found in many European languages and is used in those languages for 10^9. However, it is not found in American English, which uses billion, and not used in British English, which preferred to use thousand million before the current usage of billion. The financial term, yard, which derives from milliard, is used on financial markets, as, unlike the term, billion, it is internationally unambiguous and phonetically distinct from million. Likewise, many long scale countries use the word billiard (or similar) for one thousand long scale billions (i.e., 10^15), and the word trilliard (or similar) for one thousand long scale trillions (i.e., 10^21), etc.
The existence of the different scales means that care must be taken when comparing large numbers between languages or countries, or when interpreting old documents in countries where the dominant scale has changed over time. For example, British English, French, and Italian historical documents can refer to either the short or long scale, depending on the date of the document, since each of the three countries has used both systems at various times in its history. Today, the United Kingdom officially uses the short scale, but France and Italy use the long scale.
The pre-1974 former British English word billion, post-1961 current French word billion, post-1994 current Italian word bilione, German Billion; Dutch biljoen; Swedish biljon; Finnish biljoona; Danish billion; Polish bilion, Spanish billón; Slovenian bilijon and the European Portuguese word bilião (with a different spelling to the Brazilian Portuguese variant, but in Brazil referring to short scale) all refer to 10^12, being long-scale terms. Therefore, each of these words translates to the American English or post-1974 British English word: trillion (10^12 in the short scale), and not billion (10^9 in the short scale).
On the other hand, the pre-1961 former French word billion, pre-1994 former Italian word bilione, Brazilian Portuguese word bilhão and the Welsh word biliwn all refer to 10^9, being short scale terms. Each of these words translates to the American English or post-1974 British English word billion (10^9 in the short scale).
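To illustrate the pitfall, here is a small lookup table in Python. It is only a sketch added for this article; the values are the ones stated in the two paragraphs above, not an authoritative dataset.

```python
# Assumed, illustrative values drawn from the statements above.
WORD_VALUE = {
    ("American English", "billion"): 10**9,             # short scale
    ("British English, post-1974", "billion"): 10**9,   # short scale
    ("French, post-1961", "billion"): 10**12,           # long scale
    ("German", "Billion"): 10**12,                       # long scale
    ("Brazilian Portuguese", "bilhão"): 10**9,           # short scale
    ("European Portuguese", "bilião"): 10**12,           # long scale
}

def value_of(language: str, word: str) -> int:
    """Look up what a 'billion'-like word denotes in a given language and period."""
    return WORD_VALUE[(language, word)]

# The same written word can differ by a factor of one thousand between the two scales:
ratio = value_of("French, post-1961", "billion") // value_of("American English", "billion")
print(ratio)  # 1000
```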
The term billion originally meant 10^12 when introduced.
- In long scale countries, milliard was defined to its current value of 10^9, leaving billion at its original 10^12 value and so on for the larger numbers. Some of these countries, but not all, introduced new words billiard, trilliard, etc. as intermediate terms.
- In some short scale countries, milliard was defined to 10^9 and billion dropped altogether, with trillion redefined down to 10^12 and so on for the larger numbers.
- In many short scale countries, milliard was dropped altogether and billion was redefined down to 10^9, adjusting downwards the value of trillion and all the larger numbers.
|13th century||The word million was not used in any language before the 13th century. Maximus Planudes (c. 1260–1305) was among the first recorded users.|
|Late 14th century||William Langland's Piers Plowman (written c. 1360–1387 in Middle English), with
|1475||French mathematician Jehan Adam, writing in Middle French, recorded the words bymillion and trimillion as meaning 10^12 and 10^18 respectively in a manuscript Traicté en arismetique pour la practique par gectouers, now held in the Bibliothèque Sainte-Geneviève in Paris.
|1484||Nicolas Chuquet, in his article Le Triparty en la Science des Nombres par Maistre Nicolas Chuquet Parisien, used the words byllion, tryllion, quadrillion, quyllion, sixlion, septyllion, ottyllion, and nonyllion to refer to 10^12, 10^18, ... 10^54. Most of the work was copied without attribution by Estienne de La Roche and published in his 1520 book, L'arismetique. Chuquet's original article was rediscovered in the 1870s and then published for the first time in 1880.
The extract from Chuquet's manuscript, the transcription and translation provided here all contain an original mistake: one too many zeros in the 804300 portion of the fully written out example: 745324'8043000 '700023'654321 ...
|1516||Guillaume Budé, writing in Latin, used the term milliart to mean "ten myriad myriad" or 10^9 in his book De Asse et partibus eius Libri quinque.
|1549||The influential French mathematician Jacques Pelletier du Mans used the name milliard (or milliart) to mean 10^12, attributing the term to the earlier usage by Guillaume Budé.|
|17th century||With the increased usage of large numbers, the traditional punctuation of large numbers into six-digit groups evolved into three-digit group punctuation. In some places, the large number names were then applied to the smaller numbers, following the new punctuation scheme. Thus, in France and Italy, some scientists then began using billion to mean 10^9, trillion to mean 10^12, etc. This usage formed the origins of the later short scale. The majority of scientists either continued to say thousand million or changed the meaning of the Pelletier term, milliard, from "million of millions" down to "thousand million". This meaning of milliard has been occasionally used in England, but was widely adopted in France, Germany, Italy and the rest of Europe, for those keeping the original long scale billion from Adam, Chuquet and Pelletier.|
|1676||The first published use of milliard as 10^9 occurred in the Netherlands.
|18th century||The short-scale meaning of the term billion was brought to the British American colonies. As early as 1762 (and through at least the early 20th century), the dictionary of the Académie française defined billion as a term of arithmetic meaning a thousand millions.|
|1729||The first American appearance of the short scale value of billion as 10^9 was published in the Greenwood Book of 1729, written anonymously by Prof. Isaac Greenwood of Harvard College|
|Early 19th century||France widely converted to the short scale, and was followed by the U.S., which began teaching it in schools. Many French encyclopedias of the 19th century either omitted the long scale system or called it "désormais obsolète", a now obsolete system. Nevertheless, by the mid 20th century France would officially convert back to the long scale.|
|1926||H. W. Fowler's A Dictionary of Modern English Usage noted
Although American English usage did not change, within the next 50 years French usage changed from short scale to long and British English usage changed from long scale to short.
|1948||The 9th General Conference on Weights and Measures received requests to establish an International System of Units. One such request was accompanied by a draft French Government discussion paper, which included a suggestion of universal use of the long scale, inviting the short-scale countries to return or convert. This paper was widely distributed as the basis for further discussion. The matter of the International System of Units was eventually resolved at the 11th General Conference in 1960. The question of long scale versus short scale was not resolved and does not appear in the list of any conference resolutions.|
|1960||The 11th General Conference on Weights and Measures adopted the International System of Units (SI), with its own set of numeric prefixes. SI is therefore independent of the number scale being used. SI also notes the language-dependence of some larger-number names and advises against using ambiguous terms such as billion, trillion, etc. The National Institute of Standards and Technology within the US also considers that it is best that they be avoided entirely.|
|1961||The French Government confirmed their official usage of the long scale in the Journal officiel (the official French Government gazette).|
|1974||British prime minister Harold Wilson explained in a written answer to the House of Commons that UK government statistics would from then on use the short scale. Hansard, for 20 December 1974, reported it
The BBC and other UK mass media quickly followed the government's lead within the UK.
During the last quarter of the 20th century, most other English-speaking countries (the Republic of Ireland, Australia, New Zealand, South Africa, Zimbabwe, etc.) either also followed this lead or independently switched to the short scale use. However, in most of these countries, some limited long scale use persists and the official status of the short scale use is not clear.
|1975||French mathematician Geneviève Guitel introduced the terms long scale (French: échelle longue) and short scale (French: échelle courte) to refer to the two numbering systems.|
|1994||The Italian Government confirmed their official usage of the long scale.|
As large numbers in natural sciences are usually represented by metric prefixes, scientific notation or otherwise, the most commonplace occurrence of large numbers represented by long or short scale terms is in finance. The following table includes some historic examples related to hyper-inflation and other financial incidents.
German hyperinflation in the 1920s Weimar Republic caused 'Eintausend Mark' (1000 Mark = 10^3 Mark) German banknotes to be over-stamped as 'Eine Milliarde Mark' (10^9 Mark). This introduced large-number names to the German populace.
The Mark or Papiermark was replaced at the end of 1923 by the Rentenmark at an exchange rate of
1 Rentenmark = 1 billion (long scale) Papiermark = 10^12 Papiermark = 1 trillion (short scale) Papiermark
100 million b-pengő (long scale) = 100 trillion (long scale) pengő = 10^20 pengő = 100 quintillion (short scale) pengő.
On 1 August 1946, the forint was introduced at a rate of
1 forint = 400 quadrilliard (long scale) pengő = 4 × 10^29 pengő = 400 octillion (short scale) pengő.
500 thousand million (long scale) dinars = 5 × 10^11 dinar banknotes = 500 billion (short scale) dinars.
The later introduction of the new dinar came at an exchange rate of
1 new dinar = 1 × 10^27 dinars = ~1.3 × 10^27 pre-1990 dinars.
Hyperinflation in Zimbabwe led to banknotes of 10^14 Zimbabwean dollars, marked "One Hundred Trillion Dollars" (short scale), being issued in 2009, shortly ahead of the currency being abandoned after a final redenomination to the 'fourth dollar'. From 2013 to 2019, when the RTGS Dollar entered use, no new currency was announced, and so foreign currencies were used instead.
100 trillion (short scale) Zimbabwean dollars = 10^14 Zimbabwean dollars = 100 billion (long scale) Zimbabwean dollars = 10^27 pre-2006 Zimbabwean dollars = 1 quadrilliard (long scale) pre-2006 Zimbabwean dollars.
|2013||As of 24 October 2013, the combined total public debt of the United States stood at $17.078 trillion.
17 trillion (short scale) US Dollars = 1.7 × 10^13 US Dollars = 17 billion (long scale) US Dollars
Short scale users
Most English-language countries and regions use the short scale with 10^9 being billion. For example:[shortscale note 1]
- American Samoa
- Antigua and Barbuda
- Australia [shortscale note 2]
- Belize (English-speaking)
- Botswana (English-speaking)
- British Virgin Islands
- Cameroon (English-speaking)
- Canada (English-speaking) see Using both below
- Cayman Islands
- Cook Islands
- Eswatini (formerly Swaziland)
- Falkland Islands
- Ghana (English-speaking)
- Guyana (English-speaking)
- Hong Kong (English-speaking)
- Ireland (English-speaking, Irish: billiún, trilliún)
- Isle of Man
- Kenya (English-speaking)
- Malawi (English-speaking)
- Malaysia (English-speaking; Malay: bilion billion, trilion trillion)
- Malta (English-speaking; Maltese: biljun, triljun)
- Marshall Islands
- Mauritius (English speaking) see Using both below
- Federated States of Micronesia
- New Zealand
- Nigeria (English-speaking)
- Norfolk Island
- Northern Mariana Islands
- Papua New Guinea (English-speaking)
- Philippines (English-speaking) [shortscale note 3]
- Pitcairn Islands
- Saint Helena, Ascension and Tristan da Cunha
- Saint Kitts and Nevis
- Saint Lucia
- Saint Vincent and the Grenadines
- Seychelles (English speaking) see Using both below
- Sierra Leone
- Singapore (English-speaking)
- Solomon Islands
- South Georgia and the South Sandwich Islands
- South Sudan (English-speaking)
- Tanzania (English-speaking)
- Trinidad and Tobago
- Turks and Caicos Islands
- Uganda (English-speaking)
- United Kingdom (see also Wales below) [shortscale note 4]
- United States[shortscale note 5]
- United States Virgin Islands
- Vanuatu (English speaking) see Using both below
- Zambia (English-speaking)
- Zimbabwe (English-speaking)
Most Arabic-language countries and regions use the short scale with 10^9 being مليار milyar, except for a few countries like Saudi Arabia and the UAE which use the word بليون billion for 10^9. For example:[shortscale note 6]
Other short scale
Other countries also use a word similar to trillion to mean 10^12, etc. Whilst a few of these, like English, use a word similar to billion to mean 10^9, most, like Arabic, have kept a traditional long scale word similar to milliard for 10^9. Some examples of short scale use, and the words used for 10^9 and 10^12, are
- Afghanistan (Dari: میلیارد milyard or بیلیون billion, تریلیون trillion, Pashto: میلیارد milyard, بیلیون billion, تریلیون trillion)
- Albania (miliard, trilion)
- Armenia ( միլիարդ miliard, տրիլիոն trilion)
- Azerbaijan (milyard, trilyon)
- Belarus (мільярд milyard, трыльён trilyon)
- Brazil (Brazilian Portuguese: bilhão, trilhão)
- Brunei (Malay: bilion, trilion)
- Bulgaria (милиард miliard, трилион trilion)
- Cyprus (Greek: δισεκατομμύριο disekatommyrio, τρισεκατομμύριο trisekatommyrio, Turkish: milyar, trilyon)
- Estonia (miljard or biljon[shortscale note 7], triljon)
- Georgia (მილიარდი miliardi, ტრილიონი trilioni)
- Indonesia (miliar, triliun)[shortscale note 8]
- Israel (Hebrew: מיליארד milyard, טריליון trilyon)
- Kazakhstan (Kazakh: миллиард milliard, триллион trillion)
- Kyrgyzstan (Kyrgyz: миллиард milliard, триллион trillion)
- Latvia (miljards, triljons)
- Lithuania (milijardas, trilijonas)
- Moldova (Romanian: miliard, trilion)
- Myanmar (formerly Burma) (Burmese: ဘီလျံ, IPA: [bìljàɰ̃]; ထရီလျံ, [tʰəɹìljàɰ̃])
- Namibia (Afrikaans speaking) see Using both below
- Puerto Rico (Spanish speaking) see Using both below
- Russia (миллиард milliard, триллион trillion)
- Tajikistan (Tajik: миллиард milliard, триллион trillion)
- Turkey (milyar, trilyon)
- Turkmenistan (Turkmen: milliard, billion; Russian: миллиард milliard, триллион trillion)
- South Africa (Afrikaans speaking) see Using both below
- Ukraine (мільярд mil'yard, трильйон tryl'yon)
- Uzbekistan (Uzbek: milliard, trillion; Russian: миллиард milliard, триллион trillion)
- Wales (biliwn, triliwn) (In some contexts a paraphrase is needed to resolve ambiguity, as the lenitive of both miliwn and biliwn is the same: filiwn.)
Long scale users
The traditional long scale is used by most Continental European countries and by most other countries whose languages derive from Continental Europe (with the notable exceptions of Albania, Greece, Romania, and Brazil). These countries use a word similar to billion to mean 10^12. Some use a word similar to milliard to mean 10^9, while others use a word or phrase equivalent to thousand millions.
- Burkina Faso
- Canada (Canadian French) see Using both below
- Central African Republic
- Democratic Republic of the Congo
- Republic of the Congo
- French Polynesia
- French Southern and Antarctic Lands
- Ivory Coast (Côte d'Ivoire)
- New Caledonia
- Quebec (province of Canada, Canadian French) see Using both below
- Saint Barthélemy
- Saint Martin (French portion of St. Martin Island)
- Wallis and Futuna
German-language countries and regions use the long scale with 10^9 = Milliarde, for example:
With the notable exception of Brazil, a short scale country, most Portuguese-language countries and regions use the long scale with 10^9 = mil milhões or milhar de milhões.
Most Spanish-language countries and regions likewise use the long scale, with 10^9 = mil millones or millardo, for example:
- Costa Rica
- Dominican Republic
- El Salvador
- Equatorial Guinea
- Guatemala (millardo)
- Honduras (millardo)
- Mexico (mil millones or millardo)
- Nicaragua (mil millones or millardo)
- Panama (mil millones or millardo)
- Peru (mil millones)
- Puerto Rico see Using both below
- Spain (millardo or typ. mil millones)
Other long scale
Some examples of long scale use, and the words used for 10^9 and 10^12, are
- Andorra (Catalan: miliard or typ. mil milions, bilió)
- Bosnia and Herzegovina (Bosnian: milijarda, bilion; Croatian: milijarda, bilijun, Serbian: милијарда milijarda, билион bilion)
- Croatia (milijarda, bilijun)
- Czech Republic (miliarda, bilion)
- Denmark (milliard, billion)
- Esperanto (miliardo, duiliono) [longscale note 3]
- Faroe Islands (milliard, billión)
- Finland (Finnish: miljardi, biljoona; Swedish: miljard, biljon)
- Greenland (milliardi, billioni)
- Hungary (milliárd, billió or ezermilliárd)
- Iceland (milljarður, billjón)
- Iran (Persian: میلیارد milyard, بیلیون billion, تریلیون trillion)
- Italy (miliardo, bilione) [longscale note 4]
- Luxembourg (French: milliard, billion; German: Milliarde, Billion; Luxembourgish: milliard, billioun)
- Madagascar (French: milliard, billion; Malagasy: miliara, arivo miliara)
- Mauritius (English speaking) see Using both below
- Montenegro (Montenegrin: milijarda, bilion)
- Namibia (Afrikaans speaking) see Using both below
- North Macedonia (милијарда milijarda, билион bilion)
- Norway (Bokmål: milliard, billion; Nynorsk: milliard, billion)
- Poland (miliard, bilion)
- Romania (miliard, bilion). There are ambiguities for numbers above 10^12.
- San Marino (Italian: miliardo, bilione)
- Serbia (милијарда milijarda, билион bilion)
- Seychelles (English speaking) see Using both below
- Slovakia (miliarda, bilión)
- Slovenia (milijarda, bilijon)
- South Africa (Afrikaans speaking) see Using both below
- Sweden (miljard, biljon)
- Switzerland (French: milliard, billion; German: Milliarde, Billion; Italian: miliardo, bilione; Romansh: milliarda, billiun)
- Vatican City (Italian: miliardo, bilione)
- Vanuatu (English speaking) see Using both below
Some countries use either the short or long scales, depending on the internal language being used or the context.
| Country or territory | Short scale usage | Long scale usage |
|---|---|---|
| Canada[shortscale longscale note 1] | Canadian English (10^9 = billion, 10^12 = trillion) | Canadian French (10^9 = milliard, 10^12 = billion or mille milliards) |
| Mauritius; Seychelles; Vanuatu | English (10^9 = billion, 10^12 = trillion) | French (10^9 = milliard, 10^12 = billion) |
| South Africa | South African English (10^9 = billion, 10^12 = trillion) | Afrikaans (10^9 = miljard, 10^12 = biljoen) |
| Puerto Rico | Economic and technical (10^9 = billón, 10^12 = trillón) | Latin American export publications (10^9 = millardo or mil millones, 10^12 = billón) |
The following countries use naming systems for large numbers that are not etymologically related to the short and long scales:
|Country||Number system||Naming of large numbers|
|Bangladesh, India, Maldives, Nepal, Pakistan||Indian numbering system||For everyday use, but short or long scale may also be in use [other scale note 1]|
|Bhutan||Dzongkha numerals||Traditional system|
|Cambodia||Khmer numerals||Traditional system|
|East Asian numbering system:||Traditional myriad system for the larger numbers; special words and symbols up to 10^88|
|Greece||Calque of the short scale||Names of the short scale have not been loaned but calqued into Greek, based on the native Greek word for million, εκατομμύριο ekatommyrio ("hundred-myriad", i.e. 100×10000):|
|Laos||Lao numerals||Traditional system|
|Mongolia||Mongolian numerals||Traditional myriad system for the larger numbers; special words up to 10^67|
|Sri Lanka||Traditional systems|
|Thailand||Thai numerals||Traditional system based on millions|
|Vietnam||Vietnamese numerals||Traditional system(s) based on thousands|
The long and short scales are both present on most continents, with usage dependent on the language used. Examples include:
|Continent||Short scale usage||Long scale usage|
|Africa||Arabic (Egypt, Libya, Tunisia), English (South Sudan), South African English||Afrikaans, French (Benin, Central African Republic, Gabon, Guinea), Portuguese (Mozambique)|
|North America||American English, Canadian English||U.S. Spanish, Canadian French, Mexican Spanish|
|South America||Brazilian Portuguese, English (Guyana)||American Spanish, Dutch (Suriname), French (French Guiana)|
|Antarctica||Australian English, British English, New Zealand English, Russian||American Spanish (Argentina, Chile), French (France), Norwegian (Norway)|
|Asia||Burmese (Myanmar), Hebrew (Israel), Indonesian, Malaysian English, Philippine English, Kazakh, Uzbek, Kyrgyz||Portuguese (East Timor, Macau), Persian (Iran)|
|Europe||British English, Welsh, Estonian, Greek, Latvian, Lithuanian, Russian, Turkish, Ukrainian||Danish, Dutch, Finnish, French, German, Icelandic, Italian, Norwegian, Polish, Portuguese, Romanian, Spanish, Swedish and most other languages of continental Europe|
|Oceania||Australian English, New Zealand English||French (French Polynesia, New Caledonia)|
Notes on current usage
- English language countries: Apart from the United States, the long scale was used for centuries in many English language countries before being superseded in recent times by short scale usage. Because of this history, some long scale use persists and the official status of the short scale in anglophone countries other than the UK and US is sometimes obscure.
- Australian usage: In Australia, education, media outlets, and literature all use the short scale in line with other English-speaking countries. The current recommendation by the Australian Government Department of Finance and Deregulation (formerly known as AusInfo), and the legal definition, is the short scale. As recently as 1999, the same department did not consider short scale to be standard, but only used it occasionally. Some documents use the term thousand million for 10^9 in cases where two amounts are being compared using a common unit of one 'million'.
- Filipino usage: Some short-scale words have been adopted into Filipino.
- British usage: Billion has meant 10^9 in most sectors of official published writing for many years now. The UK government, the BBC, and most other broadcast or published mass media, have used the short scale in all contexts since the mid-1970s. Before the widespread use of billion for 10^9, UK usage generally referred to thousand million rather than milliard. The long scale term milliard, for 10^9, is obsolete in British English, though its derivative, yard, is still used as slang in the London money, foreign exchange, and bond markets.
- American usage: In the United States, the short scale has been taught in school since the early 19th century. It is therefore used exclusively.
- Arabic language countries: Most Arabic-language countries use: 10^6, مليون million; 10^9, مليار milyar; 10^12, ترليون trilyon; etc.
- Estonian usage: Biljon is used due to English influences and is less common than miljard.
- Indonesian usage: Large numbers are common in Indonesia, in part because its currency (rupiah) is generally expressed in large numbers (the lowest common circulating denomination is Rp100, with Rp1000 considered as a base unit). The term juta, equivalent to million (10^6), is generally common in daily life. Indonesia officially employs the term miliar (derived from the long scale Dutch word miljard) for the number 10^9, with no exception. For 10^12 and greater, Indonesia follows the short scale, thus 10^12 is named triliun. The terms seribu miliar (a thousand milliards) or, more rarely, sejuta juta (a million millions) are also occasionally used for 10^12. Terms greater than triliun are not very familiar to Indonesians.
- French usage: France, with Italy, was one of two European countries which converted from the long scale to the short scale during the 19th century, but returned to the original long scale during the 20th century. In 1961, the French Government confirmed their long scale status. However, the 9th edition of the dictionary of the Académie française describes billion as an outdated synonym of milliard, and says that the new meaning of 10^12 was decreed in 1961, but never caught on.
- Spanish language countries: Spanish-speaking countries sometimes use millardo (milliard) for 10^9, but mil millones (thousand millions) is used more frequently. The word billón is sometimes used in the short scale sense in those countries more influenced by the United States, where "billion" means "one thousand millions". The usage of billón to mean "one thousand millions", controversial from the start, was denounced by the Royal Spanish Academy as recently as 2010, but was finally accepted in a later version of the official dictionary as standard usage among educated Spanish speakers in the United States (including Puerto Rico).
- Esperanto language usage: The Esperanto language words biliono, triliono etc. used to be ambiguous, and both long and short scale were used and presented in dictionaries. The current edition of the main Esperanto dictionary PIV however recommends the long scale meanings, as does the grammar PMEG. Ambiguity may be avoided by the use of the unofficial but generally recognised suffix -iliono, whose function is analogous to the long scale, i.e. it is appended to a (single) numeral indicating the power of a million, e.g. duiliono (from du meaning "two") = biliono = 10^12, triiliono = triliono = 10^18, etc., following the 1×10^(6X) long scale convention. Miliardo is an unambiguous term for 10^9, and generally the suffix -iliardo is used for values 1×10^(6X+3), for example triliardo = 10^21 and so forth.
- Italian usage: Italy, with France, was one of the two European countries which partially converted from the long scale to the short scale during the 19th century, but returned to the original long scale in the 20th century. In 1994, the Italian Government confirmed its long scale status.
In Italian, the word bilione officially means 10^12, trilione means 10^18, etc. Colloquially, bilione can mean both 10^9 and 10^12; trilione can mean both 10^12 and (more rarely) 10^18 and so on. Therefore, in order to avoid ambiguity, they are seldom used. Forms such as miliardo (milliard) for 10^9, mille miliardi (a thousand milliards) for 10^12, un milione di miliardi (a million milliards) for 10^15, un miliardo di miliardi (a milliard of milliards) for 10^18, mille miliardi di miliardi (a thousand milliards of milliards) for 10^21 are more common.
Both long and short scale
- Canadian usage: Both scales are in use currently in Canada. English-speaking regions use the short scale exclusively, while French-speaking regions use the long scale, though the Canadian government standards website recommends that in French billion and trillion be avoided, recommending milliard for 10^9, and mille milliards (a thousand milliards) for 10^12.
- South African usage: South Africa uses both the long scale (in Afrikaans and sometimes English) and the short scale (in English). Unlike the 1974 UK switch, the switch from long scale to short scale took time. As of 2011, most English language publications use the short scale. Some Afrikaans publications briefly attempted usage of the "American System" but that has led to comment in the papers and has been disparaged by the "Taalkommissie" (The Afrikaans Language Commission of the South African Academy of Science and Art) and has thus, to most appearances, been abandoned.
Neither long nor short scale
- Indian, Pakistani and Bangladeshi usage: Outside of financial media, the use of billion by Bangladeshi, Indian and Pakistani English speakers highly depends on their educational background. Some may continue to use the traditional British long scale. In everyday life, Bangladeshis, Indians and Pakistanis largely use their own common number system, commonly referred to as the Indian numbering system – for instance, Bangladeshi, Pakistani, and Indian English commonly use the words lakh to denote 100 thousand, crore to denote ten million (i.e. 100 lakhs) and arab to denote thousand million.
Unambiguous ways of identifying large numbers include:
- In written communications, the simplest solution for moderately large numbers is to write the full amount, for example 1 000 000 000 000 rather than, say, 1 trillion (short scale) or 1 billion (long scale).
- Combinations of the unambiguous word million, for example: 10^9 = "one thousand million"; 10^12 = "one million million". This becomes rather unwieldy for numbers above 10^12.
- Combination of numbers of more than 3 digits with the unambiguous word million, for example 13,600 million
- Scientific notation (also known as standard form or exponential notation, for example 1×10^9, 1×10^10, 1×10^11, 1×10^12, etc.), or its engineering notation variant (for example 1×10^9, 10×10^9, 100×10^9, 1×10^12, etc.), or the computing variant E notation (for example
1e12, etc.). This is the most common practice among scientists and mathematicians, and is both unambiguous and convenient.
- SI prefixes in combination with SI units, for example, giga for 10^9 and tera for 10^12 can give gigawatt (=10^9 W) and terawatt (=10^12 W), respectively. The International System of Units (SI) is independent of whichever scale is being used. Use with non-SI units (e.g. "giga-dollars", "giga-miles") is uncommon, although "megabucks" is in informal use representing a large sum of money rather than exactly a million dollars. k€ and M€ are more frequently encountered, although the official scheme places the Euro sign in front of the value. The SI approach has the advantage of surviving translation without ambiguity and is well understood in an IT context, especially in countries that have established SI standards. A sketch of this approach in code is shown below.
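As a rough sketch of the scientific-notation and SI-prefix approaches, the helper below is hypothetical code written for this article rather than part of any standard library; it assumes values of at least one thousand and simply scales them to the nearest lower power of 1,000.

```python
import math

SI_PREFIXES = {3: "k", 6: "M", 9: "G", 12: "T", 15: "P", 18: "E", 21: "Z", 24: "Y"}

def si_format(value: float, unit: str = "") -> str:
    # Engineering notation: scale to the nearest lower power of 1,000 and attach an SI prefix.
    exponent = int(math.log10(abs(value)) // 3) * 3
    exponent = max(3, min(exponent, 24))
    return f"{value / 10 ** exponent:g} {SI_PREFIXES[exponent]}{unit}"

print(si_format(1_000_000_000, "W"))  # '1 GW' -- unambiguous in any language or scale
print(si_format(2.5e12, "W"))         # '2.5 TW'
print(f"{1.3e9:.1e}")                 # '1.3e+09' -- E notation is equally unambiguous
```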
- British-English usage of 'Billion vs Thousand million vs Milliard'. Google Books ngram viewer. Google Inc. Retrieved 26 April 2014.
- Guitel, Geneviève (1975). Histoire comparée des numérations écrites (in French). Paris: Flammarion. pp. 51–52. ISBN 978-2-08-211104-1.
- Guitel, Geneviève (1975). ""Les grands nombres en numération parlée (État actuel de la question)", i.e. "The large numbers in oral numeration (Present state of the question)"". Histoire comparée des numérations écrites (in French). Paris: Flammarion. pp. 566–574. ISBN 978-2-08-211104-1.
- "Authoritative [[Real Academia Española|RAE]] dictionary: billón". Archived from the original on 4 November 2015. Retrieved 12 March 2015.
- Fowler, H. W. (1926). A Dictionary of Modern English Usage. Great Britain: Oxford University Press. pp. 52–53. ISBN 978-0-19-860506-5.
- ""BILLION" (DEFINITION) — HC Deb 20 December 1974 vol 883 cc711W–712W". Hansard Written Answers. Hansard. 20 December 1972. Retrieved 2 April 2009.
- O'Donnell, Frank (30 July 2004). "Britain's £1 trillion debt mountain – How many zeros is that?". The Scotsman. Retrieved 31 January 2008.
- "Who wants to be a trillionaire?". BBC News. 7 May 2007. Retrieved 11 May 2010.
- Comrie, Bernard (24 March 1996). "billion:summary". Linguist List (Mailing list). Retrieved 24 July 2011.
- "Oxford Dictionaries: How many is a billion?". Oxford University Press. Retrieved 7 May 2018.
- "Oxford Dictionaries: Billion". Oxford University Press. Retrieved 24 July 2011.
- Nielsen, Ron (2006). The Little Green Handbook. Macmillan Publishers. p. 290. ISBN 978-0-312-42581-4.
- Smith, David Eugene (1953) [first published 1925]. History of Mathematics. II. Courier Dover Publications. pp. 84–86. ISBN 978-0-486-20430-7.
- "Wortschatz-Lexikon: Milliarde" (in German). Universität Leipzig: Wortschatz-Lexikon. Archived from the original on 27 September 2011. Retrieved 19 August 2011.
- "Wortschatz-Lexikon: Billion" (in German). Universität Leipzig: Wortschatz-Lexikon. Archived from the original on 7 August 2011. Retrieved 19 August 2011.
- "Wortschatz-Lexikon: Billiarde" (in German). Universität Leipzig: Wortschatz-Lexikon. Archived from the original on 27 September 2011. Retrieved 28 July 2011.
- "Wortschatz-Lexikon: Trilliarde" (in German). Universität Leipzig: Wortschatz-Lexikon. Archived from the original on 27 September 2011. Retrieved 28 July 2011.
- "Direttiva CEE / CEEA / CE 1994 n. 55, p.12" (PDF) (in Italian). Italian Government. 21 November 1994. Retrieved 24 July 2011.
- Adam, Jehan (1475). "Traicté en arismetique pour la practique par gectouers... (MS 3143)" (in Middle French). Paris: Bibliothèque Sainte-Geneviève.
- "HOMMES DE SCIENCE, LIVRES DE SAVANTS A LA BIBLIOTHÈQUE SAINTE-GENEVIÈVE, Livres de savants II". Traicté en arismetique pour la practique par gectouers… (in French). Bibliothèque Sainte-Geneviève. 2005. Retrieved 25 October 2014.
- Thorndike, Lynn (1926). "The Arithmetic of Jehan Adam, 1475 A.D". The American Mathematical Monthly. Mathematical Association of America. 1926 (January): 24–28. JSTOR 2298533.
- Chuquet, Nicolas (1880) [written 1484]. "Le Triparty en la Science des Nombres par Maistre Nicolas Chuquet Parisien". Bulletino di Bibliographia e di Storia delle Scienze Matematische e Fisische (in Middle French). Bologna: Aristide Marre. XIII (1880): 593–594. ISSN 1123-5209. Retrieved 17 July 2011.
- Chuquet, Nicolas (1880) [written 1484]. "Le Triparty en la Science des Nombres par Maistre Nicolas Chuquet Parisien" (in Middle French). miakinen.net. Retrieved 1 March 2008.
- Flegg, Graham (23–30 December 1976). "Tracing the origins of One, Two, Three". New Scientist. Reed Business Information. 72 (1032): 747. ISSN 0262-4079. Retrieved 17 July 2011.
- Budaeus, Guilielmus (1516). De Asse et partibus eius Libri quinque (in Latin). pp. folio 93.
- Littré, Émile (1873–1874). Dictionnaire de la langue française. Paris, France: L. Hachette. p. 347.
Ce n'est qu'au milieu du XVIIe siècle qu'il fut réglé que les tranches, au lieu d'être de six en six chiffres, seraient de trois en trois chiffres ; ce qui revint à diviser par 1000 l'ancien billion, l'ancien trillion, etc. [It was only in the middle of the 17th century that it was settled that the slices, instead of being from six to six digits, would be from three to three digits; which resulted in dividing by 1000 the old billion, the old trillion, and so on.]
- Houck (1676). "Arithmetic". Netherlands: 2.
- Dictionnaire de l'académie françoise (4th ed.). Paris, France: Institut de France. 1762. p. 177.
- Dictionnaire de l'Académie française (6th ed.). Paris, France. 1835. p. 189.
- Dictionnaire de l'Académie française (7th ed.). Paris, France: Institut de France. 1877. p. 182.
- Dictionnaire de l'Académie française (8th ed.). Paris, France: Institut de France. 1932–1935. p. 144.
- "Resolution 6 of the 9th meeting of the CGPM (1948)". BIPM. Retrieved 7 August 2011.
- "Resolution 6 of the 10th meeting of the CGPM (1954)". BIPM. Retrieved 23 June 2012.
- "Resolution 12 of the 11th meeting of the CGPM (1960)". BIPM. Retrieved 28 July 2011.
- The International System of Units (SI) (PDF) (8 ed.). BIPM. May 2006. pp. 134 / 5.3.7 Stating values of dimensionless quantities, or quantities of dimension one. ISBN 92-822-2213-6.
- Thompson, Ambler; Taylor, Barry N. (30 March 2008). Guide for the Use of the International System of Units (SI), NIST SP – 811. US: National Institute of Standards and Technology. p. 21. Retrieved 13 September 2014.
- "Décret 61-501" (PDF). Journal Officiel (in French). French Government: 4587 note 3a, and erratum on page 7572. 11 August 1961 [commissioned 3 May 1961 published 20 May 1961]. Archived from the original (PDF) on 20 January 2010. Retrieved 31 January 2008.
- "BBC News: Zimbabweans play the zero game". BBC. 23 July 2008. Retrieved 13 July 2012.
- "BBC News: Zimbabwe rolls out Z$100tr note". BBC. 16 January 2009. Retrieved 24 July 2010.
- "BBC News: Zimbabwe abandons its currency". BBC. 29 January 2009. Retrieved 13 July 2012.
- "The Debt to the Penny and Who Holds It". www.treasurydirect.gov. US Government. Retrieved 27 October 2013.
- "Economic infographics". demonocracy.info. BTC. Retrieved 27 October 2013.
- "RBA: Definition of billion". Reserve Bank of Australia. Retrieved 22 August 2011.
- "BBC News: Who wants to be a trillionaire?". BBC. 7 May 2007. Retrieved 11 May 2010.
- "billion". Cambridge Dictionaries Online. Cambridge University Press. Retrieved 21 August 2011.
- "trillion". Cambridge Dictionaries Online. Cambridge University Press. Retrieved 21 August 2011.
- "Al Jazem English-Arabic online dictionary: Billion". Al Jazem English-Arabic online dictionary. Encyclopædia Britannica. Retrieved 6 June 2012.
- "Al Jazem English-Arabic online dictionary:Trillion". Al Jazem English-Arabic online dictionary. Encyclopædia Britannica. Retrieved 6 June 2012.
- Qeli, Albi. "An English-Albanian Dictionary". Albi Qeli, MD. Retrieved 6 June 2012.
- "Eesti õigekeelsussõnaraamat ÕS 2006: miljard" (in Estonian). Institute of the Estonian Language (Eesti Keele Instituut). 2006. Retrieved 13 August 2011.
- "Eesti õigekeelsussõnaraamat ÕS 2013: biljon" (in Estonian). Institute of the Estonian Language (Eesti Keele Instituut). 2013. Retrieved 25 June 2017.
- "Eesti õigekeelsussõnaraamat ÕS 2006: triljon" (in Estonian). Institute of the Estonian Language (Eesti Keele Instituut). 2006. Retrieved 13 August 2011.
- Robson S. O. (Stuart O.), Singgih Wibisono, Yacinta Kurniasih. Javanese English dictionary Tuttle Publishing: 2002, ISBN 0-7946-0000-X, 821 pages
- Avram, Mioara; Sala, Marius (2000), May We Introduce the Romanian Language to You?, Editura Fundatiei Culturale Române, p. 151, ISBN 9789735772246,
the numeral miliard "billion"
- "Britain to Reduce $4 billion from Defence". Bi-Weekly Eleven (in Burmese). Yangon. 3 (30). 15 October 2010.
- "De Geïntegreerde Taal-Bank: miljard" (in Dutch). Instituut voor Nederlandse Lexicologie. Retrieved 19 August 2011.
- "De Geïntegreerde Taal-Bank: biljoen" (in Dutch). Instituut voor Nederlandse Lexicologie. Retrieved 19 August 2011.
- "French Larousse: milliard" (in French). Éditions Larousse. Archived from the original on 18 March 2012. Retrieved 19 August 2011.
- "French Larousse: billion" (in French). Éditions Larousse. Archived from the original on 18 March 2012. Retrieved 19 August 2011.
- "billion". Dictionnaire de l'Académie française (in French) (9th ed.). Académie française. 1992. Retrieved 17 January 2016.
BILLION (les deux l se prononcent sans mouillure) n. m. XVe siècle, byllion, « un million de millions » ; XVIe siècle, « mille millions ». Altération arbitraire de l'initiale de million, d'après la particule latine bi-, « deux fois ». Rare. Mille millions. Syn. vieilli de Milliard. Selon un décret de 1961, le mot Billion a reçu une nouvelle valeur, à savoir un million de millions (10^12), qui n'est pas entrée dans l'usage. [BILLION (the two Ls are pronounced without palatalisation) masculine noun. Spelled byllion in the 15th century when it meant a million millions; in the 16th century it meant a thousand millions. It is an arbitrary alteration of the start of million by inserting the Latin prefix bi-, meaning twice. Now rarely used. It means a thousand millions. It is an outdated synonym of Milliard. According to a decree of 1961, the word Billion received a new value, to wit a million millions (10^12), which has not come into common usage.]
- "Diccionario Panhispánico de Dudas: millardo" (in Spanish). Real Academia Española. Retrieved 19 August 2011.
- "Diccionario Panhispánico de Dudas: billon" (in Spanish). Real Academia Española. Retrieved 24 July 2010.
- "Diccionario de la lengua española" (in Spanish). Real Academia Española. Retrieved 2 July 2018.
- Wennergren, Bertilo (8 March 2008). "Plena Manlibro de Esperanta Gramatiko" (in Esperanto). Retrieved 15 September 2010.
- "Italian-English Larousse: bilione". Éditions Larousse. Archived from the original on 18 March 2012. Retrieved 21 August 2011.
- Institutul de Lingvistică „Iorgu Iordan – Alexandru Rosetti" al Academiei Române (2012), Dicționarul explicativ al limbii române (ediția a II-a revăzută și adăugită), Editura Univers Enciclopedic Gold
- "Scara numerică" [numerical scale]. dexonline.ro (in Romanian). 2016. Retrieved 3 May 2016.
- "Switzerland: Words and Phrases". TRAMsoft Gmbh. 29 August 2009. Retrieved 15 August 2011.
- "Canadian government standards website". Canadian Government. 2010. Retrieved 15 September 2010.
- "billion". Granddictionnaire.com. 13 May 2013. Retrieved 24 April 2018.
- "Taalkommissie se reaksie op biljoen, triljoen" (in Afrikaans). Naspers: Media24. Archived from the original on 14 July 2014. Retrieved 16 July 2014.
- "'Groen boek': mooiste, beste, gebruikersvriendelikste" (in Afrikaans). Naspers:Media24. Archived from the original on 14 July 2014. Retrieved 16 July 2014.
- Gupta, S.V. (2010). Units of measurement: past, present and future: international system of units. Springer. pp. 12 (Section 1.2.8 Numeration). ISBN 978-3642007385. Retrieved 22 August 2011.
- Foundalis, Harry. "Greek Numbers and Numerals (Ancient and Modern)". Retrieved 20 May 2007.
- "BBC: GCSE Bitesize – The origins of the universe". BBC. Retrieved 28 July 2011. | https://en.wikipedia.org/wiki/Long_scale | 21 |
Hazard functions are a key tool in survival analysis. But they’re not always easy to interpret.
In this article, we’re going to explore the definition, purpose, and meaning of hazard functions. Then we’ll explore a few different shapes to see what they tell us about the data.
This is a simple distribution of survival probabilities across age. It could be survival of a battery after manufacture, a person after a diagnosis, or a bout of unemployment.
It is unrepresentative, but easy to work with. For anyone who is curious, this curve represents a Weibull distribution with a shape parameter of 1.6 and a scale parameter of 36.
The curve is the probability density function, a commonly used function where areas under the curve represent probabilities of falling within a particular interval.
This is the survival function, or the complement of the cumulative distribution function. The Y-axis here is the proportion of the sample who is still alive. The X axis is age.
It starts at 1 because everyone is alive at time zero, and gradually declines to zero, because no one is immortal.
Here is a plot that shows that the probability of death between the ages of 20 and 30 is about 1 in 5.
This plot shows that the probability of death between the ages of 40 and 60 is also about 1 in 5.
Apples to oranges comparison
Although the two probabilities are equal, the comparison is not a "fair" comparison. The first event, dying between 20 and 30 years of age, is an apple, and the second, dying between 40 and 60 years of age, is an orange.
There are three problems with trying to compare these events.
- The number of people alive at age 20 is much larger than the number of people alive at age 40.
- The probabilities are measured across different time ranges.
- The probabilities are quite heterogenous across the time intervals.
A fairer comparison–the hazard function.
The hazard function fixes the three problems noted above.
- It adjusts for the fact that fewer people are alive at age 40 than at age 20.
- It calculates a rate by dividing by the time range.
- It calculates the rate over a narrow time interval, Δt.
Here’s the mathematical definition.
Let’s pull this apart piece by piece.
The limit may remind you of formulas you've seen for the derivative in your Calculus class (if you took Calculus). If you remember those days of long ago, you may be able to show that

h(t) = f(t) / S(t)

where f(t) is the probability density function and S(t) is the survival function.
This is what the hazard function looks like for our particular example. The hazard function here is much less than one, but it can be bigger than one because it is not a probability. Think of the hazard as a short term measure of risk. In this example, the risk is higher as age increases.
Why is the hazard function important?
The shape of the hazard function tells you some important qualitative information about survival. There are four general patterns worth noting.
Monotone increasing hazard function
The hazard function shown above is an example of a monotone increasing hazard. The short-term risk of death at 20, given that you survived until your 20th birthday, is about 0.031. But at the age of 40, your hazard function is 0.047.
That means that your short-term risk of death given that you survived until your 40th birthday is worse. The advertising phrase, “You’re not getting older, you’re getting better” doesn’t really apply.
You’ve done well to make it this far, but your short term prospects are grimmer than they were at 20, and things will continue to go downhill. This is why life insurance costs more as you get older.
In a manufacturing setting a monotone increasing hazard function means that new is better than used. This is true for automobiles, for example, which is why if I wanted to trade in my 2008 Prius for a 2020 model, I'd have to part with at least $20,000.
Batteries also follow a monotone increasing hazard. I have four smoke alarms in my house and the batteries always seem to fail at 3am, necessitating a late night trip to get a replacement. If I was smart, I’d have a stock of extra batteries in hand.
But another strategy would be that when one battery fails at 3am, I should replace not just that battery, but the ones in the other three smoke alarms at the same time.
The short term risk is high for all batteries so I would be throwing away three batteries that were probably also approaching the end of their lifetime, and I’d save myself from more 3am failures in the near future.
Monotone decreasing hazard function
Not everything deteriorates over time. Many electronic systems tend to get stronger over time. The short term risk of failure is high early in the life of some electronic systems, but that risk goes down over time.
The longer these systems survive without failure, the tougher they become. This evokes the famous saying of Friedrich Nietzsche, “That which does not kill us, makes us stronger.”
Many manufacturers will use a monotone decreasing hazard function to their advantage. Before shipping electronic systems with a decreasing hazard function, the manufacturer will run the system for 48 hours.
Better for it to fail on the factory floor where it is easily swapped out for another system. The systems that do get shipped after 48 hours are tougher and more reliable than ones fresh off the factory floor, leading to a savings in warranty expenses.
A monotone decreasing hazard means that used is better than new. Products with a decreasing hazard almost always are worth more over time than new products. Think of them as being “battle hardened.”
Constant hazard function
In some settings, the hazard function is constant. This is a situation where new and used are equal. Every product fails over time, but the short term risk of failure is the same at any age. The rate at which radioactive elements decay is characterized by a constant hazard function.
Radon is a difficult gas to work with because every day about 17% of it disappears through radioactive decay.
After two weeks, only 7% of the radon is left. But the 7% that remains has the same rate of disappearance as brand new radon. Atoms don’t show any effects, positive or negative, of age.
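A constant hazard is the same thing as exponential decay, so the radon figures above can be checked with a few lines of Python; this is only an illustrative sketch using the 17% daily loss quoted in the text.

```python
import math

daily_loss = 0.17                    # about 17% of the radon disappears each day
h = -math.log(1 - daily_loss)        # the constant hazard implied by that daily loss

remaining = math.exp(-h * 14)        # fraction left after two weeks
print(round(remaining, 3))           # about 0.074, i.e. roughly 7% remains

# Given survival to any particular day, the chance of decaying during the next day
# is always the same -- "new" and "used" radon behave identically:
print(round(1 - math.exp(-h), 2))    # 0.17
```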
Bathtub hazard function
Getting back from machines to humans, your hazard function is a mixture of early decreasing hazard and late increasing hazard. That’s often described as a “bathtub” hazard because the hazard function looks like the side profile of a bathtub with a steep drop on one end and a gradual rise to the other end.
A bathtub hazard recognizes that the riskiest day of your life was the day you were born. If you survived that, the riskiest month of your life was your first month. After that things start to settle down.
But gradually, as parts of your body tend to wear out, the short term risk increases again and if you live to a ripe old age of 90, your short term risk might be as bad as it was during your first year of life.
Most of us won’t live that long but the frailty of an infant and the frailty of a 90 year old are comparable, but for different reasons. The things that kill an infant are different than the things that kill an old person.
I’m not an actuary, so I apologize if some of my characterizations of risk over age are not perfectly accurate. I did try to match up my numbers roughly with what I found on the Internet, but I know that this is, at best, a crude approximation.
Proportional hazards models
The other reason that hazard functions are important is that they are useful in developing statistical models of survival.
The general shape of the hazard function–monotone increasing, monotone decreasing, bathtub–doesn’t change from one group of patients to another. But often the hazard of one group is proportionately larger or smaller than the others.
If this proportionality in hazard functions applies, then the mathematical details of the statistical models are greatly simplified.
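For instance, if one group's hazard is a constant multiple of the other's, the ratio of the two hazards never changes with time. The sketch below is an assumed example, not data from any study: two Weibull hazards that share the same shape, with the second scaled to half the short-term risk of the first.

```python
def weibull_hazard(t, shape, scale):
    # Short-term risk at time t for a Weibull distribution.
    return (shape / scale) * (t / scale) ** (shape - 1)

def baseline(t):
    return weibull_hazard(t, shape=1.6, scale=36.0)

def treated(t):
    return 0.5 * baseline(t)  # proportional hazards: a constant factor of 0.5

for t in (10, 30, 50):
    print(t, round(treated(t) / baseline(t), 2))
# the hazard ratio is 0.5 at every age, which is exactly the proportional hazards assumption
```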
It doesn’t always have to be this way. When you are comparing two groups of patients, one getting a surgical treatment for their disease and another getting a medical treatment, the general shapes might be quite different.
Surgery might have a decreasing hazard function. The risks of surgery (infection, excessive bleeding) are most likely to appear early and the longer you stay healthy, the less the risk that the surgery will kill you.
The risks of drugs, on the other hand, might lead to an increasing hazard function. The cumulative dosages over time might lead to greater risk the longer you are on the drugs.
This doesn’t always happen, but when you have different shapes for the hazard function depending on what treatment you are getting, the analysis can still be done, but it requires more effort.
You can’t assume that the hazards are proportional in this case and you have to account for this with a more complex statistical model.
The hazard function is a short term measure of risk that accounts for the fact your risk of death changes as you age. The shape of the hazard function has important ramifications in manufacturing.
When you can assume that the hazard functions are proportional between one group of patients and another, then your statistical models are greatly simplified.
Origins of the American Civil War
Historians who debate the origins of the American Civil War focus on the reasons that seven southern states (followed by four other states after the onset of the war) declared their secession from the United States (the Union) and united to form the Confederate States (simply known as the "Confederacy"), and the reasons that the North refused to let them go. Most of the debate is about the first question, the reason that some Southern states decided to secede. Most historians in the 21st century agree that conflict over slavery caused the war, but they disagree sharply on the aspects of this conflict (ideological, economic, political, or social) that were most important.
The principal political battle leading to Southern secession was over whether slavery would be permitted to expand into newly acquired western territory destined to be formed into states. Initially Congress had admitted new states to the Union alternating between slave and free. This had kept a sectional balance in the Senate but not in the House of Representatives, as free states outstripped slave states in population. Thus, by the mid-19th century, the free versus slave status of the new territory was a critical issue, both for the North, where anti-slavery sentiment had grown, and for the South, where the fear of slavery's abolition had grown. Another factor for secession and the formation of the Confederacy was the development of white Southern nationalism in the preceding decades. The primary reason for the North to reject secession was to preserve the Union, a cause based on American nationalism.
Abraham Lincoln won the 1860 presidential election but had not been on the ballot in ten Southern states. His victory triggered declarations of secession by seven slave states of the Deep South, all of whose riverfront or coastal economies were based on cotton that was cultivated by slave labor. They formed the Confederate States after Lincoln was elected but before he had taken office.
Nationalists in the North and "Unionists" in the South refused to recognize the declarations of secession. No foreign government ever recognized the Confederacy. The US government under President James Buchanan refused to relinquish its forts that were in territory claimed by the Confederacy. The war itself began on April 12, 1861, when Confederate forces bombarded Fort Sumter, a major fortress in the harbor of Charleston, South Carolina.
As a panel of historians emphasized in 2011, "while slavery and its various and multifaceted discontents were the primary cause of disunion, it was disunion itself that sparked the war." Pulitzer Prize-winning author David Potter wrote: "The problem for Americans who, in the age of Lincoln, wanted slaves to be free was not simply that southerners wanted the opposite, but that they themselves cherished a conflicting value: they wanted the Constitution, which protected slavery, to be honored, and the Union, which had fellowship with slaveholders, to be preserved. Thus they were committed to values that could not logically be reconciled."
Geography and demographics
By the mid-19th century the United States had become a nation of two distinct regions. The free states in New England, the Northeast, and the Midwest had a rapidly growing economy based on family farms, industry, mining, commerce and transportation, with a large and rapidly growing urban population. Their growth was fed by a high birth rate and large numbers of European immigrants, especially from Ireland and Germany. The South was dominated by a settled plantation system based on slavery; there was some rapid growth taking place in the Southwest (e.g., Texas), based on high birth rates and high migration from the Southeast; there was also immigration by Europeans, but in much smaller number. The heavily rural South had few cities of any size, and little manufacturing except in border areas such as St. Louis and Baltimore. Slave owners controlled politics and the economy, although about 75% of white Southern families owned no slaves.
Overall, the Northern population was growing much more quickly than the Southern population, which made it increasingly difficult for the South to dominate the national government. By the time the 1860 election occurred, the heavily agricultural southern states as a group had fewer Electoral College votes than the rapidly industrializing northern states. Abraham Lincoln was able to win the 1860 presidential election without even being on the ballot in ten Southern states. Southerners felt a loss of federal concern for Southern pro-slavery political demands, and their continued domination of the federal government was threatened. This political calculus provided a very real basis for Southerners' worry about the relative political decline of their region, due to the North growing much faster in terms of population and industrial output.
In the interest of maintaining unity, politicians had mostly moderated opposition to slavery, resulting in numerous compromises such as the Missouri Compromise of 1820 under the presidency of James Monroe. After the Mexican–American War of 1846 to 1848, the issue of slavery in the new territories led to the Compromise of 1850. While the compromise averted an immediate political crisis, it did not permanently resolve the issue of the Slave Power (the power of slaveholders to control the national government on the slavery issue). Part of the Compromise of 1850 was the Fugitive Slave Act of 1850, which required Northerners to assist Southerners in reclaiming fugitive slaves, a provision many Northerners found extremely offensive.
Amid the emergence of increasingly virulent and hostile sectional ideologies in national politics, the collapse of the old Second Party System in the 1850s hampered politicians' efforts to reach yet another compromise. The compromise that was reached (the 1854 Kansas–Nebraska Act) outraged many Northerners, and led to the formation of the Republican Party, the first major party that was almost entirely Northern-based. The industrializing North and agrarian Midwest became committed to the economic ethos of free-labor industrial capitalism.
Arguments that slavery was undesirable for the nation had long existed, and early in U.S. history were made even by some prominent Southerners. After 1840, abolitionists denounced slavery as not only a social evil but a moral wrong. Activists in the new Republican Party, usually Northerners, had another view: they believed the Slave Power conspiracy was controlling the national government with the goal of extending slavery and limiting access to good farm land to rich slave owners. Southern defenders of slavery, for their part, increasingly came to contend that black people benefited from slavery.
Historical tensions and compromises
At the time of the American Revolution, the institution of slavery was firmly established in the American colonies. It was most important in the six southern states from Maryland to Georgia, but the total of half a million slaves was spread throughout all of the colonies. In the South, 40% of the population was made up of slaves, and as Americans moved into Kentucky and the rest of the southwest, one-sixth of the settlers were slaves. By the end of the Revolutionary War, the New England states provided most of the American ships used in the foreign slave trade, while most of their customers were in Georgia and the Carolinas.
During this time many Americans found it easy to reconcile slavery with the Bible, but a growing number rejected this defense of slavery. A small antislavery movement, led by the Quakers, appeared in the 1780s, and by the late 1780s all of the states banned the international slave trade. No serious national political movement against slavery developed, largely due to the overriding concern over achieving national unity. When the Constitutional Convention met, slavery was the one issue "that left the least possibility of compromise, the one that would most pit morality against pragmatism." In the end, many would take comfort in the fact that the word "slavery" never occurs in the Constitution. The three-fifths clause was a compromise between those (in the North) who wanted no slaves counted and those (in the South) who wanted all the slaves counted. The Constitution also empowered the federal government to suppress domestic violence, a provision that would commit national resources to defending against slave revolts. The importation of slaves could not be banned for 20 years. The need for three-fourths approval for amendments made the constitutional abolition of slavery virtually impossible.
With the outlawing of the African slave trade on January 1, 1808, many Americans felt that the slavery issue was resolved. Any national discussion that might have continued over slavery was drowned out by the years of trade embargoes, maritime competition with Great Britain and France, and, finally, the War of 1812. The one exception to this quiet was New Englanders' linking of their frustration with the war to their resentment of the three-fifths clause, which seemed to allow the South to dominate national politics.
During and in the aftermath of the American Revolution (1775–1783), the northern states (north of the Mason–Dixon line separating Pennsylvania from Maryland and Delaware) abolished slavery by 1804, although in some states older slaves were turned into indentured servants who could not be bought or sold. In the Northwest Ordinance of 1787, Congress (still under the Articles of Confederation) barred slavery from the Midwestern territory north of the Ohio River. When Congress organized the territories acquired through the Louisiana Purchase of 1803, there was no ban on slavery.
In 1819 Congressman James Tallmadge Jr. of New York initiated an uproar in the South when he proposed two amendments to a bill admitting Missouri to the Union as a free state. The first barred slaves from being moved to Missouri, and the second would free all Missouri slaves born after admission to the Union at age 25. With the admission of Alabama as a slave state in 1819, the U.S. was equally divided with 11 slave states and 11 free states. The admission of the new state of Missouri as a slave state would give the slave states a majority in the Senate; the Tallmadge Amendment would give the free states a majority.
The Tallmadge amendments passed the House of Representatives but failed in the Senate when five Northern senators voted with all the Southern senators. The question was now the admission of Missouri as a slave state, and many leaders shared Thomas Jefferson's fear of a crisis over slavery—a fear that Jefferson described as "a fire bell in the night". The crisis was solved by the Missouri Compromise, in which Massachusetts agreed to cede control over its relatively large, sparsely populated and disputed exclave, the District of Maine. The compromise allowed Maine to be admitted to the Union as a free state at the same time that Missouri was admitted as a slave state. The compromise also banned slavery in the Louisiana Purchase territory north and west of the state of Missouri along the 36°30' parallel. The Missouri Compromise quieted the issue until its limitations on slavery were repealed by the Kansas–Nebraska Act of 1854.
In the South, the Missouri crisis reawakened old fears that a strong federal government could be a fatal threat to slavery. The Jeffersonian coalition that united southern planters and northern farmers, mechanics and artisans in opposition to the threat presented by the Federalist Party had started to dissolve after the War of 1812. It was not until the Missouri crisis that Americans became aware of the political possibilities of a sectional attack on slavery, and it was not until the mass politics of Andrew Jackson's administration that this type of organization around this issue became practical.
The American System, advocated by Henry Clay in Congress and supported by many nationalist supporters of the War of 1812 such as John C. Calhoun, was a program for rapid economic modernization featuring protective tariffs, internal improvements at federal expense, and a national bank. The purpose was to develop American industry and international commerce. Since iron, coal, and water power were mainly in the North, this tax plan was doomed to cause rancor in the South where economies were agriculture-based. Southerners claimed it demonstrated favoritism toward the North.
The nation suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. The highly protective Tariff of 1828 (called the "Tariff of Abominations" by its detractors), designed to protect American industry by taxing imported manufactured goods, was enacted into law during the last year of the presidency of John Quincy Adams. Opposed in the South and parts of New England, the expectation of the tariff's opponents was that with the election of Andrew Jackson the tariff would be significantly reduced.
By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and his vice-president John C. Calhoun, the most effective proponent of the constitutional theory of state nullification through his 1828 "South Carolina Exposition and Protest".
Congress enacted a new tariff in 1832, but it offered the state little relief, resulting in the most dangerous sectional crisis since the Union was formed. Some militant South Carolinians even hinted at withdrawing from the Union in response. The newly elected South Carolina legislature then quickly called for the election of delegates to a state convention. Once assembled, the convention voted to declare null and void the tariffs of 1828 and 1832 within the state. President Andrew Jackson responded firmly, declaring nullification an act of treason. He then took steps to strengthen federal forts in the state.
Violence seemed a real possibility early in 1833 as Jacksonians in Congress introduced a "Force Bill" authorizing the President to use the federal army and navy in order to enforce acts of Congress. No other state had come forward to support South Carolina, and the state itself was divided on willingness to continue the showdown with the federal government. The crisis ended when Clay and Calhoun worked to devise a compromise tariff. Both sides later claimed victory. Calhoun and his supporters in South Carolina claimed a victory for nullification, insisting that it had forced the revision of the tariff. Jackson's followers, however, saw the episode as a demonstration that no single state could assert its rights by independent action.
Calhoun, in turn, devoted his efforts to building up a sense of Southern solidarity so that when another standoff should come, the whole section might be prepared to act as a bloc in resisting the federal government. As early as 1830, in the midst of the crisis, Calhoun identified the right to own slaves—the foundation of the plantation agricultural system—as the chief southern minority right being threatened:
I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.
The tariff issue reappeared with the protectionist Black Tariff of 1842. A period of relative free trade followed the Walker Tariff of 1846, which had been largely written by Southerners; Northern industrialists (and some in western Virginia) complained that it was too low to encourage the growth of industry.
Gag Rule debates
From 1831 to 1836 William Lloyd Garrison and the American Anti-Slavery Society (AA-SS) initiated a campaign to petition Congress in favor of ending slavery in the District of Columbia and all federal territories. Hundreds of thousands of petitions were sent, with the number reaching a peak in 1835.
The House passed the Pinckney Resolutions on May 26, 1836. The first of these stated that Congress had no constitutional authority to interfere with slavery in the states and the second that it "ought not" do so in the District of Columbia. The third resolution, known from the beginning as the "gag rule", provided that:
All petitions, memorials, resolutions, propositions, or papers, relating in any way, or to any extent whatsoever, to the subject of slavery or the abolition of slavery, shall, without being either printed or referred, be laid on the table and that no further action whatever shall be had thereon.
Former President John Quincy Adams, who was elected to the House of Representatives in 1830, became an early and central figure in the opposition to the gag rules. He argued that they were a direct violation of the First Amendment right "to petition the Government for a redress of grievances". A majority of Northern Whigs joined the opposition. Rather than suppress anti-slavery petitions, however, the gag rules only served to offend Americans from Northern states, and dramatically increase the number of petitions.
Since the original gag was a resolution, not a standing House rule, it had to be renewed every session, and the Adams faction often gained the floor before the gag could be imposed. However, in January 1840 the House of Representatives passed the Twenty-first Rule, a standing House rule that prohibited even the reception of anti-slavery petitions. The pro-petition forces now focused on trying to revoke a standing rule. The new rule raised serious doubts about its constitutionality and had less support than the original Pinckney gag, passing by only 114 to 108. Throughout the gag period, Adams' "superior talent in using and abusing parliamentary rules" and skill in baiting his enemies into making mistakes enabled him to evade the rule and debate the slavery issues. The gag rule was finally rescinded on December 3, 1844, by a strongly sectional vote of 108 to 80, with all the Northern and four Southern Whigs voting for repeal, along with 55 of the 71 Northern Democrats.
Antebellum South and the Union
There had been a continuing contest between the states and the national government over the power of the latter—and over the loyalty of the citizenry—almost since the founding of the republic. The Kentucky and Virginia Resolutions of 1798, for example, had defied the Alien and Sedition Acts, and at the Hartford Convention, New England voiced its opposition to President James Madison and the War of 1812, and discussed secession from the Union.
Although a minority of free Southerners owned slaves, free Southerners of all classes nevertheless defended the institution of slavery—threatened by the rise of free labor abolitionist movements in the Northern states—as the cornerstone of their social order.
The share of free families that owned slaves varied widely by region:
- 26% in the 15 Slave states (AL, AR, DE, FL, GA, KY, LA, MD, MS, MO, NC, SC, TN, TX, VA)
- 16% in the 4 Border states (DE, KY, MD, MO)
- 31% in the 11 Confederate states (AL, AR, FL, GA, LA, MS, NC, SC, TN, TX, VA)
- 37% in the first 7 Confederate states (AL, FL, GA, LA, MS, SC, TX)
- 25% in the second 4 Confederate states (AR, NC, TN, VA)
Mississippi was the highest at 49%, followed by South Carolina at 46%.
Based on a system of plantation slavery, the social structure of the South was far more stratified and patriarchal than that of the North. In 1850 there were around 350,000 slaveholders in a total free Southern population of about six million. Among slaveholders, the concentration of slave ownership was unevenly distributed. Perhaps around 7 percent of slaveholders owned roughly three-quarters of the slave population. The largest slaveholders, generally owners of large plantations, represented the top stratum of Southern society. They benefited from economies of scale and needed large numbers of slaves on big plantations to produce cotton, a highly profitable labor-intensive crop.
In the 1850s, as large plantation owners outcompeted smaller farmers, more slaves were owned by fewer planters. Yet poor whites and small farmers generally accepted the political leadership of the planter elite. Several factors helped explain why slavery was not under serious threat of internal collapse from any move for democratic change initiated from the South. First, given the opening of new territories in the West for white settlement, many non-slaveowners also perceived a possibility that they, too, might own slaves at some point in their life.
Second, small free farmers in the South often embraced racism, making them unlikely agents for internal democratic reforms in the South. The principle of white supremacy, accepted by almost all white Southerners of all classes, made slavery seem legitimate, natural, and essential for a civilized society. "Racial" discrimination was completely legal. White racism in the South was sustained by official systems of repression such as the "slave codes" and elaborate codes of speech, behavior, and social practices illustrating the subordination of blacks to whites. For example, the "slave patrols" were among the institutions bringing together southern whites of all classes in support of the prevailing economic and racial order. Serving as slave "patrollers" and "overseers" offered white Southerners positions of power and honor in their communities. Policing and punishing Blacks who transgressed the regimentation of slave society was a valued community service in the South, where the fear of free Blacks threatening law and order figured heavily in the public discourse of the period.
Third, many yeomen and small farmers with a few slaves were linked to elite planters through the market economy. In many areas, small farmers depended on local planter elites for vital goods and services, including access to cotton gins, markets, feed and livestock, and even loans (since the banking system was not well developed in the antebellum South). Southern tradesmen often depended on the richest planters for steady work. Such dependency effectively deterred many white non-slaveholders from engaging in any political activity that was not in the interest of the large slaveholders. Furthermore, whites of varying social class, including poor whites and "plain folk" who worked outside or in the periphery of the market economy (and therefore lacked any real economic interest in the defense of slavery) might nonetheless be linked to elite planters through extensive kinship networks. Since inheritance in the South was often inequitable (and generally favored eldest sons), it was not uncommon for a poor white person to be perhaps the first cousin of the richest plantation owner of his county and to share the same militant support of slavery as his richer relatives. Finally, there was no secret ballot at the time anywhere in the United States—this innovation did not become widespread in the U.S. until the 1880s. For a typical white Southerner, so much as casting a ballot against the wishes of the establishment meant running the risk of being socially ostracized.
Thus, by the 1850s, Southern slaveholders and non-slaveholders alike felt increasingly encircled psychologically and politically in the national political arena because of the rise of free soilism and abolitionism in the Northern states. Increasingly dependent on the North for manufactured goods, for commercial services, and for loans, and increasingly cut off from the flourishing agricultural regions of the Northwest, they faced the prospects of a growing free labor and abolitionist movement in the North.
Historian William C. Davis refutes the argument that Southern culture was different from that of Northern states or that it was a cause of the war, stating, "Socially and culturally the North and South were not much different. They prayed to the same deity, spoke the same language, shared the same ancestry, sang the same songs. National triumphs and catastrophes were shared by both." He stated that culture was not the cause of the war, but rather, slavery was: "For all the myths they would create to the contrary, the only significant and defining difference between them was slavery, where it existed and where it did not, for by 1804 it had virtually ceased to exist north of Maryland. Slavery demarked not just their labor and economic situations, but power itself in the new republic."
Militant defense of slavery
With the outcry over developments in Kansas strong in the North, defenders of slavery—increasingly committed to a way of life that abolitionists and their sympathizers considered obsolete or immoral—articulated a militant pro-slavery ideology that would lay the groundwork for secession upon the election of a Republican president. Southerners waged a vitriolic response to political change in the North. Slaveholding interests sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile" and "ruinous" legislation. Behind this shift was the growth of the cotton textile industry in the North and in Europe, which left slavery more important than ever to the Southern economy.
Southern spokesmen greatly exaggerated the power of abolitionists, pointing especially to the great popularity of Uncle Tom's Cabin (1852), the novel and play by Harriet Beecher Stowe (whom Abraham Lincoln reputedly greeted as "the little woman who wrote the book that started this great war"). They saw a vast and growing abolitionist movement after William Lloyd Garrison began publishing The Liberator in 1831. The fear was of a race war in which blacks would massacre whites, especially in counties where whites were a small minority.
The South reacted with an elaborate intellectual defense of slavery. J. D. B. De Bow of New Orleans established De Bow's Review in 1846, which quickly grew to become the leading Southern magazine, warning about the dangers of depending on the North economically. De Bow's Review also emerged as the leading voice for secession. The magazine emphasized the South's economic inequality, relating it to the concentration of manufacturing, shipping, banking and international trade in the North. Southern writers searched for Biblical passages endorsing slavery and developed economic, sociological, historical and scientific arguments; in their hands slavery went from being a "necessary evil" to a "positive good". Dr. John H. Van Evrie's book Negroes and Negro slavery: The First an Inferior Race: The Latter Its Normal Condition—setting out the arguments the title would suggest—was an attempt to apply scientific support to the Southern arguments in favor of race-based slavery.
Latent sectional divisions suddenly activated derogatory sectional imagery, which hardened into competing sectional ideologies. As industrial capitalism gained momentum in the North, Southern writers emphasized whatever aristocratic traits they valued (but often did not practice) in their own society: courtesy, grace, chivalry, the slow pace of life, orderly life and leisure. This supported their argument that slavery provided a more humane society than industrial labor. In his Cannibals All!, George Fitzhugh argued that the antagonism between labor and capital in a free society would result in "robber barons" and "pauper slavery", while in a slave society such antagonisms were avoided. He advocated enslaving Northern factory workers, for their own benefit. Abraham Lincoln, on the other hand, denounced such Southern insinuations that Northern wage earners were fatally fixed in that condition for life. To Free Soilers, the stereotype of the South was one of a diametrically opposite, static society in which the slave system maintained an entrenched anti-democratic aristocracy.
Southern fears of modernization
According to the historian James M. McPherson, exceptionalism applied not to the South but to the North after the North ended slavery and launched an industrial revolution that led to urbanization, which in turn led to increased education, which in its own turn gave ever-increasing strength to various reform movements but especially abolitionism. The fact that seven immigrants out of eight settled in the North (and the fact that most immigrants viewed slavery with disfavor), compounded by the fact that twice as many whites left the South for the North as vice versa, contributed to the South's defensive-aggressive political behavior. The Charleston Mercury wrote that on the issue of slavery the North and South "are not only two Peoples, but they are rival, hostile Peoples." As De Bow's Review said, "We are resisting revolution. ... We are not engaged in a Quixotic fight for the rights of man. ... We are conservative."
Allan Nevins argued that the Civil War was an "irrepressible" conflict, adopting a phrase from Senator William H. Seward. Nevins synthesized contending accounts emphasizing moral, cultural, social, ideological, political, and economic issues. In doing so, he brought the historical discussion back to an emphasis on social and cultural factors. Nevins pointed out that the North and the South were rapidly becoming two different peoples, a point made also by historian Avery Craven. At the root of these cultural differences was the problem of slavery, but fundamental assumptions, tastes, and cultural aims of the regions were diverging in other ways as well. More specifically, the North was rapidly modernizing in a manner threatening to the South. Historian McPherson explains:
When secessionists protested in 1861 that they were acting to preserve traditional rights and values, they were correct. They fought to preserve their constitutional liberties against the perceived Northern threat to overthrow them. The South's concept of republicanism had not changed in three-quarters of a century; the North's had. ... The ascension to power of the Republican Party, with its ideology of competitive, egalitarian free-labor capitalism, was a signal to the South that the Northern majority had turned irrevocably towards this frightening, revolutionary future.
Harry L. Watson has synthesized research on antebellum southern social, economic, and political history. Self-sufficient yeomen, in Watson's view, "collaborated in their own transformation" by allowing promoters of a market economy to gain political influence. Resultant "doubts and frustrations" provided fertile soil for the argument that southern rights and liberties were menaced by Black Republicanism.
J. Mills Thornton III explained the viewpoint of the average white Alabamian. Thornton contends that Alabama was engulfed in a severe crisis long before 1860. Deeply held principles of freedom, equality, and autonomy, as expressed in Republican values, appeared threatened, especially during the 1850s, by the relentless expansion of market relations and commercial agriculture. Alabamians were thus, he judged, prepared to believe the worst once Lincoln was elected.
Sectional tensions and the emergence of mass politics
The politicians of the 1850s were acting in a society in which the traditional restraints that had suppressed sectional conflict in the 1820s and 1830s—the most important of which was the stability of the two-party system—were being eroded as the rapid extension of democracy went forward in the North and South. It was an era when the mass political party galvanized voter participation to 80% or 90% turnout rates, and a time in which politics formed an essential component of American mass culture. Historians agree that political involvement was a larger concern to the average American in the 1850s than today. Politics was, in one of its functions, a form of mass entertainment, a spectacle with rallies, parades, and colorful personalities. Leading politicians, moreover, often served as a focus for popular interests, aspirations, and values.
Historian Allan Nevins, for instance, writes of political rallies in 1856 with turnouts of anywhere from twenty to fifty thousand men and women. Voter turnouts even ran as high as 84% by 1860. An abundance of new parties emerged in 1854–56, including the Republicans, People's party men, Anti-Nebraskans, Fusionists, Know Nothings, Know-Somethings (anti-slavery nativists), Maine Lawites, Temperance men, Rum Democrats, Silver Gray Whigs, Hindus, Hard Shell Democrats, Soft Shells, Half Shells and Adopted Citizens. By 1858, they were mostly gone, and politics divided four ways. Republicans controlled most Northern states with a strong Democratic minority. The Democrats were split North and South and fielded two tickets in 1860. Southern non-Democrats tried different coalitions; most supported the Constitutional Union party in 1860.
Many Southern states held constitutional conventions in 1851 to consider the questions of nullification and secession. With the exception of South Carolina, whose convention election did not even offer the option of "no secession" but rather "no secession without the collaboration of other states", the Southern conventions were dominated by Unionists who voted down articles of secession.
Historians today generally agree that economic conflicts were not a major cause of the war. While an economic basis to the sectional crisis was popular among the "Progressive school" of historians from the 1910s to the 1940s, few professional historians now subscribe to this explanation. According to economic historian Lee A. Craig, "In fact, numerous studies by economic historians over the past several decades reveal that economic conflict was not an inherent condition of North-South relations during the antebellum era and did not cause the Civil War."
When numerous groups tried at the last minute in 1860–61 to find a compromise to avert war, they did not turn to economic policies. The three major attempts at compromise, the Crittenden Compromise, the Corwin Amendment and the Washington Peace Conference, addressed only the slavery-related issues of fugitive slave laws, personal liberty laws, slavery in the territories and interference with slavery within the existing slave states.
Economic value of slavery to the South
Historian James L. Huston emphasizes the role of slavery as an economic institution. In October 1860 William Lowndes Yancey, a leading advocate of secession, placed the value of Southern-held slaves at $2.8 billion. Huston writes:
Understanding the relations between wealth, slavery, and property rights in the South provides a powerful means of understanding southern political behavior leading to disunion. First, the size dimensions of slavery are important to comprehend, for slavery was a colossal institution. Second, the property rights argument was the ultimate defense of slavery, and white southerners and the proslavery radicals knew it. Third, the weak point in the protection of slavery by property rights was the federal government. ... Fourth, the intense need to preserve the sanctity of property rights in Africans led southern political leaders to demand the nationalization of slavery—the condition under which slaveholders would always be protected in their property holdings.
The cotton gin greatly increased the efficiency with which cotton could be processed, contributing to the consolidation of "King Cotton" as the backbone of the economy of the Deep South, and to the entrenchment of the system of slave labor on which the cotton plantation economy depended. Any chance that the South would industrialize was over.
The tendency of monoculture cotton plantings to lead to soil exhaustion created a need for cotton planters to move their operations to new lands, and therefore to the westward expansion of slavery from the Eastern seaboard into new areas (e.g., Alabama, Mississippi, and beyond to East Texas).
Regional economic differences
The South, Midwest, and Northeast had quite different economic structures. They traded with each other and each became more prosperous by staying in the Union, a point many businessmen made in 1860–61. However, Charles A. Beard in the 1920s made a highly influential argument to the effect that these differences caused the war (rather than slavery or constitutional debates). He saw the industrial Northeast forming a coalition with the agrarian Midwest against the plantation South. Critics challenged his image of a unified Northeast and said that the region was in fact highly diverse with many different competing economic interests. In 1860–61, most business interests in the Northeast opposed war.
After 1950, only a few mainstream historians accepted the Beard interpretation, though it was accepted by libertarian economists. Historian Kenneth Stampp, who abandoned Beardianism after 1950, sums up the scholarly consensus: "Most historians ... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united."
Free labor vs. pro-slavery arguments
Historian Eric Foner argued that a free-labor ideology dominated thinking in the North, which emphasized economic opportunity. By contrast, Southerners described free labor as "greasy mechanics, filthy operators, small-fisted farmers, and moonstruck theorists". They strongly opposed the homestead laws that were proposed to give free farms in the west, fearing the small farmers would oppose plantation slavery. Indeed, opposition to homestead laws was far more common in secessionist rhetoric than opposition to tariffs. Southerners such as Calhoun argued that slavery was "a positive good", and that slaves were more civilized and morally and intellectually improved because of slavery.
Religious conflict over the slavery question
Led by Mark Noll, a body of scholarship has highlighted the fact that the American debate over slavery became a shooting war in part because the two sides reached diametrically opposite conclusions based on reading the same authoritative source of guidance on moral questions: the King James Version of the Bible.
After the American Revolution and the disestablishment of government-sponsored churches, the U.S. experienced the Second Great Awakening, a massive Protestant revival. Without centralized church authorities, American Protestantism was heavily reliant on the Bible, which was read in the standard 19th-century Reformed hermeneutic of "common sense", literal interpretation as if the Bible were speaking directly about the modern American situation instead of events that occurred in a much different context, millennia ago. By the mid-19th century this form of religion and Bible interpretation had become a dominant strand in American religious, moral and political discourse, almost serving as a de facto state religion.
"The pro-slavery South could point to slaveholding by the godly patriarch Abraham (Gen 12:5; 14:14; 24:35–36; 26:13–14), a practice that was later incorporated into Israelite national law (Lev 25:44–46). It was never denounced by Jesus, who made slavery a model of discipleship (Mk 10:44). The Apostle Paul supported slavery, counseling obedience to earthly masters (Eph 6:5–9; Col 3:22–25) as a duty in agreement with "the sound words of our Lord Jesus Christ and the teaching which accords with godliness" (1 Tim 6:3). Because slaves were to remain in their present state unless they could win their freedom (1 Cor 7:20–24), he sent the fugitive slave Onesimus back to his owner Philemon (Phlm 10–20). The abolitionist north had a difficult time matching the pro-slavery south passage for passage. ... Professor Eugene Genovese, who has studied these biblical debates over slavery in minute detail, concludes that the pro-slavery faction clearly emerged victorious over the abolitionists except for one specious argument based on the so-called Curse of Ham (Gen 9:18–27). For our purposes, it is important to realize that the South won this crucial contest with the North by using the prevailing hermeneutic, or method of interpretation, on which both sides agreed. So decisive was its triumph that the South mounted a vigorous counterattack on the abolitionists as infidels who had abandoned the plain words of Scripture for the secular ideology of the Enlightenment."
Protestant churches in the U.S., unable to agree on what God's Word said about slavery, ended up with schisms between Northern and Southern branches: the Methodist Episcopal Church in 1844, the Baptists in 1845, and the Presbyterian Church in 1857. These splits presaged the subsequent split in the nation: "The churches played a major role in the dividing of the nation, and it is probably true that it was the splits in the churches which made a final split of the nation inevitable." The conflict over how to interpret the Bible was central:
"The theological crisis occasioned by reasoning like [conservative Presbyterian theologian James H.] Thornwell's was acute. Many Northern Bible-readers and not a few in the South felt that slavery was evil. They somehow knew the Bible supported them in that feeling. Yet when it came to using the Bible as it had been used with such success to evangelize and civilize the United States, the sacred page was snatched out of their hands. Trust in the Bible and reliance upon a Reformed, literal hermeneutic had created a crisis that only bullets, not arguments, could resolve."
"The question of the Bible and slavery in the era of the Civil War was never a simple question. The issue involved the American expression of a Reformed literal hermeneutic, the failure of hermeneutical alternatives to gain cultural authority, and the exercise of deeply entrenched intuitive racism, as well as the presence of Scripture as an authoritative religious book and slavery as an inherited social-economic relationship. The North—forced to fight on unfriendly terrain that it had helped to create—lost the exegetical war. The South certainly lost the shooting war. But constructive orthodox theology was the major loser when American believers allowed bullets instead of hermeneutical self-consciousness to determine what the Bible said about slavery. For the history of theology in America, the great tragedy of the Civil War is that the most persuasive theologians were the Rev. Drs. William Tecumseh Sherman and Ulysses S. Grant."
There were many causes of the Civil War, but the religious conflict, almost unimaginable in modern America, cut very deep at the time. Noll and others highlight the significance of the religion issue for the famous phrase in Lincoln's second inaugural: "Both read the same Bible and pray to the same God, and each invokes His aid against the other."
The territorial crisis and the United States Constitution
Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase (the Louisiana Purchase), negotiation (the Adams–Onís Treaty, the Oregon Treaty), and conquest (the Mexican Cession). Of the states carved out of these territories by 1845, all had entered the Union as slave states: Louisiana, Missouri, Arkansas, Florida, and Texas, as well as the southern portions of Alabama and Mississippi. With the conquest of northern Mexico, including California, in 1848, slaveholding interests looked forward to the institution flourishing in these lands as well. Southerners also anticipated annexing as slave states Cuba (see the Ostend Manifesto), Mexico, and Central America (see the proposed Golden Circle). Northern free soil interests vigorously sought to curtail any further expansion of slave soil. It was over these territorial disputes that the proslavery and antislavery forces collided.
The existence of slavery in the southern states was far less politically polarizing than the explosive question of the territorial expansion of the institution in the west. Moreover, Americans were informed by two well-established readings of the Constitution regarding human bondage: that the slave states had complete autonomy over the institution within their boundaries, and that the domestic slave trade—trade among the states—was immune to federal interference. The only feasible strategy available to attack slavery was to restrict its expansion into the new territories. Slaveholding interests fully grasped the danger that this strategy posed to them. Both the South and the North believed: "The power to decide the question of slavery for the territories was the power to determine the future of slavery itself."
By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed to be sanctioned by the Constitution, implicitly or explicitly. Two of the "conservative" doctrines emphasized the written text and historical precedents of the founding document, while the other two doctrines developed arguments that transcended the Constitution.
One of the "conservative" theories, represented by the Constitutional Union Party, argued that the historical designation of free and slave apportionments in territories should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view.
The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance—that slavery could be excluded altogether in a territory at the discretion of Congress—with one caveat: the due process clause of the Fifth Amendment must apply. In other words, Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846.
Of the two doctrines that rejected federal authority, one was articulated by Senator Stephen A. Douglas of Illinois, a northern Democrat, and the other by southern Democrats Senator Jefferson Davis of Mississippi and Senator John C. Breckinridge of Kentucky.
Douglas devised the doctrine of territorial or "popular" sovereignty, which declared that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery—a purely local matter. Congress, having created the territory, was barred, according to Douglas, from exercising any authority in domestic matters. To do so would violate historic traditions of self-government, implicit in the US Constitution. The Kansas–Nebraska Act of 1854 legislated this doctrine.
The fourth in this quartet is the theory of state sovereignty ("states' rights"), also known as the "Calhoun doctrine" after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the federal union under the US Constitution—and not merely as an argument for secession. The basic premise was that all authority regarding matters of slavery in the territories resided in each state. The role of the federal government was merely to enable the implementation of state laws when residents of the states entered the territories. Calhoun asserted that the federal government in the territories was only the agent of the several sovereign states, and hence incapable of forbidding the bringing into any territory of anything that was legal property in any state. State sovereignty, in other words, gave the laws of the slaveholding states extra-jurisdictional effect.
"States' rights" was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L Krannawitter points out, "[T]he Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power."
Antislavery movements in the North gained momentum in the 1830s and 1840s, a period of rapid transformation of Northern society that inspired a social and political reformism. Many of the reformers of the period, including abolitionists, attempted in one way or another to transform the lifestyle and work habits of labor, helping workers respond to the new demands of an industrializing, capitalistic society.
Antislavery, like many other reform movements of the period, was influenced by the legacy of the Second Great Awakening, a period of religious revival in the new country stressing the reform of individuals, which was still relatively fresh in the American memory. Thus, while the reform spirit of the period was expressed by a variety of movements with often-conflicting political goals, most reform movements shared a common feature in their emphasis on the Great Awakening principle of transforming the human personality through discipline, order, and restraint.
"Abolitionist" had several meanings at the time. The followers of William Lloyd Garrison, including Wendell Phillips and Frederick Douglass, demanded the "immediate abolition of slavery", hence the name. A more pragmatic group of abolitionists, like Theodore Weld and Arthur Tappan, wanted immediate action, but that action might well be a program of gradual emancipation, with a long intermediate stage. "Antislavery men", like John Quincy Adams, did what they could to limit slavery and end it where possible, but were not part of any abolitionist group. For example, in 1841 Adams represented the Amistad African slaves in the Supreme Court of the United States and argued that they should be set free. In the last years before the war, "antislavery" could mean the Northern majority, like Abraham Lincoln, who opposed expansion of slavery or its influence, as by the Kansas–Nebraska Act, or the Fugitive Slave Act. Many Southerners called all these abolitionists, without distinguishing them from the Garrisonians. James M. McPherson explains the abolitionists' deep beliefs: "All people were equal in God's sight; the souls of black folks were as valuable as those of whites; for one of God's children to enslave another was a violation of the Higher Law, even if it was sanctioned by the Constitution."
Stressing the Yankee Protestant ideals of self-improvement, industry, and thrift, most abolitionists—most notably William Lloyd Garrison—condemned slavery as a lack of control over one's own destiny and the fruits of one's labor.
Wendell Phillips, one of the most ardent abolitionists, attacked the Slave Power and presaged disunion as early as 1845:
The experience of the fifty years ... shows us the slaves trebling in numbers—slaveholders monopolizing the offices and dictating the policy of the Government—prostituting the strength and influence of the Nation to the support of slavery here and elsewhere—trampling on the rights of the free States, and making the courts of the country their tools. To continue this disastrous alliance longer is madness. ... Why prolong the experiment?
Abolitionists also attacked slavery as a threat to the freedom of white Americans. Defining freedom as more than a simple lack of restraint, antebellum reformers held that the truly free man was one who imposed restraints upon himself. Thus, for the anti-slavery reformers of the 1830s and 1840s, the promise of free labor and upward social mobility (opportunities for advancement, rights to own property, and to control one's own labor), was central to the ideal of reforming individuals.
Controversy over the so-called Ostend Manifesto (which proposed the U.S. annexation of Cuba as a slave state) and the Fugitive Slave Act kept sectional tensions alive before the issue of slavery in the West could occupy the country's politics in the mid-to-late 1850s.
Antislavery sentiment among some groups in the North intensified after the Compromise of 1850, when Southerners began appearing in Northern states to pursue fugitives or often to claim as slaves free African Americans who had resided there for years. Meanwhile, some abolitionists openly sought to prevent enforcement of the law. Violation of the Fugitive Slave Act was often open and organized. In Boston—a city from which it was boasted that no fugitive had ever been returned—Theodore Parker and other members of the city's elite helped form mobs to prevent enforcement of the law as early as April 1851. A pattern of public resistance emerged in city after city, notably in Syracuse in 1851 (culminating in the Jerry Rescue incident late that year), and Boston again in 1854. But the issue did not lead to a crisis until revived by the same issue underlying the Missouri Compromise of 1820: slavery in the territories.
Arguments for and against slavery
William Lloyd Garrison, a prominent abolitionist, was motivated by a belief in the growth of democracy. Because the Constitution had a three-fifths clause, a fugitive slave clause, and a 20-year protection of the Atlantic slave trade, Garrison publicly burned a copy of the U.S. Constitution, and called it "a covenant with death and an agreement with hell". In 1854, he said:
I am a believer in that portion of the Declaration of American Independence in which it is set forth, as among self-evident truths, "that all men are created equal; that they are endowed by their Creator with certain inalienable rights; that among these are life, liberty, and the pursuit of happiness." Hence, I am an abolitionist. Hence, I cannot but regard oppression in every form—and most of all, that which turns a man into a thing—with indignation and abhorrence.
The opposite view of slavery was expressed by Confederate Vice President Alexander Stephens in his "Cornerstone Speech". Stephens said:
(Thomas Jefferson's) ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error. ... Our new government is founded upon exactly the opposite idea; its foundations are laid, its corner-stone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition.
"Free soil" movement
Opposition to the 1846 Wilmot Proviso helped to consolidate the "free-soil" forces. In 1848 Radical New York Democrats known as Barnburners, members of the Liberty Party, and anti-slavery Whigs formed the Free-Soil Party. The party supported former President Martin Van Buren and Charles Francis Adams Sr. for President and Vice President. The party opposed the expansion of slavery into territories where it had not yet existed, such as Oregon and the ceded Mexican territory. It had the effect of dividing the Democratic Party in the North, especially in areas of Yankee settlement.
Eric Foner in Free Soil, Free Labor, Free Men: The Ideology of the Republican Party Before the Civil War (1970) emphasized the importance of free labor ideology to Northern opponents of slavery, pointing out that the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that rich slave owners would buy up the best lands and block opportunity for free white farmers using family and hired labor. Free Soilers joined the Republican Party in 1854; the new party appealed to powerful demands in the North through a broader commitment to "free labor" principles. Fear of the "Slave Power" had a far greater appeal to Northern self-interest than did abolitionist arguments based on the plight of black slaves in the South.
Slavery question in territories acquired from Mexico
Soon after the Mexican War started, and long before negotiation of the new US–Mexico border, the question of slavery in the territories to be acquired polarized the Northern and Southern United States in the most bitter sectional conflict up to this time. The resulting deadlock lasted four years, during which the Second Party System broke up, Mormon pioneers settled Utah, the California Gold Rush settled California, and New Mexico under a federal military government turned back Texas's attempt to assert control over territory it claimed as far west as the Rio Grande. Eventually the Compromise of 1850 preserved the Union, but only for another decade. Proposals included:
- The Wilmot Proviso banning slavery in any new territory to be acquired from Mexico, not including Texas, which had been annexed the previous year. Passed by the United States House of Representatives in August 1846 and February 1847 but not the Senate. Later an effort to attach the proviso to the Treaty of Guadalupe Hidalgo also failed.
- Failed amendments to the Wilmot Proviso by William W. Wick and then Stephen Douglas extending the Missouri Compromise line (36°30' parallel north) west to the Pacific Ocean, allowing slavery in most of present-day New Mexico and Arizona, southern Nevada, and Southern California, as well as any other territories that might be acquired from Mexico. The line was again proposed by the Nashville Convention of June 1850.
- Popular sovereignty, developed by Lewis Cass and Douglas as the eventual Democratic Party position, letting each territory decide whether to allow slavery.
- William L. Yancey's "Alabama Platform", endorsed by the Alabama and Georgia legislatures and by Democratic state conventions in Florida and Virginia, called for no restrictions on slavery in the territories either by the federal government or by territorial governments before statehood, opposition to any candidates supporting either the Wilmot Proviso or popular sovereignty, and federal legislation overruling Mexican anti-slavery laws.
- General Zachary Taylor, who became the Whig candidate in 1848 and then President from March 1849 to July 1850, proposed after becoming President that the entire area become two free states, called California and New Mexico, but much larger than the eventual ones. None of the area would be left as an unorganized or organized territory, avoiding the question of slavery in the territories.
- The Mormons' proposal for a State of Deseret, incorporating most of the area of the Mexican Cession but excluding the large non-Mormon populations in Northern California and central New Mexico, was considered unlikely to succeed in Congress. Nevertheless, in 1849 President Zachary Taylor sent his agent John Wilson westward with a proposal to combine California and Deseret as a single state, which would reduce the number of new free states and thus slow the erosion of Southern parity in the Senate.
- The Compromise of 1850, proposed by Henry Clay in January 1850, guided to passage by Douglas over Northern Whig and Southern Democrat opposition, and enacted September 1850, admitted California as a free state, including Southern California, and organized Utah Territory and New Mexico Territory with slavery to be decided by popular sovereignty. Texas dropped its claim to the disputed northwestern areas in return for debt relief, and the areas were divided between the two new territories and unorganized territory. El Paso, where Texas had successfully established county government, was left in Texas. No territory dominated by Southerners (like the later short-lived Confederate Territory of Arizona) was created. Also, the slave trade was abolished in Washington, D.C. (but not slavery itself), and the Fugitive Slave Act was strengthened.
States' rights was an issue in the 19th century for those who felt that the federal government was superseding the authority of the individual states and violating the role intended for it by the Founding Fathers of the United States. Kenneth M. Stampp notes that each section used states' rights arguments when convenient and shifted positions when convenient. For example, the Fugitive Slave Act of 1850 was enacted by southern representatives to use federal authority to suppress northern states' rights. The Constitution gave federal protection to slave property rights, and slaveholders demanded that this federal power be strengthened and take precedence over northern state laws. Anti-slavery forces in northern legislatures resisted this constitutional right by passing state personal liberty laws that placed state law above the federal mandate.
States' rights and slavery
As the historian Arthur Schlesinger put it:
From the close of the nullification episode of 1832–1833 to the outbreak of the Civil War, the agitation of state rights was intimately connected with a new issue of growing importance, the slavery question, and the principal form assumed by the doctrine was that of the right of secession. The pro-slavery forces sought refuge in the state rights position as a shield against federal interference with pro-slavery projects. ... As a natural consequence, anti-slavery legislatures in the North were led to lay great stress on the national character of the Union and the broad powers of the general government in dealing with slavery. Nevertheless, it is significant to note that when it served anti-slavery purposes better to lapse into state rights dialectic, northern legislatures did not hesitate to be inconsistent.
Echoing Schlesinger, Forrest McDonald wrote that "the dynamics of the tension between federal and state authority changed abruptly during the late 1840s" as a result of the acquisition of territory in the Mexican War. McDonald states:
And then, as a by-product or offshoot of a war of conquest, slavery—a subject that leading politicians had, with the exception of the gag rule controversy and Calhoun's occasional outbursts, scrupulously kept out of partisan debate—erupted as the dominant issue in that arena. So disruptive was the issue that it subjected the federal Union to the greatest strain the young republic had yet known.
In a February 1861 speech to the Virginian secession convention, Georgian Henry L. Benning stated the reasoning behind Georgia's declaring secession from the Union:
What was the reason that induced ... secession? This reason may be summed up in one single proposition. It was a conviction, a deep conviction ... that a separation from the North—was the only thing that could prevent the abolition of ... slavery. ... unless there had been a separation from the North, slavery would be abolished in Georgia ...
States' rights and minority rights
States' rights theories gained strength from the awareness that the Northern population was growing much faster than the population of the South, so it was only a matter of time before the North controlled the federal government. Acting as a "conscious minority", Southerners hoped that a strict constructionist interpretation of the Constitution would limit federal power over the states, and that a defense of states' rights against federal encroachments or even nullification or secession would save the South. Before 1860, most presidents were either Southern or pro-South. The North's growing population would mean the election of pro-North presidents, and the addition of free-soil states would end Southern parity with the North in the Senate. As the historian Allan Nevins described Calhoun's theory of states' rights, "Governments, observed Calhoun, were formed to protect minorities, for majorities could take care of themselves."
Until the 1860 election, the South's interests nationally were entrusted to the Democratic Party. In 1860, the Democratic Party split into Northern and Southern factions as the result of a "bitter debate in the Senate between Jefferson Davis and Stephen Douglas". The debate was over resolutions proposed by Davis "opposing popular sovereignty and supporting a federal slave code and states' rights" which carried over to the national convention in Charleston.
Jefferson Davis defined equality in terms of the equal rights of states, and opposed the declaration that all men are created equal. Davis stated that a "disparaging discrimination" and a fight for "liberty" against "the tyranny of an unbridled majority" gave the Confederate states a right to secede. In 1860, Congressman Laurence M. Keitt of South Carolina said, "The anti-slavery party contend that slavery is wrong in itself, and the Government is a consolidated national democracy. We of the South contend that slavery is right, and that this is a confederate Republic of sovereign States."
Stampp cited Confederate Vice President Alexander Stephens, author of A Constitutional View of the Late War Between the States, as an example of a Southern leader who called slavery the "cornerstone of the Confederacy" when the war began and who, after the Confederacy's defeat, switched course to say that the war had been about states' rights rather than slavery. Stampp said that Stephens became one of the most ardent defenders of the Lost Cause.
Historian William C. Davis also mentioned inconsistencies in Southern states' rights arguments. He explained the Confederate Constitution's protection of slavery at the national level as follows:
To the old Union they had said that the Federal power had no authority to interfere with slavery issues in a state. To their new nation they would declare that the state had no power to interfere with a federal protection of slavery. Of all the many testimonials to the fact that slavery, and not states rights, really lay at the heart of their movement, this was the most eloquent of all.
Southern historian Gordon Rhea wrote in 2011 that:
Tariffs appear nowhere in ... sermons and speeches, and 'states' rights' are mentioned only in the context of the rights of states to ... own other humans. The central message was to play on the fear of African barbarians ... The preachers and politicians delivered on their promise. The Confederate States were established explicitly to preserve and expand the institution of slavery. Alexander Stephens, the Confederacy's vice president, said so himself in 1861, in unambiguous terms.
Compromise of 1850
The victory of the United States over Mexico resulted in the addition of large new territories conquered from Mexico. Controversy over whether these territories would be slave or free raised the risk of a war between slave and free states, and Northern support for the Wilmot Proviso, which would have banned slavery in the conquered territories, increased sectional tensions. The controversy was temporarily resolved by the Compromise of 1850, which allowed the territories of Utah and New Mexico to decide for or against slavery, but also allowed the admission of California as a free state, reduced the size of the slave state of Texas by adjusting the boundary, and ended the slave trade but not slavery itself in the District of Columbia. In return, the South got a stronger fugitive slave law than the version mentioned in the US Constitution. The Fugitive Slave Law would reignite controversy over slavery.
Fugitive Slave Law issues
The Fugitive Slave Act of 1850 required Northerners to assist Southerners in reclaiming fugitive slaves, which many Northerners strongly opposed. Anthony Burns was among the fugitive slaves captured and returned in chains to slavery as a result of the law. Harriet Beecher Stowe's best-selling novel Uncle Tom's Cabin greatly increased opposition to the Fugitive Slave Act.
Kansas–Nebraska Act (1854)
Most people thought the Compromise had ended the territorial issue, but Stephen A. Douglas reopened it in 1854. Douglas proposed the Kansas–Nebraska Bill with the intention of opening up vast new high-quality farm lands to settlement. As a Chicagoan, he was especially interested in the railroad connections from Chicago into Kansas and Nebraska, but that was not a controversial point. More importantly, Douglas firmly believed in democracy at the grass roots—that actual settlers have the right to decide on slavery, not politicians from other states. His bill provided that popular sovereignty, through the territorial legislatures, should decide "all questions pertaining to slavery", thus effectively repealing the Missouri Compromise. The ensuing public reaction created a firestorm of protest in the Northern states, where the bill was seen as an effort to repeal the Missouri Compromise. However, the popular reaction in the first month after the bill's introduction failed to foreshadow the gravity of the situation. As Northern papers initially ignored the story, Republican leaders lamented the lack of a popular response.
Eventually, the popular reaction did come, but the leaders had to spark it. Salmon P. Chase's "Appeal of the Independent Democrats" did much to arouse popular opinion. In New York, William H. Seward finally took it upon himself to organize a rally against the Nebraska bill, since none had arisen spontaneously. Newspapers such as the National Era, the New-York Tribune, and local free-soil journals condemned the bill. The Lincoln–Douglas debates of 1858 would later draw national attention to the issue of slavery expansion.
Founding of the Republican Party (1854)
Convinced that Northern society was superior to that of the South, and increasingly persuaded of the South's ambitions to extend slave power beyond its existing borders, Northerners were embracing a viewpoint that made conflict likely; however, conflict required the ascendancy of a political group to express the views of the North, such as the Republican Party. The Republican Party—campaigning on the popular, emotional issue of "free soil" in the frontier—captured the White House after just six years of existence.
The Republican Party grew out of the controversy over the Kansas–Nebraska legislation. Once the Northern reaction against the Kansas–Nebraska Act took place, its leaders acted to advance another political reorganization. Henry Wilson declared the Whig Party dead and vowed to oppose any efforts to resurrect it. Horace Greeley's Tribune called for the formation of a new Northern party, and Benjamin Wade, Chase, Charles Sumner, and others spoke out for the union of all opponents of the Nebraska Act. The National Era's Gamaliel Bailey was involved in calling a caucus of anti-slavery Whig and Democratic Party Congressmen in May.
Meeting in a Ripon, Wisconsin, Congregational church on February 28, 1854, some thirty opponents of the Nebraska Act called for the organization of a new political party and suggested that "Republican" would be the most appropriate name (to link their cause to the defunct Republican Party of Thomas Jefferson). These founders also took a leading role in the creation of the Republican Party in many northern states during the summer of 1854. While conservatives and many moderates were content merely to call for the restoration of the Missouri Compromise or a prohibition of slavery extension, radicals advocated repeal of the Fugitive Slave Laws and rapid abolition in existing states. The term "radical" has also been applied to those who objected to the Compromise of 1850, which extended slavery in the territories.
But without the benefit of hindsight, the 1854 elections would seem to indicate the possible triumph of the Know-Nothing movement rather than anti-slavery, with the Catholic/immigrant question replacing slavery as the issue capable of mobilizing mass appeal. Know-Nothings, for instance, captured the mayoralty of Philadelphia with a majority of over 8,000 votes in 1854. Even after opening up immense discord with his Kansas–Nebraska Act, Senator Douglas began speaking of the Know-Nothings, rather than the Republicans, as the principal danger to the Democratic Party.
When Republicans spoke of themselves as a party of "free labor", they appealed to a rapidly growing, primarily middle class base of support, not permanent wage earners or the unemployed (the working class). When they extolled the virtues of free labor, they were merely reflecting the experiences of millions of men who had "made it" and millions of others who had a realistic hope of doing so. Like the Tories in England, the Republicans in the United States would emerge as the nationalists, homogenizers, imperialists, and cosmopolitans.
Those who had not yet "made it" included Irish immigrants, who made up a large growing proportion of Northern factory workers. Republicans often saw the Catholic working class as lacking the qualities of self-discipline, temperance, and sobriety essential for their vision of ordered liberty. Republicans insisted that there was a high correlation between education, religion, and hard work—the values of the "Protestant work ethic"—and Republican votes. "Where free schools are regarded as a nuisance, where religion is least honored and lazy unthrift is the rule," read an editorial of the pro-Republican Chicago Democratic Press after James Buchanan's defeat of John C. Fremont in the 1856 presidential election, "there Buchanan has received his strongest support."
Ethno-religious, socio-economic, and cultural fault lines ran throughout American society, but were becoming increasingly sectional, pitting Yankee Protestants with a stake in the emerging industrial capitalism and American nationalism increasingly against those tied to Southern slaveholding interests. For example, the historian Don E. Fehrenbacher, in his Prelude to Greatness: Lincoln in the 1850s, noted how Illinois was a microcosm of the national political scene, pointing out voting patterns that bore striking correlations to regional patterns of settlement. Those areas settled from the South were staunchly Democratic, while those settled by New Englanders were staunchly Republican. A belt of border counties was known for its political moderation and traditionally held the balance of power. Intertwined with religious, ethnic, regional, and class identities, the issues of free labor and free soil were thus easy to play on.
Events during the next two years in "Bleeding Kansas" sustained the popular fervor originally aroused among some elements in the North by the Kansas–Nebraska Act. Free-State settlers from the North were encouraged by press and pulpit and the powerful organs of abolitionist propaganda. Often they received financial help from such organizations as the Massachusetts Emigrant Aid Company. Those from the South often received financial contributions from the communities they left. Southerners sought to uphold their constitutional rights in the territories and to maintain sufficient political strength to repulse "hostile and ruinous legislation".
While the Great Plains were largely unfit for the cultivation of cotton, informed Southerners demanded that the West be open to slavery, often—perhaps most often—with minerals in mind. Brazil, for instance, was an example of the successful use of slave labor in mining. In the middle of the 18th century, diamond mining supplemented gold mining in Minas Gerais and accounted for a massive transfer of masters and slaves from Brazil's northeastern sugar region. Southern leaders knew a good deal about this experience. It was even promoted in the pro-slavery De Bow's Review as far back as 1848.
Fragmentation of the American party system
"Bleeding Kansas" and the elections of 1856
In Kansas around 1855, the slavery issue reached a condition of intolerable tension and violence. But this was in an area where an overwhelming proportion of settlers were merely land-hungry Westerners indifferent to the public issues. The majority of the inhabitants were not concerned with sectional tensions or the issue of slavery. Instead, the tension in Kansas began as a contention between rival claimants. During the first wave of settlement, no one held titles to the land, and settlers rushed to occupy newly open land fit for cultivation. While the tension and violence did emerge as a pattern pitting Yankee and Missourian settlers against each other, there is little evidence of any ideological divides on the questions of slavery. Instead, the Missouri claimants, thinking of Kansas as their own domain, regarded the Yankee squatters as invaders, while the Yankees accused the Missourians of grabbing the best land without honestly settling on it.
However, the 1855–56 violence in "Bleeding Kansas" did reach an ideological climax after John Brown—regarded by followers as the instrument of God's will to destroy slavery—entered the melee. His killing of five pro-slavery settlers (the so-called "Pottawatomie massacre", during the night of May 24, 1856) resulted in some irregular, guerrilla-style strife. Aside from John Brown's fervor, the strife in Kansas often involved only armed bands more interested in land claims or loot.
Of greater importance than the civil strife in Kansas, however, was the reaction against it nationwide and in Congress. In both North and South, the belief was widespread that the aggressive designs of the other section were epitomized by (and responsible for) what was happening in Kansas. Consequently, "Bleeding Kansas" emerged as a symbol of sectional controversy.
Indignant over the developments in Kansas, the Republicans—the first entirely sectional major party in U.S. history—entered their first presidential campaign with confidence. Their nominee, John C. Frémont, was a generally safe candidate for the new party. Although rumors that he was a Catholic upset some of the party's Nativist Know-Nothing supporters, the nomination of the famed explorer of the Far West and ex-senator from California with a short political record was an attempt to woo ex-Democrats. The other two Republican contenders, William H. Seward and Salmon P. Chase, were seen as too radical.
Nevertheless, the campaign of 1856 was waged almost exclusively on the slavery issue—pitted as a struggle between democracy and aristocracy—focusing on the question of Kansas. The Republicans condemned the Kansas–Nebraska Act and the expansion of slavery, but they advanced a program of internal improvements combining the idealism of anti-slavery with the economic aspirations of the North. The new party rapidly developed a powerful partisan culture, and energetic activists drove voters to the polls in unprecedented numbers. People reacted with fervor. Young Republicans organized the "Wide Awake" clubs and chanted "Free Soil, Free Labor, Free Men, Frémont!" With Southern Fire-Eaters and even some moderates uttering threats of secession if Frémont won, the Democratic candidate, Buchanan, benefited from apprehensions about the future of the Union.
Millard Fillmore, the candidate of the American Party (Know-Nothings) and the Silver Gray Whigs, said in a speech at Albany, New York, that the election of a Republican candidate would dissolve the Union. Abraham Lincoln replied on July 23 in a speech at Galena, Illinois; Carl Sandburg wrote that this speech probably resembled Lincoln's Lost Speech: "This Government would be very weak, indeed, if a majority, with a disciplined army and navy, and a well-filled treasury, could not preserve itself, when attacked by an unarmed, undisciplined, unorganized minority. All this talk about the dissolution of the Union is humbug—nothing but folly. We won't dissolve the Union, and you shan't."
Dred Scott decision (1857) and the Lecompton Constitution
The Lecompton Constitution and Dred Scott v. Sandford (the respondent's name, Sanford, was misspelled in the reports) were both part of the Bleeding Kansas controversy over slavery as a result of the Kansas–Nebraska Act, which was Stephen Douglas' attempt at replacing the Missouri Compromise ban on slavery in the Kansas and Nebraska territories with popular sovereignty, which meant that the people of a territory could vote either for or against slavery. The Lecompton Constitution, which would have allowed slavery in Kansas, was the result of massive vote fraud by the pro-slavery Border Ruffians. Douglas defeated the Lecompton Constitution because it was supported by the minority of pro-slavery people in Kansas, and Douglas believed in majority rule. Douglas hoped that both South and North would support popular sovereignty, but the opposite was true. Neither side trusted Douglas.
The Supreme Court decision of 1857 in Dred Scott v. Sandford added to the controversy. Chief Justice Roger B. Taney's decision said that blacks were "so far inferior that they had no rights which the white man was bound to respect," and that slavery could spread into the territories even if the majority of people in the territories were anti-slavery. Lincoln warned that "the next Dred Scott decision" could impose slavery on Northern states.
Buchanan, Republicans and anti-administration Democrats
President James Buchanan decided to end the troubles in Kansas by urging Congress to admit Kansas as a slave state under the Lecompton Constitution. Kansas voters, however, soundly rejected this constitution by a vote of 10,226 to 138. As Buchanan directed his presidential authority to promoting the Lecompton Constitution, he further angered the Republicans and alienated members of his own party. Prompting their break with the administration, the Douglasites saw this scheme as an attempt to pervert the principle of popular sovereignty on which the Kansas–Nebraska Act was based. Nationwide, conservatives were incensed, feeling as though the principles of states' rights had been violated. Even in the South, ex-Whigs and border state Know-Nothings—most notably John Bell and John J. Crittenden (key figures in the event of sectional controversies)—urged the Republicans to oppose the administration's moves and take up the demand that the territories be given the power to accept or reject slavery.
As the schism in the Democratic party deepened, moderate Republicans argued that an alliance with anti-administration Democrats, especially Stephen Douglas, would be a key advantage in the 1860 elections. Some Republican observers saw the controversy over the Lecompton Constitution as an opportunity to peel off Democratic support in the border states, where Frémont picked up little support. After all, the border states had often gone for Whigs with a Northern base of support in the past without prompting threats of Southern withdrawal from the Union.
Among the proponents of this strategy was The New York Times, which called on the Republicans to downplay opposition to popular sovereignty in favor of a compromise policy calling for "no more slave states" in order to quell sectional tensions. The Times maintained that for the Republicans to be competitive in the 1860 elections, they would need to broaden their base of support to include all voters who for one reason or another were upset with the Buchanan Administration.
Indeed, pressure was strong for an alliance that would unite the growing opposition to the Democratic Administration. But such an alliance was no novel idea; it would essentially entail transforming the Republicans into the national, conservative, Union party of the country. In effect, this would be a successor to the Whig party.
Republican leaders, however, staunchly opposed any attempts to modify the party position on slavery, appalled by what they considered a surrender of their principles when, for example, all the ninety-two Republican members of Congress voted for the Crittenden-Montgomery bill in 1858. Although this compromise measure blocked Kansas' entry into the union as a slave state, the fact that it called for popular sovereignty, instead of rejecting slavery altogether, was troubling to the party leaders.
In the end, the Crittenden-Montgomery bill did not create a grand anti-administration coalition of Republicans, ex-Whig Southerners in the border states, and Northern Democrats. Instead, the Democratic Party merely split along sectional lines. Anti-Lecompton Democrats complained that certain leaders had imposed a pro-slavery policy upon the party. The Douglasites, however, refused to yield to administration pressure. Like the anti-Nebraska Democrats, who were now members of the Republican Party, the Douglasites insisted that they—not the administration—commanded the support of most northern Democrats.
Extremist sentiment in the South advanced dramatically as the Southern planter class perceived its hold on the executive, legislative, and judicial apparatuses of the central government waning. It also grew increasingly difficult for Southern Democrats to manipulate power in many of the Northern states through their allies in the Democratic Party.
Historians have emphasized that the sense of honor was a central concern of upper-class white Southerners. The idea of being treated like a second-class citizen was anathema and could not be tolerated by an honorable southerner. The abolitionist position held that slavery was a negative or evil phenomenon that damaged the rights of white men and the prospects of republicanism. To the white South this rhetoric made Southerners second-class citizens because it trampled what they believed was their Constitutional right to take their chattel property anywhere.
Assault on Sumner (1856)
On May 19 Massachusetts Senator Charles Sumner gave a long speech in the Senate entitled "The Crime Against Kansas", which condemned the Slave Power as the evil force behind the nation's troubles. Sumner said the Southerners had committed a "crime against Kansas", singling out Senator Andrew P. Butler of South Carolina:
Not in any common lust for power did this uncommon tragedy have its origin. It is the rape of a virgin Territory, compelling it to the hateful embrace of slavery; and it may be clearly traced to a depraved desire for a new Slave State, hideous offspring of such a crime, in the hope of adding to the power of slavery in the National Government.
Sumner famously cast the South Carolinian as having "chosen a mistress ... who, though ugly to others, is always lovely to him; though polluted in the sight of the world, is chaste in his sight—I mean the harlot, slavery!" According to Hoffer (2010), "It is also important to note the sexual imagery that recurred throughout the oration, which was neither accidental nor without precedent. Abolitionists routinely accused slaveholders of maintaining slavery so that they could engage in forcible sexual relations with their slaves." Three days later, Sumner, working at his desk on the Senate floor, was beaten almost to death by Congressman Preston S. Brooks, Butler's nephew. Sumner took years to recover; he became the martyr to the antislavery cause who said the episode proved the barbarism of slave society. Brooks was lauded as a hero upholding Southern honor. Although Representative Anson Burlingame managed to publicly embarrass Brooks in retaliation, the original episode further polarized North and South, strengthened the new Republican Party, and added a new element of violence on the floor of Congress.
Emergence of Lincoln
Republican Party structure
Despite their significant loss in the election of 1856, Republican leaders realized that even though they appealed only to Northern voters, they need win only two more states, such as Pennsylvania and Illinois, to win the presidency in 1860.
As the Democrats were grappling with their own troubles, leaders in the Republican party fought to keep elected members focused on the issue of slavery in the West, which allowed them to mobilize popular support. Chase wrote Sumner that if the conservatives succeeded, it might be necessary to recreate the Free Soil Party. He was also particularly disturbed by the tendency of many Republicans to eschew moral attacks on slavery for political and economic arguments.
The controversy over slavery in the West was still not creating a fixation on the issue of slavery. Although the old restraints on the sectional tensions were being eroded with the rapid extension of mass politics and mass democracy in the North, the perpetuation of conflict over the issue of slavery in the West still required the efforts of radical Democrats in the South and radical Republicans in the North. They had to ensure that the sectional conflict would remain at the center of the political debate.
William Seward contemplated this potential in the 1840s, when the Democrats were the nation's majority party, usually controlling Congress, the presidency, and many state offices. The country's institutional structure and party system allowed slaveholders to prevail in more of the nation's territories and to garner a great deal of influence over national policy. With growing popular discontent with the unwillingness of many Democratic leaders to take a stand against slavery, and growing consciousness of the party's increasingly pro-Southern stance, Seward became convinced that the only way for the Whig Party to counteract the Democrats' strong monopoly of the rhetoric of democracy and equality was for the Whigs to embrace anti-slavery as a party platform. Once again, to increasing numbers of Northerners, the Southern labor system was increasingly seen as contrary to the ideals of American democracy.
Republicans believed in the existence of "the Slave Power Conspiracy", which had seized control of the federal government and was attempting to pervert the Constitution for its own purposes. The "Slave Power" idea gave the Republicans the anti-aristocratic appeal with which men like Seward had long wished to be associated politically. By fusing older anti-slavery arguments with the idea that slavery posed a threat to Northern free labor and democratic values, it enabled the Republicans to tap into the egalitarian outlook which lay at the heart of Northern society.
In this sense, during the 1860 presidential campaign, Republican orators even cast "Honest Abe" as an embodiment of these principles, repeatedly referring to him as "the child of labor" and "son of the frontier", who had proved how "honest industry and toil" were rewarded in the North. Although Lincoln had been a Whig, the "Wide Awakes" (members of the Republican clubs) used replicas of rails that he had split to remind voters of his humble origins.
In almost every northern state, organizers attempted to have a Republican Party or an anti-Nebraska fusion movement on ballots in 1854. In areas where the radical Republicans controlled the new organization, the comprehensive radical program became the party policy. Just as they helped organize the Republican Party in the summer of 1854, the radicals played an important role in the national organization of the party in 1856. Republican conventions in New York, Massachusetts, and Illinois adopted radical platforms. These radical platforms in such states as Wisconsin, Michigan, Maine, and Vermont usually called for the divorce of the government from slavery, the repeal of the Fugitive Slave Laws, and no more slave states, as did platforms in Pennsylvania, Minnesota, and Massachusetts when radical influence was high.
Conservatives at the Republican 1860 nominating convention in Chicago were able to block the nomination of William Seward, who had an earlier reputation as a radical (but by 1860 had been criticized by Horace Greeley as being too moderate). Other candidates had earlier joined or formed parties opposing the Whigs and had thereby made enemies of many delegates. Lincoln was selected on the third ballot. However, conservatives were unable to bring about the resurrection of "Whiggery". The convention's resolutions regarding slavery were roughly the same as they had been in 1856, but the language appeared less radical. In the following months, even Republican conservatives like Thomas Ewing and Edward Baker embraced the platform language that "the normal condition of territories was freedom". All in all, the organizers had done an effective job of shaping the official policy of the Republican Party.
Southern slaveholding interests now faced the prospects of a Republican president and the entry of new free states that would alter the nation's balance of power between the sections. To many Southerners, the resounding defeat of the Lecompton Constitution foreshadowed the entry of more free states into the Union. Dating back to the Missouri Compromise, the Southern region desperately sought to maintain an equal balance of slave states and free states so as to be competitive in the Senate. Since the last slave state was admitted in 1845, five more free states had entered. The tradition of maintaining a balance between North and South was abandoned in favor of the addition of more free soil states.
The Lincoln–Douglas debates were a series of seven debates in 1858 between Stephen Douglas, United States senator from Illinois, and Abraham Lincoln, the Republican who sought to replace Douglas in the Senate. The debates were mainly about slavery. Douglas defended his Kansas–Nebraska Act, which replaced the Missouri Compromise ban on slavery in the Louisiana Purchase territory north and west of Missouri with popular sovereignty, which allowed residents of territories such as Kansas to vote either for or against slavery. Douglas put Lincoln on the defensive by accusing him of being a Black Republican abolitionist, but Lincoln responded by asking Douglas to reconcile popular sovereignty with the Dred Scott decision. Douglas' Freeport Doctrine was that residents of a territory could keep slavery out by refusing to pass a slave code and other laws needed to protect slavery. Douglas' Freeport Doctrine, and the fact that he helped defeat the pro-slavery Lecompton Constitution, made Douglas unpopular in the South, which led to the 1860 split of the Democratic Party into Northern and Southern wings. The Democrats retained control of the Illinois legislature, and Douglas thus retained his seat in the U.S. Senate (at that time senators were elected by the state legislatures, not by popular vote); however, Lincoln's national profile was greatly raised, paving the way for his election as president of the United States two years later.
In The Rise of American Civilization (1927), Charles and Mary Beard argue that slavery was not so much a social or cultural institution as an economic one (a labor system). The Beards cited inherent conflicts between Northeastern finance, manufacturing, and commerce and Southern plantations, which competed to control the federal government so as to protect their own interests. According to the economic determinists of the era, both groups used arguments over slavery and states' rights as a cover.
Recent historians have rejected the Beardian thesis, but the Beards' economic determinism has influenced subsequent historians in important ways. In Time on the Cross: The Economics of American Negro Slavery (1974), Robert William Fogel (who would win the 1993 Nobel Memorial Prize in Economic Sciences) and Stanley L. Engerman argued that slavery was profitable and that the price of slaves would have continued to rise. Modernization theorists, such as Raimondo Luraghi, have argued that as the Industrial Revolution was expanding on a worldwide scale, the days of wrath were coming for a series of agrarian, pre-capitalistic, "backward" societies throughout the world, from the Italian and American South to India. But most American historians point out the South was highly developed and on average about as prosperous as the North.
Panic of 1857 and sectional realignments
A few historians believe that the serious financial Panic of 1857 and the economic difficulties leading up to it strengthened the Republican Party and heightened sectional tensions. Before the panic, strong economic growth was being achieved under relatively low tariffs. Hence much of the nation concentrated on growth and prosperity.
The iron and textile industries were facing acute, worsening trouble each year after 1850. By 1854, stocks of iron were accumulating in each world market. Iron prices fell, forcing many American iron mills to shut down.
Republicans urged western farmers and northern manufacturers to blame the depression on the domination of the low-tariff economic policies of southern-controlled Democratic administrations. However, the depression revived suspicion of Northeastern banking interests in both the South and the West. Eastern demand for western farm products shifted the West closer to the North. As the "transportation revolution" (canals and railroads) went forward, an increasingly large share and absolute amount of wheat, corn, and other staples of western producers—once difficult to haul across the Appalachians—went to markets in the Northeast. The depression emphasized the value of the western markets for eastern goods and homesteaders who would furnish markets and respectable profits.
Aside from the land issue, economic difficulties strengthened the Republican case for higher tariffs for industries in response to the depression. This issue was important in Pennsylvania and perhaps New Jersey.
Meanwhile, many Southerners grumbled over "radical" notions of giving land away to farmers that would "abolitionize" the area. While the ideology of Southern sectionalism was well-developed before the Panic of 1857 by figures like J.D.B. De Bow, the panic helped convince even more cotton barons that they had grown too reliant on Eastern financial interests.
Thomas Prentice Kettell, former editor of the Democratic Review, was another commentator popular in the South, enjoying a great degree of prominence between 1857 and 1860. Kettell gathered an array of statistics in his book Southern Wealth and Northern Profits to show that the South produced vast wealth, while the North, with its dependence on raw materials, siphoned off the wealth of the South. Arguing that sectional inequality resulted from the concentration of manufacturing in the North, and from the North's supremacy in communications, transportation, finance, and international trade, his ideas paralleled old physiocratic doctrines that all profits of manufacturing and trade come out of the land. Political sociologists, such as Barrington Moore, have noted that these forms of romantic nostalgia tend to crop up whenever industrialization takes hold.
Such Southern hostility to the free farmers gave the North an opportunity for an alliance with Western farmers. After the political realignments of 1857–58—manifested by the emerging strength of the Republican Party and their networks of local support nationwide—almost every issue was entangled with the controversy over the expansion of slavery in the West. While questions of tariffs, banking policy, public land, and subsidies to railroads did not always unite all elements in the North and the Northwest against the interests of slaveholders in the South under the pre-1854 party system, they were translated in terms of sectional conflict—with the expansion of slavery in the West involved.
As the depression strengthened the Republican Party, slaveholding interests were becoming convinced that the North had aggressive and hostile designs on the Southern way of life. The South was thus increasingly fertile ground for secessionism.
The Republicans' Whig-style personality-driven "hurrah" campaign helped stir hysteria in the slave states upon the emergence of Lincoln and intensify divisive tendencies, while Southern "fire eaters" gave credence to notions of the slave power conspiracy among Republican constituencies in the North and West. New Southern demands to re-open the African slave trade further fueled sectional tensions.
From the early 1840s until the outbreak of the Civil War, the cost of slaves had been rising steadily. Meanwhile, the price of cotton was experiencing market fluctuations typical of raw commodities. After the Panic of 1857, the price of cotton fell while the price of slaves continued its steep rise. At the 1858 Southern commercial convention, William L. Yancey of Alabama called for the reopening of the African slave trade. Only the delegates from the states of the Upper South, who profited from the domestic trade, opposed the reopening of the slave trade since they saw it as a potential form of competition. The convention in 1858 wound up voting to recommend the repeal of all laws against slave imports, despite some reservations.
John Brown and Harpers Ferry (1859)
On October 16, 1859, radical abolitionist John Brown led an attempt to start an armed slave revolt by seizing the U.S. Army arsenal at Harper's Ferry, Virginia (now West Virginia). Brown and twenty-one followers, both whites (including three of Brown's sons) and blacks (three free Blacks, one freedman, and one fugitive slave), planned to seize the armory and use weapons stored there to arm Black slaves in order to spark a general uprising by the slave population.
Although the raiders were initially successful in cutting the telegraph line and capturing the Armory, they allowed a passing train to continue, and at the next station with a working telegraph the conductor alerted authorities to the attack. The raiders were forced by the militia and other locals to barricade themselves in the Armory, in a sturdy building later known as John Brown's Fort. Robert E. Lee (then a colonel in the U.S. Army) led a company of U.S. Marines in storming the armory on October 18. Ten of the raiders were killed, including two of Brown's sons; Brown himself, along with half a dozen of his followers, was captured; five of the raiders escaped immediate capture. Six locals were killed and nine injured; the Marines suffered one dead and one injured.
Brown was subsequently hanged for treason, murder, and inciting a slave insurrection, as were six of his followers. (See John Brown's raiders.) The raid, trial, and execution were covered in great detail by the press, which sent reporters and sketch artists to the scene on the next train. It immediately became a cause célèbre in both the North and the South, with Brown vilified by Southerners as a bloodthirsty fanatic, but celebrated by many Northern abolitionists as a martyr to the cause of ending slavery.
Elections of 1860
Initially, William H. Seward of New York, Salmon P. Chase of Ohio, and Simon Cameron of Pennsylvania were the leading contenders for the Republican presidential nomination. But Abraham Lincoln, a former one-term House member who gained fame amid the Lincoln–Douglas debates of 1858, had fewer political opponents within the party and outmaneuvered the other contenders. On May 16, 1860, he received the Republican nomination at their convention in Chicago.
The schism in the Democratic Party over the Lecompton Constitution and Douglas' Freeport Doctrine caused Southern "Fire-Eaters" to oppose front runner Stephen A. Douglas' bid for the Democratic presidential nomination. Douglas defeated the pro-slavery Lecompton Constitution for Kansas because the majority of Kansans were antislavery, and Douglas' popular sovereignty doctrine would allow the majority to vote slavery up or down as they chose. Douglas' Freeport Doctrine alleged that the antislavery majority of Kansans could thwart the Dred Scott decision that allowed slavery by withholding legislation for a slave code and other laws needed to protect slavery. As a result, Southern extremists demanded a slave code for the territories, and used this issue to divide the northern and southern wings of the Democratic Party. Southerners left the party and in June nominated John C. Breckinridge, while Northern Democrats supported Douglas. As a result, the Southern planter class lost a considerable measure of sway in national politics. Because of the Democrats' division, the Republican nominee faced a divided opposition. Adding to Lincoln's advantage, ex-Whigs from the border states had earlier formed the Constitutional Union Party, nominating John Bell for president. Thus, party nominees waged regional campaigns. Douglas and Lincoln competed for Northern votes, while Bell, Douglas and Breckinridge competed for Southern votes.
Result and impact of the election of 1860
- Abraham Lincoln: 180 electoral votes (40% of the popular vote)
- John C. Breckinridge: 72 electoral votes (18% of the popular vote)
- John Bell: 39 electoral votes (13% of the popular vote)
- Stephen A. Douglas: 12 electoral votes (30% of the popular vote)
Voting [on November 6, 1860] split sharply along sectional lines. Lincoln was elected by carrying the electoral votes of the North; he had a sweeping majority of 180 electoral votes. Given the vote count in each state, he would still have won the electoral college even if all three opponents had somehow been able to merge their tickets.
Split in the Democratic Party
The Alabama extremist William Lowndes Yancey's demand for a federal slave code for the territories split the Democratic Party between North and South, which made the election of Lincoln possible. Yancey tried to make his demand for a slave code moderate enough to get Southern support and yet extreme enough to enrage Northerners and split the party. He demanded that the party support a slave code for the territories if later necessary, so that the demand would be conditional enough to win Southern support. His tactic worked, and lower South delegates left the Democratic Convention at Institute Hall in Charleston, South Carolina, and walked over to Military Hall. The South Carolina extremist Robert Barnwell Rhett hoped that the lower South would completely break with the Northern Democrats and attend a separate convention at Richmond, Virginia, but lower South delegates gave the national Democrats one last chance at unification by going to the convention at Baltimore, Maryland, before the split became permanent. The end result was that John C. Breckinridge became the candidate of the Southern Democrats, and Stephen Douglas became the candidate of the Northern Democrats.
Yancey's previous 1848 attempt at demanding a slave code for the territories was his Alabama Platform, which was in response to the Northern Wilmot Proviso attempt at banning slavery in territories conquered from Mexico. Justice Peter V. Daniel wrote a letter about the Proviso to former President Martin Van Buren: "It is that view of the case which pretends to an insulting exclusiveness or superiority on the one hand, and denounces a degrading inequality or inferiority on the other; which says in effect to the Southern man, 'Avaunt! you are not my equal, and hence are to be excluded as carrying a moral taint with you.' Here is at once the extinction of all fraternity, of all sympathy, of all endurance even; the creation of animosity fierce, implacable, undying." Both the Alabama Platform and the Wilmot Proviso failed, but Yancey learned to be less overtly radical in order to get more support. Southerners thought they were merely demanding equality, in that they wanted Southern property in slaves to get the same (or more) protection as Northern forms of property.
With the emergence of the Republicans as the nation's first major sectional party by the mid-1850s, politics became the stage on which sectional tensions were played out. Although much of the West—the focal point of sectional tensions—was unfit for cotton cultivation, Southern secessionists read the political fallout as a sign that their power in national politics was rapidly weakening. Before, the slave system had been buttressed to an extent by the Democratic Party, which was increasingly seen as representing a more pro-Southern position that unfairly permitted Southerners to prevail in the nation's territories and to dominate national policy before the Civil War. But Democrats suffered a significant reverse in the electoral realignment of the mid-1850s. 1860 was a critical election that marked a stark change in existing patterns of party loyalties among groups of voters; Abraham Lincoln's election was a watershed in the balance of power of competing national and parochial interests and affiliations.
Immediately after finding out the election results, a special South Carolina convention declared "that the Union now subsisting between South Carolina and other states under the name of the 'United States of America' is hereby dissolved;" by February six more cotton states would follow (Mississippi, Florida, Alabama, Georgia, Louisiana, Texas), forming the Confederate States of America. In 1960, Seymour Martin Lipset examined the secessionist vote in each Southern state in 1860–61. In each state he divided the counties by the proportion of slaves: low, medium, and high. He found that in the 181 high-slavery counties, the vote was 72% for secession. In the 205 low-slavery counties, the vote was only 37% for secession, and in the 153 middle counties, the vote for secession was 60%. Both the outgoing Buchanan administration and the incoming Lincoln administration refused to recognize the legality of secession or the legitimacy of the Confederacy. After Lincoln called for troops, four states of the Upper South, which lacked cotton, also seceded: Virginia, Arkansas, North Carolina, and Tennessee. Before that call, the Upper South had faced a dilemma: its states wanted to retain their slaves but feared that joining the rebelling cotton states would leave them caught in the middle of the conflict, with their own territory as the battleground; by staying in the Union, they believed, their slave rights would continue to be recognized.
The tariff issue was sometimes cited, long after the war, by Lost Cause historians and neo-Confederate apologists. In 1860–61 none of the groups that proposed compromises to head off secession brought up the tariff as a major issue. Pamphleteers North and South rarely mentioned the tariff, and when some did, for instance Matthew Fontaine Maury and John Lothrop Motley, they were generally writing for a foreign audience.
The tariff in effect prior to the enactment of the Morrill Tariff of 1861 had been written and approved by the South for the benefit of the South. Complaints came from the Northeast (especially Pennsylvania) and regarded the rates as too low. Some Southerners feared that eventually the North would grow so big that it would control Congress and could raise the tariff at will.
As for states' rights, while a state's right of revolution mentioned in the Declaration of Independence was based on the inalienable equal rights of man, secessionists believed in a modified version of states' rights that was safe for slavery.
These issues were especially important in the lower South, where 47 percent of the population were slaves. The upper South, where 32 percent of the population were slaves, considered the Fort Sumter crisis—especially Lincoln's call for troops to march south to recapture it—a cause for secession. The northernmost border slave states, where 13 percent of the population were slaves, did not secede.
When South Carolina seceded in December 1860, Major Robert Anderson, a pro-slavery, former slave owner from Kentucky, remained loyal to the Union. He was the commanding officer of United States Army forces in Charleston, South Carolina—the last remaining important Union post in the Deep South. Acting upon orders from the War Department to hold and defend the U.S. forts, he moved his small garrison from Fort Moultrie, which was indefensible, to the more modern, more defensible, Fort Sumter in the middle of Charleston Harbor. South Carolina leaders cried betrayal, while the North celebrated with enormous excitement at this show of defiance against secessionism. In February 1861 the Confederate States of America were formed and took charge. Jefferson Davis, the Confederate president, ordered the fort be captured. The artillery attack was commanded by Brig. Gen. P. G. T. Beauregard, who had been Anderson's student at West Point. The attack began April 12, 1861, and continued until Anderson, badly outnumbered and outgunned, surrendered the fort on April 14. The battle began the American Civil War, as an overwhelming demand for war swept both the North and South, with only Kentucky attempting to remain neutral.
According to Adam Goodheart (2011), the modern meaning of the American flag was also forged in the defense of Fort Sumter. Thereafter, the flag was used throughout the North to symbolize American nationalism and rejection of secessionism.
Before that day, the flag had served mostly as a military ensign or a convenient marking of American territory, flown from forts, embassies, and ships, and displayed on special occasions like the Fourth of July. But in the weeks after Major Anderson's surprising stand, it became something different. Suddenly the Stars and Stripes flew—as it does today, and especially as it did after September 11—from houses, from storefronts, from churches; above the village greens and college quads. For the first time American flags were mass-produced rather than individually stitched and even so, manufacturers could not keep up with demand. As the long winter of 1861 turned into spring, that old flag meant something new. The abstraction of the Union cause was transfigured into a physical thing: strips of cloth that millions of people would fight for, and many thousands die for.
Onset of the Civil War and the question of compromise
Abraham Lincoln's rejection of the Crittenden Compromise, the failure to secure the ratification of the Corwin Amendment in 1861, and the inability of the Washington Peace Conference of 1861 to provide an effective alternative to Crittenden and Corwin came together to prevent a compromise that is still debated by Civil War historians. Even as the war was going on, William Seward and James Buchanan were outlining a debate over the question of inevitability that would continue among historians.
Needless war argument
Two competing explanations of the sectional tensions inflaming the nation emerged even before the war. The first was the "Needless War" argument. Buchanan believed the sectional hostility to be the accidental, unnecessary work of self-interested or fanatical agitators. He also singled out the "fanaticism" of the Republican Party. Seward, on the other hand, believed there to be an irrepressible conflict between opposing and enduring forces. Shelden argues that, "Few scholars in the twenty-first century would call the Civil War 'needless,' as the emancipation of 4 million slaves hinged on Union victory."
Irrepressible conflict argument
The "Irrepressible Conflict" argument was the first to dominate historical discussion. In the first decades after the fighting, histories of the Civil War generally reflected the views of Northerners who had participated in the conflict. The war appeared to be a stark moral conflict in which the South was to blame, a conflict that arose as a result of the designs of slave power. Henry Wilson's History of the Rise and Fall of the Slave Power in America (1872–1877) is the foremost representative of this moral interpretation, which argued that Northerners had fought to preserve the union against the aggressive designs of "slave power". Later, in his seven-volume History of the United States from the Compromise of 1850 to the Civil War (1893–1900), James Ford Rhodes identified slavery as the central—and virtually only—cause of the Civil War. The North and South had reached positions on the issue of slavery that were both irreconcilable and unalterable. The conflict had become inevitable.
But the idea that the war was avoidable became central among historians in the 1920s, 1930s, and 1940s. Revisionist historians, led by James G. Randall (1881–1953) at the University of Illinois and Avery Craven (1885–1980) at the University of Chicago, saw in the social and economic systems of the South no differences so fundamental as to require a war. Historian Mark Neely explains their position:
Revisionism challenged the view that fundamental and irreconcilable sectional differences made the outbreak of war inevitable. It scorned a previous generation's easy identification of the Northern cause with abolition, but it continued a tradition of hostility to the Reconstruction measures that followed the war. The Civil War became a needless conflict brought on by a blundering generation that exaggerated sectional differences between North and South. Revisionists revived the reputation of the Democratic party as great nationalists before the war and as dependable loyalists during it. Revisionism gave Lincoln's Presidency a tragic beginning at Fort Sumter, a rancorous political setting of bitter factional conflicts between radicals and moderates within Lincoln's own party, and an even more tragic ending. The benevolent Lincoln died at the moment when benevolence was most needed to blunt radical designs for revenge on the South.
Randall blamed the ineptitude of a "blundering generation" of leaders. He also saw slavery as essentially a benign institution, crumbling in the presence of 19th century tendencies. Craven, the other leading revisionist, placed more emphasis on the issue of slavery than Randall but argued roughly the same points. In The Coming of the Civil War (1942), Craven argued that slave laborers were not much worse off than Northern workers, that the institution was already on the road to ultimate extinction, and that the war could have been averted by skillful and responsible leaders in the tradition of Congressional statesmen Henry Clay and Daniel Webster. Two of the key leaders in antebellum politics, Clay and Webster, in contrast to the 1850s generation of leaders, shared a predisposition to compromises marked by a passionate patriotic devotion to the Union.
But it is possible that the politicians of the 1850s were not inept. More recent studies have kept elements of the revisionist interpretation alive, emphasizing the role of political agitation (the efforts of Democratic politicians of the South and Republican politicians in the North to keep the sectional conflict at the center of the political debate). David Herbert Donald (1920–2009), a student of Randall, argued in 1960 that the politicians of the 1850s were not unusually inept but that they were operating in a society in which traditional restraints were being eroded in the face of the rapid extension of democracy. The stability of the two-party system kept the union together, but would collapse in the 1850s, thus reinforcing, rather than suppressing, sectional conflict. The union, Donald said, died of democracy.
In December 1860, amid the secession crisis, president-elect Abraham Lincoln wrote a letter to Alexander Stephens, in which he summarized the cause of the crisis:

You think slavery is right and ought to be extended; while we think it is wrong and ought to be restricted. That I suppose is the rub. It certainly is the only substantial difference between us.
Several months later, on March 21, 1861, Alexander Stephens, now the Confederate vice president, delivered his "Cornerstone Speech" in Savannah, Georgia. In the speech, he states that slavery was the cause of the secession crisis, and outlines the principal differences between Confederate ideology and U.S. ideology:
The new [Confederate] Constitution has put at rest forever all the agitating questions relating to our peculiar institutions—African slavery as it exists among us—the proper status of the negro in our form of civilization. This was the immediate cause of the late rupture and present revolution. ...[Thomas Jefferson's] ideas, however, were fundamentally wrong. They rested upon the assumption of the equality of races. This was an error. ...Our new government is founded upon exactly the opposite idea; its foundations are laid, its cornerstone rests, upon the great truth that the negro is not equal to the white man; that slavery—subordination to the superior race—is his natural and normal condition.
In July 1863, as decisive campaigns were fought at Gettysburg and Vicksburg, Republican senator Charles Sumner re-dedicated his speech The Barbarism of Slavery and said that desire to preserve slavery was the sole cause of the war:
[T]here are two apparent rudiments to this war. One is Slavery and the other is State Rights. But the latter is only a cover for the former. If Slavery were out of the way there would be no trouble from State Rights. The war, then, is for Slavery, and nothing else. It is an insane attempt to vindicate by arms the lordship which had been already asserted in debate. With mad-cap audacity it seeks to install this Barbarism as the truest Civilization. Slavery is declared to be the "corner-stone" of the new edifice.
Lincoln's war goals were reactions to the war, as opposed to causes. Abraham Lincoln explained the nationalist goal as the preservation of the Union on August 22, 1862, one month before his preliminary Emancipation Proclamation:
I would save the Union. I would save it the shortest way under the Constitution. The sooner the national authority can be restored; the nearer the Union will be "the Union as it was."... My paramount object in this struggle is to save the Union, and is not either to save or to destroy slavery. If I could save the Union without freeing any slave I would do it, and if I could save it by freeing all the slaves I would do it; and if I could save it by freeing some and leaving others alone I would also do that. ...I have here stated my purpose according to my view of official duty; and I intend no modification of my oft-expressed personal wish that all men everywhere could be free.
On March 4, 1865, Lincoln said in his second inaugural address that slavery was the cause of the War:
One-eighth of the whole population were colored slaves, not distributed generally over the Union, but localized in the southern part of it. These slaves constituted a peculiar and powerful interest. All knew that this interest was somehow the cause of the war. To strengthen, perpetuate, and extend this interest was the object for which the insurgents would rend the Union even by war, while the Government claimed no right to do more than to restrict the territorial enlargement of it.
- Aaron Sheehan-Dean, "A Book for Every Perspective: Current Civil War and Reconstruction Textbooks," Civil War History (2005) 51#3 pp. 317–24
- Patrick Karl O'Brien (2002). Atlas of World History. Oxford University Press. p. 184. ISBN 978-0-19-521921-0. Retrieved October 25, 2015.
- John McCardell, The Idea of a Southern Nation: Southern Nationalists and Southern Nationalism, 1830–1860 (1981)
- Susan-Mary Grant, North Over South: Northern Nationalism and American Identity in the Antebellum Era (2000)
- Elizabeth R. Varon, Bruce Levine, Marc Egnal, and Michael Holt at a plenary session of the Organization of American Historians, March 17, 2011, reported by David A. Walsh, "Highlights from the 2011 Annual Meeting of the Organization of American Historians in Houston, Texas", HNN online
- David Potter, The Impending Crisis, p. 45 (This book won the Pulitzer Prize for History)
- The Mason–Dixon line and the Ohio River were key boundaries.
- Paul Boyer; et al. (2010). The Enduring Vision, Volume I: To 1877. Cengage Learning. p. 343. ISBN 978-0495800941.
- Leonard L. Richards, The Slave Power: The Free North and Southern Domination, 1780–1860 (2000).
- William E. Gienapp, "The Republican Party and the Slave Power" in Michael Perman and Amy Murrell Taylor, eds. Major Problems in the Civil War and Reconstruction: Documents and Essays (2010): 74.
- Fehrenbacher pp. 15–17. Fehrenbacher wrote, "As a racial caste system, slavery was the most distinctive element in the southern social order. The slave production of staple crops dominated southern agriculture and eminently suited the development of a national market economy."
- Fehrenbacher pp. 16–18
- Goldstone p. 13
- McDougall p. 318
- Forbes p. 4
- Mason pp. 3–4
- Paul Finkelman, "Slavery and the Northwest Ordinance: A Study in Ambiguity." Journal of the Early Republic 6.4 (1986): 343–70.
- John Craig Hammond, "'They Are Very Much Interested in Obtaining an Unlimited Slavery': Rethinking the Expansion of Slavery in the Louisiana Purchase Territories, 1803–1805." Journal of the Early Republic 23.3 (2003): 353–80.
- Freehling p. 144
- Freehling p. 149. In the House the votes for the Tallmadge amendments in the North were 86–10 and 80–14 in favor, while in the South the vote to oppose was 66–1 and 64–2.
- Missouri Compromise
- Forbes pp. 6–7
- Mason p. 8
- Leah S. Glaser, "United States Expansion, 1800–1860"
- Richard J. Ellis, Review of The Shaping of American Liberalism: The Debates over Ratification, Nullification, and Slavery. by David F. Ericson, William and Mary Quarterly, Vol. 51, No. 4 (1994), pp. 826–29
- John Tyler, Life Before the Presidency
- Jane H. Pease, William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History, Vol. 47, No. 3 (1981), pp. 335–62
- Remini, Andrew Jackson, v2 pp. 136–37. Niven pp. 135–37. Freehling, Prelude to Civil War p. 143
- Craven p. 65. Niven pp. 135–37. Freehling, Prelude to Civil War p. 143
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987), p. 193; Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965), p. 257
- Ellis p. 193. Ellis further notes that "Calhoun and the nullifiers were not the first southerners to link slavery with states' rights. At various points in their careers, John Taylor, John Randolph, and Nathaniel Macon had warned that giving too much power to the federal government, especially on such an open-ended issue as internal improvement, could ultimately provide it with the power to emancipate slaves against their owners' wishes."
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War." American Historical Review (1938) 44#1 pp. 50–55 in JSTOR
- Varon (2008) p. 109. Wilentz (2005) p. 451
- Miller (1995) pp. 144–46
- Miller (1995) pp. 209–10
- Wilentz (2005) pp. 470–72
- Miller, 112
- Miller, pp. 476, 479–81
- Huston p. 41. Huston writes, "... on at least three matters southerners were united. First, slaves were property. Second, the sanctity of Southerners' property rights in slaves was beyond the questioning of anyone inside or outside of the South. Third, slavery was the only means of adjusting social relations properly between Europeans and Africans."
- Bonekemper III, Edward H. (2015) The Myth of the Lost Cause: Why the South fought the Civil War and Why the North Won. Regnery Publishing p. 39
- Brinkley, Alan (1986). American History: A Survey. New York: McGraw-Hill. p. 328.
- Moore, Barrington (1966). Social Origins of Dictatorship and Democracy. New York: Beacon Press. p. 117.
- North, Douglas C. (1961). The Economic Growth of the United States 1790–1860. Englewood Cliffs. p. 130.
- Davis, William C. (2002). Look Away!: A History of the Confederate States of America. New York: The Free Press. p. 9. ISBN 0-7432-2771-9. Retrieved March 19, 2016.
Inextricably intertwined in the question was slavery, and it only became the more so in the years that followed. Socially and culturally the North and South were not much different. They prayed to the same deity, spoke the same language, shared the same ancestry, sang the same songs. National triumphs and catastrophes were shared by both. For all the myths they would create to the contrary, the only significant and defining difference between them was slavery, where it existed and where it did not, for by 1804 it had virtually ceased to exist north of Maryland. Slavery demarked not just their labor and economic situations, but power itself in the new republic ... [S]o long as the number of slave states was the same as or greater than the number of free states, then in the Senate the South had a check on the government.
- Elizabeth Fox-Genovese and Eugene D. Genovese, Slavery in White and Black: Class and Race in the Southern Slaveholders' New World Order (2008)
- Stanley Harrold (2015). The Abolitionists and the South, 1831–1861. University Press of Kentucky. pp. 45, 149–50. ISBN 978-0813148243.
- Sorisio, Carolyn (2002). Fleshing Out America: Race, Gender, and the Politics of the Body in American Literature, 1833–1879. Athens: University of Georgia Press. p. 19. ISBN 0820326372. Retrieved August 24, 2014.
- Peter P. Hinks; John R. McKivigan (2007). Encyclopedia of Antislavery and Abolition. Greenwood. p. 258. ISBN 978-0313331435.
- James M. McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question", Civil War History 29 (September 1983)
- "Conflict and Collaboration: Yeomen, Slaveholders, and Politics in the Antebellum South", Social History 10 (October 1985): 273–98. quote at p. 297.
- Thornton, Politics and Power in a Slave Society: Alabama, 1800–1860 (Louisiana State University Press, 1978)
- McPherson (2007) pp. 4–7. James M. McPherson wrote in referring to the Progressive historians, the Vanderbilt agrarians, and revisionists writing in the 1940s, "While one or more of these interpretations remain popular among the Sons of Confederate Veterans and other Southern heritage groups, few historians now subscribe to them."
- Craig in Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), p. 505.
- Donald 2001 pp. 134–38
- Huston pp. 24–25. Huston lists other estimates of the value of slaves; James D. B. De Bow puts it at $2 billion in 1850, while in 1858 Governor James Pettus of Mississippi estimated the value at $2.6 billion in 1858.
- Huston, "Calculating the Value of the Union", p. 25
- Soil Exhaustion as a Factor in the Agricultural History of Virginia and Maryland, 1606–1860
- Encyclopedia of American Foreign Policy – A–D
- Woodworth, ed. The American Civil War: A Handbook of Literature and Research (1996), pp. 145, 151, 505, 512, 554, 557, 684; Richard Hofstadter, The Progressive Historians: Turner, Beard, Parrington (1969); for one dissenter see Marc Egnal, "The Beards Were Right: Parties in the North, 1840–1860", Civil War History 47, no. 1 (2001): 30–56.
- Kenneth M. Stampp, The Imperiled Union: Essays on the Background of the Civil War (1981) p. 198
- Also from Kenneth M. Stampp, The Imperiled Union, p. 198:
Most historians ... now see no compelling reason why the divergent economies of the North and South should have led to disunion and civil war; rather, they find stronger practical reasons why the sections, whose economies neatly complemented one another, should have found it advantageous to remain united. Beard oversimplified the controversies relating to federal economic policy, for neither section unanimously supported or opposed measures such as the protective tariff, appropriations for internal improvements, or the creation of a national banking system. ... During the 1850s, Federal economic policy gave no substantial cause for southern disaffection, for policy was largely determined by pro-Southern Congresses and administrations. Finally, the characteristic posture of the conservative northeastern business community was far from anti-Southern. Most merchants, bankers, and manufacturers were outspoken in their hostility to antislavery agitation and eager for sectional compromise in order to maintain their profitable business connections with the South. The conclusion seems inescapable that if economic differences, real though they were, had been all that troubled relations between North and South, there would be no substantial basis for the idea of an irrepressible conflict.
- James M. McPherson, "Antebellum Southern Exceptionalism: A New Look at an Old Question". Civil War History – Volume 50, Number 4, December 2004, p. 421
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", The American Historical Review Vol. 44, No. 1 (1938), pp. 50–55 full text in JSTOR
- "John Calhoun, "Slavery a Positive Good", February 6, 1837". Archived from the original on April 15, 2013. Retrieved April 30, 2007.
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. p. 640.
- Noll, Mark A. (2006). The Civil War as a Theological Crisis. UNC Press. p. 216.
- Noll, Mark A. (2002). The US Civil War as a Theological War: Confederate Christian Nationalism and the League of the South. Oxford University Press. p. 640.
- Hull, William E. (February 2003). "Learning the Lessons of Slavery". Christian Ethics Today. 9 (43). Archived from the original on December 11, 2007. Retrieved December 19, 2007.
- Walter B. Shurden, and Lori Redwine Varnadoe, "The origins of the Southern Baptist Convention: A historiographical study." Baptist History and Heritage (2002) 37#1 pp. 71–96.
- Gaustad, Edwin S. (1982). A Documentary History of Religion in America to the Civil War. Wm. B. Eerdmans Publishing Co. pp. 491–502.
- Johnson, Paul (1976). History of Christianity. Simon & Schuster. p. 438.
- Noll, Mark A. (2002). America's God: From Jonathan Edwards to Abraham Lincoln. Oxford University Press. pp. 399–400.
- Miller, Randall M.; Stout, Harry S.; Wilson, Charles Reagan, eds. (1998). "The Bible and Slavery". Religion and the American Civil War. Oxford University Press. p. 62.
- Bestor, 1964, pp. 10–11
- McPherson, 2007, p. 14.
- Stampp, pp. 190–93.
- Bestor, 1964, p. 11.
- Krannawitter, 2008, pp. 49–50.
- McPherson, 2007, pp. 13–14.
- Bestor, 1964, pp. 17–18.
- Guelzo, pp. 21–22.
- Bestor, 1964, p. 15.
- Miller, 2008, p. 153.
- McPherson, 2007, p. 3.
- Bestor, 1964, p. 19.
- McPherson, 2007, p. 16.
- Bestor, 1964, pp. 19–20.
- Bestor, 1964, p. 21
- Bestor, 1964, p. 20
- Bestor, 1964, p. 20.
- Russell, 1966, pp. 468–69
- Bestor, 1964, p. 23
- Russell, 1966, p. 470
- Bestor, 1964, p. 24
- Bestor, 1964, pp. 23–24
- Holt, 2004, pp. 34–35.
- McPherson, 2007, p. 7.
- Krannawitter, 2008, p. 232.
- Bestor, 1964, pp. 24–25.
- "The Amistad Case". National Portrait Gallery. Archived from the original on November 6, 2007. Retrieved October 16, 2007.
- McPherson, Battle Cry p. 8; James Brewer Stewart, Holy Warriors: The Abolitionists and American Slavery (1976); Pressly, 270ff
- Wendell Phillips, "No Union With Slaveholders", January 15, 1845, in Louis Ruchames, ed. The Abolitionists (1963), p. 196.
- Mason I Lowance, Against Slavery: An Abolitionist Reader, (2000), p. 26
- "Abolitionist William Lloyd Garrison Admits of No Compromise with the Evil of Slavery". Archived from the original on December 2, 2007. Retrieved October 16, 2007.
- Alexander Stephens' Cornerstone Speech, Savannah, Georgia, March 21, 1861
- Frederick J. Blue, The Free Soilers: Third Party Politics, 1848–54 (1973).
- Stampp, The Causes of the Civil War, p. 59
- Schlesinger quotes from an essay "The State Rights Fetish" excerpted in Stampp p. 70
- Schlesinger in Stampp pp. 68–69
- McDonald p. 143
- Rhea, Gordon (January 25, 2011). "Why Non-Slaveholding Southerners Fought". Civil War Trust. Civil War Trust. Archived from the original on March 21, 2011. Retrieved March 21, 2011.
- Benning, Henry L. (February 18, 1861). "Speech of Henry Benning to the Virginia Convention". Proceedings of the Virginia State Convention of 1861. pp. 62–75. Archived from the original on July 13, 2015. Retrieved March 17, 2015.
- Kenneth M. Stampp, The Causes of the Civil War, p. 14
- Nevins, Ordeal of the Union: Fruits of Manifest Destiny 1847–1852, p. 155
- Donald, Baker, and Holt, p. 117.
- When arguing for the equality of states, he said, "Who has been in advance of him in the fiery charge on the rights of the States, and in assuming to the Federal Government the power to crush and to coerce them? Even to-day he has repeated his doctrines. He tells us this is a Government which we will learn is not merely a Government of the States, but a Government of each individual of the people of the United States." – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860, From The Papers of Jefferson Davis, Volume 6, pp. 277–84.
- When arguing against equality of individuals, Davis said, "We recognize the fact of the inferiority stamped upon that race of men by the Creator, and from the cradle to the grave, our Government, as a civil institution, marks that inferiority." – Jefferson Davis' reply in the Senate to William H. Seward, Senate Chamber, U.S. Capitol, February 29, 1860 – From The Papers of Jefferson Davis, Volume 6, pp. 277–84. Transcribed from the Congressional Globe, 36th Congress, 1st Session, pp. 916–18.
- Jefferson Davis' Second Inaugural Address, Virginia Capitol, Richmond, February 22, 1862, transcribed from Dunbar Rowland, ed., Jefferson Davis, Constitutionalist, Volume 5, pp. 198–203. Summarized in The Papers of Jefferson Davis, Volume 8, p. 55.
- Lawrence Keitt, Congressman from South Carolina, in a speech to the House on January 25, 1860: Congressional Globe.
- Stampp, The Causes of the Civil War, pp. 63–65
- Davis, William C. (2002). Look Away!: A History of the Confederate States of America. pp. 97–98.
- Davis, William C. (1996). The Cause Lost: Myths and Realities of the Confederacy. Kansas: University Press of Kansas. p. 180.
- Carl Sandburg (1954), Abraham Lincoln: The Prairie Years, reprint, New York: Dell, Volume 1 of 3, Chapter 10, "The Deepening Slavery Issue", p. .
- The speech was reported by newspapers in Galena and Springfield, IL. Carl Sandburg (1954), Abraham Lincoln: The Prairie Years, reprint, New York: Dell, Volume 1 of 3, Chapter 10, "The Deepening Slavery Issue", p. 223. Italics as in Sandburg.
- John Vishneski (1988), "What the Court Decided in Scott v. Sandford", The American Journal of Legal History, 32 (4): 373–90.
- David Potter, The Impending Crisis, p. 275.
- First Lincoln Douglas Debate at Ottawa, Illinois August 21, 1858
- Don E. Fehrenbacher, The Dred Scott Case: Its Significance in American Law and Politics (1978) pp. 445–46.
- Bertram Wyatt-Brown, Southern Honor: Ethics and Behavior in the Old South (1982) pp. 22–23, 363
- Christopher J. Olsen (2002). Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860. Oxford University Press. p. 237. ISBN 978-0195160970. footnote 33
- Lacy Ford, ed. (2011). A Companion to the Civil War and Reconstruction. Wiley. p. 28. ISBN 978-1444391626.
- Michael William Pfau, "Time, Tropes, and Textuality: Reading Republicanism in Charles Sumner's 'Crime Against Kansas'", Rhetoric & Public Affairs vol 6 #3 (2003) 385–413, quote on p. 393 online in Project MUSE
- In modern terms Sumner accused Butler of being a "pimp who attempted to introduce the whore, slavery, into Kansas" says Judith N. McArthur; Orville Vernon Burton (1996). "A Gentleman and an Officer": A Military and Social History of James B. Griffin's Civil War. Oxford U.P. p. 40. ISBN 978-0195357660.
- Williamjames Hoffer, The Caning of Charles Sumner: Honor, Idealism, and the Origins of the Civil War (2010) p. 62
- William E. Gienapp, "The Crime Against Sumner: The Caning of Charles Sumner and the Rise of the Republican Party," Civil War History (1979) 25#3 pp. 218–45 doi:10.1353/cwh.1979.0005
- Donald, David; Randall, J.G. (1961). The Civil War and Reconstruction. Boston: D.C. Heath and Company. p. 79.
- Allan, Nevins (1947). Ordeal of the Union (vol. 3). III. New York: Charles Scribner's Sons. p. 218.
- Moore, Barrington, p. 122.
- "1860 Presidential Election Results". Retrieved June 26, 2013.
- William W, Freehling, The Road to Disunion: Secessionists Triumphant 1854–1861, pp. 271–341
- Don E. Fehrenbacher (1978/2001), New York: Oxford, Part III, Echoes and Consequences, Chapter 22, "Reasons Why" [cf. "Charge of the Light Brigade"], p. 561; Daniel to Van Buren, November 1, 1847, Martin Van Buren Papers, Manuscript Division, Library of Congress.
- Roy Nichols, The Disruption of American Democracy: A History of the Political Crisis That Led up to the Civil War (1949)
- Seymour Martin Lipset, Political Man: The Social Bases of Politics (Doubleday, 1960) p. 349.
- Maury Klein, Days of Defiance: Sumter, Secession, and the Coming of the Civil War (1999)
- Robert Gray Gunderson, Old Gentleman's Convention: The Washington Peace Conference of 1861. (1961)
- Jon L. Wakelyn (1996). Southern Pamphlets on Secession, November 1860–April 1861. U. of North Carolina Press. pp. 23–30. ISBN 978-0-8078-6614-6.
- Matthew Fontaine Maury (1861/1967), "Captain Maury's Letter on American Affairs: A Letter Addressed to Rear-Admiral Fitz Roy, of England", reprinted in Frank Friedel, ed., Union Pamphlets of the Civil War: 1861–1865, Cambridge, MA: Harvard, A John Harvard Library Book, Vol. I, pp. 171–73.
- John Lothrop Motley (1861/1967), "The Causes of the American Civil War: A Paper Contributed to the London Times", reprinted in Frank Friedel, ed., Union Pamphlets of the Civil War: 1861–1865, Cambridge, Massachusetts: Harvard, A John Harvard Library Book, Vol.1, p. 51.
- Richard Hofstadter, "The Tariff Issue on the Eve of the Civil War", American Historical Review Vol. 44, No. 1 (October 1938), pp. 50–55 in JSTOR
- William W. Freehling, The Road to Disunion, Secessionists Triumphant: 1854–1861, pp. 345–516
- Daniel Crofts, Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Adam Goodheart, 1861: The Civil War Awakening (2011) ch 2–5
- Adam Goodheart, "Prologue", in 1861: The Civil War Awakening (2011)
- Steven E. Woodworth, ed., The American Civil War: A Handbook of Literature and Research (1996) pp. 131–43
- Thomas J. Pressly, Americans Interpret Their Civil War (1954) pp. 127–48
- Rachel A. Shelden (2013). Washington Brotherhood: Politics, Social Life, and the Coming of the Civil War. U of North Carolina Press. p. 5. ISBN 978-1469610856.
- Thomas J. Pressly, Americans Interpret Their Civil War (1954) pp. 149–226
- Mark E. Neely, "The Lincoln Theme since Randall's Call: The Promises and Perils of Professionalism." Papers of the Abraham Lincoln Association 1 (1979): 10–70. online
- James G. Randall, "The Blundering Generation." Mississippi Valley Historical Review 27.1 (1940): 3-28. in JSTOR
- Avery Craven, The Coming of the Civil War (1942).
- Avery Craven, "Coming of the War Between the States: An Interpretation." Journal of Southern History 2#3 (1936): 303–22. in JSTOR
- David H. Donald, "Died of Democracy." in Donald, ed., Why the North Won the Civil War (1960) pp. 79–90.
- Lincoln, Abraham (December 22, 1860). "To Alexander H. Stephens". Civil War Causes. Retrieved March 17, 2015.
- Letter to Horace Greeley, August 22, 1862
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Donald, David Herbert, Baker, Jean Harvey, and Holt, Michael F. The Civil War and Reconstruction. (2001)
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights and the Nullification Crisis. (1987)
- Fehrenbacher, Don E. The Slaveholding Republic: An Account of the United States Government's Relations to Slavery. (2001) ISBN 0-19-514177-6
- Forbes, Robert Pierce. The Missouri Compromise and Its Aftermath: Slavery and the Meaning of America. (2007) ISBN 978-0-8078-3105-2
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816–1836. (1965) ISBN 0-19-507681-8
- Freehling, William W. The Road to Disunion: Secessionists at Bay 1776–1854. (1990) ISBN 0-19-505814-3
- Freehling, William W. and Craig M. Simpson, eds. Secession Debated: Georgia's Showdown in 1860 (1992), speeches
- Hesseltine; William B. ed. The Tragic Conflict: The Civil War and Reconstruction (1962), primary documents
- Huston, James L. Calculating the Value of the Union: Slavery, Property Rights, and the Economic Origins of the Civil War. (2003) ISBN 0-8078-2804-1
- Mason, Matthew. Slavery and Politics in the Early American Republic. (2006) ISBN 978-0-8078-3049-9
- McDonald, Forrest. States' Rights and the Union: Imperium in Imperio, 1776–1876. (2000)
- McPherson, James M. This Mighty Scourge: Perspectives on the Civil War. (2007)
- Miller, William Lee. Arguing About Slavery: John Quincy Adams and the Great Battle in the United States Congress. (1995) ISBN 0-394-56922-9
- Nichols, Roy. The Disruption of American Democracy: A History of the Political Crisis That Led up to the Civil War (1949) online
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Perman, Michael, ed. Major Problems in Civil War & Reconstruction (2nd ed. 1998) primary and secondary sources.
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822–1832, v2 (1981) ISBN 0-06-014844-6
- Silbey, Joel H. (2014). A Companion to the Antebellum Presidents 1837–1861. Wiley. ISBN 978-1118609293.
- Stampp, Kenneth, ed. The Causes of the Civil War (3rd ed 1992), primary and secondary sources.
- Varon, Elizabeth R. Disunion: The Coming of the American Civil War, 1789–1859. (2008) ISBN 978-0-8078-3232-5
- Wakelyn; Jon L. ed. Southern Pamphlets on Secession, November 1860 – April 1861 (1996)
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
- Ayers, Edward L. What Caused the Civil War? Reflections on the South and Southern History (2005). 222 pp.
- Beale, Howard K., "What Historians Have Said About the Causes of the Civil War", Social Science Research Bulletin 54, 1946.
- Boritt, Gabor S. ed. Why the Civil War Came (1996)
- Childers, Christopher. "Interpreting Popular Sovereignty: A Historiographical Essay", Civil War History Volume 57, Number 1, March 2011 pp. 48–70 in Project MUSE
- Crofts, Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989), pp. 353–82, 457–80
- Etcheson, Nicole. "The Origins of the Civil War", History Compass 2005 #3 (North America)
- Foner, Eric. "The Causes of the American Civil War: Recent Interpretations and New Directions". In Beyond the Civil War Synthesis: Political Essays of the Civil War Era, edited by Robert P. Swierenga, 1975.
- Kornblith, Gary J., "Rethinking the Coming of the Civil War: A Counterfactual Exercise". Journal of American History 90.1 (2003): detailed historiography; online version
- Pressly, Thomas. Americans Interpret Their Civil War (1954), old survey that sorts historians into schools of interpretation; online
- SenGupta, Gunja. "Bleeding Kansas: A Review Essay," Kansas History 24 (Winter 2001/2002): 318–41. online
- Smith, Stacey L. "Beyond North and South: Putting the West in the Civil War and Reconstruction," Journal of the Civil War Era (Dec 2016) 6#4 pp. 566–91. doi:10.1353/cwe.2016.0073 excerpt
- Towers, Frank. "Partisans, New History, and Modernization: The Historiography of the Civil War's Causes, 1861–2011." The Journal of the Civil War Era (2011) 1#2 pp. 237–64.
- Tulloch, Hugh. The Debate on the American Civil War Era (Issues in Historiography) (2000)
- Woods, Michael E., "What Twenty-First-Century Historians Have Said about the Causes of Disunion: A Civil War Sesquicentennial Review of the Recent Literature," Journal of American History (2012) 99#2 pp. 415–39. online
- Woodward, Colin Edward. Marching Masters: Slavery, Race, and the Confederate Army during the Civil War. University of Virginia Press, 2014. Introduction pp. 1–10
- Woodworth, Steven E. ed. The American Civil War: A Handbook of Literature and Research (1996), 750 pages of historiography; see part IV on Causation.
"Needless war" school
- Bonner, Thomas N. "Civil War Historians and the 'Needless War' Doctrine." Journal of the History of Ideas (1956): 193–216. in JSTOR
- Childers, Christopher. "Interpreting Popular Sovereignty: A Historiographical Essay." Civil War History (2011) 57#1 pp. 48–70. online
- Craven, Avery, The Repressible Conflict, 1830–61 (1939)
- The Coming of the Civil War (1942)
- "The Coming of the War Between the States", Journal of Southern History 2 (August 1936): 30–63; in JSTOR
- Donald, David. "An Excess of Democracy: The Civil War and the Social Process", in David Donald, Lincoln Reconsidered: Essays on the Civil War Era, 2nd ed. (New York: Alfred A. Knopf, 1966), 209–35.
- Holt, Michael F. The Political Crisis of the 1850s. (1978) emphasis on political parties and voters
- Pressly, Thomas J. "The Repressible Conflict", chapter 7 of Americans Interpret Their Civil War (Princeton: Princeton University Press, 1954); online
- Ramsdell, Charles W. "The Natural Limits of Slavery Expansion", Mississippi Valley Historical Review, 16 (September 1929), 151–71, in JSTOR; says slavery had almost reached its outer limits of growth by 1860, so war was unnecessary to stop further growth. online version without footnotes
- Randall, James G. "The Blundering Generation", Mississippi Valley Historical Review 27 (June 1940): 3–28 in JSTOR
- Randall, James G. The Civil War and Reconstruction. (1937), survey and statement of "needless war" interpretation
Economic causation and modernization
- Beard, Charles, and Mary Beard. The Rise of American Civilization. Two volumes (1927), says slavery was minor factor online
- Hofstadter, Richard. "The Tariff Issue on the Eve of the Civil War," American Historical Review (1938) 44#1 pp. 50–55. in JSTOR
- Luraghi, Raimondo, "The Civil War and the Modernization of American Society: Social Structure and Industrial Revolution in the Old South Before and During the War", Civil War History XVIII (September 1972), in JSTOR
- McPherson, James M. Ordeal by Fire: the Civil War and Reconstruction (1982), uses modernization interpretation
- Moore, Barrington. Social Origins of Dictatorship and Democracy (1966), modernization interpretation
- Thornton, Mark; Ekelund, Robert B. Tariffs, Blockades, and Inflation: The Economics of the Civil War (2004) ISBN 978-0842029612
Nationalism and culture
- Crofts Daniel. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Current, Richard. Lincoln and the First Shot (1963)
- Miller, Randall M., Harry S. Stout, and Charles Reagan Wilson, eds. Religion and the American Civil War (1998), essays
- Nevins, Allan, author of the most detailed history
- Ordeal of the Union 2 vols. (1947) covers 1850–57
- The Emergence of Lincoln, 2 vols. (1950) covers 1857–61; does not take strong position on causation
- Olsen, Christopher J. Political Culture and Secession in Mississippi: Masculinity, Honor, and the Antiparty Tradition, 1830–1860" (2000), cultural interpretation
- Potter, David. The Impending Crisis 1848–1861. (1976), Pulitzer Prize-winning history emphasizing rise of Southern nationalism
- Potter, David M. Lincoln and His Party in the Secession Crisis (1942).
Slavery as cause
- Ashworth, John
- Slavery, Capitalism, and Politics in the Antebellum Republic (1995)
- "Free labor, wage labor, and the slave power: Republicanism and the Republican party in the 1850s", in Melvyn Stokes and Stephen Conway (eds), The Market Revolution in America: Social, Political and Religious Expressions, 1800–1880, pp. 128–46. (1996)
- Donald, David et al. The Civil War and Reconstruction (latest edition 2001); 700-page survey
- Fellman, Michael et al. This Terrible War: The Civil War and its Aftermath (2003), 400-page survey
- Foner, Eric
- Free Soil, Free Labor, Free Men: The Ideology of the Republican Party before the Civil War (1970, 1995) stress on ideology
- Politics and Ideology in the Age of the Civil War. New York: Oxford University Press (1981)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776–1854 (1991), emphasis on slavery
- Gienapp, William E. The Origins of the Republican Party, 1852–1856 (1987)
- Manning, Chandra. What This Cruel War Was Over: Soldiers, Slavery, and the Civil War. New York: Vintage Books (2007)
- McCauley, Byron (April 5, 2018). "The Confederacy was about preserving slavery. The proof? It's on the money". The Cincinnati Enquirer. Retrieved April 15, 2018.
- McPherson, James M. Battle Cry of Freedom: The Civil War Era (1988), major overview, neo-abolitionist emphasis on slavery
- Morrison, Michael. Slavery and the American West: The Eclipse of Manifest Destiny and the Coming of the Civil War (1997)
- Morrow, Ralph E. "The Proslavery Argument Revisited", The Mississippi Valley Historical Review, Vol. 48, No. 1. (June 1961), pp. 79–94. in JSTOR (Maintains that antebellum pro-slavery writing was not intended to, or not solely intended to, convince Northerners, but was written and published to reduce the guilt felt by many in slave states.)
- Oakes, James. The Scorpion's Sting: Antislavery and the Coming of the Civil War (New York: Norton, 2014) 207 pp.
- Rhodes, James Ford. History of the United States from the Compromise of 1850 to the McKinley–Bryan Campaign of 1896, Vol. 1 (1920), highly detailed narrative 1850–56; Vol. 2, 1856–60; emphasis on slavery
- Schlesinger, Arthur Jr. "The Causes of the Civil War" (1949), reprinted in his The Politics of Hope (1963); reintroduced new emphasis on slavery
- Stampp, Kenneth M. America in 1857: A Nation on the Brink (1990)
- Stampp, Kenneth M. And the War Came: The North and the Secession Crisis, 1860–1861 (1950)
- Civil War and Reconstruction: Jensen's Guide to WWW Resources
- Report of the Brown University Steering Committee on Slavery and Justice
- State by state popular vote for president in 1860 election
- Tulane course – article on 1860 election
- Tulane course – article on Fort Sumter
- Onuf, Peter. "Making Two Nations: The Origins of the Civil War". 2003 speech
- The Gilder Lehrman Institute of American History
- CivilWar.com, many source materials, including states' secession declarations
- Causes of the Civil War, collection of primary documents
- Declarations of Causes of Seceding States
- Alexander H. Stephens' Cornerstone Address
- An entry from Alexander Stephens' diary, dated 1866, reflecting on the origins of the Civil War
- The Arguments of the Constitutional Unionists in 1850–51
- Shmoop US History: Causes of the Civil War, study guide, dates, trivia, multimedia, teachers' guide
- Booknotes interview with Stephen B. Oates on The Approaching Fury: Voices of the Storm, 1820–1861, April 27, 1997
- Was the Civil War About Slavery? Yes.
That storm, named Ophelia, was exceptional. Hurricanes and tropical storms typically originate in the warm waters of the deep tropics, but Hurricane Ophelia formed close to the Azores — an island chain 1,400km west of Portugal and more than 800km north of the Tropic of Cancer. A Category-3 hurricane at its peak, no major tropical storm on record has ever ventured so close to Europe.
Ophelia weakened, becoming an ex-hurricane, before it hit Europe. But with its spiral of clouds and an eye at its centre, it still resembled a tropical storm and had the intense winds and rainfall of one too. As a tropical-like storm, Ophelia is extraordinary among the weather systems which have reached the British Isles.
A year later, Storm Helene developed off the coast of West Africa and took a highly unusual shortcut to the UK, and Storm Leslie reached the Iberian Peninsula. In 2019, several tropical storms started out in an area of the tropical Atlantic known to scientists as the main development region, and eventually reached Europe as weak remnant storms, swept along by the jet stream.
Clearly, tropical storms and their impacts are not confined to the tropics. So, is the landfall of tropical-like storms across Europe a growing threat, and might climate change, as studies have suggested, be responsible? To answer this, we must start with a simpler question: how often do tropical-like storms actually reach western Europe?
Finding good data
Official records of hurricanes and tropical storms largely concern those threatening the US and are less reliable for Europe. Records only expanded to properly include Europe as recently as the early 1990s, and they become increasingly patchy the further back in time scientists look.
Before weather satellites, which track storm systems, meteorologists relied on measurements made from reconnaissance aircraft, involving dangerous and often impossible work, and from ships, which travel in lanes and can only observe a limited area. As a result, storms are missing from official records, and studies have shown many of the missing events likely formed in the eastern Atlantic — exactly where Ophelia, Helene and the tropical-like storms that threaten Europe originated.
In a new study, we turned to global data sets provided by NASA, the European Centre for Medium-Range Weather Forecasts, and other government agencies. These data sets combine all the available weather observations with state-of-the-art computer models, which use the laws of physics to help fill in the gaps. We searched these data sets using an algorithm that scours the data to find every tropical storm that reached Europe, including storms absent from official records.
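The study's actual detection-and-tracking method is not reproduced in this article, but the general shape of such a search can be sketched in a few lines. The sketch below is a minimal, illustrative toy: the field names, the 850 hPa vorticity threshold, the search window and the greedy nearest-neighbour linking are all assumptions, not the authors' algorithm.

```python
# Illustrative sketch only -- not the authors' actual algorithm. The field
# names, thresholds and the greedy linking step are assumptions.
import numpy as np
from scipy.ndimage import minimum_filter

def find_candidate_centres(mslp, vort850, lats, lons,
                           vort_thresh=6e-5, window=5):
    """Flag grid points that are local minima of mean sea-level pressure
    and have cyclonic 850 hPa vorticity above a threshold."""
    is_min = mslp == minimum_filter(mslp, size=window)
    strong = vort850 > vort_thresh        # Northern Hemisphere sign convention
    iy, ix = np.where(is_min & strong)
    return [(float(lats[y]), float(lons[x])) for y, x in zip(iy, ix)]

def link_tracks(centres_by_step, max_step_km=500.0):
    """Greedily link candidate centres between consecutive 6-hourly steps
    into tracks (systems appearing at later steps are ignored in this toy)."""
    def dist_km(a, b):
        dlat = np.radians(a[0] - b[0])
        dlon = np.radians(a[1] - b[1]) * np.cos(np.radians(0.5 * (a[0] + b[0])))
        return 6371.0 * float(np.hypot(dlat, dlon))

    tracks = [[c] for c in centres_by_step[0]]
    for centres in centres_by_step[1:]:
        unused = list(centres)
        for track in tracks:
            if not unused:
                break
            dists = [dist_km(track[-1], c) for c in unused]
            j = int(np.argmin(dists))
            if dists[j] <= max_step_km:
                track.append(unused.pop(j))
    return tracks
```

A real tracker would add further tests — a warm-core check, minimum lifetime and wind-speed criteria — to separate storms of tropical origin from ordinary mid-latitude lows.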
Over the period 1979–2018, we found that, on average, one to two storms that reached Europe each year were initially tropical storms. Typically, they occurred in September and October, around the peak of the North Atlantic hurricane season. However, the characteristics and strengths of these storms have varied a lot.
Scientists have known for several decades that, when hurricanes and weaker tropical storms travel north, they transform into what we call extratropical storms — the kind Europe is used to seeing during winter. In fact, around half of all tropical storms do this, but, fortunately, most aren’t damaging.
Among the other half, however, we found that some, like Ophelia, keep their tropical shape and characteristics for longer before petering out. This is crucial. The tropical-like storms that make landfall are typically much stronger. Of all the storms reaching Europe from the tropics, one in ten kept its tropical characteristics and strength to landfall. That’s one every five years over the past four decades, according to our analysis.
So, over the last 40 years, storms which were initially tropical were not that unusual across Europe. Searching new data sets and using advanced algorithms has revealed they’re more common than many scientists previously thought. Fortunately, many weaken substantially before they reach European coastlines, but, as Ophelia demonstrated, that isn’t always so — and climate change may make weakening less likely in the future.
North Atlantic sea surface temperatures have increased by 1.5°C since 1870, and continued warming is expected to make future tropical storms more intense. Stronger tropical storms are not only more likely to reach Europe, but more likely to maintain their tropical intensity rather than weakening.
Comparing recent years with earlier decades, we found some evidence that this trend is already emerging. Storms with tropical origins have reached Europe more frequently since 2000 than during the 1980s and 1990s. This is intriguing, to say the least, but more analysis is needed to verify — and explain — these trends, as well as the varied storm threats Europe faces.
Climatic regions that right now, and for most of human history, have been home to reliable crops of grains, pulses, fruits and vegetables, and to safe grazing for cattle, sheep, goats and so on, could become too hot, too dry, or too wet.
And these changes could happen too quickly for farmers to adapt or for crops to evolve. Land that had for generations been considered "safe climatic space" for food production could be shifted into new regimes by runaway global heating, according to a new study in the journal One Earth.
“Our research shows that rapid, out-of-control growth of greenhouse emissions may, by the end of the century, lead to more than a third of current global food production falling into conditions in which no food is produced today − that is, out of safe climatic space,” said Matti Kummu, of Aalto University in Finland.
“The good news is that only a fraction of food production would face as-of-yet unseen conditions if we collectively reduce emissions, so that warming would be limited to 1.5° to 2°Celsius.”
Professor Kummu and his colleagues report that they examined ways of considering the complex problem of climate and food. Geographers have identified 38 zones marked by varying conditions of rainfall, temperature, frost, groundwater and other factors important in growing food or rearing livestock.
The researchers devised a standard of what they called "safe climatic space" and then considered the likely change in conditions for 27 plant crops and seven kinds of livestock by the years 2081 to 2100, under two scenarios. In one of these, the world kept its promise and controlled warming to the Paris targets. In the other, it did not.
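To give a rough sense of how a "safe climatic space" comparison can work, here is a minimal sketch. It is illustrative only: the three climate variables, the percentile envelope and the random numbers are placeholders and do not reproduce the One Earth study's zones, data or thresholds.

```python
# Minimal sketch of a "safe climatic space" style check. Illustrative only:
# the variables, envelope definition and numbers are assumptions, not those
# of the One Earth study.
import numpy as np

def safe_envelope(baseline, lo=1.0, hi=99.0):
    """Per-variable percentile bounds over the grid cells where a crop or
    livestock type is produced today. baseline: (n_cells, n_climate_vars)."""
    return np.percentile(baseline, lo, axis=0), np.percentile(baseline, hi, axis=0)

def fraction_outside(future, envelope):
    """Share of production cells whose projected climate leaves the
    baseline envelope on at least one variable."""
    lo, hi = envelope
    return float(np.any((future < lo) | (future > hi), axis=1).mean())

# Toy example: three climate variables (say precipitation, a temperature
# measure and an aridity index) for 10,000 hypothetical production cells.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(10_000, 3))
future_high_emissions = baseline + rng.normal(0.8, 0.3, size=baseline.shape)
future_low_emissions = baseline + rng.normal(0.2, 0.1, size=baseline.shape)
print(fraction_outside(future_high_emissions, safe_envelope(baseline)))
print(fraction_outside(future_low_emissions, safe_envelope(baseline)))
```

In this toy, the large high-emissions shift pushes a much bigger share of cells outside the baseline envelope than the modest shift does, which is the qualitative pattern the study reports for its two scenarios.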
“The increase in desert areas is especially troubling because in these conditions barely anything can grow without irrigation”
Under the more ominous scenario, the areas of northern or boreal forests of Russia and North America would shrink, while the tropical dry forest zone would grow, along with the tropical and temperate desert zones. The Arctic tundra could all but disappear.
The areas hardest hit would be the Sahel in North Africa, and the Middle East, along with some of south and south-east Asia. Already-poor states such as Benin, Ghana and Guinea-Bissau in West Africa, Cambodia in Asia and Guyana and Suriname in South America would be worst hit if warming is not contained: up to 95% of food production would lose its “safe climatic space.”
In 52 of the 177 countries under study − and that includes Finland and most of Europe − food production would remain within today's safe climatic space. Altogether, 31% of crops and 34% of livestock production could be affected worldwide. And the one fifth of the world's crop production and 18% of its livestock production that are most under threat lie in nations with the lowest resilience and the fewest resources to absorb such a shock.
“If we let emissions grow, the increase in desert areas is especially troubling because in these conditions barely anything can grow without irrigation,” said Professor Kummu. “By the end of this century, we could see more than 4 million square kilometres [1.5m sq miles] of new desert around the globe.” − Climate News Network
May 14, 2021 — Editor’s note: This story is part of a collaboration, Tapped Out: Power, justice and water in the West, in which eight Institute for Nonprofit News newsrooms — California Health Report and High Country News; SJV Water and the Center for Collaborative Investigative Journalism; Circle of Blue; Columbia Insight; Ensia; and New Mexico In Depth — spent more than three months reporting on water issues in the Western U.S. The result documents serious concerns including contamination, excessive groundwater pumping and environmental inequity — as well as solutions to the problems. It was made possible by a grant from The Water Desk, with support from Ensia and INN’s Amplify News Project.
A riverbed that has been parched since the end of the 19th century — a portion of the historic lifeblood of the Gila River Indian Community — is now coursing again with water, luring things like cattails and birds back to its shores.
“You add water and stuff just immediately starts coming back naturally. Birds have returned and it’s just such a different experience,” says Jason Hauter, an attorney and a Community member. “It’s amazing how much has returned.”
The revival of this small segment of the 649-mile (1045-kilometer) Gila River, which has served the tribes that make up the Gila River Indian Community — the Akimel O’odham (Pima) and the Pee-Posh (Maricopa) — for roughly 2,000 years, was an added benefit of a grassroots infrastructure overhaul, known as “managed aquifer recharge,” or MAR, which aimed to restore the local groundwater basin. The MAR project has not only secured a water supply for local agriculture, but it has also generated a stable source of income and strengthened the Community’s ties to tradition.
“The land started to heal itself, reinvigorate itself,” says Governor Stephen Roe Lewis, who recently began his third term as leader of the Gila River Indian Community.
Hauter credits Lewis and his colleagues for ensuring that Community members have long-term access to their own resources while helping solve broader water supply problems in the region through innovative partnerships and exchanges with neighbors.
“They are very thoughtful about future generations, but they also recognize they live in this larger community and that you have to collaborate,” Hauter says. “Encouraging your neighbors to have good water practices, but also helping your neighbors, is good water policy.”
A Particularly Longstanding Claim to Water Rights
The ins and outs of water management and usage in the U.S. West are complex. In a region where every drop is important, questions about water — such as who gets what, how it’s moved from one place to another, and who pays for it — are vital to communities’ capacity to survive and thrive. These decisions are often based on century-plus-old legal doctrines that don’t always fit neatly into a modern, warming world — or address longstanding disregard for Native American tribal nations’ rights.
Western U.S. states adhere to legal doctrines called “prior appropriation” — sometimes referred to as “first in time, first in right” — linked to the mid-19th century Gold Rush and the Homestead Act, through which miners and farmers were able to claim and divert water sources for “beneficial use” — defined by activities such as irrigation, industry, power production and domestic use. A 1908 Supreme Court case ruled that the federal decision to establish Native American reservations inherently meant there would be sufficient water for those reservations. The priority date for water rights on these reservations therefore had to match the date of establishment, meaning that many tribal nations’ water rights took precedence over those of most existing users. During the past few decades, these nations have largely opted for settlements with the relevant federal, state and private bodies, rather than entering extensive and costly litigation to recover their water rights.
These settlements allow tribal nations to take part in the competitive markets that have long ruled water in the West. These markets involve things like selling water rights, getting money for helping mitigate drought and accruing “credit” from the Arizona Water Banking Authority by storing water in underground basins administered by the Arizona Department of Water Resources.
One such pivotal settlement came in 2004: To resolve tribal water rights claims, Congress passed the Arizona Water Settlement Act, which allocates a set amount of water each year to the Gila River Indian Community, drawing that water budget from a variety of sources in Arizona. The Community had a particularly longstanding claim to water rights due to its two-millennia history of farming, curtailed when miners and white settlers began diverting water following the Civil War. The governor’s late father, Rodney Lewis, devoted his career as Gila River Tribal Attorney to fighting for a just water settlement.
“It was the theft of our water, so this was a generational historic struggle to regain our water,” Lewis says. “We were and we still are historically agriculturalists, farmers. Our lineage, our ancestors were the Huhugam. And the Huhugam civilization had pretty much cultivated the modern-day Phoenix area in central Arizona.”
“They were master builders,” he adds, referring to complex water systems and canals that he says rivaled those of the Nile Valley.
As more and more nations regain control of their water resources, they are securing a critical provision for the long-term financial prosperity of their people and protection of their lands.
Mutually Beneficial Partnerships
As often occurs in tribal water rights settlements, the 2004 agreement served to restore the Gila River Indian Community’s claims to the river and its tributaries without displacing the descendants “of those who committed the original sin,” says Hauter, a partner at the law firm Akin Gump Strauss Hauer & Feld, which currently serves as outside counsel for the Community.
Toward that end, Hauter says, “really, what’s provided is an alternative supply.”
That alternative supply comes from the Central Arizona Project (CAP), an infrastructural behemoth that conveys about 1.5 million acre-feet (1.85 billion cubic meters; one acre-foot is about 326,000 gallons) of water from the Colorado River to central and southern Arizona each year. Serving as the single largest renewable water supply for the state of Arizona, the 336-mile (540-kilometer) system was authorized by then-President Lyndon B. Johnson in 1968, soon after which construction by the Bureau of Reclamation began. Three years later, the Central Arizona Water Conservation District — a multi-county water district — formed to repay the federal government for the project’s costs and oversee regional water supply.
Through the 2004 settlement, the Gila River Indian Community has the single largest CAP entitlement — bigger than that of the city of Phoenix — at 311,800 acre-feet (385 million cubic meters), Hauter explains. Finding mutual benefit in helping quench the thirst of the surrounding region, the Community entered into various water exchanges and leases that delivered about 60,000 acre-feet (74 million cubic meters) to Phoenix and other municipalities annually and left about 250,000 acre-feet (308 million cubic meters) for its own purposes, according to Hauter.
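The volumes in this story keep switching between acre-feet, cubic metres and gallons. The arithmetic behind those parenthetical conversions is simple; the snippet below uses standard conversion constants and the figures quoted above (so, for example, roughly 250,000 acre-feet remain after the ~60,000 acre-feet of leases).

```python
# Checking the unit conversions quoted in the article. The volumes are the
# article's own figures; the constants are standard conversion factors.
ACRE_FOOT_M3 = 1233.48   # cubic metres per acre-foot
ACRE_FOOT_GAL = 325_851  # US gallons per acre-foot

def acre_feet_to_million_m3(acre_feet: float) -> float:
    return acre_feet * ACRE_FOOT_M3 / 1e6

print(f"1 acre-foot ≈ {ACRE_FOOT_GAL:,} US gallons ≈ {ACRE_FOOT_M3:,.0f} m³")

figures = {
    "CAP annual conveyance": 1_500_000,
    "Gila River Indian Community CAP entitlement": 311_800,
    "Leased to Phoenix and other municipalities": 60_000,
    "Retained for Community purposes": 311_800 - 60_000,
}
for label, af in figures.items():
    print(f"{label}: {af:,} acre-feet ≈ {acre_feet_to_million_m3(af):,.0f} million m³")
```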
But this sudden surplus from the CAP actually posed a problem.
Pumping water from the project, Community members understood, would eventually become prohibitive due to water transport and associated electricity costs. The Lower Colorado River Basin Development Fund, managed by the U.S. Department of Interior, covers the Fixed OM&R (operation, maintenance and replacement) for certain Arizona tribes with settlements, but funding is only projected to last until 2045, Hauter explains.
The Community was using only about 50,000 acre-feet (62 million cubic meters) for irrigation purposes, leaving about 200,000 acre-feet (247 million cubic meters) unused, Hauter says. Because any unused CAP water can be remarketed by the state, Arizonans began counting on the Community to not use its full share.
With the legal guidance of Hauter and his team, the Community launched a strategic venture to store, share and sell much more of its CAP water in 2010.
The first such partnership occurred with former water supply rival the Salt River Project, the name of the utilities responsible for providing most of Phoenix’s water and power. Had the Community decided to enter litigation to recover its water rights, rather than settling, the Salt River Project could have faced enormous supply losses.
But the former rivals instead became partners, after identifying that the Salt River Project’s underground storage facility (USF), the Granite Reef Underground Storage Project, was an ideal place to store a portion of the CAP allocation the Gila River Indian Community was not currently using. The partnership has enabled the Salt River Project to withdraw water from storage — while maintaining a “safe yield,” or making sure any water that is taken from aquifers is replenished. In return, the Community has gained long-term storage credit, Hauter explains. Such storage credit enables the holder to bank CAP water and, when necessary, recover the water for future use.
The Community also stores water in groundwater savings facilities (GSF), including one operated by the Salt River Project and another south of the Gila River operated by the Maricopa Stanfield Drainage District. While a USF physically stores water in the aquifer through direct recharge, a GSF is an “indirect” recharge facility that uses CAP water instead of pumping local groundwater.
In what Hauter described as an “in lieu” agreement, the Community provides the operators of these GSF facilities with a renewable water supply — another portion of its CAP allocation — and so reduces the Salt River Project and Maricopa District’s need to extract groundwater. In return, the Community gets storage credit for the water that can remain in the ground.
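The credit mechanism described above — bank renewable CAP water now, accrue long-term storage credits, recover or sell them later — can be pictured as a simple ledger. This is only a toy model: the 5% "aquifer cut", the facility names and the volumes are placeholders and do not reproduce Arizona's actual accrual and accounting rules.

```python
# Toy ledger for long-term storage credits. Illustrative only: the aquifer
# cut, facility names and volumes are placeholders, not Arizona's actual
# accrual rules.
from dataclasses import dataclass, field

@dataclass
class CreditLedger:
    balance_af: float = 0.0                      # credits held, in acre-feet
    history: list = field(default_factory=list)

    def deposit(self, acre_feet: float, facility: str,
                aquifer_cut: float = 0.05) -> float:
        """Store water at a permitted facility; a share stays in the aquifer
        and the remainder accrues as long-term storage credits."""
        credited = acre_feet * (1.0 - aquifer_cut)
        self.balance_af += credited
        self.history.append((facility, "deposit", credited))
        return credited

    def redeem(self, acre_feet: float) -> None:
        """Use credits, either by recovering stored water or by selling
        the credits to another entity."""
        if acre_feet > self.balance_af:
            raise ValueError("insufficient long-term storage credits")
        self.balance_af -= acre_feet
        self.history.append(("-", "redeemed", acre_feet))

ledger = CreditLedger()
ledger.deposit(40_000, "MAR-5")   # hypothetical year of recharge
ledger.redeem(5_000)              # hypothetical credit sale or recovery
print(f"{ledger.balance_af:,.0f} acre-feet of credits remaining")
```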
“Everything We Needed Was at the River”
While these external collaborations bolstered the resilience of the Community, as well as that of the arid surrounding region, Gila River residents only really saw the revival of their long-lost local waterway when Community leaders launched a homegrown storage initiative. Recognizing the value in keeping some unused CAP resources at home, they chose to establish a network of managed aquifer recharge (MAR) sites. This type of underground storage allows for the free flow of water from a naturally permeable area, such as a streambed, into an aquifer, as opposed to “constructed recharge” sites that involve injecting water into percolation basins by means of a constructed device.
In order to implement these plans, the Gila River Indian Community came to an agreement with Arizona to acquire state regulatory permits for the MAR projects, despite the fact that tribal nations have sovereign control over water management. As a result of this decision, the Community has been able to market long-term storage credits in a sort of environmentally friendly banking system that allows more groundwater to stay in the ground.
“They realized they could get multiple benefits from deciding to have their project permitted per the Arizona regulations,” says Sharon Megdal, director of The University of Arizona Water Resources Research Center.
“They voluntarily chose to abide by the regulations for storage and recovery and therefore come under the whole credit accrual and accounting system,” she continues, stressing that not only can credits be used to recover water when needed in the future, but they can also be purchased by outside entities, which creates a revenue stream for the Community. “That’s really exciting.”
Three MAR facilities are already operating on the reservation today: MAR-5, the Olberg Dam underground storage facility, permitted in 2018; MAR-1B, the Cholla Mountain underground storage facility, permitted in 2020; and MAR-6B, a western and downstream expansion of MAR-5, which came online a few months ago. Construction of MAR-8, located downstream from MAR-5, will be complete in a few years, according to Hauter.
Hauter adds that it was only while planning the initial MAR-5 site that Community members envisioned the riparian restoration program that served “to recreate the river,” allowing cattails and other plants to blossom and enabling community members to create baskets and traditional medicines. Although the idea of restoring the river was secondary to the storage plans, Hauter says that its flow is intrinsic to the Community’s culture.
“The tangible benefit for most members is really having the river back to some degree,” Hauter adds. “It wasn’t something the settlement intended to accomplish, but the settlement gave the Community the tools to make it happen.”
Lewis and his father, who had already retired at the time, used those tools to see the first MAR site to fruition. The Lewises and their colleagues understood the benefit in adopting innovative methods for accumulating water at their future storage site.
“He truly saw the MAR-5 as a living testament to our historic tie to the Gila River,” the governor says, adding that his father considered the facility an opportunity to “return the flow of the river.”
With the revived river flow, the riparian habitat quickly began blossoming, including 50 documented species of birds within the first year of MAR-5’s operations, Lewis says. An interpretive trail now weaves through the once arid wetland, providing educational signposts and offering sacred cultural spaces for spiritual practice, Lewis explains. Elders are now taking advantage of the plants and silt available to engage in traditional basket weaving, medicine making and pottery, he adds.
“They still remember the river sometimes flowing and the smell of the water,” Lewis says.
In recent years, before the opening of the MAR-5 site, the channel filled with water only in particularly wet seasons involving floods or heavy snowpack upstream, according to Lewis.
“Everything we needed was at the river,” he adds. “That was our lifeblood.”
Continuing to Plan For a Drought-Ridden Future
In conjunction with the opening of the MAR facilities, the Community cemented a pivotal agreement in 2019 with the Central Arizona Groundwater Replenishment District (CAGRD), a groundwater replenishment entity operated by the Central Arizona Water Conservation District. Through this agreement, CAGRD leases 18,185 acre-feet (22 million cubic meters) of the Community’s CAP water and stores the majority of that water in the MAR sites, while receiving long-term storage credits in return from the Arizona Water Banking Authority. Only if the MAR facilities are full is CAGRD allowed to store the leased water elsewhere, Hauter explains.
Alongside the MAR projects, the Community has also been rehabilitating existing wells and building new ones in order to create a backup supply for agricultural use when Gila River flow is minimal. Well water is less expensive than CAP water, since wells can recharge naturally during storms — so much so that such events collectively add at least 100,000 acre-feet (123 million cubic meters) to the Community’s annual water supply, according to Hauter. The Community took additional steps to reroute its CAP supplies after the federal government and the seven Colorado River Basin States implemented their drought contingency plans, meant to elevate water levels in Lake Mead, in 2020. As part of that regional effort, Hauter explains, the Community is providing a total of at least 200,000 acre-feet (247 million cubic meters) of water to be stored in Lake Mead from 2020 to 2026, when the drought contingency plans expire. For its contribution, the Community gets money through the Arizona Water Bank and the Bureau of Reclamation.
Only through the Community’s creative collaborations and homegrown projects has so much of its CAP entitlement been able to help replenish Lake Mead, Hauter says. Today, the Community has reduced its CAP water usage for irrigation to 15,000 acre-feet (19 million cubic meters) per year, while its CAP water storage capacity in the MAR projects is up to about 40,000 acre-feet (49 million cubic meters) per year. After construction of MAR-8 is complete, total CAP water use for storage and irrigation will reach about 75,000 acre-feet (93 million cubic meters), Hauter says.
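For readers more used to metric units, the conversions behind these figures are simple arithmetic. The sketch below uses the standard factor of roughly 1,233.5 cubic meters per acre-foot to reproduce the approximate metric equivalents quoted in this story; the volumes themselves are the ones reported above.

```python
# Convert the water volumes quoted in this story from acre-feet to cubic meters.
# Uses the standard conversion of about 1,233.48 m^3 per acre-foot.

ACRE_FOOT_M3 = 1233.48

volumes_af = {
    "CAGRD lease": 18_185,
    "annual storm recharge to wells": 100_000,
    "Lake Mead contribution, 2020-2026": 200_000,
    "CAP water used for irrigation": 15_000,
    "CAP water stored in MAR projects": 40_000,
}

for name, af in volumes_af.items():
    print(f"{name}: {af:,} acre-feet ≈ {af * ACRE_FOOT_M3 / 1e6:.0f} million m³")
```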
As the Community’s leaders continue to plan for a drought-ridden future, they are evaluating whether it will be necessary to use more of its CAP allocation for their own needs. At the moment, much of the reservation’s agriculture involves water-intensive crops like alfalfa, feed corn and cotton. An overhaul of the farming infrastructure, according to Hauter, would require “changing attitudes about how food is grown” and incorporating more efficient technologies, as well as encouraging farming among younger people.
Overall, Hauter says, “it’s an exciting future for the Community, and it will be interesting to see what happens in the next 20 or so years.”
Lewis is confident that the Community’s agricultural tradition will remain strong, particularly due to the younger generation’s concerns for social justice, equity and environmental issues.
“We want to provide opportunities for our community members to reengage in any way in our agricultural heritage,” he says. “We’ve always been innovators, going back to the Huhugam with their amazing engineering.”
In addition to the commercial company Gila River Farms, which is owned by the tribe and employs Community members, Lewis says that local family farms continue to thrive. Lewis also says that “there’s a big push” for young people to obtain degrees in agro-business, hydrology, water engineering and other relevant fields that will provide them with a livelihood while working for their Community — a place that has become even more special to them during the pandemic year.
“It’s a public health emergency that we’ve been going through,” Lewis adds. “But at the same time, I think this is an opportunity where you see a lot [of] our younger generation that are wanting to learn who it is to be from the Gila River Indian Community.”
“A Total Win-Win”
While the MAR projects and the larger water exchange deals serve to safeguard the Community’s water supplies, Hauter says he’s uncertain as to whether neighboring tribal nations could replicate this model. Other tribes, he explains, might have different agricultural interests or economic concerns, as well as varying geological and hydrological conditions.
In Megdal’s opinion, at least one aspect of the Community’s strategy could be replicable regardless of geography: the strategic accrual and marketing of long-term storage credits in permitted recharge facilities. The Gila River Indian Community has diversified its portfolio of storage credit and sales through “multiple vehicles,” she explains, including its MAR projects, the Salt River Project partnership, and its transfer of credits to CAGRD.
“They are able to meet their objectives including having riparian benefits and river benefits and sell the credits — because the credits are then recovered elsewhere. … For them, it’s like a total win-win,” Megdal says, adding that she considers the Community’s achievements to be “a bellwether project.”
Already, she says, the Tucson-region Tohono O’odham Nation has begun selling some credits to CAGRD. Acknowledging that the two cases involve varying geological and legislative circumstances, Megdal stresses that the Gila River Indian Community has demonstrated the benefits of the storage and credit accrual system.
“These long-term storage credits are the most marketable part of the water system,” Megdal says. “It’s an emerging market, and the Gila River Indian Community has emerged as a key leader in that market.”
“I see this example of a tribal nation entering voluntarily into an intergovernmental agreement with the state so that all the parties can develop these mutually beneficial exchanges or marketing transactions in a voluntary way,” she adds. “It’s really a notable innovation.”
Editor’s note: This story is also part of a four-part series — “Hotter, Drier, Smarter: Managing Western Water in a Changing Climate” — about innovative approaches to water management in the U.S. West and Western tribal nations. The series is supported by a grant from the Water Desk at the University of Colorado Boulder and is included in our nearly year-long reporting project, “Troubled Waters,” which is supported by funding from the Park Foundation and Water Foundation. You can find the other stories in the series, along with more drinking water reporting, here.
By Juergen Voegele, Veronique Kabongo and Arame Tall
When you land in Bujumbura, Burundi, you are immediately struck by the verdant landscape. Everything is green. The peaceful city is surrounded by beautiful Lake Tanganyika, the deepest in Africa, with majestic hills to the north. Soon, one discovers that those steep hillsides, the nearly 3,000 or so “collines” of Burundi, are much more than an extraordinary landscape. They are home to a patchwork of communities organized around each colline. In many ways, they represent the beauty but also the pains of the people who live on it and from it. These collines hold the souls of ancestors and families lost during past conflicts, including the 1994 crisis. They tell the country’s story.
But this majestic landscape is threatened by overuse and degraded resources, pressures that are further aggravated by climate change. Climate-related disasters, chiefly torrential rains, floods and landslides, triggered all of the forced displacements recorded in Burundi in 2020, according to the United Nations Office for the Coordination of Humanitarian Affairs, underscoring the urgency of action to address the compounded risks of rising climate impacts, fragility and displacement.
Multi-risk vulnerability in Burundi’s colline landscapes
Furthermore, each year, Burundi loses almost 38 million tons of soil and 4% of its gross domestic product (GDP) to land degradation. The coffee sector exemplifies people’s dependence on natural resources for their livelihoods: half of the country’s households live off the sector which brings 90% of the country’s foreign revenue. But in the last 40 years, severe soil erosion led to a two-thirds decrease in coffee production, pushing millions back into poverty.
Burundi's collines are home to more than 90% of the country's largely rural population, composed mostly of women and youth, who rely on agriculture and forestry for their livelihoods. They are also critical hubs of multi-risk vulnerability: 75% of court cases are linked to land disputes, and the recent massive return of refugees from neighboring Democratic Republic of Congo, Rwanda and Tanzania has been a source of increased conflict and violence. Poverty and conflict in Burundi are closely linked to resource dependence and climate fragility. Since 2015, the country has experienced unprecedented forced displacement: 131,000 internally displaced people were counted in 2020, 83% of them displaced by climate-related disasters and 17% by other socio-economic factors, according to the International Organization for Migration Displacement Tracking Matrix.
In Burundi’s context, climate change compounds pre-existing risks through rising rainfall and temperature variability, projected to worsen by 2030-50, with recurrent flooding, landslides and soil erosion already destroying livelihoods and exacerbating poverty. Past extreme weather events including severe floods in 2006 and 2007 and severe droughts between 1999 and 2000 and in 2005 accounted for losses exceeding 5% of the GDP, affecting more than two million Burundians. In addition, river flooding from Lake Tanganyika poses an increasing challenge. Batwa communities are particularly disenfranchised, and at the heart of multi-sector vulnerability, making community-driven development approaches critical in Burundi’s development context.
However, 2,608 more collines are still degraded and will need to be restored to increase agricultural and pastoral productivity, and to build their resilience to current and future climate risks. The World Bank has committed to scale up activities nationwide to cover all collines, starting with a study funded by PROGREEN, a global partnership promoting resilient landscapes. The new Burundi government has committed to invest more in addressing the root causes of degradation and fragility on all collines and lists climate change as one of its strategic priorities.
Figure 1: Scaling up Investment into Burundi’s Colline Landscapes
This is mission possible, but it cannot be done alone. While the World Bank is mobilizing additional resources through its Prevention and Resilience Allocation, it is essential to crowd in financial and technical partners, including United Nations agencies and other sources of concessional climate financing.
Addressing climate risks in fragile states has the potential to enhance resilience and reduce sources of conflict, while generating growth and long-term sustainable development. To be effective, climate investments must recognize the interlinkages between climate and conflict risks. In Burundi as in every other country, these investments must also be rooted in strong political and institutional support to trigger the changes needed to make the “land of 3,000 collines” resilient.
The slowdown the researchers identify is as if the steady advance in agricultural productivity worldwide − in crop breeding, in farming technologies and in fertiliser use − had been eroded everywhere by more extreme temperatures, more prolonged droughts and more intense rainfall.
“It is equivalent to pressing the pause button on productivity growth back in 2013, and experiencing no improvements since then. Anthropogenic climate change is already slowing us down.”
He and colleagues from Maryland and California report in the journal Nature Climate Change that they developed new ways of looking at farm costs and yields that could account for climate- and weather-related factors. The findings are potentially alarming.
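One way to read the "pause button" remark is to translate a cumulative productivity shortfall into an equivalent number of years of stalled growth. The sketch below does that arithmetic; the 20% shortfall and the growth rates are illustrative assumptions chosen for the example, not figures taken from the paper.

```python
# How many years of productivity growth does a given cumulative shortfall "erase"?
# If measured productivity is (1 - loss) of what it would have been without climate
# change, and productivity normally grows at rate g per year, the equivalent number
# of lost years n satisfies (1 + g)^n = 1 / (1 - loss).

import math

def years_of_growth_lost(loss, annual_growth):
    return math.log(1.0 / (1.0 - loss)) / math.log(1.0 + annual_growth)

# Illustrative numbers only (not taken from the study):
for g in (0.015, 0.02, 0.03):
    print(f"20% shortfall at {g:.1%} annual growth ≈ {years_of_growth_lost(0.20, g):.1f} lost years")
```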
In the last century, the planet has warmed by at least 1°C above the long term average for most of human history, and is heading for 3°C or more by the end of this century.
A study of this kind − comparing the present world with one that might have been − is always open to challenge, and farmers have always had to gamble on good weather and cope with bad harvests.
But over the last seven years, researchers have repeatedly confirmed that a hotter world promises to be a hungrier one. Studies have found that yields of wheat, maize and rice are all vulnerable to climate change.
So the latest study simply provides another way of confirming anxieties already expressed. This time there is a new perspective: the attrition of climate change began decades ago. In the constant race to keep up with demand and compensate for possible loss, the farmers may be falling behind. Technological progress has yet to deliver climate resilience.
“It is not what we can do, but where we are headed,” said Robert Chambers, of the University of Maryland, a co-author. “This gives us an idea of trends to help see what to do in the future with new changes in the climate that are beyond what we’ve previously seen.
“We are projected to have almost 10 billion people to feed by 2050, so making sure our productivity is stable but growing faster than ever before is a serious concern.”
And Dr Ortiz-Bobea said: "Most people perceive climate change as a distant problem. But this is something that is already having an effect. We have to address climate change now so that we can avoid further damage for future generations." − Climate News Network
Images of colossal chunks of ice plunging into the sea accompany almost every news story about climate change. It can often make the problem seem remote, as if the effects of rising global temperatures are playing out elsewhere. But the break-up of the world’s vast reservoirs of frozen water – and, in particular, Antarctic ice shelves – will have consequences for all of us.
Before we can appreciate how, we need to understand what’s driving this process.
Ice shelves are gigantic floating platforms of ice that form where continental ice meets the sea. They’re found in Greenland, northern Canada and the Russian Arctic, but the largest loom around the edges of Antarctica. They are fed by frozen rivers of ice called glaciers, which flow down from the steep Antarctic ice sheet.
Ice shelves act as a barrier to glaciers, so when they disappear, it’s like pulling the plug in a sink, allowing glaciers to flow freely into the ocean, where they contribute to sea level rise.
If you cast your mind back to 2002, you may remember the sudden demise of Larsen B, an ice shelf on the Antarctic Peninsula – the tail-like land mass which stretches out from the West Antarctic mainland – which splintered over just six weeks.
Before Larsen B broke up, satellite images showed meltwater collecting in huge ponds on the surface, the precursor to a process called “hydrofracturing”, which literally means “cracking by water”.
Ice shelves are not solid blocks of ice: they’re made up of layers with fresh snow at the top, which contains lots of air gaps. Over many seasons, layers of snow build up and become compacted, with the bottom of the shelf containing the densest ice. In the middle, there is a porous medium called firn, which contains air pockets that soak up meltwater every summer like a sponge.
In the Antarctic summer, ice shelves get warm enough to melt at the surface. That meltwater trickles into the firn layer, where it refreezes when temperatures dip below freezing again. If the rate of melting every year is greater than the rate at which that firn can be replenished by fresh snow, then those air pockets eventually fill up, causing the ice shelf to become one solid chunk.
If that happens, then the following summer when melting occurs, the water has nowhere to go and so collects in ponds on the surface. That is what we can see in the satellite images of Larsen B before it collapsed.
At this stage, meltwater begins to flow into crevasses and cracks within the ice shelf. The weight of water filling these rifts causes them to widen and deepen, until suddenly, all at once, the cracks reach the bottom of the shelf and the whole thing disintegrates.
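The interplay described here, in which annual melt competes with fresh snowfall to decide whether the firn can keep absorbing meltwater, can be captured in a very simple bookkeeping model. The following toy sketch (all values are made-up illustrative numbers, not measurements from any real ice shelf) tracks the firn's remaining air space from year to year and flags the point at which meltwater would start ponding on the surface.

```python
# Toy model of firn pore space on an ice shelf (illustrative numbers only).
# Each year, fresh snowfall adds new air-filled pore space, while summer melt
# refreezes inside the firn and uses pore space up. When no pore space is left,
# meltwater can no longer drain into the firn and ponds on the surface instead,
# the precondition for hydrofracturing.

pore_space = 10.0        # water-equivalent air space in the firn, in metres (assumed)
snowfall_gain = 0.3      # pore space added by fresh snow each year (assumed)
annual_melt = 0.5        # meltwater refreezing in the firn each year (assumed)

for year in range(1, 101):
    pore_space += snowfall_gain - annual_melt
    if pore_space <= 0:
        print(f"Year {year}: firn saturated; meltwater begins ponding at the surface")
        break
else:
    print("Firn never saturates under these assumptions")
```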
Scientists believe the collapse of Larsen B was caused by a combination of persistently warm weather and a background of ongoing atmospheric warming, which drove unusually high melt rates.
After its collapse, the glaciers that previously fed Larsen B sped up, spitting more ice into the ocean than before. Currently, the Antarctic Peninsula, an area that has seen more than half its ice shelves lose mass, is responsible for around 25% of all ice loss from Antarctica. It holds enough ice to raise global sea levels by around 24cm.
Three future outcomes
But what might happen to the rest of Antarctica's ice shelves in the future is still uncertain. As the climate warms, ice shelves are more likely to collapse and accelerate global sea level rise, but by how much? This is something a colleague and I have explored in a new study.
We used the latest modelling techniques to predict the susceptibility of ice shelves to hydrofracturing at 1.5°C, 2°C and 4°C of global warming – scenarios that are all still plausible. Like with Larsen B, the presence of liquid water on the surface of an ice shelf indicates that it is becoming less stable, and so vulnerable to collapse by hydrofracturing.
In our paper, we identified four ice shelves – including two on the Antarctic Peninsula – which are at risk of collapse if global temperatures rise 4°C above the pre-industrial average. If both were to disintegrate, the glaciers they hold back could account for tens of cm of sea level rise – 10-20% of what’s predicted this century.
But limiting global warming to 2°C would halve the amount of ice shelf area at risk of collapse around Antarctica. At 1.5°C, just 14% of Antarctica’s ice shelf area would be at risk. Cutting that risk reduces the likelihood of this vast and remote continent significantly contributing to sea level rise.
Clearly, reducing climate change will be better not just for Antarctica, but for the world.
The past few years have seen very severe wildfires in Australia, California, Siberia and around the Mediterranean. Wildfires have become one of the most potent symbols of the threats posed by global warming, and images of fire are widely used to illustrate climate change news stories.
People in the UK don’t often think of wildfires: it’s widely perceived that they occur in hot, dry places. But the wildfire danger in the UK is real. While the vast majority are small, several fires in recent years have threatened houses and infrastructure. Fires have burned through moorland and forests, and even passed through an onshore wind farm in Scotland. Although actual damage to property and harm to people has so far been limited, dealing with wildfires costs fire and rescue services up to £55 million per year.
Wildfires in the UK typically occur on moorland or heathland, and are almost always the result of some human action, sometimes deliberate but more usually accidental or inadvertent. The initial spark is unpredictable, but whether a spark leads to a wildfire depends on how much dry material is available to burn, and whether it is sufficiently windy for fire to spread. While we cannot say that climate change will alter the chance of getting a spark, we can be more confident that the conditions conducive to fire are likely to change into the future. Climate change will increase fire danger.
By how much? Colleagues and I recently estimated this by combining our version of the fire danger model used by the Met Office with the latest climate projections. The fire danger model indexes danger using temperature, rainfall, humidity, wind and evaporation to estimate how much dry material is available to burn and whether a fire will spread. It is based on an approach used in Canada, and a similar model is used to monitor fire danger in New Zealand and across Europe.
Using this model, we then calculated the level of fire danger up to the year 2100 in a low emissions scenario where climate change was relatively modest, and in a high emissions scenario with more extreme change.
The numbers vary with indicator, but in general we predict there will be large increases in fire danger across the UK. For example, in south east England there are currently around 20 days per year on average with “very high” fire danger, and with high emissions this rises to more than 50 days by the 2050s and around 90 days by the 2080s. In north west England, there are around five “very high” danger days per year, and this would increase to around ten and 30 by the 2050s and 2080s respectively.
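Headline figures like "20 days per year of very high danger" come from applying a threshold to a daily danger index and counting exceedances in climate model output. The sketch below illustrates only that counting step, on synthetic data; the index values and the threshold are placeholders, not output from the Met Office model or from the projections described above.

```python
# Count days per year above a "very high" fire danger threshold.
# The daily index values and the threshold below are placeholders for illustration;
# they are not output from the Met Office fire danger model or the projections
# described in this article.

import random

random.seed(1)
THRESHOLD = 30.0   # hypothetical "very high" danger threshold

def very_high_days(daily_index):
    return sum(1 for value in daily_index if value >= THRESHOLD)

# One synthetic year of daily index values for two illustrative climates
baseline = [random.gauss(15, 8) for _ in range(365)]
warmer   = [random.gauss(22, 9) for _ in range(365)]

print("Baseline:", very_high_days(baseline), "very high danger days")
print("Warmer:  ", very_high_days(warmer), "very high danger days")
```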
The fire danger season is also likely to grow longer. Most of the increase is due to higher temperatures drying out surface material, but lower humidity also increases fire danger along with reductions in summer rainfall. Where and when fires occur in practice – and therefore how fire risk changes in different places – will depend on where and when fires are started. However, a warmer and drier climate means fire danger increases everywhere.
While the precise amount of danger and risk will depend on future emissions, our research implies that greater attention needs to be given to the danger of wildfires. That means factoring them into emergency planning and the regulations that guide land use, and in the development of guidelines for activities such as access to moorland or controlled land management burns that may inadvertently trigger fires. The UK won’t turn into Australia or California overnight, but it’s time to prepare for the worst.
Today, Myanmar is in the midst of a fight that will determine the future of its democracy. While this inevitably demands the full attention and energy of the population, the future holds other challenges of its own. The military takeover came just as the government was preparing for a crucial year for climate change, culminating in the COP26 UN Climate Conference in Glasgow, UK, in November. According to the 2020 Global Climate Risk Index, Myanmar is the world’s second most disaster-prone country, exposed to multiple climate-related hazards, including floods, cyclones, landslides, and droughts. Climate impacts will be felt through the whole economy and will touch all aspects of society. Building resilience will require the government to put climate change at the heart of its decision-making processes and development plans. Only an integrated approach that recognises the interconnected nature of climate risks will be effective.
Over the past decade, Myanmar has made some progress. The country had decreased its poverty rate from 48.2 per cent in 2005 to 24.8 per cent in 2017. Whilst this represents good progress, climate change threatens further development as it will impact all sectors in Myanmar, including agriculture, transport and energy.
Despite this progress, seasonal food insecurity remains a concern across Myanmar. Kundhavi Kadiresan, Assistant Director-General and Regional Representative for Asia and the Pacific of the UN Food and Agriculture Organization said in a speech in 2019 that “much work remains to be done for Myanmar to achieve SDG-2, the Sustainable Development Goal of zero hunger by 2030.” Climate impacts threaten to make this goal even harder to reach.
Increased rainfall during the wet season and decreased rainfall during the dry season may reduce agricultural production for key crops. More frequent extreme heat and higher average temperatures may also lead to crop failures, reduce productivity, or alter staple crops’ nutritional content. Despite Myanmar’s economic progress, its reliance on the agricultural sector makes over half of the labour force highly vulnerable to climate change impacts.
Productivity is linked to connectivity – efforts to improve transport connectivity in Myanmar present opportunities to boost trade, growth and regional integration. When transport systems are efficient and reliable, they provide economic and social opportunities and benefits that result in positive multiplier effects such as better accessibility to market and employment.
Research suggests that increased public spending on transport infrastructure over the next decade could reduce logistics costs by around 30 per cent and increase annual GDP by up to $40 bn. Energy, water and telecommunications infrastructure also face increased risk from physical damage and disruption caused by storms, floods and other hazards becoming more frequent and intense due to climate change.
Over-reliance on hydropower brings problems of its own. Heatwaves and an increasing number of extreme heat days could increase energy demand for air conditioning and industrial cooling. At the same time, droughts and changes in river flows due to erratic rainfall may affect hydropower generation. The costs of power outages will be felt across the whole economy, as industrial and commercial users rely on a continuous power supply.
Diversified power systems that draw on multiple forms of renewable energy, such as solar and wind power, are more resilient to climate impacts and can deliver energy closer to the communities that they serve. Nature-based solutions, such as reforestation, can also reduce the risk to hydropower by regulating water flow and stabilising the soil, preventing landslides and improving water quality.
When Myanmar is able to look towards the future once more, it must take steps to build its climate resilience to achieve its development objectives. An integrated approach is required to manage these interconnected challenges.
Climate change must be integrated into decision-making across all government departments. For it to be taken seriously, it must also be understood as a priority issue at the highest levels including within the Ministry of Planning and Finance. A broader process of engagement with Myanmar’s people is also required to ensure the country can move forward towards a future in which everyone can share. There is much reconciliation to do in the meantime to achieve this.
Cover image: Landslide in Myanmar. Climate change will make these events more common. By Sukun, 2017
Even if the world keeps its most ambitious promise and contains global heating to no more than 1.5°C above the global average normal for most of human history, the future looks distinctly menacing.
And if the world doesn’t quite get there, and annual average temperatures − already 1°C above the historic norm − rise to 2°C, then vast numbers of people in South Asia will find themselves exposed to deadly conditions at least three times as often.
As one team of researchers delivers this sober warning in one journal, another team, publishing the same day in a second journal, makes a simple prediction about the cost of ignoring such warnings altogether and continuing to burn ever more fossil fuels while destroying ever more tracts of the natural world.
“The need for adaptation over South Asia is today, not in the future. It’s not a choice any more.”
The outcome could be devastating for the countries of South Asia − India and Pakistan, Sri Lanka, Bangladesh and Burma among them − as the thermometer rises and the humidity increases. Researchers have warned for years that at a certain level of heat and humidity − meteorologists call it the “wet bulb” temperature − humans cannot labour productively.
That level is 32°C. At a wet bulb temperature of 35°C, humans cannot expect to survive for long. Some parts of the region have already felt such temperatures with a global average rise of just over 1°C: in 2015, at least 3500 people in Pakistan and India died from causes directly related to extreme heat.
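Wet-bulb temperature combines heat and humidity into a single measure of heat stress, which is why it is used as the threshold here. One widely cited empirical approximation, published by Stull in 2011 and valid near sea-level pressure for ordinary ranges of temperature and humidity, shows how quickly dangerous values are approached; the example inputs below are illustrative, not values taken from the study.

```python
# Stull (2011) empirical approximation of wet-bulb temperature from air
# temperature (deg C) and relative humidity (%), valid near sea-level pressure.
# The example inputs are illustrative, not values taken from the study.

import math

def wet_bulb_stull(temp_c, rh_percent):
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

for t, rh in [(35, 50), (40, 50), (40, 70), (45, 60)]:
    print(f"T={t}°C, RH={rh}% -> wet bulb ≈ {wet_bulb_stull(t, rh):.1f}°C")
```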
At 1.5°C the consequences could be significantly worse, and at 2°C, the scientists say, the hazard will have been amplified by a factor of 2.7: almost threefold. South Asia could later this century be home to more than two billion people: of the working population, 60% are now engaged in agricultural labour out of doors, and many millions live in crowded cities and in severe poverty. The region should prepare itself for a dangerously hot future.
“The future looks bad for South Asia,” said Moetasim Ashfaq, of the US Oak Ridge National Laboratory, one of the authors, “but the worst can be avoided by containing warming to as low as possible. The need for adaptation over South Asia is today, not in the future. It’s not a choice any more.”
Once again, the statisticians have been at work, and the answer they give in the journal Climate and Atmospheric Science, for a world that carries on with business as usual, is that it will be much worse, over a vaster region and for a very large number of people in the Middle East and North Africa.
Their calculations suggest that temperatures could reach as high as 56°C, and even more than 60°C in sweltering cities. Such heat extremes could endure for weeks.
So within the lifetimes of those alive today, about half the region's population − that is, about 600 million people − could by 2100 face extreme temperatures of around 56°C every summer.
The researchers put their message with unusual forthrightness in the headline: “Business-as-usual will lead to super- and ultra-extreme heatwaves in the Middle East and North Africa.” − Climate News Network
Extreme water events affecting water for drinking, cooking, washing and agriculture drive migration all over the world. Earlier this year, cyclone Eloise battered Mozambique, displacing 100,000 to 400,000 people and weakening the country’s infrastructure. People displaced by the storm were in need of food, hygiene kits and personal protective equipment (PPE).
Addressing water-driven migration will require research that crosses borders and research boundaries. As climate change continues to cause serious displacement and socio-political upheaval, governments must take action to minimize the effects on people vulnerable to migration.
The stakes of water-driven migration
Water-driven migration is a crucial challenge for people living in vulnerable and unstable regions. Water stress acts as a direct or indirect driver of conflict and migration. As water and climate extremes become worse, more people will face water crises and be forced to migrate.
For instance, take the famous case of the Aral Sea, which shrank to 9,830 square kilometres in 2017 from 55,700 square kilometres in the 1970s. More than 100,000 people migrated due to the collapse of agriculture, fisheries and tourism, and to increased illnesses such as tuberculosis and diarrhea.
Countries that have committed to the United Nations Sustainable Development Goals could address water-driven migration through SDG 16 (peace, justice and strong institutions). Policy can be aligned with SDG 16 along a seven-point strategy:
Understand how water crises influence migration: Causality is important in addressing migration. Land, water and human security issues could serve as a base for outlining a preventative outlook for new and emerging migration pathways.
Integrate diverse perspectives in water migration assessments: Water co-operation treaties must integrate under-represented, marginalized and racialized migrant voices. The United Nations University’s Institute for Water, Environment and Health has developed an approach to aggregate the causes and consequences of water-driven migration. This framework can help policy-makers interpret migration in diverse socio-ecological, socio-economic, and socio-political settings.
Assess water, migration and development practices through participatory, bottom-up and interdisciplinary approaches: Research should be participatory, applicable between disciplines and socially inclusive to complement scientific, descriptive methods. Nuanced facts of the diverse influences that shape migration can provide understanding to build resilience among vulnerable populations.
Manage data, information and knowledge: Researchers need updated data to examine how water crises are linked with human migration. To close the gaps, the UN has pointed to the need to improve capacity for data analysis within and between countries. Also, there must be stronger co-ordination at the state, regional and international levels to share best practices.
Policy-makers must prepare for the consequences of water crises by adopting improvements that address the concerns of those vulnerable to migration. The seven-point strategy calls for policy-makers to use strategic and integrated approaches between disciplines. Research that maps causes, risks and impacts at the local, regional and global levels can strengthen water migration policies.
With the increasing use of plastic, human influence has become an issue, as many types of (petrochemical) plastics do not biodegrade quickly the way natural or organic materials do. The largest single type of plastic pollution (about 10%), and the majority of large plastic in the oceans, is discarded and lost nets from the fishing industry. Waterborne plastic poses a serious threat to fish, seabirds, marine reptiles, and marine mammals, as well as to boats and coasts. Dumping, container spillages, litter washed into storm drains and waterways, and wind-blown landfill waste all contribute to this problem. This increase has caused serious negative effects such as ghost nets capturing animals, the concentration of plastic debris in massive marine garbage patches, and the accumulation of debris in the food chain.
In efforts to prevent and mediate marine debris and pollutants, laws and policies have been adopted internationally, with the UN including reduced marine pollution in Sustainable Development Goal 14 "Life Below Water". Depending on relevance to the issues and various levels of contribution, some countries have introduced more specified protection policies. Moreover, some non-profits, NGOs and government organizations are developing programs to collect and remove plastics from the ocean. However, in 2017 the UN estimated that by 2050 there will be more plastic than fish in the oceans, if substantial measures are not taken.
Types of debris
Researchers classify debris as either land- or ocean-based; in 1991, the United Nations Joint Group of Experts on the Scientific Aspects of Marine Pollution estimated that up to 80% of the pollution was land-based, with the remaining 20% originating from catastrophic events or maritime sources. More recent studies have found that more than half of plastic debris found on Korean shores is ocean-based.
A wide variety of man-made objects can become marine debris; plastic bags, balloons, buoys, rope, medical waste, glass and plastic bottles, cigarette stubs, cigarette lighters, beverage cans, polystyrene, lost fishing line and nets, and various wastes from cruise ships and oil rigs are among the items commonly found to have washed ashore. Six pack rings, in particular, are considered emblematic of the problem.
The US military used ocean dumping for unused weapons and bombs, including ordinary bombs, unexploded ordnance, landmines and chemical weapons, from at least 1919 until 1970. Millions of pounds of ordnance were disposed of in the Gulf of Mexico and off the coasts of at least 16 states, from New Jersey to Hawaii (although such material does not generally wash up onshore, and the US is not the only country that has practiced this).
Eighty percent of marine debris is plastic. Plastics accumulate because they typically do not biodegrade as many other substances do. They photodegrade on exposure to sunlight, although they do so only under dry conditions, as water inhibits photolysis. In a 2014 study using computer models, scientists from the group 5 Gyres estimated that 5.25 trillion pieces of plastic weighing 269,000 tons were dispersed in the oceans, in similar amounts in the Northern and Southern Hemispheres.
8.8 million metric tons of plastic waste are dumped in the world's oceans each year. Asia was the leading source of mismanaged plastic waste, with China alone accounting for 2.4 million metric tons. The majority of ocean plastic pollution is discarded and lost nets from the fishing industry.
It is estimated that there is a stock of 86 million tons of plastic marine debris in the worldwide ocean as of the end of 2013, assuming that 1.4% of global plastics produced from 1950 to 2013 has entered the ocean and has accumulated there.
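The stock figure follows from simple arithmetic on the stated assumption. The short check below works backwards from the 86 million tons and the 1.4% share to the cumulative production they imply, and uses the earlier 5 Gyres figures to estimate the average mass of a floating plastic piece; it relies only on numbers already quoted in this article.

```python
# Back-of-the-envelope checks using figures quoted above.

# 1) If 86 million tons of plastic in the ocean is 1.4% of all plastic produced
#    between 1950 and 2013, what cumulative production does that imply?
ocean_stock_tons = 86e6
fraction_entering_ocean = 0.014
implied_production_tons = ocean_stock_tons / fraction_entering_ocean
print(f"Implied 1950-2013 plastic production: {implied_production_tons / 1e9:.1f} billion tons")

# 2) Average mass of a floating plastic piece from the 2014 five-gyres estimate
#    (5.25 trillion pieces weighing 269,000 tons, treated here as metric tons).
pieces = 5.25e12
total_mass_grams = 269_000 * 1e6   # metric tons to grams
print(f"Average piece mass: {total_mass_grams / pieces * 1000:.0f} mg")
```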
The trade in plastic waste has been identified as the main cause of marine litter. Countries importing the waste plastics often lack the capacity to process all the material. As a result, the United Nations has imposed a ban on waste plastic trade unless it meets certain criteria.
Plastic waste has reached all the world's oceans. This plastic pollution harms an estimated 100,000 sea turtles and marine mammals and 1,000,000 sea creatures each year. Larger plastics (called "macroplastics") such as plastic shopping bags can clog the digestive tracts of larger animals when consumed by them and can cause starvation through restricting the movement of food, or by filling the stomach and tricking the animal into thinking it is full. Microplastics on the other hand harm smaller marine life. For example, pelagic plastic pieces in the center of our ocean’s gyres outnumber live marine plankton, and are passed up the food chain to reach all marine life. A 1994 study of the seabed using trawl nets in the North-Western Mediterranean around the coasts of Spain, France, and Italy reported mean concentrations of debris of 1,935 items per square kilometre. Plastic debris accounted for 77%, of which 93% was plastic bags.
Although an increasing number of studies have focused on plastic debris accumulating on coasts, in offshore surface waters, and in marine organisms that live in the upper levels of the water column, there is limited information on debris in the mesopelagic and deeper layers. The studies that do exist have relied on bottom sampling, video observation via remotely operated vehicles (ROVs), and submersibles, and they are mostly limited to one-off projects that do not extend long enough to show significant effects of deep-sea debris over time. Research thus far has shown that debris in the deep ocean is in fact shaped by anthropogenic activities, and plastic has frequently been observed in the deep sea, especially in areas offshore of heavily populated regions, such as the Mediterranean.
Litter made from diverse materials that are denser than surface water (such as glass, metals and some plastics) has been found to spread over the floor of seas and open oceans, where it can become entangled in corals and interfere with other sea-floor life, or even become buried under sediment, making clean-up extremely difficult, especially because it is dispersed over a far wider area than, say, a shipwreck. Plastics that would normally be buoyant can sink once phytoplankton adhere to them and other organic particles aggregate on them. Other oceanic processes that affect circulation, such as coastal storms and offshore convection, play a part in transferring large volumes of particles and debris. Submarine topographic features can also augment downwelling currents, leading to the retention of microplastics at certain locations. A Deep-sea Debris database compiled by the Global Oceanographic Data Center of the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), showing thirty years of photos and samples of marine debris gathered since 1983, was made public in 2017. From the 5,010 dives in the database, using both ROVs and deep-sea submersibles, 3,425 man-made debris items were counted. The two most significant types of debris were macro-plastic, making up 33% of the debris found – 89% of which was single-use – and metal, making up 26%. Plastic debris was even found at the bottom of the Mariana Trench, at a depth of 10,898 m, and plastic bags were found entangled in hydrothermal vent and cold seep communities.
The extent of microplastic pollution in the deep sea has yet to be fully determined, and as a result scientists are currently examining organisms and studying sediments to better understand this issue. A 2013 study surveyed four separate locations to represent a wider range of marine habitats at depths varying from 1,100 to 5,000 m. Three of the four locations had identifiable amounts of microplastics present in the top 1 cm layer of sediment. Core samples were taken from each spot and had their microplastics filtered out of the normal sediment. The plastic components were identified using micro-Raman spectroscopy; the results showed man-made pigments commonly used in the plastic industry. In 2016, researchers used an ROV to collect nine deep-sea organisms and core-top sediments. The nine deep-sea organisms were dissected and various organs were examined by the researchers on shore to identify microplastics with a microscope. The scientists found that six of the nine organisms examined contained microplastics, all of which were microfibers, located specifically in the gastrointestinal tract. Research performed by MBARI in 2013 off the west coast of North America and around Hawaii found that, out of all the debris observed in 22 years of VARS database video footage, one-third of the items were plastic bags. This debris was most common below 2000 m depth. A recent study that collected organisms and sediments in the Abyssopelagic Zone of the Western Pacific Ocean extracted materials from samples and discovered that poly(propylene-ethylene) copolymer (40.0%) and polyethylene terephthalate (27.5%) were the most commonly detected polymers.
Another study was conducted by collecting deep-sea sediment and coral specimens between 2011 and 2012 in the Mediterranean Sea, Southwest Indian Ocean, and Northeast Atlantic Ocean. Of the 12 coral and sediment samples taken, all were found with an abundance of microplastics. Rayon is not a plastic but was included in the study due to being a common synthetic material. It was found in all samples and comprised 56.9% of materials found, followed by polyester (53.4%), plastics (34.1%) and acrylic (12.4%). This study found that the amount of microplastics, in the form of microfibres, was comparable to that found in intertidal or subtidal sediments. A 2017 study had a similar finding – by surveying the Rockall Trough in the Northeast Atlantic Ocean at a depth of more than 2200 meters, microplastic fibers were identified at a concentration of 70.8 particles per cubic meter. This is comparable to amounts reported in surface waters. This study also looked at micropollution ingested by benthic invertebrates Ophiomusium lymani, Hymenaster pellucidus and Colus jeffreysianus and found that of the 66 organisms studied, 48% had ingested microplastics in quantities also comparable to coastal species. A recent review of 112 studies found the highest plastic ingestion in organisms collected in the Mediterranean and Northeast Indian Ocean with significant differences among plastic types ingested by different groups of animals, including differences in colour and the type of prevalent polymers. Overall, clear fibre microplastics are likely the most predominant types ingested by marine megafauna around the globe.
Sources of debris
The 10 largest emitters of oceanic plastic pollution worldwide are, from the most to the least, China, Indonesia, Philippines, Vietnam, Sri Lanka, Thailand, Egypt, Malaysia, Nigeria, and Bangladesh, largely through the rivers Yangtze, Indus, Yellow, Hai, Nile, Ganges, Pearl, Amur, Niger, and the Mekong, and accounting for "90 percent of all the plastic that reaches the world's oceans."
An estimated 10,000 containers are lost at sea by container ships each year, usually during storms. One spillage occurred in the Pacific Ocean in 1992, when thousands of rubber ducks and other toys (now known as the "Friendly Floatees") went overboard during a storm. The toys have since been found all over the world, providing a better understanding of ocean currents. Similar incidents have happened before, such as when the Hansa Carrier dropped 21 containers (with one notably containing buoyant Nike shoes). In 2007, the MSC Napoli beached in the English Channel, dropping hundreds of containers, most of which washed up on the Jurassic Coast, a World Heritage Site. A 2021 study in the journal Environmental Pollution, following the 2014 loss of a container carrying printer cartridges, calculated that some cartridges had dispersed at an average speed of between 6 cm and 13 cm per second.
In Halifax Harbour, Nova Scotia, 52% of items were generated by recreational use of an urban park, 14% from sewage disposal and only 7% from shipping and fishing activities. Around four fifths of oceanic debris is from rubbish blown onto the water from landfills, and urban runoff.
Some studies show that debris from ocean-based sources may be dominant in particular locations. For example, a 2016 study of Aruba found that debris found on the windward side of the island was predominantly marine debris from distant sources. In 2013, debris from six beaches in Korea was collected and analyzed: 56% was found to be "ocean-based" and 44% "land-based".
In the 1987 Syringe Tide, medical waste washed ashore in New Jersey after having been blown from Fresh Kills Landfill. On the remote sub-Antarctic island of South Georgia, fishing-related debris, approximately 80% plastics, are responsible for the entanglement of large numbers of Antarctic fur seals.
Marine litter is even found on the floor of the Arctic ocean.
Five subtropical gyres
Despite the abundance of plastic being deposited into the ocean, its distribution across the oceans was relatively unknown. Accordingly, a study was conducted in 2014 to model the current magnitude of surface-level pollution within the oceans. The project concluded by identifying five areas across all the oceans where the majority of plastic was concentrated.
The researchers collected a total of 3,070 samples across the world to identify hot spots of surface-level plastic pollution. The pattern of distribution closely mirrored models of ocean currents, with the North Pacific Gyre, or Great Pacific Garbage Patch, having the highest density of plastic accumulation. The other four garbage patches, listed in order of decreasing size, are the North Atlantic garbage patch between North America and Africa, the South Atlantic garbage patch between eastern South America and the tip of Africa, the South Pacific garbage patch west of South America, and the Indian Ocean garbage patch east of southern Africa.
Great Pacific Garbage Patch
Once waterborne, debris becomes mobile. Flotsam can be blown by the wind, or follow the flow of ocean currents, often ending up in the middle of oceanic gyres where currents are weakest. The Great Pacific Garbage Patch is one such example of this, comprising a vast region of the North Pacific Ocean rich with anthropogenic wastes. Estimated to be double the size of Texas, the area contains more than 3 million tons of plastic.
Patches may be large enough to be viewed by satellite. For example, when Malaysia Airlines Flight MH370 disappeared in 2014, satellites scanning the ocean's surface for any sign of it repeatedly came across floating garbage instead of debris from the plane. The gyre contains approximately six pounds of plastic for every pound of plankton.
Many animals that live on or in the sea consume flotsam by mistake, as it often looks similar to their natural prey. Bulky plastic debris may become permanently lodged in the digestive tracts of these animals, blocking the passage of food and causing death through starvation or infection. Tiny floating plastic particles also resemble zooplankton, which can lead filter feeders to consume them and cause them to enter the ocean food chain. In samples taken from the North Pacific Gyre in 1999 by the Algalita Marine Research Foundation, the mass of plastic exceeded that of zooplankton by a factor of six.
Toxic additives used in plastic manufacturing can leach into the surroundings when the plastic is exposed to water. Waterborne hydrophobic pollutants collect and magnify on the surface of plastic debris, thus making plastic more deadly in the ocean than it would be on land. Hydrophobic contaminants bioaccumulate in fatty tissues, biomagnifying up the food chain and putting pressure on apex predators and humans. Some plastic additives disrupt the endocrine system when consumed; others can suppress the immune system or decrease reproductive rates. Bisphenol A (BPA) is a famous example of a plasticizer produced in high volumes for food packaging, from which it can leach into food, leading to human exposure. As an estrogen and glucocorticoid receptor agonist, BPA interferes with the endocrine system and is associated with increased fat in rodents.
The hydrophobic nature of plastic surfaces stimulates rapid formation of biofilms, which support a wide range of metabolic activities, and drive succession of other micro- and macro-organisms.
Concern among experts has grown since the 2000s that some organisms have adapted to live on floating plastic debris, allowing them to disperse with ocean currents and thus potentially become invasive species in distant ecosystems. Research in 2014 in the waters around Australia confirmed a wealth of such colonists, even on tiny flakes, and also found thriving ocean bacteria eating into the plastic to form pits and grooves. These researchers showed that "plastic biodegradation is occurring at the sea surface" through the action of bacteria, and noted that this is congruent with a new body of research on such bacteria. Their finding is also congruent with the other major research undertaken in 2014, which sought to answer the riddle of the overall lack of build up of floating plastic in the oceans, despite ongoing high levels of dumping. Plastics were found as microfibres in core samples drilled from sediments at the bottom of the deep ocean. The cause of such widespread deep sea deposition has yet to be determined.
Not all anthropogenic artifacts placed in the oceans are harmful. Iron and concrete structures typically do little damage to the environment because they generally sink to the bottom and become immobile, and at shallow depths they can even provide scaffolding for artificial reefs. Ships and subway cars have been deliberately sunk for that purpose.
The ingestion of plastic by marine organisms has now been established at full ocean depth. Microplastic was found in the stomachs of hadal amphipods sampled from the Japan, Izu-Bonin, Mariana, Kermadec, New Hebrides and the Peru-Chile trenches. The amphipods from the Mariana Trench were sampled at 10,890 m and all contained microfibres.
Techniques for collecting and removing marine (or riverine) debris include the use of debris skimmer boats. Devices such as these can be used where floating debris presents a danger to navigation. For example, the US Army Corps of Engineers removes 90 tons of "drifting material" from San Francisco Bay every month. The Corps has been doing this work since 1942, when a seaplane carrying Admiral Chester W. Nimitz collided with a piece of floating debris and sank, costing the life of its pilot. The Ocean Cleanup has also created a vessel for cleaning up riverine debris, called the Interceptor. Once debris becomes "beach litter", collection by hand and specialized beach-cleaning machines are used to gather the debris.
In June 2019, Ocean Voyages Institute conducted a cleanup in the North Pacific Subtropical Convergence Zone using GPS trackers and existing maritime equipment, setting the record for the largest mid-ocean cleanup accomplished in the North Pacific Gyre and removing over 84,000 pounds of polymer nets and consumer plastic trash from the ocean.
In May and June 2020, Ocean Voyages Institute conducted a cleanup expedition in the gyre and set a new record for the largest mid-ocean cleanup accomplished in the North Pacific Gyre, removing over 170 tons (340,000 pounds) of consumer plastics and ghost nets from the ocean. Utilizing custom-designed GPS satellite trackers deployed by vessels of opportunity, Ocean Voyages Institute is able to accurately track ghost nets and send cleanup vessels to remove them. The GPS tracker technology is now being combined with satellite imagery to locate plastic trash and ghost nets in real time, which should greatly increase cleanup capacity and efficiency.
There are also projects that encourage fishing boats to bring ashore any litter they accidentally haul up while fishing.
Elsewhere, "trash traps" are installed on small rivers to capture waterborne debris before it reaches the sea. For example, South Australia's Adelaide operates a number of such traps, known as "trash racks" or "gross pollutant traps" on the Torrens River, which flows (during the wet season) into Gulf St Vincent.
At sea, the removal of artificial debris (i.e. plastics) is still in its infancy. However, some projects have used ships with nets (Ocean Voyages Institute/Kaisei in 2009 and 2010, and New Horizon in 2009) to catch some plastics, primarily for research purposes. There is also Bluebird Marine Systems' SeaVax, which was solar- and wind-powered and had an onboard shredder and cargo hold. The Sea Cleaners' Manta ship is similar in concept.
Another method to gather artificial litter has been proposed by The Ocean Cleanup's Boyan Slat. He suggested using platforms with arms to gather the debris, situated inside the current of gyres. The SAS Ocean Phoenix ship is somewhat similar in design.
Another issue is that removing marine debris from our oceans can potentially cause more harm than good. Cleaning up microplastics could also accidentally remove plankton, which form the base of the marine food chain and are responsible for over half of the photosynthesis on Earth. One of the most efficient and cost-effective ways to help reduce the amount of plastic entering our oceans is to avoid single-use plastics and bottled drinks such as bottled water, use reusable shopping bags, and buy products with reusable packaging.
Laws and treaties
The ocean is a global commons, so the negative externalities of marine debris are not usually experienced by the producer. In the 1950s, the importance of government intervention in marine pollution protocols was recognized at the First Conference on the Law of the Sea.
Ocean dumping is controlled by international law, including:
- The London Convention (1972) – a United Nations agreement to control ocean dumping. This Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter consisted of twenty-two articles addressing the obligations of contracting parties. Its three annexes defined many compounds, substances, and materials that are unacceptable to deposit into the ocean, such as mercury compounds, lead, cyanides, and radioactive wastes.
- MARPOL 73/78 – a convention designed to minimize pollution of the seas, including dumping, oil, and exhaust pollution. The original MARPOL convention did not cover dumping from ships, but it was revised in 1978 to include restrictions on marine vessels.
- UNCLOS – the United Nations Convention on the Law of the Sea, signed in 1982 and in force since 1994, emphasized the importance of protecting the entire ocean rather than only specified coastal regions. UNCLOS imposed restrictions on pollution, including a stress on land-based sources.
One of the earliest anti-dumping laws was Australia's Beaches, Fishing Grounds and Sea Routes Protection Act 1932, which prohibited the discharge of "garbage, rubbish, ashes or organic refuse" from "any vessel in Australian waters" without prior written permission from the federal government. It also required permission for scuttling. The act was passed in response to large amounts of garbage washing up on the beaches of Sydney and Newcastle from vessels outside the reach of local governments and the New South Wales government. It was repealed and replaced by the Environment Protection (Sea Dumping) Act 1981, which gave effect to the London Convention.
In 1972 and 1974, conventions were held in Oslo and Paris respectively, leading to the OSPAR Convention, an international treaty controlling marine pollution in the north-east Atlantic Ocean. The Barcelona Convention protects the Mediterranean Sea. The Water Framework Directive of 2000 is a European Union directive committing EU member states to free inland and coastal waters from human influence. In the United Kingdom, the Marine and Coastal Access Act 2009 is designed to "ensure clean, healthy, safe, productive and biologically diverse oceans and seas, by putting in place better systems for delivering sustainable development of marine and coastal environment". In 2019, the EU Parliament voted for an EU-wide ban on single-use plastic products such as plastic straws, cutlery, plates and drink containers, polystyrene food and drink containers, plastic drink stirrers, plastic carrier bags, and cotton buds. The law will take effect in 2021.
United States law
In the waters of the United States, there have been many observed consequences of pollution, including hypoxic zones, harmful algal blooms, and threatened species. In 1972, the United States Congress passed the Ocean Dumping Act, giving the Environmental Protection Agency power to monitor and regulate the dumping of sewage sludge, industrial waste, radioactive waste, and biohazardous materials into the nation's territorial waters. The Act was amended sixteen years later to include medical wastes. It is illegal to dispose of any plastic in US waters.
Property law, admiralty law and the law of the sea may be of relevance when lost, mislaid, and abandoned property is found at sea. Salvage law rewards salvors for risking life and property to rescue the property of another from peril. On land the distinction between deliberate and accidental loss led to the concept of a "treasure trove". In the United Kingdom, shipwrecked goods should be reported to a Receiver of Wreck, and if identifiable, they should be returned to their rightful owner.
A large number of groups and individuals are active in preventing marine debris or educating the public about it. For example, 5 Gyres is an organization aimed at reducing plastics pollution in the oceans, and was one of two organizations that recently researched the Great Pacific Garbage Patch. Heal the Bay is another organization, focused on protecting California's Santa Monica Bay by sponsoring beach cleanup programs along with other activities. Marina DeBris is an artist who focuses most of her recent work on educating people about beach trash. Interactive sites like Adrift demonstrate where marine plastic is carried, over time, on the world's ocean currents.
On 11 April 2013, to raise awareness, artist Maria Cristina Finucci founded the Garbage Patch State at UNESCO headquarters in Paris, in front of Director-General Irina Bokova. It was the first of a series of events under the patronage of UNESCO and of the Italian Ministry of the Environment.
Forty-eight plastics manufacturers from 25 countries, members of the Global Plastics Associations for Solutions on Marine Litter, have pledged to help prevent marine debris and to encourage recycling.
Marine debris is a problem created by all of us, not only those in coastal regions. Ocean debris can come from as far away as Nebraska (close to the North American pole of inaccessibility). The places that see the most damage are often not the places that produce the pollution. For ocean pollution, much of the trash may come from inland states, where people may never see the ocean and thus may never put any thought into protecting it. The problem continues to grow in tandem with plastics usage and disposal. Steps can be taken to prevent the movement of inland plastics into the oceans.
Plastic debris from inland states comes from two main sources: ordinary litter, and materials from open dumps and landfills that blow or wash into inland waterways and wastewater outflows. The refuse finds its way from inland waterways, rivers, streams, and lakes to the ocean. Though ocean and coastal area cleanups are important, it is crucial to address plastic waste that originates from inland and landlocked states.
At the systems level, there are various ways to reduce the amount of debris entering our waterways:
- Improve waste transportation to and from sites by utilizing closed container storage and shipping
- Restrict open waste facilities near waterways
- Promote the use of refuse-derived fuels. Used plastic with low residual value often does not get recycled and is more likely to leak into the ocean. Turning these unwanted plastics, which would otherwise sit in landfills, into refuse-derived fuels allows further use; they can serve as supplementary fuel at power plants
- Improve recovery rates for plastic (in 2012, the United States generated 11.46 million tons of plastic waste, of which only 6.7% was recovered; see the rough figures after this list)
- Adopt Extended Producer Responsibility strategies to make producers responsible for product management when products and their packaging become waste; encourage reusable product design to minimize negative impacts on the environment.
- Ban the use of cigarette filters and establish a deposit-system for e-cigarettes (similar to the one used for propane canisters)
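For a sense of scale, the recovery figure quoted in the list above works out roughly as follows (simple arithmetic on the cited numbers, shown only as an illustration):
\[
11.46\ \text{million tons} \times 0.067 \approx 0.77\ \text{million tons recovered,}
\]
\[
11.46 - 0.77 \approx 10.7\ \text{million tons left unrecovered that year.}
\]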
As consumers, there are things we can do to help reduce the amount of plastic entering our waterways:
- Reduce usage of single-use plastics such as plastic bags, straws, water bottles, utensils and coffee cups by replacing them with reusable products such as reusable bags, metal straws, reusable water bottles, bamboo toothbrushes and reusable coffee cups
- Avoid microbeads, which are found in face scrubs, toothpastes and body washes
- Participate in a river or lake beach clean up
- Support municipality bans and other legislation regulating single-use plastics and plastic waste
- If you smoke, avoid using filtered cigarettes, and dispose of e-cigarettes responsibly
- Continue to recycle, recycle, recycle
Though awareness of ocean debris mitigation appears to be lower in inland states than in coastal states, some organizations in the United States are already working to improve this. The Colorado Ocean Coalition was formed in 2011 with the goal of impressing upon inland citizens that they don't need to see the ocean to care about its health. It has since grown beyond one state and now forms the Inland Ocean Coalition, with the mission of promoting knowledge and awareness of how inland states contribute to pollution of the ocean, aiming to shatter the 'out of sight, out of mind' mentality that often applies in this region.
“Those who live among mountains, rivers and inland cities have a direct impact on the cycle of life in the ocean,” reads the IOCO website. "The changes we need to make to address the largest threats facing our seas—lowering carbon emissions, reducing trash and pollution, eating sustainable seafood, safeguarding watersheds, promoting marine protected areas (MPAs)—can happen from anywhere in the world.”
This organization has chapters in many inland US states and promotes programs like watershed cleanups, youth-centered education, and decreasing the use of plastic.
Plastic-to-fuel conversion strategy
The Clean Oceans Project (TCOP) promotes conversion of the plastic waste into valuable liquid fuels, including gasoline, diesel and kerosene, using plastic-to-fuel conversion technology developed by Blest Co. Ltd., a Japanese environmental engineering company. TCOP plans to educate local communities and create a financial incentive for them to recycle plastic, keep their shorelines clean, and minimize plastic waste.
In 2019, a research group led by scientists at Washington State University found a way to turn plastic waste products into jet fuel.
Also, the company "Recycling Technologies", has come up with a simple process that can convert plastic waste to an oil called Plaxx. The company is led by a team of engineers from the university of Warwick.
- "Campaigners have identified the global trade in plastic waste as a main culprit in marine litter, because the industrialised world has for years been shipping much of its plastic “recyclables” to developing countries, which often lack the capacity to process all the material."
- "The new UN rules will effectively prevent the US and EU from exporting any mixed plastic waste, as well plastics that are contaminated or unrecyclable — a move that will slash the global plastic waste trade when it comes into effect in January 2021."
- Gary Strieker (28 July 1998). "Pollution invades small Pacific island". CNN. Archived from the original on 31 December 2007. Retrieved 1 April 2008.
- Graham, Rachel (10 July 2019). "Euronews Living | Watch: Italy's answer to the problem with plastic". living.
- "Dumped fishing gear is biggest plastic polluter in ocean, finds report". the Guardian. 6 November 2019. Retrieved 9 April 2021.
- "Facts about marine debris". US NOAA. Archived from the original on 13 February 2009. Retrieved 10 April 2008.
- "FEATURE: UN's mission to keep plastics out of oceans and marine life". UN News. 27 April 2017. Retrieved 8 December 2020.
- Sheavly, S. B.; Register, K. M. (2007). "Marine Debris & Plastics: Environmental Concerns, Sources, Impacts and Solutions". Journal of Polymers and the Environment. 15 (4): 301–305. doi:10.1007/s10924-007-0074-3. S2CID 136943560.
- Weiss, K.R. (2017). "The pileup of plastic debris is more than ugly ocean litter". Knowable Magazine. doi:10.1146/knowable-120717-211902. Archived from the original on 9 December 2017.
- Jang, Yong Chang; Lee, Jongmyoung; Hong, Sunwook; Lee, Jong Su; Shim, Won Joon; Song, Young Kyoung (6 July 2014). "Sources of plastic marine debris on beaches of Korea: More from the ocean than the land". Ocean Science Journal. 49 (2): 151–162. Bibcode:2014OSJ....49..151J. doi:10.1007/s12601-014-0015-8. S2CID 85429593.
- Cecil Adams (16 July 1999). "Should you cut up six-pack rings so they don't choke sea birds?". The Straight Dope. Archived from the original on 6 October 2008. Retrieved 11 August 2008.
- Edgar B. Herwick III (29 July 2015). "Explosive Beach Objects-- Just Another Example Of Massachusetts' Charm". WGBH news. PBS. Archived from the original on 3 August 2015. Retrieved 4 August 2015.
- "Military Ordinance [sic] Dumped in Gulf of Mexico". Maritime Executive. 3 August 2015. Archived from the original on 7 August 2015. Retrieved 4 August 2015.
- Alan Weisman (2007). The World Without Us. St. Martin's Thomas Dunne Books. pp. 112–128. ISBN 978-0-312-34729-1.
- Alan Weisman (Summer 2007). "Polymers Are Forever". Orion magazine. Archived from the original on 16 May 2008. Retrieved 1 July 2008.
- "5 Trillion Pieces of Ocean Trash Found, But Fewer Particles Than Expected". 13 December 2014. Archived from the original on 5 February 2015. Retrieved 25 January 2015.
- Esteban, Michelle (2002) Tracking Down Ghost Nets
- "'Ghost fishing' killing seabirds". BBC News. 28 June 2007. Retrieved 1 April 2008.
- Robert Lee Hotz (13 February 2015). "Asia Leads World in Dumping Plastic in Seas". Wall Street Journal. Archived from the original on 23 February 2015.
- Jang, Y. C., Lee, J., Hong, S., Choi, H. W., Shim, W. J., & Hong, S. Y. 2015. Estimating the global inflow and stock of plastic marine debris using material flow analysis: a preliminary approach. Journal of the Korean Society for Marine Environment and Energy, 18(4), 263–273.
- Clive Cookson 2019. sfn error: no target: CITEREFClive_Cookson2019 (help)
- "A Ban on Plastic Bags Will Save the Lives of California's Endangered Leatherback Sea Turtles". Sea Turtle Restoration Project. 2010. Archived from the original on 28 November 2010.
- "Marine Litter: An analytical overview" (PDF). United Nations Environment Programme. 2005. Archived (PDF) from the original on 17 July 2007. Retrieved 1 August 2008.
- Moore, C.J; Moore, S.L; Leecaster, M.K; Weisberg, S.B (December 2001). "A Comparison of Plastic and Plankton in the North Pacific Central Gyre". Marine Pollution Bulletin. 42 (12): 1297–1300. doi:10.1016/S0025-326X(01)00114-X. PMID 11827116.
- "What's resin pellet? :: International Pellet Watch". www.pelletwatch.org. Retrieved 30 November 2017.
- Hammer, Jort; Kraak, Michiel H. S.; Parsons, John R. (2012). Reviews of Environmental Contamination and Toxicology. Reviews of Environmental Contamination and Toxicology. 220. Springer, New York, NY. pp. 1–44. doi:10.1007/978-1-4614-3414-6_1. ISBN 9781461434139. PMID 22610295.
- Chiba, S., Saito, H., Fletcher, R., Yogi, T., Kayo, M., Miyagi, S., ... & Fujikura, K. (2018). Human footprint in the abyss: 30 year records of deep-sea plastic debris. Marine Policy, 96, 204–212.
- Goodman, Alexa J.; Walker, Tony R.; Brown, Craig J.; Wilson, Brittany R.; Gazzola, Vicki; Sameoto, Jessica A. (1 January 2020). "Benthic marine debris in the Bay of Fundy, eastern Canada: Spatial distribution and categorization using seafloor video footage". Marine Pollution Bulletin. 150: 110722. doi:10.1016/j.marpolbul.2019.110722. PMID 31733907.
- Woodall, L. C., Sanchez-Vidal, A., Canals, M., Paterson, G. L., Coppock, R., Sleight, V., ... & Thompson, R. C. (2014). The deep sea is a major sink for microplastic debris. Royal Society open science, 1(4), 140317. https://doi.org/10.1098/rsos.140317.
- Zhang, Dongdong; Liu, Xidan; Huang, Wei; Li, Jingjing; Wang, Chunsheng; Zhang, Dongsheng; Zhang, Chunfang (29 December 2015). "Microplastic pollution in deep-sea sediments and organisms of the Western Pacific Ocean". Environmental Pollution. 259: 113948. doi:10.1016/j.envpol.2020.113948. PMID 32023798.
- Courtene-Jones, Winnie; Quinn, Brian; Gary, Stefan F.; Mogg, Andrew O.M.; Narayanaswamy, Bhavani E. (12 August 2017). "Microplastic pollution identified in deep-sea water and ingested by benthic invertebrates in the Rockall Trough, North Atlantic Ocean". Environmental Pollution. 231 (Pt 1): 271–280. doi:10.1016/j.envpol.2017.08.026. PMID 28806692.
- López‐Martínez, Sergio; Morales‐Caselles, Carmen; Kadar, Julianna; Rivas, Marga L. (2021). "Overview of global status of plastic presence in marine vertebrates". Global Change Biology. 27 (4): 728–737. Bibcode:2021GCBio..27..728L. doi:10.1111/gcb.15416. ISSN 1365-2486. PMID 33111371.
- Van Cauwenberghe, Lisbeth; Vanreusel, Ann; Mees, Jan; Janssen, Colin R. (1 November 2013). "Microplastic pollution in deep-sea sediments". Environmental Pollution. 182: 495–499. doi:10.1016/j.envpol.2013.08.013. PMID 24035457.
- Taylor, M. L.; Gwinnett, C.; Robinson, L. F.; Woodall, L. C. (30 September 2016). "Plastic microfibre ingestion by deep-sea organisms". Scientific Reports. 6 (1): 33997. Bibcode:2016NatSR...633997T. doi:10.1038/srep33997. ISSN 2045-2322. PMC 5043174. PMID 27687574.
- "MBARI research shows where trash accumulates in the deep sea". MBARI. 5 June 2013. Retrieved 2 November 2020.
- Courtene-Jones, W., Quinn, B., Gary, S. F., Mogg, A. O., & Narayanaswamy, B. E. (2017). Microplastic pollution identified in deep-sea water and ingested by benthic invertebrates in the Rockall Trough, North Atlantic Ocean. Environmental Pollution, 231, 271–280. https://doi.org/10.1016/j.envpol.2017.08.026.
- Jambeck, Jenna R.; Geyer, Roland; Wilcox, Chris (12 February 2015). "Plastic waste inputs from land into the ocean" (PDF). Science. 347 (6223): 768–71. Bibcode:2015Sci...347..768J. doi:10.1126/science.1260352. PMID 25678662. S2CID 206562155. Retrieved 28 August 2018.
- Christian Schmidt; Tobias Krauth; Stephan Wagner (11 October 2017). "Export of Plastic Debris by Rivers into the Sea" (PDF). Environmental Science & Technology. 51 (21): 12246–12253. Bibcode:2017EnST...5112246S. doi:10.1021/acs.est.7b02368. PMID 29019247.
The 10 top-ranked rivers transport 88–95% of the global load into the sea
- Harald Franzen (30 November 2017). "Almost all plastic in the ocean comes from just 10 rivers". Deutsche Welle. Retrieved 18 December 2018.
It turns out that about 90 percent of all the plastic that reaches the world's oceans gets flushed through just 10 rivers: The Yangtze, the Indus, Yellow River, Hai River, the Nile, the Ganges, Pearl River, Amur River, the Niger, and the Mekong (in that order).
- Janice Podsada (19 June 2001). "Lost Sea Cargo: Beach Bounty or Junk?". National Geographic News. Archived from the original on 6 April 2008. Retrieved 8 April 2008.
- Marsha Walton (28 May 2003). "How sneakers, toys and hockey gear help ocean science". CNN. Archived from the original on 8 April 2008. Retrieved 8 April 2008.
- "Scavengers take washed-up goods". BBC News. 22 January 2007. Archived from the original on 9 February 2008. Retrieved 8 April 2008.
- Wilson, Jonathan (29 April 2021). "Ship's lost plastic cargo washes up on shores from Florida to Norway". E&T Magazine. Retrieved 1 May 2021.
- Walker, T.R.; Grant, J.; Archambault, M-C. (2006). "Accumulation of marine debris on an intertidal beach in an urban park (Halifax Harbour, Nova Scotia)" (PDF). Water Quality Research Journal of Canada. 41 (3): 256–262. doi:10.2166/wqrj.2006.029.
- "Plastic Debris: from Rivers to Sea" (PDF). Algalita Marine Research Foundation. Archived from the original (PDF) on 19 August 2008. Retrieved 29 May 2008.
- Scisciolo, Tobia (2016). "Beach debris on Aruba, Southern Caribbean: Attribution to local land-based and distal marine-based sources". Marine Pollution Bulletin. 106 (–2): 49–57. doi:10.1016/j.marpolbul.2016.03.039. PMID 27039956.
- Yong, C (2013). "Sources of plastic marine debris on beaches of Korea: More from the ocean than the land". Ocean Science Journal. 49 (2): 151–162. Bibcode:2014OSJ....49..151J. doi:10.1007/s12601-014-0015-8. S2CID 85429593.
- Alfonso Narvaez (8 December 1987). "New York City to Pay Jersey Town $1 Million Over Shore Pollution". The New York Times. Archived from the original on 11 March 2009. Retrieved 25 June 2008.
- "A Summary of the Proposed Comprehensive Conservation and Management Plan". New York-New Jersey Harbor Estuary Program. February 1995. Archived from the original on 24 May 2005. Retrieved 25 June 2008.
- Walker, T. R.; Reid, K.; Arnould, J. P. Y.; Croxall, J. P. (1997), "Marine debris surveys at Bird Island, South Georgia 1990–1995", Marine Pollution Bulletin, 34 (1): 61–65, doi:10.1016/S0025-326X(96)00053-7.
- "Plastic trash invades arctic seafloor". Archived from the original on 25 October 2012.
- Cózar, Andrés; Echevarría, Fidel; González-Gordillo, J. Ignacio; Irigoien, Xabier; Úbeda, Bárbara; Hernández-León, Santiago; Palma, Álvaro T.; Navarro, Sandra; García-de-Lomas, Juan; Ruiz, Andrea; Fernández-de-Puelles, María L. (15 July 2014). "Plastic debris in the open ocean". Proceedings of the National Academy of Sciences. 111 (28): 10239–10244. Bibcode:2014PNAS..11110239C. doi:10.1073/pnas.1314705111. ISSN 0027-8424. PMC 4104848. PMID 24982135.
- "Congress acts to clean up the ocean – A garbage patch in the Pacific is at least triple the size of Texas, but some estimates put it larger than the continental United States". The Christian Science Monitor. Archived from the original on 5 October 2008. Retrieved 10 October 2008.
- "Great Pacific Garbage Patch". Marine Debris Division – Office of Response and Restoration. NOAA. 11 July 2013. Archived from the original on 17 April 2014. Retrieved 30 November 2018.
- Parker, Laura. "With Millions of Tons of Plastic in Oceans, More Scientists Studying Impact." National Geographic. National Geographic Society, 13 June 2014. Web. 3 April 2016.
- "Great Pacific garbage patch: Plastic turning vast area of ocean into ecological nightmare". Santa Barbara News-Press. Archived from the original on 12 September 2015. Retrieved 13 October 2008.
- Kenneth R. Weiss (2 August 2006). "Plague of Plastic Chokes the Seas". Los Angeles Times. Archived from the original on 23 September 2008. Retrieved 1 April 2008.
- Charles Moore (November 2003). "Across the Pacific Ocean, plastics, plastics, everywhere". Natural History. Archived from the original on 25 April 2016. Retrieved 12 July 2016.
- "Plastics and Marine Debris". Algalita Marine Research Foundation. 2006. Archived from the original on 14 July 2010. Retrieved 1 July 2008.
- Engler, Richard E. (20 November 2012). "The Complex Interaction between Marine Debris and Toxic Chemicals in the Ocean". Environmental Science & Technology. 46 (22): 12302–12315. Bibcode:2012EnST...4612302E. doi:10.1021/es3027105. PMID 23088563. S2CID 4988375.
- Wassenaar, Pim Nicolaas Hubertus; Trasande, Leonardo; Legler, Juliette (3 October 2017). "Systematic Review and Meta-Analysis of Early-Life Exposure to Bisphenol A and Obesity-Related Outcomes in Rodents". Environmental Health Perspectives. 125 (10): 106001. doi:10.1289/EHP1233. PMC 5933326. PMID 28982642.
- Reisser, Julia; Shaw, Jeremy; Hallegraeff, Gustaaf; Proietti, Maira; Barnes, David K. A; Thums, Michele; Wilcox, Chris; Hardesty, Britta Denise; Pattiaratchi, Charitha (18 June 2014). "Millimeter-Sized Marine Plastics: A New Pelagic Habitat for Microorganisms and Invertebrates". PLOS ONE. 9 (6): e100289. Bibcode:2014PLoSO...9j0289R. doi:10.1371/journal.pone.0100289. PMC 4062529. PMID 24941218.
- Davey, M. E.; O'toole, G. A. (1 December 2000). "Microbial Biofilms: from Ecology to Molecular Genetics". Microbiology and Molecular Biology Reviews. 64 (4): 847–867. doi:10.1128/mmbr.64.4.847-867.2000. PMC 99016. PMID 11104821.
- "Ocean Debris: Habitat for Some, Havoc for Environment". National Geographic. 23 April 2007. Archived from the original on 7 August 2008. Retrieved 1 August 2008.
- "Rubbish menaces Antarctic species". BBC News. 24 April 2002. Archived from the original on 28 February 2009. Retrieved 1 August 2008.
- "Where Has All the (Sea Trash) Plastic Gone?". National Geographic. 18 December 2014. Archived from the original on 4 February 2015. Retrieved 26 January 2015.
- Ron Hess; Denis Rushworth; Michael Hynes; John Peters (2 August 2006). "Chapter 5: Reefing" (PDF). Disposal Options for Ships. Rand Corporation. Archived from the original (PDF) on 29 June 2007. Retrieved 3 May 2008.
- Miller, Shawn (19 October 2014). "Crabs With Beach Trash Homes – Okinawa, Japan". Okinawa Nature Photography. Archived from the original on 14 October 2017. Retrieved 14 October 2017.
- Jamieson, A. J.; Brooks, L. S. R.; Reid, W. D. K.; Piertney, S. B.; Narayanaswamy, B. E.; Linley, T. D. (27 February 2019). "Microplastics and synthetic particles ingested by deep-sea amphipods in six of the deepest marine ecosystems on Earth". Royal Society Open Science. 6 (2): 180667. Bibcode:2019RSOS....680667J. doi:10.1098/rsos.180667. PMC 6408374. PMID 30891254.
- "Debris collection onsite after Bay Bridge struck". US Army Corps of Engineers. Archived from the original on 9 January 2009. Retrieved 7 February 2009.
- Turner, Emily; Steimle, Susie (26 June 2019). "Great Pacific Garbage Patch Cleanup Work Tackled By Sausalito Non-Profit". sanfrancisco.cbslocal.com. Retrieved 23 May 2021.
- Mandel, Kyla (5 September 2020). "Don't Call It A Garbage Patch: The Truth About Cleaning Up Ocean Plastics". Retrieved 23 May 2021.
- David Helvarg (27 December 2019). "Untangling the Problem of Ocean Plastic". Sierra.
- "Fishing For Litter". FishingForLitter.org.uk.
- "Trash Racks". Adelaide and Mount Lofty Ranges Natural Resources Management Board. Archived from the original on 19 July 2008. Retrieved 7 February 2009.
- "10 Tips for Divers to Protect the Ocean Planet". Archived from the original on 23 September 2014.
- "Solar powered SeaVax hoover concept to clean up the oceans". The International Institute of Marine Surveying (IIMS). 14 March 2016.
- "Solar-Powered Vacuum Could Suck Up 24,000 Tons of Ocean Plastic Every Year". EcoWatch. 19 February 2016.
- "Yvan Bourgnon : "Au large, le Manta pourra ramasser 600 m3 de déchets plastiques"". Libération.fr. 23 April 2018.
- "Methods for collecting plastic litter at sea". MarineDebris.Info. Archived from the original on 24 October 2013.
- "The Great Pacific Garbage Patch". Sierra Club. 6 December 2016.
- Poizat, Christophe J. (3 May 2016). "OFFICIAL LAUNCH OF OCEAN PHOENIX PROJECT". Medium.
- Wabnitz, Colette; Nichols, Wallace J. (2010). "Plastic pollution: An ocean emergency". Marine Turtle Newsletter. 129: 1–4.
- Daily Telegraph 28 September 2017, page 31
- "Lost fishing gear being recovered from Scapa Flow – The Orcadian Online". orcadian.co.uk. 25 September 2017. Archived from the original on 11 December 2017. Retrieved 1 May 2018.
- Crowley. "GHOST FISHING UK TO BE CHARGED FOR CLEANUPS". divemagazine.co.uk. Archived from the original on 29 September 2017. Retrieved 1 May 2018.
- Leous, Justin P.; Parry, Neal B. (2005). "Who is Responsible for Marine Debris? The International Politics of Cleaning Our Oceans". Journal of International Affairs. 59 (1): 257–269. JSTOR 24358243.
- "London Convention". US EPA. Archived from the original on 9 March 2009. Retrieved 29 May 2008.
- "Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter". The American Journal of International Law. 67 (3): 626–636. 1973. doi:10.2307/2199200. JSTOR 2199200.
- "International Convention for the Prevention of Pollution from Ships (MARPOL)". www.imo.org. Retrieved 23 July 2015.
- Tharpes, Yvonne L. (1989). "International Environmental Law: Turning the Tide on Marine Pollution". The University of Miami Inter-American Law Review. 20 (3): 579–614. JSTOR 40176192.
- "Beaches, Fishing Grounds and Sea Routes Protection Act 1932". Federal Register of Legislation.
- Caroline Ford (2014). Sydney Beaches: A History. NewSouth. p. 230. ISBN 9781742246840.
- "Environment Protection (Sea Dumping) Act 1981". Federal Register of Legislation.
- "The OSPAR Convention". OSPAR Commission. Archived from the original on 12 February 2008. Retrieved 29 May 2008.
- "Directive 2000/60/EC of the European Parliament and of the Council of 23 October 2000 establishing a framework for Community action in the field of water policy". EurLex. Retrieved 29 May 2008.
- "Marine and Coastal Access Act 2009". UK Defra. Archived from the original on 2 April 2010. Retrieved 29 July 2008.
- "EU parliament approves ban on single use plastics". phys.org.
- Craig, R. (2005). "Protecting International Marine Biodiversity: International Treaties and National Systems of Marine Protected Areas". Journal of Land Use & Environmental Law. 20 (2): 333–369. JSTOR 42842976.
- "Marine Protection, Research, and Sanctuaries Act of 1972" (PDF). US Senate. 29 December 2000. Archived (PDF) from the original on 30 May 2008. Retrieved 29 May 2008.
- "Ocean Dumping Ban Act of 1988". US EPA. 21 November 1988. Archived from the original on 11 May 2009. Retrieved 29 May 2008.
- "Can you keep ship-wrecked goods?". BBC News. 22 January 2007. Archived from the original on 23 January 2009. Retrieved 29 May 2008.
- "Home". PlasticAdrift.org. Retrieved 3 February 2015.
- "The garbage patch territory turns into a new state". UNESCO Office in Venice. United Nations Educational, Scientific and Cultural Organization. 4 November 2013. Archived from the original on 11 September 2017.
- "Rifiuti diventano stato, Unesco riconosce 'Garbage Patch'" [Waste becomes state, UNESCO recognizes 'Garbage Patch']. Siti (in Italian). Archived from the original on 14 July 2014. Retrieved 3 November 2014.
- Chow, Lorraine (29 June 2016). "80% Of Ocean Plastic Comes From Land-Based Sources, New Report Finds". EcoWatch.
- Tibbetts, John H. (April 2015). "Managing Marine Plastic Pollution: Policy Initiatives to Address Wayward Waste". Environmental Health Perspectives. 123 (4): A90-3. doi:10.1289/ehp.123-A90. PMC 4384192. PMID 25830293.
- "7 Ways To Reduce Ocean Plastic Pollution Today". www.oceanicsociety.org. Archived from the original on 30 March 2018. Retrieved 29 March 2018.
- Stemming the tide: Land-based strategies for a plastic-free ocean (pp. 1–48, Rep.). (2015). McKinsey Center for Business and Environment.
- "Municipal Solid Waste Generation, Recycling, and Disposal in the United States: Facts and Figures for 2012" (PDF). EPA.
- Nash, Jennifer; Bosso, Christopher (April 2013). "Extended Producer Responsibility in the United States". Journal of Industrial Ecology. 17 (2): 175–185. doi:10.1111/j.1530-9290.2012.00572.x. S2CID 154297251.
- "Cigarette butts are toxic plastic pollution. Should they be banned?". Environment. 9 August 2019.
- Mosko, Sarah. "Mid-Ocean Plastics Cleanup Schemes: Too Little Too Late?". E-The Environmental Magazine. Archived from the original on 12 December 2013. Retrieved 25 April 2014.
- "Jim Holm: The Clean Oceans Project". TEDxGramercy. Archived from the original on 10 December 2013. Retrieved 24 April 2014.
- Hamel, Jessi (20 April 2011). "From Trash to Fuel". Santa Cruz Good Times. Archived from the original on 25 April 2014. Retrieved 24 April 2014.
- West, Amy E. (January 2012). "Santa Cruz nonprofit hopes to make fuel from ocean-based plastic". San Jose Mercury News. Archived from the original on 25 April 2014. Retrieved 24 April 2014.
- "The Response". The Clean Oceans Project. Archived from the original on 25 April 2014. Retrieved 24 April 2014.
- "Research group finds way to turn plastic waste products into jet fuel". phys.org.
- "SCRA Spinout Case Study – Recycling Technologies". warwick.ac.uk.
- Herreria, Carla (8 June 2017). "3 Incredible Inventions That Are Cleaning Our Oceans". HuffPost.
- "GRT Solution for Today".
- "ReOil: Getting crude oil back out of plastic". www.omv.comhttps.
- "OMV reveals plastic waste to synthetic crude oil pilot". 23 September 2018.
- Cookson, Clive; Hook, Leslie (2019), Millions of pieces of plastic waste found on remote island chain, Financial Times, retrieved 31 December 2019
Media related to Marine debris at Wikimedia Commons
- United Nations Environment Programme Marine Litter Publications
- UNEP Year Book 2011: Emerging Issues in Our Global Environment Plastic debris, pages 21–34. ISBN 978-92-807-3101-9.
- NOAA Marine Debris Program – US National Oceanic and Atmospheric Administration
- Marine Debris Abatement – US Environmental Protection Agency
- Marine Research, Education and Restoration – Algalita Marine Research Foundation
- UK Marine Conservation Society
- Harmful Marine Debris – Australian Government
- The trash vortex – Greenpeace
- High Seas GhostNet Survey – US National Oceanic and Atmospheric Administration
- Social & Economic Costs of Marine Debris – NOAA Economics
- Tiny Plastic Bits Too Small To See Are Destroying The Oceans, Business Insider
- Ghost net remediation program – NASA, NOAA and ATI collaborating to detect ghost nets | https://wiki-offline.jakearchibald.com/wiki/Marine_debris | 21 |
15 | atoms and bonding chapter 5. atomic structure and the periodic table review the structure of the...
Post on 29-Dec-2015
Embed Size (px)
Atoms and BondingChapter 5
Atomic Structure and the Periodic TableReview the structure of the atomProtons?Neutrons?Electrons?Nucleus?Electron Cloud?
P+NProtons (+)Neutron (0)Electron (-)Valence electrons
Atomic Structure and the Periodic Table (cont)Electrons (negative) move around the nucleus in the electron cloud. The electron cloud has different energy shells (or orbits). In a neutral atom the number of electrons equals the number of protons.
Atomic Structure and the Periodic Table (cont)Valence electrons are the electrons in the outer shell.
The valence electrons determine which elements combine to form compounds!
Why do Elements Form Compounds?
1. Atoms combine to complete the outer energy shell of electrons.
2. A complete outer energy shell is stable (atoms with filled outer energy shells won't combine with other atoms).
3. Shell #1 is complete with 2 electrons.
4. Shells # 2-7 are complete with 8 electrons
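This 2-then-8 filling pattern can be made concrete with a small sketch. The code below is not part of the original slides; the function names and the simple 2-8-8 shell capacities are assumptions that only hold for roughly the first twenty elements.

```python
# Simplified shell filling following the 2-8-8 rule used here (roughly elements 1-20).
SHELL_CAPACITIES = [2, 8, 8, 2]

def shell_configuration(atomic_number):
    """Return the electrons-per-shell list for a neutral atom (electrons = protons)."""
    shells, remaining = [], atomic_number
    for capacity in SHELL_CAPACITIES:
        if remaining <= 0:
            break
        filled = min(capacity, remaining)
        shells.append(filled)
        remaining -= filled
    return shells

def valence_electrons(atomic_number):
    """Electrons in the outermost occupied shell."""
    return shell_configuration(atomic_number)[-1]

# Sodium (11) -> [2, 8, 1]: one valence electron, easily given away (a metal).
# Chlorine (17) -> [2, 8, 7]: seven valence electrons, readily gains one (a nonmetal).
print(shell_configuration(11), valence_electrons(11))
print(shell_configuration(17), valence_electrons(17))
```

Sodium ([2, 8, 1]) has one valence electron to give away, while chlorine ([2, 8, 7]) is one electron short of a full shell, which is exactly why the two pair up in the ionic bonding examples later.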
What happens when a chemical bond is formed? A chemical bond is the force of attraction that holds atoms together. A new substance is formed in a chemical reaction, and the properties of the new substance are different from the properties of the elements that make it up. Electrons are gained, given away, or shared.
Electron Dot Diagrams: show valence electrons only.
Uses dots to represent electrons
Can be used to show how elements bond
Chemical Formula – a shorthand way to write the name of the compound.
What information does a chemical formula contain?
Metals METALS are on the left side of the Periodic Table. They have a low number of valence electrons and can easily give them to other atoms.
Almost empty shells
NonmetalsB. NONMETALS are on the right side of the Periodic Table. They have a high number of valence electrons and can easily take or share valence electrons from other atoms
Almost full shells
Semi MetalsC. SEMIMETALS are found between metals and nonmetals along the zigzag line. They can either lose or share valence electrons with other atoms
About half full shell
Ionic Bonds. ION – a charged particle; atoms either gain or lose electrons to form: A. Positive ions, which form when atoms lose electrons (more protons than electrons). B. Negative ions, which form when atoms gain electrons (more electrons than protons).
Ionic Bonds: metal to nonmetal.
Transfer of electrons from one atom to the other
How does an ionic bond form? 1. Metals lose electrons and form positive ions. Nonmetals gain electrons and form negative ions.
2. Ions with opposite charges attract each other (+ and − attract).
Properties of ionic compounds: very strong bonds; form a crystal lattice (alternating, repeating); hard, brittle solids with high boiling and melting points; conduct electricity when dissolved in water.
Naming Ionic Compounds: name the positive ion (metal) first, then name the negative ion (nonmetal) with the ending changed to -ide. NaCl: sodium + chlorine = sodium chloride.
Name these: K2S: potassium + sulfur = potassium sulfide. Li2O: lithium + oxygen = lithium oxide. Mg3P2: magnesium + phosphorus = magnesium phosphide.
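As a quick illustration of the metal-first, "-ide" rule, here is a tiny sketch. It is not from the slides; the function name and the lookup table are assumptions that cover only the worked examples above.

```python
# Metal named first; nonmetal ending changed to "-ide".
IDE_FORMS = {
    "chlorine": "chloride",
    "sulfur": "sulfide",
    "oxygen": "oxide",
    "phosphorus": "phosphide",
}

def name_ionic(metal, nonmetal):
    return f"{metal} {IDE_FORMS[nonmetal]}"

print(name_ionic("sodium", "chlorine"))       # sodium chloride     (NaCl)
print(name_ionic("potassium", "sulfur"))      # potassium sulfide   (K2S)
print(name_ionic("lithium", "oxygen"))        # lithium oxide       (Li2O)
print(name_ionic("magnesium", "phosphorus"))  # magnesium phosphide (Mg3P2)
```

The "-ide" table would need more entries for other nonmetals; these four are just enough to reproduce the slide's examples.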
Covalent bonds: form nonmetal to nonmetal.
How does a covalent bond form? Atoms share electrons to fill outer energy shells (it takes too much energy to transfer electrons).
The force that holds atoms together in a covalent bond is the attraction of each atom's nucleus for the shared pair of electrons.
Properties of covalent compounds: weak bonds; low melting and boiling points; many are gases and liquids; cannot conduct electricity when dissolved in water.
Molecule – a neutral group of atoms held together by a covalent bond; the smallest piece of a compound.
Covalent compounds are also called molecular compounds
Diatomic Elements: an element that bonds with itself to form the simplest of molecules. There are only 7 elements that are diatomic: H, N, O, F, Cl, Br, I.
XI. Naming covalent compounds – use prefixes: mono = 1, di = 2, tri = 3, tetra = 4, penta = 5, hexa = 6.
Carbon dioxide = CO2; carbon monoxide = CO; dihydrogen monoxide = H2O.
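The prefix rule can also be sketched in a few lines. This is an illustration only, not part of the deck; the function and its vowel-dropping tweak (which turns "mono" + "oxide" into "monoxide") are assumptions that cover just the three examples above.

```python
# Covalent naming with Greek prefixes; "mono-" is dropped for the first element only.
PREFIXES = {1: "mono", 2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}

def name_covalent(first, n_first, second_ide, n_second):
    first_part = first if n_first == 1 else PREFIXES[n_first] + first
    prefix = PREFIXES[n_second]
    if prefix.endswith("o") and second_ide.startswith("o"):
        prefix = prefix[:-1]          # "mono" + "oxide" -> "monoxide"
    return f"{first_part} {prefix}{second_ide}"

print(name_covalent("carbon", 1, "oxide", 2))    # carbon dioxide      (CO2)
print(name_covalent("carbon", 1, "oxide", 1))    # carbon monoxide     (CO)
print(name_covalent("hydrogen", 2, "oxide", 1))  # dihydrogen monoxide (H2O)
```

Real nomenclature has a few more vowel-elision rules (e.g. "pentoxide"), but this is enough to reproduce the examples given here.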
Metallic Bonds: metal to metal.
Positive metal ions swimming in sea of released electrons
How does a metallic bond form?Metals tend to lose electrons and form positive ions. The bonds are held together by the force of attraction between positive metal ions and the many electrons surrounding them.
Metallic bond: metal atoms combine in regular patterns, which allows the electrons to move from atom to atom.
Alloy – a mixture of 2 or more elements, at least 1 of which is a metal; stronger and less reactive than pure metals. The properties of alloys are different from the pure metals that make them up.
Metallic bond: mixture or chemical bond? (honors slide) Alloys are a mixture because they can be combined in any ratio. Alloys are like a chemical bond because they have different properties than the metals that form them, and because the electrons are interacting.
Properties of Metals: dense; shiny; solids at room temperature (except mercury); malleable; ductile; good conductors of heat and electricity.
Number scratch paper from 1-10. Cross out number 8.
Write the answer to fill in the blank
A simplified slide | https://vdocuments.mx/atoms-and-bonding-chapter-5-atomic-structure-and-the-periodic-table-review.html | 21 |
17 | It’s crucial for your children to know the basics of financial literacy, but how do you approach teaching them? Luckily, you’re making financial decisions every day—you simply need to let your kids in on the conversation.
What is Financial Literacy?
Financial literacy includes many different financial skills and concepts; to be financially literate simply means having the know-how to make wise decisions with your personal finances—like managing a budget, borrowing money, paying for insurance, and saving for retirement.
Make it Real
Teaching financial literacy doesn’t have to be a formalized lesson for your family. Experience is often the best teacher. You can give your children that experience by involving them in what you’re doing in a way that makes sense for their age.
For example, a trip to the grocery store is a great time for a child of any age to get some practice.
- Pre-K and Early Elementary School: Explain that everything you’re buying costs money. When you go to check out, let them swipe the card or hand the money over to the cashier and explain the transaction.
- Elementary school: Give the child some money to be in charge of while shopping— maybe $2-$5. Explain to them that they can spend that money however they want while showing them tradeoffs—like getting multiple inexpensive things means you can’t get one expensive item or vice versa.
- High School Kids: Let your teen take control of the groceries for one trip. Give them a budget and a list of things that you need. From there, let them manage the money for that trip and the best way to divide it up. For an extra challenge, you may include that you need “snacks for lunches,” but let them decide what exactly that means. If they buy too much or something too expensive, they won’t have enough left over for the other essentials on the list.
The key with these examples is getting your kids used to thinking about a budget and considering how much things cost when making decisions.
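If you want to make the high-school version concrete, the same idea can be written down as a quick check. The sketch below is only an illustration; the item names and prices are made up, not a recommended shopping list.

```python
# Hypothetical grocery trip: does the teen's plan fit the budget?
budget = 60.00
plan = {
    "milk": 3.50,
    "bread": 2.75,
    "eggs": 4.25,
    "apples": 5.00,
    "snacks for lunches": 12.00,
    "chicken": 14.50,
    "pasta and sauce": 7.00,
}

total = sum(plan.values())
print(f"Planned spending: ${total:.2f} of ${budget:.2f}")
if total > budget:
    print("Over budget - swap something for a cheaper option.")
else:
    print(f"Under budget with ${budget - total:.2f} left over.")
```

Swapping the made-up prices for the real ones on the receipt turns this into a conversation about tradeoffs rather than a lecture.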
Have Some Fun
Many find that talking about finances causes either boredom or anxiety—or perhaps a mix of both. But it doesn’t have to be that way, especially not for you and your kids. Managing your finances correctly is the pathway to buying a new home, going on that vacation you’ve always wanted, or spending a fun night out with loved ones. Of course, it’s important to balance any conversations with the appropriate warnings and precautions, but the goal is to get your kids excited about the possibilities.
If you’re looking for some help in adding fun to the conversation, consider giving the Banzai Courses a try, which balance fun and education with choose-your-own adventure type options that allow kids to make financial decisions and manage their own budget.
Don’t Be Intimidated
Financial literacy covers a huge range of topics, some of which can get pretty complicated pretty fast. Thankfully, you don’t have to be an expert on everything in order to start the conversation. But the more you’re willing to touch on the tough stuff, the better foundation your kids will have when they’re forced to confront those things themselves. This could mean getting into a discussion about 401Ks, taxes, investments, housing costs, and plenty of other topics that may seem intimidating on the surface. You can use the resources on this site or visit a branch and chat with one of your local finance experts if you’re looking for help.
While we hope you find this content useful, it is only intended to serve as a starting point. Your next step is to speak with a qualified banker who can provide advice tailored to your individual circumstances. Nothing in this article, nor in any associated resources, should be construed as financial, tax or legal advice. Furthermore, while we have made good faith efforts to ensure that the information presented was correct as of the date the content was prepared, we are unable to guarantee that it remains accurate today. | https://www.lakeareabank.com/teach-financial-literacy-at-home/ | 21 |
18 | Sandeep Garg Class 12 Macroeconomics Solutions Chapter 2 Basic Concepts of Macroeconomics’ is explained by expert Economics teachers from the latest edition of Sandeep Garg Macroeconomics Class 12 textbook solutions.
We, at BYJU’S, provide Sandeep Garg Economics class 12 Solutions to give a comprehensive insight about the subject to the students. These insights help as priceless benefits to students while completing their homework or while studying for their exams.
There are numerous concepts in Economics, but here, we provide the solutions from the basic concepts of macroeconomics, which will be useful for the students to score well in their board exams.
Sandeep Garg Solutions Class 12 – Chapter 2 – Part B
Define factor income.
Factor income refers to the income received by the factors of production for rendering factor services in the process of production.
Define current transfers.
Current transfers refer to transfers made out of the current income of the payer and added to the current income of the recipient.
Define gross investment.
Gross investment is the addition to the stock of capital before making allowance for depreciation.
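A standard identity links this definition with depreciation (covered in the next questions). The figures below are only an illustration, not taken from the textbook:

\[
\text{Net investment} = \text{Gross investment} - \text{Depreciation}
\]
\[
\text{e.g. } 500 - 80 = 420 \text{ (crore) of net addition to the stock of capital.}
\]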
Mention three differences between consumption goods and capital goods.
The three differences between consumption goods and capital goods are as follows:
|Parameters||Consumption goods||Capital goods|
|Type of demand||These goods satisfy human wants directly. Therefore, such goods have direct demands.||Such goods satisfy human wants indirectly. Therefore, such goods have derived demands.|
|Impact on the production capacity||They do not promote production.||They help in raising the capacity of production.|
|Expected life||Most of the consumption goods (except durable goods) have a limited life expectancy.||Generally, capital goods have a life expectancy of more than one year.|
Mention three differences between depreciation and capital loss.
The three differences between depreciation and capital loss are as follows:
|Parameters||Depreciation||Capital loss|
|Meaning||It refers to the fall in the value of fixed assets due to normal wear and tear, and due to the passage of time or outdated technology.||It refers to the loss in value of the fixed assets because of them being outdated.|
|Provision for loss||Provisions are made for the replacement of assets as it is an expected loss.||No such provisions are made in the case of capital loss, as it is an unexpected loss.|
|Impact on the production process||It does not hamper the production process.||It hampers the production process.|
What are the reasons for the depreciation of assets?
The three reasons for the depreciation of assets are as follows:
Normal wear and tear– Regular use of fixed assets in the production process reduces the productive capacity and value.
Passage of time- Due to the passage of time, the value of fixed assets decreases the productive capacity even if it is not used regularly. Nature agents. like wind, water, weather, etc., add up to the fall in their value.
Expected obsolescence– The fixed assets value also decreases because technology, goods, and services become outdated.
|Important Questions for Class 12 Economics|
|Study Tips for Preparing Economics Exam|
|Economics Project for Class 12|
The above-provided solutions are considered to be the best solutions for ‘Sandeep Garg Macroeconomic Class 12 Solutions Chapter 2- Basic Concepts of Macroeconomics’ . Stay tuned to BYJU’S to learn more. | https://byjus.com/commerce/sandeep-garg-macroeconomics-class-12-solutions-chapter-2-basic-concepts-of-macroeconomics/ | 21 |
14 | HIV infection; Infection - HIV; Human immunodeficiency virus; Acquired immune deficiency syndrome: HIV-1
Human immunodeficiency virus (HIV) is the virus that causes AIDS. When a person becomes infected with HIV, the virus attacks and weakens the immune system. As the immune system weakens, the person is at risk of getting life-threatening infections and cancers. When that happens, the illness is called AIDS. Once a person has the virus, it stays inside the body for life.
The virus is spread (transmitted) person-to-person through certain body fluids:
- Semen and preseminal fluid
- Rectal fluids
- Vaginal fluids
- Breast milk
HIV can be spread if these fluids come in contact with:
- Mucous membranes (inside of the mouth, penis, vagina, rectum)
- Damaged tissue (tissue that has been cut or scraped)
- Injection into the blood stream
HIV cannot be spread through sweat, saliva, or urine.
In the United States, HIV is mainly spread:
- Through vaginal or anal sex with someone who has HIV without using a condom or is not taking medicines to prevent or treat HIV
- Through needle sharing or other equipment used to inject drugs with someone who has HIV
Less often, HIV is spread:
- From mother to child. A pregnant woman can spread the virus to her fetus through their shared blood circulation, or a nursing mother can pass it to her baby through her breast milk. Testing and treatment of HIV-positive mothers has helped lower the number of babies getting HIV.
- Through needle sticks or other sharp objects that are contaminated with HIV (mainly health care workers).
The virus is NOT spread by:
- Casual contact, such as hugging or closed-mouth kissing
- Mosquitoes or pets
- Participating in sports
- Touching items that were touched by a person infected with the virus
- Eating food handled by a person with HIV
HIV and blood or organ donation:
- HIV is not spread to a person who donates blood or organs. People who donate organs are never in direct contact with the people who receive them. Likewise, a person who donates blood is never in contact with the person receiving it. In all of these procedures, sterile needles and instruments are used.
- While very rare, in the past HIV has been spread to a person receiving blood or organs from an infected donor. However, this risk is very small because blood banks and organ donor programs thoroughly check (screen) donors, blood, and tissues.
Risk factors for getting HIV include:
- Having unprotected anal or vaginal sex. Receptive anal sex is the riskiest. Having multiple partners also increases the risk. Using a new condom correctly every time you have sex greatly helps lower this risk.
- Using drugs and sharing needles or syringes.
- Having a sexual partner with HIV who is not taking HIV medicines.
- Having a sexually-transmitted disease (STD).
Symptoms related to acute HIV infection (when a person is first infected) can be similar to the flu or other viral illnesses. They include:
- Fever and muscle pains
- Sore throat
- Night sweats
- Mouth sores, including yeast infection (thrush)
- Swollen lymph glands
Many people have no symptoms when they are first infected with HIV.
Acute HIV infection progresses over a few weeks to months to become an asymptomatic HIV infection (no symptoms). This stage can last 10 years or longer. During this period, the person might have no reason to suspect they have HIV, but they can spread the virus to others.
If they are not treated, almost all people infected with HIV will develop AIDS. Some people develop AIDS within a few years of infection. Others remain completely healthy after 10 or even 20 years (called long-term nonprogressors).
People with AIDS have had their immune system damaged by HIV. They are at very high risk of getting infections that are uncommon in people with a healthy immune system. These infections are called opportunistic infections. These can be caused by bacteria, viruses, fungi, or protozoa, and can affect any part of the body. People with AIDS are also at higher risk for certain cancers, especially lymphomas and a skin cancer called Kaposi sarcoma.
Symptoms depend on the particular infection and which part of the body is infected. Lung infections are common in AIDS and usually cause cough, fever, and shortness of breath. Intestinal infections are also common and can cause diarrhea, abdominal pain, vomiting, or swallowing problems. Weight loss, fever, sweats, rashes, and swollen lymph glands are common in people with HIV infection and AIDS.
Exams and Tests
There are tests that are done to check if you've been infected with the virus.
In general, testing is a 2-step process:
- Screening test -- There are several kinds of tests. Some are blood tests, others are mouth fluid tests. They check for antibodies to the HIV virus, HIV antigen, or both. Some screening tests can give results in 30 minutes or less.
- Follow-up test -- This is also called a confirmatory test. It is often done when the screening test is positive.
Home tests are available to test for HIV. If you plan to use one, check to make sure it is approved by the FDA. Follow instructions on the packaging to ensure the results are as accurate as possible.
The Centers for Disease Control and Prevention (CDC) recommends that everyone ages 15 to 65 have a screening test for HIV. People with risky behaviors should be tested regularly. Pregnant women should also have a screening test.
TESTS AFTER BEING DIAGNOSED WITH HIV
People with AIDS usually have regular blood tests to check their CD4 cell count:
- CD4 T cells are the blood cells that HIV attacks. They are also called T4 cells or "helper T cells."
- As HIV damages the immune system, the CD4 count drops. A normal CD4 count is from 500 to 1,500 cells/mm3 of blood.
- People usually develop symptoms when their CD4 count drops below 350. More serious complications occur when the CD4 count drops to 200. When the count is below 200, the person is said to have AIDS.
Other tests include:
- HIV RNA level, or viral load, to check how much HIV is in the blood
- A resistance test to see if the virus has any changes in the genetic code that would lead to resistance to the medicines used to treat HIV
- Complete blood count, blood chemistry, and urine test
- Tests for other sexually-transmitted infections
- TB test
- Pap smear to check for cervical cancer
- Anal Pap smear to check for cancer of the anus
HIV/AIDS is treated with medicines that stop the virus from multiplying. This treatment is called antiretroviral therapy (ART).
In the past, people with HIV infection would start antiretroviral treatment after their CD4 count dropped or they developed HIV complications. Today, HIV treatment is recommended for all people with HIV infection, even if their CD4 count is still normal.
Regular blood tests are needed to make sure the virus level in the blood (viral load) is kept low or suppressed. The goal of treatment is to lower the HIV virus in the blood to a level that is so low that the test can't detect it. This is called an undetectable viral load.
If the CD4 count already dropped before treatment was started, it will usually slowly go up. HIV complications often disappear as the immune system recovers.
Joining a support group where members share common experiences and problems can often help lower the emotional stress of having a long-term illness.
With treatment, most people with HIV/AIDS can live a healthy and normal life.
Current treatments do not cure the infection. The medicines only work as long as they are taken every day. If the medicines are stopped, the viral load will go up and the CD4 count will drop. If the medicines are not taken regularly, the virus can become resistant to one or more of the drugs, and the treatment will stop working.
People who are on treatment need to see their health care providers regularly. This is to make sure the medicines are working and to check for side effects of the drugs.
When to Contact a Medical Professional
Call for an appointment with your provider if you have any risk factors for HIV infection. Also contact your provider if you develop symptoms of AIDS. By law, the results of HIV testing must be kept confidential (private). Your provider will review your test results with you.
- Get tested. People who don't know they have HIV infection and who look and feel healthy are the most likely to transmit it to others.
- DO NOT use illegal drugs and do not share needles or syringes. Many communities have needle exchange programs where you can get rid of used syringes and get new, sterile ones. Staff at these programs can also refer you for addiction treatment.
- Avoid contact with another person's blood. If possible, wear protective clothing, a mask, and goggles when caring for people who are injured.
- If you test positive for HIV, you can pass the virus to others. You should not donate blood, plasma, body organs, or sperm.
- HIV-positive women who might become pregnant should talk to their provider about the risk to their unborn child. They should also discuss methods to prevent their baby from becoming infected, such as taking antiretroviral medicines during pregnancy.
- Breastfeeding should be avoided to prevent passing HIV to infants through breast milk.
Safer sex practices, such as using latex condoms, are effective in preventing the spread of HIV. But there is still a risk of getting the infection, even with the use of condoms (for example, condoms can tear).
In people who aren't infected with the virus, but are at high risk of getting it, taking a medicine such as Truvada (emtricitabine and tenofovir disoproxil fumarate) or Descovy (emtricitabine and tenofovir alafenamide) can help prevent the infection. This treatment is known as pre-exposure prophylaxis, or PrEP. Talk to your provider if you think PrEP might be right for you.
HIV-positive people who are taking antiretroviral medicines and have no virus in their blood do not transmit the virus.
The US blood supply is among the safest in the world. Nearly all people infected with HIV through blood transfusions received those transfusions before 1985, the year HIV testing began for all donated blood.
If you believe you have been exposed to HIV, seek medical attention right away. DO NOT delay. Starting antiviral medicines right after the exposure (up to 3 days after) can reduce the chance that you will be infected. This is called post-exposure prophylaxis (PEP). It has been used to prevent transmission in health care workers injured by needlesticks.
Last reviewed on: 6/15/2020
Reviewed by: Jatin M. Vyas, MD, PhD, Assistant Professor in Medicine, Harvard Medical School; Assistant in Medicine, Division of Infectious Disease, Department of Medicine, Massachusetts General Hospital, Boston, MA. Also reviewed by David Zieve, MD, MHA, Medical Director, Brenda Conaway, Editorial Director, and the A.D.A.M. Editorial team. Editorial update 11/11/2020.
Teaching kids about financial responsibility helps them learn a critical life skill and also gives kids a head start in building a healthy outlook on finances. You can teach money matters to children by giving them basic knowledge and sharing the tools necessary to learn how to save and spend money wisely.
Earning Money - To teach your kids to respect money, give them specific, age-appropriate chores or jobs around the house, and then pay them a set amount for each chore. This helps kids understand the hard work involved in earning money and makes them more thoughtful about how they spend it.
Saving Money - Teach kids how to save money by setting an example for them. Discuss how much to save and where the savings should go, such as into a bank account or a pension plan. Explain that a household should also have money set aside for emergencies, such as loss of employment, car trouble, home repairs, or other unexpected costs.
Risk - When discussing money management, teach your children about risks, such as gambling, the stock market, and other investments. This will help your child learn what impact risky behaviors have on the financial situation of a family or an individual.
Debt - Debt should be a highlighted area of any money discussion with kids. Debt in the form of credit cards, bank loans, or car financing often carries fees and penalties for late payments. Someone who plans her finances poorly may not have enough money to cover her debts, which can lead to late fees, higher interest rates, repossession, and other consequences.
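For older kids who want to see the arithmetic behind a missed payment, here is a minimal sketch; it is not part of the original lesson, and the starting balance, interest rate, and late fee are made-up illustrative figures, not real card terms.

```python
# Illustrative sketch only: how a carried credit-card balance grows when a
# payment is missed. The balance, rate, and fee below are hypothetical.

balance = 500.00        # amount owed at the start of the month
monthly_rate = 0.02     # hypothetical 2% interest per month (about 24% APR)
late_fee = 35.00        # hypothetical late-payment fee

# One missed payment: interest accrues on the balance and the fee is added.
balance = balance * (1 + monthly_rate) + late_fee
print(f"Balance after one missed payment: ${balance:.2f}")  # prints $545.00
```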
Coupons - Although coupon clipping can become time consuming, the savings are often well worth the time invested. Using coupons yourself and involving kids in the process teaches them how to save money when shopping for food and other necessary household items.
Budget - A good way to teach young kids how to budget is to walk them through your own household expenses, or to create pretend expenses for a lesson. Using play money gives children a concrete sense of how money is allocated: they can separate it into piles for housing, food, utilities, and other categories, then see where the money goes and what is left over. This naturally leads to a discussion of how the money could have been spent differently, on clothing, food choices, eating out, or other optional expenses, in order to save more.
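As a companion to the play-money exercise, the following is a minimal sketch of the same pile-by-pile arithmetic; the income figure, category names, and amounts are hypothetical examples chosen for the lesson, not recommendations.

```python
# Minimal sketch of the play-money budgeting exercise: allocate a pretend
# income into expense "piles" and see how much is left over to save.

def leftover_after_expenses(income, expenses):
    """Return what remains after allocating income to each expense pile."""
    return income - sum(expenses.values())

# Hypothetical play-money amounts for the lesson.
income = 1000
expenses = {
    "housing": 450,
    "food": 250,
    "utilities": 120,
    "clothing": 80,
}

print(f"Spent: {sum(expenses.values())}")                            # Spent: 900
print(f"Left to save: {leftover_after_expenses(income, expenses)}")  # Left to save: 100
```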
Additive bilingualism: Developing a learner's proficiency in a second language with no pressure to replace, or reduce the importance of, the first language.
Affective filter: A filter governing how much input is received by the mechanism that processes language. The lower the filter, the more open a student will be to acquiring new language (Dulay & Burt, 1977).
Age of arrival: The age at which a language-minority student was first enrolled into a formal educational program in the United States.
Alphabetic principle: The idea that written spellings systematically represent spoken words.
Attitude: An individual’s reaction toward something based on that individual's beliefs or opinions.
Basic interpersonal communication skills: The aspects of language proficiency strongly associated with basic fluency in face-to-face interaction.
Beaders: Second-language learners who learn words incrementally and embrace a gradual process of language learning. These learners do not produce language until they understand the meaning of individual words. Initially, they will identify objects and learn nouns before learning verbs. For these learners, complete comprehension of a word is attained before it becomes part of their vocabulary (Ventriglia, 1982).
Beading: A second-language learning style characterized by the incremental learning of words (Ventriglia, 1982).
BICS: See "Basic interpersonal communication skills."
Bilingual education: A term that is broadly inclusive of any educational program in which two languages are used for instruction.
Braiders: Second-language learners who easily produce sentences in the early stages of language learning. For these learners, oral production, learned through interaction with native speakers, is of greater importance than the need to comprehend the meaning of individual words. These learners are eager to try out newly acquired language skills (Ventriglia, 1982).
Braiding: A second-language learning style characterized by the early production of sentences (Ventriglia, 1982).
CALP: See "Cognitive academic language proficiency."
Cognitive academic language proficiency: The aspects of language strongly associated with literacy and academic achievement.
Comprehensible input: The amount of new language, either written or heard, that a learner is exposed to and understands.
Concurrent translation: A method of bilingual instruction in which students are provided with a sentence-by-sentence translation of lessons from English into the students' native language.
Content-based ESL: A form of ESL that provides students with instruction that is structured around academic content rather than general English-language skills.
Cooperation versus individualism: A learning style typology that categorizes students according to whether they work best collaboratively or do best in more competitive settings (Scarcella, 1990).
Creative construction: The ability of children to extract the grammar of a language from a string of unfamiliar words and produce structures that they have not been taught.
Crisscrossers: Second-language learners who are spontaneous, adaptable and creative. They have a positive attitude toward both the first and second languages, and are comfortable navigating back and forth between the two. These learners embrace a bicultural identity (Ventriglia, 1982).
Crisscrossing: The motivational style of second-language learners who identify with both the first and second cultures (Ventriglia, 1982).
Critical period: A theory of first-language acquisition according to which the human brain, during a period extending from birth to the onset of puberty, shows the plasticity which allows the child to acquire his or her first language.
Crossing over: The motivational style of second-language learners who identify with the second culture (Ventriglia, 1982).
Crossovers: Flexible and independent second-language learners who are willing to take chances. These learners view second language identification as a positive way to adapt to the school setting. They may temporarily move closer to their English speaking peers, embracing this new identity (Ventriglia, 1982).
Crystallizers: Cautious second-language learners who display a passive attitude toward second-language learning. They are listeners, and long periods of silence are not unusual for them. These learners will verbalize only when they have perfected their comprehension. They initially reject the second language and do not interact socially with English speakers or identify with them (Ventriglia, 1982).
Crystallizing: The motivational style of second-language learners who maintain their identity with their first-language culture (Ventriglia, 1982).
Decoding: The aspect of the reading process that involves “sounding out” a printed sequence of letters based on knowledge of letter-sound correspondences.
Early-exit bilingual education: A program model in which, initially, half the day's instruction is provided through English and half through students' native language. This is followed by a gradual transition to all-English instruction that is completed in approximately 2-3 years. This program model is also termed transitional bilingual education.
ELL: See "English-language learner."
English as a second language: A method for teaching English to speakers of other languages in which English is the medium of instruction.
English-language learner: A student in the United States who is learning English as his or her second language.
ESL: See "English as a second language."
ESL pull-out: A program model in which English-language learners attend mainstream classes, but are "pulled out" for ESL sessions designed to enhance English acquisition. Traditionally, these sessions have focused on grammar, vocabulary and communication rather than academic content areas.
Field sensitivity/field independence: A learning style typology that categorizes learners as field-sensitive or field-independent, depending on how their perceptions are affected by the surrounding environment. Field-sensitive learners enjoy working with others to achieve a common goal, and most often look to the teacher for guidance and demonstration. Field-independent learners enjoy working independently, like to compete, and ask for teacher assistance only in relation to the current task (Scarcella, 1990).
First language: The language a normal child acquires in the first few years of life. Also termed native language.
Global/analytic: A learning style typology that categorizes students according to which hemisphere of the brain is most utilized in language learning. Global thinking takes place in the right hemisphere, and global learners initially prefer an overall picture. Analytic thinking takes place in the left hemisphere, and analytic learners are fact oriented and learn tasks in a step-by-step fashion (Scarcella, 1990).
Home language: See "First language."
IL: See "Interlanguage."
Immersion bilingual education: A program model in which academic instruction is provided through both the first and second languages for Grades K-12. Originally developed for language-majority students in Canada, it is used as one model for two-way bilingual education in the United States.
Instrumental orientation: Reasons for learning a second language that have a pragmatic focus such as obtaining employment.
Integrative orientation: Reasons for learning a second language that reflect an interest in forming a closer liaison with the target-language community.
Interlanguage: The developing, or transitional, second-language proficiency of a second-language learner.
L1: See "First language."
L2: See "Second language."
Language-minority students: Children in grades K-12 from homes where a language other than English is spoken.
Late-exit bilingual education: A program model in which half the day's instruction is provided through students' first language and half through a second language during Grades K-6. Ideally, this type of program was planned for Grades K-12, but has rarely been implemented beyond the elementary school level in the United States. The goal of this program model is bilingualism. This program model is also termed maintenance bilingual education.
Learning styles: Patterns of thinking and of interacting that affect a student’s perceptions, memory and reasoning.
LEP: See "Limited-English-proficient students."
Limited-English-proficient students: Language-minority students who have difficulties in speaking, comprehending, reading or writing English that affect their school performance.
Maintenance bilingual education: See "Late-exit bilingual education."
Metacognition: Thoughts about thinking (cognition); for example, thinking about how to understand a passage.
Metalinguistic: Language or thoughts about language.
Miscue analysis: A detailed recording of errors or inaccurate attempts during reading.
Morphology: The study of the structure and form of words in language or a language, including inflection, derivation and the formation of compounds.
Motivation: The degree to which an individual strives to do something because he or she desires to and because of the pleasure and fulfillment derived from the activity.
Native language: See "First language."
NCE: See "Normal curve equivalent."
Normal curve equivalent: A unit of measurement used on norm-referenced standardized tests. NCE scores range from 1 to 99 on an equal-interval scale, which, unlike percentile ranks, allows scores to be meaningfully averaged and compared.
Orchestrating: A second-language learning style characterized by incremental acquisition (Ventriglia, 1982).
Orchestrators: Second-language learners who initially process language on a phonological basis and place the greatest importance on listening comprehension. These learners begin with sounds and gradually make connections between these sounds and the formation of syllables, words, phrases and sentences (Ventriglia, 1982).
Orientations: Reasons for learning a second language that may be classified as integrative (see "Integrative orientation") or instrumental (see "Instrumental orientation").
Orthography: A method of representing spoken language by letters and diacritics (i.e., spelling).
Performance-based assessment: Assessment that requires a student to construct an extended response, create a product, or perform a demonstration.
Phonemes: The phonological units of speech that make a difference to meaning. Thus, the spoken word rope is composed of three phonemes: /r/, /o/, and /p/. It differs by only one phoneme from each of the spoken words soap, rode and rip.
Phonemic awareness: The insight that every spoken word can be conceived as a sequence of phonemes. This awareness is key to a child's understanding of the logic of the alphabetic principle.
Phonics: Instructional practices that emphasize how spellings are related to speech sounds in systematic ways.
Phonological awareness: A more inclusive term than phonemic awareness, this refers to the general ability to attend to the sounds of language as distinct from meaning. Phonemic awareness generally develops through other, less subtle levels of phonological awareness.
Phonology: The study of speech structure in language (or a particular language) that includes both the patterns of basic speech units (phonemes) and the tacit rules of pronunciation.
Primary language: The language an individual is most fluent in. This is usually, though not always, an individual's first language.
Second language: A language acquired or learned simultaneously with, or after, an individual's acquisition of a first language.
Second-language acquisition: The subconscious process that is similar, if not identical, to the process by which children develop language ability in their first language.
Second-language learning: The process by which a conscious knowledge of a second language is developed. This conscious knowledge includes knowing the rules of the language, being aware of them, and being able to talk about them.
Sensory modality strength: A learning style typology that categorizes learners by the sensory input they utilize most for information. Learners are categorized as: visual, meaning they remember best by seeing or reading; auditory, meaning they remember best by hearing; or tactile-kinesthetic, meaning they remember best by writing or using their hands in a manipulative way (Scarcella, 1990).
Sheltered instruction: Subject matter instruction provided to English-language learners in English, modified so that it is accessible to them at their levels of English proficiency. This modification includes teachers using simplified speech, repetition, visual aids, contextual clues, etc.
Structured immersion: A program model in which all students in the program are English-language learners, and in which students are usually (though not always) from different language backgrounds. Instruction is provided in English, with an attempt made to adjust the level of English so that the subject matter is comprehensible. Typically there is no native-language support.
Submersion: English-only instruction in which students with limited-English proficiency are placed in mainstream classes with English-speaking students and no language assistance programs are provided.
Subtractive bilingualism: The replacement of a learner's first-language skills by second-language skills.
Syllable: A unit of spoken language that can be pronounced on its own. In English, a syllable can consist of a vowel sound alone or a vowel sound with one or more consonant sounds preceding and following.
Target language: The language that a learner is trying to acquire or learn.
TL: See "Target language."
Transitional bilingual education: See "Early-exit bilingual education."
Two-way developmental bilingual education: A program model in which language-majority and language-minority students are schooled together in the same bilingual class. The goal of this model is to develop proficiency in both languages for both groups of students. Like late-exit bilingual education, this model usually involves students for several more years than the early-exit model.
Adams, M. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.
Adams, M., & Bruck, M. (1995). Resolving the “great debate.” American Educator, 19(2), 10-20.
Adams, M. J., & Collins, A. (1979). A schema-theoretic view of reading. In R. O. Freedle (Ed.), New directions in discourse processing (pp. 1-22). Norwood, NJ: Ablex Publishing.
Alderson, J. C. (1984). Reading in a foreign language: A reading problem or a language problem. In J. C. Alderson & A. H. Urquhart (Eds.), Reading in a foreign language (pp. 122-135). New York, NY: Longman.
Anyon, J. (1980). Social class and the hidden curriculum of work. Journal of Education, 162(1), 67-92.
Anyon, J. (1981). Social class and school knowledge. Curriculum Inquiry, 12(1), 3-42.
August, D., & Hakuta, K. (Eds.). (1997). Improving schooling for language-minority children: A research agenda. Washington, DC: National Academy Press.
August, D., & Hakuta, K. (Eds.). (1998). Educating language-minority children. Washington, DC: National Academy Press.
Baker, C. (1993). Foundations of bilingual education and bilingualism. Clevedon, England: Multilingual Matters.
Baker, K. A., & de Kanter, A. A. (1981). Effectiveness of bilingual education: A review of the literature. Washington, DC: U.S. Department of Education.
Baker, K. A., & Pelavin, S. (1984). Problems in bilingual education. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.
Bejarano, Y. (1987). A cooperative small-group methodology in the language classroom. TESOL Quarterly, 21, 483-504.
Berman, P., Minicucci, C., McLaughlin, B., Nelson, B., & Woodworth, K. (1995). School reform and student diversity: Case studies of exemplary practices for LEP students [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education. Available: http://www.ncbe.gwu.edu/miscpubs/schoolreform/
Bermudez, A. B., & Marquez, J. A. (1996). An examination of a four-way collaborative to increase parental involvement in the schools. Journal of Educational Issues of Language Minority Students, 16(Summer), 1-16.
Bialystok, E., & Hakuta, K. (1994). In other words. New York: Basic Books.
Bloom, B., & Krathwohl, D. (1977). Taxonomy of educational objectives: Handbook I, cognitive domain. New York, NY: Longman.
Bloomfield, L. (1942). Outline guide for the practical study of foreign language. Baltimore, MD: Linguistic Society of America.
Braunger, J., & Lewis, J. P. (1997). Building a knowledge base in reading. Portland, OR: Northwest Regional Educational Laboratory.
Brown, R. (1973). A first language. Cambridge, MA: Harvard University Press.
Bruck, M. (1982). Language-disabled children's performance in additive bilingual education programs. Applied Psycholinguistics, 3, 45-60.
Bruck, M. (1984). Feasibility of an additive bilingual program for the language impaired child. In Y. LeBrun & M. Paradis (Eds.), Early bilingualism and child development. Amsterdam: Swets and Zeitlinger.
Burkart, G. S., & Sheppard, K. (1995). Content-ESL across the USA: A training packet [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Burkheimer, G. J., Jr., Conger, A. J., Dunteman, G. H., Elliott, B. G., & Mowbray, K. A. (1989). Effectiveness of services for language-minority limited-English-proficient students (Technical Report, 2 vols.). Research Triangle Park, NC: Research Triangle Institute.
Canale, M. (1983). From communicative competence to communicative language pedagogy. In J. C. Richards & R. W. Schmidt (Eds.), Language and communication (pp. 2-28). New York, NY: Longman.
Canale, M., & Swain, M. (1980). Theoretical bases of communicative approaches to second language teaching and testing. Applied Linguistics, 1, 1-47.
Carbo, M., Dunn, R., & Dunn, K. (1986). Teaching students to read through their individual learning styles. Englewood Cliffs, NJ: Prentice-Hall.
Carrell, P. (1991). Second language reading: Reading ability or language proficiency. Applied Linguistics, 12(2), 159-179.
Carrell, P. L. (1983a). Background knowledge in second language comprehension. Language Learning and Communication, 2(1), 25-34.
Carrell, P. L. (1983b). Some issues in studying the role of schemata, or background knowledge in second language comprehension. Reading in a Foreign Language, 1(2), 81-92.
Carrell, P. L. (1983c). Three components of background knowledge in reading comprehension. Language Learning, 33(2), 183-207.
Carrell, P. L. (1984). Evidence of a formal schema in second language comprehension. Language Learning, 34(2), 87-112.
Carrell, P. L., & Eisterhold, J. C. (1983). Schema theory and ESL reading pedagogy. TESOL Quarterly, 17, 553-573.
Carrell, P. L., & Wallace, B. (1983). Background knowledge: Context and familiarity in reading comprehension. In M. A. Clarke & J. Handscombe (Eds.), On TESOL '82 (pp. 295-308). Washington, D.C.: Teachers of English to Speakers of Other Languages.
Carroll, J. B. (1986). Second language. In R. F. Dillon & R. J. Sternberg (Eds.), Cognition and instruction (pp. 83-125). Orlando, FL: Academic Press.
Carter, T., & Chatfield, M. (1986). Effective bilingual schools: Implications for policy and practice. American Journal of Education, 95, 200-232.
Center, Y., Wheldall, K., Freeman, L., Outhred, L., & McNaught, M. (1995). An evaluation of Reading Recovery. Reading Research Quarterly, 30, 240-263.
Chall, J. S. (1967). Learning to read. The great debate. New York: McGraw-Hill.
Chall, J. S. (1983). Stages of reading development. New York: McGraw-Hill.
Chamot, A. U. (1981). Applications of second language acquisition research to the bilingual classroom (NCBE Focus: Occasional Papers in Bilingual Education No. 8). Washington, DC: The National Clearinghouse for Bilingual Education.
Chamot, A. U. (1998). TESOL testifies on the U. S. English Fluency Act. TESOL Matters, 8(4), 1-5.
Chamot, A. U., Dale, M., O'Malley, J. M., & Spanos, G. (1992). Learning and problem solving strategies of ESL students. Bilingual Research Journal, 16(3&4), 1-33.
Chomsky, N. (1959). Review of Verbal Behavior. Language, 35, 26-58.
Christian, D. (1994). Two-way bilingual education: Students learning through two languages (Educational Practice Rep. No. 7). Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning.
Clarizio, H. F. (1982). Intellectual assessment of Hispanic children. Psychology in the Schools, 19, 61-71.
Clarke, M. A. (1978). Reading in Spanish and English: Evidence from adult ESL students. Language Learning, 29(1), 121-150.
Clarke, M. A. (1980). The short circuit hypothesis of ESL reading: When language competence interferes with reading performance. Modern Language Journal, 64, 203-209.
Clay, M. M. (1993). Reading Recovery: A guidebook for teachers in training. Portsmouth, NH: Heinemann.
Coelho, E. (1994). Social integration of immigrant and refugee children. In F. Genesee (Ed.), Educating second language children: The whole child, the whole curriculum, the whole community (pp. 301-327). New York, NY: Cambridge University Press.
Collier, V. P. (1987). Age and rate of acquisition of second language for academic purposes. TESOL Quarterly, 21, 617-641.
Collier, V. P. (1995). Acquiring a second language for school (Directions in Language and Education No. 1(4)) [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Cook, V. (1969). The analogy between first and second language learning. International Review of Applied Linguistics, 7(3), 207-216.
Cook, V. J. (1973). The comparison of language development in native children and foreign adults. International Review of Applied Linguistics in Language Teaching, 11(1), 13-28.
Corder, S. P. (1967). The significance of learners’ errors. International Review of Applied Linguistics, 5(4), 161-170.
Cummins, J. (1979a). Cognitive/academic language proficiency, linguistic interdependence, the optimum age question and some other matters. Working Papers on Bilingualism, 19, 197-205.
Cummins, J. (1979b). Linguistic interdependence and the educational development of bilingual children. Review of Educational Research, 49(2), 222-251.
Cummins, J. (1980). The construct of language proficiency in bilingual education. In J. E. Alatis (Ed.), Current issues in bilingual education. Washington, DC: Georgetown University Press.
Cummins, J. (1981). Age on arrival and immigrant second language learning in Canada: A reassessment. Applied Linguistics, 11(2), 132-149.
Cummins, J. (1984). Bilingualism and special education: Issues in assessment and pedagogy. San Diego, CA: College-Hill Press Inc.
Cummins, J. (1991). Interdependence of first- and second-language proficiency in bilingual children. In E. Bialystok (Ed.), Language processing in bilingual children (pp. 70-89). Cambridge: Cambridge University Press.
Cummins, J. (1994). The acquisition of English as a second language. In K. Spangenberg-Urbschat & R. Pritchard (Eds.), Kids come in all languages: Reading instruction for ESL students (pp. 36-62). Newark, DE: International Reading Association.
Cummins, J. (1996). Negotiating identities: Education for empowerment in a diverse society.Los Angeles, CA: California Association for Bilingual Education.
Cziko, G. A. (1978). Differences in first and second language reading: The use of syntactic, semantic and discourse constraints. Canadian Modern Language Review, 34, 473-489.
Cziko, G. A. (1992). The evaluation of bilingual education: From necessity and probability to possibility. Educational Researcher, 21(2), 10-15.
Dannoff, M. N. (1978). Evaluation of the impact of ESEA Title VII Spanish-English bilingual education programs (Technical Report). Washington, DC: American Institutes for Research.
DeFord, D. E., Pinnell, G., Lyons, C., & Place, A. W. (1990). The Reading Recovery follow-up study: Vol. II. Columbus, OH: Ohio State University.
Dehart, L., & Martinez, L. (1998). A comprehensive, additive approach to cognitive and linguistic development. Paper presented at the annual meeting of Teachers of English to Speakers of Other Languages, Seattle, WA.
Development Associates. (1984). Overview of the research design plans for the National Longitudinal Study of the Effectiveness of Services for Language Minority Students. Arlington, VA: Development Associates.
Devine, J. (1988). The relationship between general language competence and second language reading proficiency: Implications for teaching. In P. L. Carrell, J. Devine & D. E. Eskey (Eds.), Interactive approaches to second language reading (pp. 260-277). New York, NY: Cambridge University Press.
Dianda, M. R., & Flaherty, J. F. (1995). Report on workstation uses: Effects of Success for All on the reading achievement of first graders in California bilingual programs. Los Alamitos, CA: Southwest Regional Lab.
Diaz, R. M., & Klinger, C. (1991). Towards an explanatory model of the interaction between bilingualism and cognitive development. In E. Bialystok (Ed.), Language processing in bilingual children (pp. 167-192). Cambridge: Cambridge University Press.
Diaz, S., Moll, L. C., & Mehan, H. (1986). Sociocultural resources in instruction: A context-specific approach. In California State Department of Education, Beyond language: Social and cultural factors in schooling language minority children (pp. 187-230). Los Angeles, CA: California State University, Evaluation, Dissemination and Assessment Center.
Downing, J. (1979). Reading and reasoning. New York, NY: Springer-Verlag.
Dulay, H. C., & Burt, M. (1973). Should we teach children syntax? Language Learning, 24, 245-258.
Dulay, H. C., & Burt, M. (1974). Errors and strategies in child second language acquisition. TESOL Quarterly, 8, 129-136.
Dulay, H. C., & Burt, M. (1975). Creative construction in second language learning and teaching. In M. Burt & H. C. Dulay (Eds.), New directions in second language learning, teaching, and bilingual education (pp. 21-32). Washington, DC: Teachers of English to Speakers of Other Languages.
Dulay, H. C., & Burt, M. (1977). Remarks on creativity in language acquisition. In M. Burt, H. Dulay & M. Finocchiaro (Eds.), Viewpoints on English as a second language (pp. 95-126). New York: Regents.
Dulay, H., Burt, M., & Krashen, S. (1982). Language two. New York, NY: Oxford University Press.
Epstein, S. D., Flynn, D., & Martohardjono, G. (1996). Second language acquisition: Theoretical and experimental issues in contemporary research. Behavior and Brain Sciences, 19, 677-758.
Ervin-Tripp, S. M. (1973). Is second language learning like the first? Paper presented at the annual meeting of Teachers of English to Speakers of Other Languages, Puerto Rico.
Ervin-Tripp, S. M. (1974). Is second language learning like the first? TESOL Quarterly, 8, 111-127.
Escamilla, K., & Andrade, A. (1992). Descubriendo La Lectura: An application of Reading Recovery in Spanish. Education and Urban Society, 24, 212-226.
Eskey, D. E. (1986). Theoretical foundations. In F. Dubin, D. E. Eskey & W. Grabe (Eds.), Teaching second language reading for academic purposes (pp. 2-23). Reading, MA: Addison-Wesley.
Eskey, D. E., & Grabe, W. (1988). Interactive models for second language reading: Perspectives on instruction. In P. L. Carrell, J. Devine & D. E. Eskey (Eds.), Interactive approaches to second language reading (pp. 223-238). New York, NY: Cambridge University Press.
Ferriero, E., & Teberosky, A. (1982). Literacy before schooling. Portsmouth, NH: Heinemann.
Finders, M., & Lewis, C. (1998). Why some parents don't come to school. In I. Avalos Heath & C. J. Serrano (Eds.), Annual editions: Teaching English as a second language 98/99 (pp. 162-165). Guilford, CT: Dushkin/McGraw-Hill.
Flanigan, B. O. (1991). Peer tutoring and second language acquisition in the elementary school. Applied Linguistics, 12(2), 141-156.
Flesch, R. F. (1955). Why Johnny can't read. New York: Harper & Row.
Freeman, Y. S., & Freeman, D. E. (1992). Whole language for second language learners. Portsmouth, NH: Heinemann.
Galbraith, P., & Anstrom, K. (1995). Peer coaching: An effective staff development model for educators of linguistically and culturally diverse students (Directions In Language and Education No. 1(3)) [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Gandara, P. (1997). Review of the research on instruction of limited English proficient students: A report to the California legislature [On-line]. Davis, CA: University of California, Educational Policy Center, UC Linguistic Minority Research Institute. Available: http://lmrinet.gse.ucsb.edu/lepexecsum/exesumtoc.htm
Garcia, E. (1988). Effective schooling for language minority students (NCBE Focus: Occasional Papers in Bilingual Education No. 1) [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Garcia, E. (1993). Language, culture, and education. In L. Darling-Hammond (Ed.), Review of research in education (Vol. 19, pp. 51-98). Washington, DC: American Educational Research Association.
Garcia, G. E. (1994). Assessing the literacy development of second-language students: A focus on authentic assessment. In K. Spangenberg-Urbschat & R. Pritchard (Eds.), Kids come in all languages: Reading instruction for ESL students (pp. 180-205). Newark, DE: International Reading Association.
Gardner, R. C. (1983). Learning another language: A true psychological experiment. Journal of Language and Social Psychology, 2, 219-239.
Gardner, R. C. (1985). Social psychology and second language learning: The role of attitudes and motivation. London: Edward Arnold.
Gardner, R. C., & Lambert, W .E. (1959). Motivational variables in second-language acquisition. Canadian Journal of Psychology, 13, 266-272.
Gardner, R. C., & Lambert, W. E. (1972). Attitudes and motivation in second language learning. Rowley, MA: Newbury House Publishers.
Genesee, F. (1987). Learning through two languages: Studies of immersion and bilingual education. New York: Newbury House.
Genesee, F. (1992). Second/foreign language immersion and at-risk English speaking children. Foreign Language Annals, 25, 199-213.
Genesee, F. (Ed.). (1994a). Educating second language children: The whole child, the whole curriculum, the whole community. New York, NY: Cambridge University Press.
Genesee, F. (1994b). Integrating language and content: Lessons from immersion (Educational Practice Rep. No. 11). Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning.
Genesee, F., & Hamayan, E. V. (1994). Classroom-based assessment. In F. Genesee (Ed.), Educating second language children: The whole child, the whole curriculum, the whole community (pp. 212-239). New York, NY: Cambridge University Press.
Gibbons, P. (1991). Learning to learn in a second language. New Town, Australia: Primary English Teaching Association.
Glenn, C. L. (no date). What does the National Research Council study tell us about educating language minority children? [On-line]. Amherst, MA: The READ Institute. Available: http://www.ceousa.org/nrc.html
Goldenberg, C. (1984). Roads to reading: Studies of Hispanic first graders at risk for reading failure. Unpublished doctoral dissertation, University of California, Los Angeles.
Goldenberg, C. (1990). Beginning literacy instruction for Spanish-speaking children. Language Arts, 87, 590-598.
Goldenberg, C. (1991). Instructional conversations and their classroom application, (Educational Practice Rep. No. 2). Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning.
Goldenberg, C., & Gallimore, R. (1991). Local knowledge, research knowledge, and educational change: A case study of first-grade Spanish reading improvement. Educational Researcher, 20(8), 2-14.
Goldenberg, C., & Patthey-Chavez, G. (1995). Discourse processes in instructional conversations: Interactions between teacher and transition readers. Discourse Processes, 19(1), 57-73.
Goldenberg, C., & Sullivan, J. (1994). Making change happen in a language-minority school: A search for coherence (Educational Practice Rep. No. 13). Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning.
Greene, J. P. (1998). A meta-analysis of the effectiveness of bilingual education [On-line]. Austin, TX: University of Texas, Department of Government, Public Policy Clinic.
Guild, P. (1998). The culture/learning style connection. In I. Avalos Heath & C. J. Serrano (Eds.), Annual editions: Teaching English as a second language 98/99 (pp. 102-106). Guilford, CT: Dushkin/McGraw-Hill.
Hakuta, K. (1986). Mirror of language: The debate on bilingualism. New York: Basic Books.
Hakuta, K. (1987). Degree of bilingualism and cognitive ability in mainland Puerto Rican children. Child Development, 58, 1372-1388.
Hakuta, K. (1988). A case study of a Japanese child learning ESL. In E. Tarone (Ed.), Variations in interlanguage. London: Edward Arnold.
Hakuta, K., & D’Andrea, D. (1992). Some properties of bilingual maintenance and loss in Mexican background high-school students. Applied Linguistics, 13 (1), 72-99.
Hall, W. S., Nagy, W. E., & Linn, R. (1984). Spoken words: Effects of situation and social group on oral word usage and frequency. Hillsdale, NJ: Erlbaum.
Harley, B., & Wang, W. (1997). The critical period hypothesis: Where are we now? In A. M. B. de Groot & J. F. Kroll (Eds.), Tutorials in bilingualism: Psycholinguistic perspectives. Hillsdale, NJ: Erlbaum.
Henderson, R. W., & Landesman, E. M. (1992). Mathematics and middle school students of Mexican descent: The effects of thematically integrated instruction (Research Rep. No. 5). Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning.
Hiebert, E. H. (1994). Reading Recovery in the United States: What difference does it make to an age cohort? Educational Researcher, 23(9), 15-25.
Hiebert, E. H. (1996). Revisiting the question: What difference does Reading Recovery make to an age cohort? Educational Researcher, 25(7), 26-28.
Hymes, D. (1972). On communicative competence. In J. B. Pride & J. Holmes (Eds.), Sociolinguistics (pp. 269-293). Harmondsworth, England: Penguin Books.
Jacob, E., & Jordan, C. (1987). Explaining the school performance of minority students. Anthropology and Education Quarterly, 18(4).
Jacob, E., Rottenberg, L., Patrick, S., & Wheeler, E. (1996). Cooperative learning: Context and opportunities for acquiring academic English. TESOL Quarterly, 30, 253-280.
Jusczyk, P. W., Friederici, A. D., Wessels, J. M. I., Svenkerud, V. Y., & Jusczyk, A. M. (1993). Infants' sensitivity to the sound patterns of native language words. Journal of Memory and Language, 32, 402-420.
Kagan, S. (1986). Cooperative learning and sociocultural factors in schooling. In California State Department of Education, Beyond language: Social and cultural factors in schooling language minority students (pp. 231-298). Los Angeles, CA: California State University, Evaluation, Dissemination and Assessment Center.
Kamhi-Stein, L. D. (1998). Reading in two languages: Profiles of “underprepared” native Spanish-speaking freshmen. Paper presented at the annual meeting of Teachers of English to Speakers of Other Languages, Seattle.
Kelly, P. R., Gomez-Valdez, C., Klein, A. F., & Neal, J. C. (1995). Progress of first and second language learners in an early intervention program. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Klein, W. (1986). Second language acquisition. Cambridge, England: Cambridge University Press.
Krashen, S. (1978). The monitor model for second language acquisition. In R. C. Gingras (Ed.), Second language acquisition and foreign language teaching. Center for Applied Linguistics.
Krashen, S. D. (1982). Principles and practice in second language acquisition. Oxford: Pergamon Press.
Krashen, S. D. (1991). Sheltered subject matter teaching. Cross Currents, 18, 183-189.
Krashen, S. D. (1998). A gradual exit, variable threshold model for limited English proficient children. In I. Avalos Heath & C. J. Serrano (Eds.), Annual editions: Teaching English as a second language 98/99 (pp. 199-203). Guilford, CT: Dushkin/McGraw-Hill.
Krashen, S. D., & Biber, D. (1988). On course: Bilingual education's success in California. Ontario, CA: California Association of Bilingual Education.
Krashen, S., Long, M., & Scarcella, R. (1979). Age, rate and eventual attainment in second language acquisition. TESOL Quarterly, 13, 573-582.
Krashen, S., Scarcella, R., & Long, M. (Eds.). (1982). Child-adult differences in second language acquisition. Rowley, MA: Newbury House.
Krashen, S. D., & Terrell, T. D. (1983). The natural approach: Language acquisition in the classroom. Hayward, CA: Alemany Press.
Kreeft Peyton, J. (1986). Literacy through written interaction. Passage: A Journal for Refugee Education, 2(1), 24-29.
Kreeft Peyton, J. (1987). Dialogue journal writing with limited-English-proficient (LEP) students. Q & A. Washington, DC: Center for Applied Linguistics.
Lambert, W. E. (1967). A social psychology of bilingualism. Journal of Social Issues, 23, 91-109.
Lambert, W. E. (1975). Culture and language as factors in learning and education. In A. Wolfgang (Ed.), Education of immigrant students. Toronto: Ontario Institute for Studies in Education.
Lambert, W. E., & Tucker, G. R. (1972). Bilingual education of children: The St. Lambert experiment. Rowley, MA: Newbury House Publishers.
Lee, J., & Schallert, D. L. (1997). The relative contribution of L2 language proficiency and L1 reading ability to L2 reading performance: A test of the threshold hypothesis in an EFL context. TESOL Quarterly, 31(4), 713-739.
Legaretta, D. (1979). The effects of program models on language acquisition by Spanish-speaking children. TESOL Quarterly, 8, 521-576.
Leighton, M. S., Hightower, A. M., & Wrigley, P. G. (1995). Model strategies in bilingual education: Professional development [On-line]. Washington, DC: U.S. Department of Education, Office of Bilingual and Minority Language Affairs.
Lenneberg, E. (1967). Biological foundations of language. New York, NY: John Wiley & Sons.
Lindholm, K. J. (1991). Theoretical assumptions and empirical evidence for academic achievement in two languages. Hispanic Journal of Behavioral Sciences, 13, 3-17.
Long, M. H. (1990). The least a second language acquisition theory needs to explain. TESOL Quarterly, 24, 649-666.
Lucas, T., & Katz, A. (1994). Reframing the debate: The roles of native languages in English-only programs for language minority students. TESOL Quarterly, 28, 537-561.
McLaughlin, B. (1978). Second language acquisition in childhood. Hillsdale, NJ: Lawrence Erlbaum Associates.
McLaughlin, B. (1992). Myths and misconceptions about second language learning: What every teacher needs to unlearn (Educational Practice Rep. No. 5) [On-line]. Washington, DC: National Center for Research on Cultural Diversity and Second Language Learning. Available: http://www.ncbe.gwu.edu/miscpubs/ncrcdsll/epr5.htm
Met, M. (1994). Teaching content through a second language. In F. Genesee (Ed.), Educating second language children: The whole child, the whole curriculum, the whole community (pp. 159-182). New York, NY: Cambridge University Press.
Meyer, M. M., & Fienberg, S. E. (Eds.). (1992). Assessing evaluation studies: The case of bilingual education strategies (Panel to Review Evaluation Studies of Bilingual Education, Committee on National Statistics, National Research Council). Washington, DC: National Academy Press.
Milon, J. P. (1974). The development of negation in English by a second language learner. TESOL Quarterly, 8, 137-143.
Miramontes, O. B., Nadeau, A., & Commins, N. L. (1997). Restructuring schools for linguistic diversity: Linking decision making to effective programs. New York, NY: Teachers College Press.
Moll, L. C. (1986). Writing as communication: Creating strategic learning environments for students. Theory Into Practice, 25, 102-108.
Moll, L. C. (1988). Some key issues in teaching Latino students. Language Arts, 65, 465-472.
Muniz-Swicegood, M. (1994). The effects of metacognitive reading strategy training on the reading performance and student reading analysis strategies of third grade bilingual students. Bilingual Research Journal, 18(1&2), 83-97.
Natalicio, D., & Natalicio, L. (1971). A comparative study of English pluralization by native and non-native English speakers. Child Development, 42.
Nemser, W. (1971). Approximative systems of foreign language learners. International Review of Applied Linguistics, 9(2), 115-123.
Oakes, J. (1986). Tracking, inequality, and the rhetoric of school reform: Why schools don't change. Journal of Education, 168, 61-80.
Office of Superintendent of Public Instruction, State of Washington. (1998a). Report to the Legislature: English-as-a-second language/transitional bilingual program funding formula (1997-99 operating budget). Olympia, WA: Author.
Office of Superintendent of Public Instruction, State of Washington. (1998b). Research into practice: An overview of reading research for Washington State. Olympia, WA: Author.
Office of Superintendent of Public Instruction, State of Washington. (1998c). Washington State transitional bilingual instruction program: End-of-year evaluation report (1996-97 program year and 1985-1997 program trends). Olympia, WA: Author.
Ogbu, J. (1974). The Next generation: An ethnography of education in an urban neighborhood. New York, NY: Academic Press.
Ogbu, J. (1982). Cultural discontinuities and schooling. Anthropology and Education Quarterly, 13, 290-307.
Oller, J. W., Jr. (1981). Language as intelligence. Language Learning, 31, 465-492.
O'Malley, J. M., & Valdez Pierce, L. (1998). Moving toward authentic assessment. In I. Avalos Heath & C. J. Serrano (Eds.), Annual editions: Teaching English as a second language 98/99 (pp. 133-138). Guilford, CT: Dushkin/McGraw-Hill.
Pease-Alvarez, C., & Vasquez, O. (1994). Language socialization in ethnic minority communities. In F. Genesee (Ed.), Educating second language children: The whole child, the whole curriculum, the whole community (pp. 82-102). New York, NY: Cambridge University Press.
Pease-Alvarez, L., Garcia, E. E., & Espinosa, P. (1991). Effective instruction for language-minority students: An early childhood case study. Early Childhood Research Quarterly, 6, 347-361.
Perez, B., & Torres-Guzman, M. E. (1996). Learning in two worlds: An integrated Spanish/English biliteracy approach (2nd ed.). White Plains, NY: Longman.
Pinnell, G. S., Lyons, C. A., DeFord, D. E., Bryk, A. S., & Seltzer, M. (1994). Comparing instructional models for the literacy education of high-risk first graders. Reading Research Quarterly, 29, 8-38.
Pinnell, G. S., Lyons, C., & Jones, N. (1996). Response to Hiebert: What difference does Reading Recovery make? Educational Researcher, 25(7), 23-25.
Porter, R. B. (1990). Forked tongue: The politics of bilingual education. New York: Basic Books.
Purcell-Gates, V. (1996). Process teaching with explicit explanation and feedback in a university-based clinic. In E. McIntryre & M. Pressley (Eds.), Balanced instruction: Strategies and skills in whole language. Norwood, MA: Christopher-Gordon.
Ramirez, D. J. (1992). Executive summary of volumes I and II of the final report: National longitudinal study of structured-English immersion strategy, early-exit and late-exit transitional bilingual education programs for language-minority children. Bilingual Research Journal, 16(1&2), 1-62.
Ramirez, D. J., Yuen, S. D., Ramey, D. R., & Pasta, D. J. (1991). Final report: National longitudinal study of structured-English immersion strategy, early-exit and late-exit transitional bilingual education programs for language-minority children, vol. I and II, (Technical Report). San Mateo, CA: Aguirre International.
Rasinski, T. V. (1995). Reply to Pinnell, DeFord, Lyons, and Bryk. Reading Research Quarterly, 30, 276-277.
Ravem, R. (1968). Language acquisition in a second language environment. International Review of Applied Linguistics in Language Teaching, 6, 175-185.
Ravem, R. (1974). The development of wh- questions in first and second language language learners. In J. C. Richards (Ed.), Error analysis: Perspectives in second language learning. London: Longman.
Rigg, P. (1981). Beginning to read in English: The Language Experience Approach. In C. W. Twyford, W. Diehl & K. Feathers (Eds.), Reading English as a second language: Moving from theory (Monographs in Language and Reading Studies) (pp. 80-91). Bloomington, IN: Indiana University.
Rosebery, A. S., Warren, B., & Conant, F. R. (1992). Appropriating scientific discourse: Findings from language minority classrooms. The Journal of the Learning Sciences, 2(1), 61-94.
Rossell, C. H., & Baker, K. (1996). The educational effectiveness of bilingual education. Research in the Teaching of English, 30(1), 7-74.
Rossell, C. H., & Ross, J. M. (1986). The social science evidence on bilingual education. Journal of Law and Education, 15, 385-419.
Rumelhart, D. E. (1977). Toward an interactive model of reading. In S. Dornic (Ed.), Attention and performance: Vol. VI (pp. 573-603). New York, NY: Academic Press.
Rumelhart, D. E. (1980). Schemata: The building blocks of cognition. In R. J. Spiro, B. C. Bruce & W. F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 33-58). Hillsdale, NJ: Erlbaum.
Sakash, K., & Rodriguez-Brown, F. V. (1995). Teamworks: Mainstream and bilingual/ESL teacher collaboration (NCBE Program Information Guide Series No. 24) [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Sampson, G. P., & Richards, J. C. (1973). Learner language systems. Language Sciences, 19, 18-25.
Scarcella, R. (1990). Teaching language minority students in the multicultural classroom. Englewood Cliffs, NJ: Prentice-Hall.
Scarcella, R., & Higa, C. (1982). Input and age differences in second language acquisition. In S. Krashen, R. Scarcella & M. Long (Eds.), Child-adult differences in second language acquisition (pp. 175-201). Rowley, MA: Newbury House.
Seliger, H. (1988). Psycholinguistic issues in second language acquisition. In B. Corden & M. Leslie (Eds.), Issues in second language acquisition. New York, NY: Newbury House.
Selinker, L. (1972). Interlanguage. International Review of Applied Linguistics in Language Teaching, 10, 209-231.
Shanahan, T. (1987). Review of The early detection of reading difficulties. Journal of Reading Behavior, 19, 117-119.
Shanahan, T., & Barr, R. (1995). Reading Recovery: An independent evaluation of the effects of an early instructional intervention for at-risk learners. Reading Research Quarterly, 30, 958-996.
Sharan, S., Bejarano, Y., Kussell, P., & Peleg, R. (1984). Achievement in English language and in literature. In S. Sharan, P. Kussell, R. Hertz-Lazarowitz, Y. Bejarano, S. Raviv & Y. Sharan (Eds.), Cooperative learning in the classroom: Research in desegregated schools (pp. 46-72). Hillsdale, NJ: Lawrence Erlbaum.
Short, D. J. (1994). Expanding middle school horizons: Integrating language, culture and social studies. TESOL Quarterly, 28, 581-608.
Slavin, R. (1995). Cooperative learning: Theory, research, and practice. Englewood Cliffs, NJ: Prentice Hall.
Slavin, R. E., Karweit, N. L., Wasik, B. A., Madden, N. A., & Dolan, L. J. (1994). Success for All: A comprehensive approach to prevention and early intervention. In R. E. Slavin, N. L. Karweit & B. A. Wasik (Eds.), Preventing early school failure: Research, policy, and practice (pp. 175-205). Needham Heights, MA: Simon & Schuster.
Slavin, R. E., & Madden, N. A. (1995). Effects of Success for All on the achievement of English language learners. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.
Slavin, R. E., Madden, N. A., Dolan, L. J., & Wasik, B. A. (1996). Every child, every school: Success for All. Thousand Oaks, CA: Corwin Press.
Slavin, R. E., & Yampolsky, R. (1992). Success for All: Effects on students with limited English proficiency: A three-year evaluation. Baltimore, MD: Johns Hopkins University, Center for Research on Effective Schooling for Disadvantaged Students.
Slobin, D. I. (1973). Cognitive prerequisites for the development of grammar. In C. Ferguson & D. Slobin (Eds.), Studies of child development (pp. 145-208). New York, NY: Holt, Rinehart & Winston.
Smith, L. J., Ross, S. M., & Casey, J. (1996). Multi-site comparison of the effects of Success for All on reading achievement. Journal of Literacy Research, 28, 329-353.
Snow, C. E. (1987). Relevance of the notion of a critical period to language acquisition. In M. Bornstein (Ed.), Sensitive periods in development (pp. 183-209). Hillsdale, NJ: Erlbaum.
Snow, C. E. (1990). Rationales for native language instruction: Evidence from research. In A. M. Padilla, H. H. Fairchild & C. M. Valadez (Eds.), Bilingual education: Issues and strategies. Newbury Park, CA: Sage.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children (Prepublication Copy). Washington, DC: National Academy Press.
Stanovich, K. E. (1986). Toward an interactive-compensatory model of individual differences in the development of reading fluency. Reading Research Quarterly, 16, 32-71.
Studdert-Kennedy, M. (1986). Sources of variability in early speech development. In J. S. Perkell & D. H. Klatt (Eds.), Invariance and variability in speech processes. Hillsdale, NJ: Erlbaum.
Tabors, P. O., & Snow, C. E. (1994). English as a second language in preschool programs. In F. Genesee (Ed.), Educating second language children: The whole child, the whole curriculum, the whole community (pp. 103-125). New York, NY: Cambridge University Press.
Tharp, R. G. (1982). The effective instruction of comprehension: Results and description of the Kamehameha Early Education Program. Reading Research Quarterly, 17, 503-527.
Tharp, R. G. (1989). Psychocultural variables and constants: Effects on teaching and learning in schools. American Psychologist, 44, 349-359.
Thomas, W. P. (1992). An analysis of the research methodology of the Ramirez Study. Bilingual Research Journal, 16(1-2), 213-245.
Thomas, W. P., & Collier, V. (1997). School effectiveness for language minority students [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education. Available: http://www.ncbe.gwu.edu/ncbepubs/resource/effectiveness/index.html
Thomas, W. P., & Collier, V. (1998). Language-minority student achievement and program effectiveness. In I. Avalos Heath & C. J. Serrano (Eds.), Annual editions: Teaching English as a second language 98/99 (pp. 188-190). Guilford, CT: Dushkin/McGraw-Hill.
Tikunoff, W. J. (1983). An emerging description of successful bilingual instruction: Executive summary of part I of the SBIF Study. San Francisco, CA: Far West Laboratory for Educational Research and Development.
Tinajero, J. V., & Ada, A. F. (Eds.). (1993). The power of two languages: Literacy and biliteracy for Spanish-speaking students. New York: Macmillan/McGraw-Hill.
Uranga, N. (1995). A staff development model for a multicultural society. The Journal of Educational Issues of Language Minority Students, 15(winter), [On-line].
U.S. Department of Education, Planning and Evaluation Service, Office of the Under Secretary. (1997). Urban and suburban/rural: Special strategies for educating disadvantaged children [Final report]. Cambridge, MA: Abt Associates.
Ventriglia, L. (1982). Conversations of Miguel and Maria: How children learn English as a second language, implications for classroom teaching (Second Language Professional Library). Reading, MA: Addison-Wesley Publishing.
Violand-Sanchez, E., Sutton, C. P., & Ware, H. W. (1991). Fostering home-school cooperation: Involving language minority families as partners in education (NCBE Program Information Guide Series No. 6) [On-line]. Washington, DC: The National Clearinghouse for Bilingual Education.
Walley, A. C. (1993). The role of vocabulary development in children's spoken word recognition and segmentation ability. Developmental Review, 13, 286-350.
Wasik, B. A., & Slavin, R. E. (1993). Preventing early reading failure with one-to-one tutoring: A review of five programs. Reading Research Quarterly, 28, 179-200.
Weinstein, G. (1984). Literacy and second language acquisition: Issues and perspectives. TESOL Quarterly, 18(3), 471-484.
White, T. G., Graves, M. F., & Slater, W. H. (1990). Growth of reading vocabulary in diverse elementary schools: Decoding and word meaning. Journal of Educational Psychology, 82, 281-290.
Willig, A. C. (1985). A meta-analysis of selected studies on the effectiveness of bilingual education. Review of Educational Research, 55(3), 269-317.
Wong Fillmore, L. (1978). ESL: A role in bilingual education. Paper presented at the ninth annual California TESOL conference.
Wong Fillmore, L. (1985). When does teacher talk work as input? In S. Gass & C. Madden (Eds.), Input in second language acquisition (pp. 17-50). New York, NY: Newbury House.
Wong Fillmore, L. (1991a). Language and cultural issues in early education. In S. L. Kagan (Ed.), The care and education of America's young children: Obstacles and opportunities, the 90th yearbook of the National Society for the Study of Education (pp. 1-18). Chicago, IL: University of Chicago Press.
Wong Fillmore, L. (1991b). When learning a second language means losing the first. Early Childhood Research Quarterly, 6, 323-346.
Wong Fillmore, L., Ammon, P., McLaughlin, B., & Ammon, M. (1985). Learning English through bilingual instruction. Final Report. Berkeley, CA: University of California.
Wong Fillmore, L., & Valadez, C. (1986). Teaching bilingual learners. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 648-685). New York: Macmillan.
Yorio, C. A. (1971). Some sources of reading problems in foreign language learners. Language Learning, 21(1), 107-115.
Zanger, V. V. (1991). Social and cultural dimensions of the education of language minority students. In A. N. Ambert (Ed.), Bilingual education and English as a second language: A research handbook, 1988-1990 (pp. 3-54). New York, NY: Garland Publishing.
1 This document does not provide an extended investigation of the theory and research on reading development and pedagogy in general. Although such research and theory was used to inform much of this document, we focus our attention on those issues that are particularly salient to ELLs learning to read in English in U.S. public schools. For a detailed discussion of reading development and pedagogy more generally, see Office of Superintendent of Public Instruction (1998b) and other related publications.
2 Primary acquisition age refers to the period between birth and the onset of puberty, during which many researchers and theorists consider children to be natural language acquirers. For a more detailed discussion of this subject, see Chapter One of this document.
3 Primary acquisition age refers to the period between birth and the onset of puberty, during which many researchers and theorists consider children to be natural language acquirers. For a more detailed discussion of this subject, see Chapter One of this document.
4 It should be noted that, unlike the acquisition of grammatical structures, vocabulary size and knowledge continue to develop throughout the learner's entire life and are not considered "complete" in the way that knowledge of syntactic forms is.
5 A target language is a language that a learner is trying to acquire or learn.
6 See Brown (1973), Dulay and Burt (1973), Milon (1974), Natalicio and Natalicio (1971) and Ravem (1968) for other studies in this area.
7 ELLs start at half a standard deviation behind the native speakers of English (Thomas & Collier, 1997).
8 August and Hakuta (1998) also express concern with regard to standards-based assessments. They warn that ELLs may take more time to meet the predetermined district or state standards and that additional benchmarks may need to be developed to assess the progress that ELLs are making toward meeting these standards.
9 In a series of morpheme order studies, Dulay and Burt reported the exact order in which children and adults acquire eleven important English morphemes.
10 The ability of ELLs to acquire BICS in a relatively short period of time has routinely led to the misconception that these children can acquire the language skills necessary to participate in mainstream classes without additional support in 1-2 years.
11 This table is based on the one provided by A. U. Chamot (1981). Reproduced by permission of the author.
12 See Krashen (1978).
13 See Cummins (1980).
14 See Bloom and Krathwohl (1977).
15 Bialystok and Hakuta (1994), Collier (1987), Epstein, et al. (1996), Harley and Wang (1997), Krashen, et al. (1982), Long (1990) and Snow (1987) (as cited by August and Hakuta, 1997) reviewed the research literature and found that the claim that children are more proficient at second-language acquisition than older individuals is not well supported.
16 Children are natural language acquirers prior to the onset of puberty. During this period children’s language development is a subconscious and spontaneous process. They learn language actively and are motivated to communicate by the desire to bring meaning and purpose to social situations.
17 It needs to be stressed that individual learner differences do account for variations in acquisition timetables.
18 Rossell and Baker (1996) and Porter (1990) disagree with these researchers.
19 August and Hakuta (1997) also observe that, although there is a critical period in learning a first language, this theory does not necessarily suggest that there is a critical period for second language learning.
20 The 50th percentile or normal curve equivalent (NCE) on standardized norm-referenced tests is the criterion for normal academic achievement of native speakers of English.
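To make the benchmarks in footnotes 7 and 20 concrete, the sketch below (an illustrative aid added here, not part of the cited studies) converts between percentile ranks and normal curve equivalents. NCEs are defined on a normal scale with a mean of 50 and a standard deviation of 21.06, so that NCE 1, 50, and 99 coincide with the 1st, 50th, and 99th percentiles.

    # Illustrative only: percentile rank <-> normal curve equivalent (NCE),
    # assuming normally distributed scores; NCEs have mean 50 and SD 21.06.
    from statistics import NormalDist

    _std = NormalDist()  # standard normal distribution

    def percentile_to_nce(percentile):
        z = _std.inv_cdf(percentile / 100.0)   # z-score for the given percentile rank
        return 50 + 21.06 * z

    def nce_to_percentile(nce):
        return 100 * _std.cdf((nce - 50) / 21.06)

    print(round(percentile_to_nce(50), 1))     # 50.0 -- the criterion in footnote 20
    # A student starting half a standard deviation behind the mean (z = -0.5), as in footnote 7:
    print(round(100 * _std.cdf(-0.5), 1))      # about the 30.9th percentile
    print(round(50 + 21.06 * -0.5, 1))         # about NCE 39.5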
21 Note the instruction is exclusively in English.
22 For further discussion, see Thomas and Collier (1997) and Collier (1987).
23 Integrative orientation refers to reasons for learning the second language, reflecting an interest in forming a closer liaison with the target language community.
24 Instrumental orientation refers to reasons for learning the second language, emphasizing pragmatic considerations and appearing to distance the learner from social-emotional contact with the other community.
25 Some studies have shown that integratively orientated individuals are more highly motivated than instrumentally orientated ones (Gardner & Lambert, 1959). However, Gardner (1985) asserts that it is possible for instrumentally orientated individuals to demonstrate high levels of motivation.
26 It should be noted that the majority of studies regarding the role of attitude and motivation in language learning have been conducted in foreign language classrooms.
27 An immersion program is one in which students are exposed to instruction in an L2 for a substantial portion of the day.
28 Clarizio (1982) opposes this view.
29 An integrative orientation is one that reflects an interest in forming a closer liaison with the target language community.
30 These learning style typologies are not considered mutually exclusive.
31 Metalinguistic awareness is the conscious linguistic knowledge of the rules and forms of the language.
32 For more details, see Alderson (1984).
33 It should be noted that the majority of research in this area has been conducted in reference to adult students learning foreign languages.
34 There is evidence that when reading in an L2, good L1 readers have an advantage over poor L1 readers of the same L2 proficiency level. This suggests that poor L1 readers will probably be poor L2 readers.
35 According to Lee and Schallert (1997), the threshold level is likely to vary from task to task and from reader to reader. It is important that educators note this conclusion when selecting reading materials for second language learners.
36Phonemic awareness is "the insight that every spoken word can be conceived as a sequence of phonemes. Because phonemes are the units of sound that are represented by the letters of an alphabet, an awareness of phonemes is key to understanding the logic of the alphabetic principle and thus to the learnability of phonics and spelling" (Snow, et al., 1998, p.52).
37 Phonological awareness is "a more inclusive term than phonemic awareness and refers to the general ability to attend to the sounds of language as distinct from its meaning. Phonemic awareness generally develops through other less subtle levels of phonological awareness. Noticing similarities between words in their sounds, enjoying rhymes, counting syllables, and so forth are indications of such 'metaphonological' skill" (Snow, et al., 1998, p.52).
38 For a detailed analysis of how native speakers of Spanish learn how to read in Spanish, see Ferreiro and Teberosky (1982).
39 For further details on this issue, see Snow, et al. (1998).
40 For further discussion of this mismatch between oral language and school vocabulary, see Hall, Nagy and Linn (1984).
41 For detailed information on schema theory and ESL reading, see Carrell and Eisterhold (1983).
42 For further information, see Rigg (1981).
43 Reprinted with permission from Preventing Reading Difficulties in Young Children. Copyright 1998 by the National Academy of Sciences. Courtesy of the National Academy Press, Washington, D.C.
44 This chapter is substantially based on the work of August and Hakuta (1997).
45 See Appendix A for an overview of Washington State's programs for the education of ELLs.
46 The definitions provided in Box 3:1 are a synthesis of those employed by August and Hakuta (1997) and Thomas and Collier (1997).
47 To be included in Baker and de Kanter's review, a study essentially had to either employ random assignment of children to treatment and control groups or take measures to ensure that treatment and control groups were equivalent.
48 To be considered methodologically acceptable, studies had to randomly assign students to programs or to statistically control for pretreatment differences between groups when random assignment was not possible.
49 Willig (1985) only incorporated 16 of the 28 studies reviewed by Baker and de Kanter (1981). The rationale provided for excluding these 12 studies was: three analyzed programs outside the U.S.; one was a synthesis of studies, not primary research; one evaluated a program that took place outside the regular school day; and the final seven lacked sufficient data to perform the necessary calculations.
50 Willig did not compare the effects of early-exit bilingual education with those of other special programs (such as structured immersion). In part this is because neither she nor Baker and de Kanter (1981) could find many evaluations at that time which made such comparisons (August & Hakuta, 1997).
51 One possible source of contention over Greene (1998) is the fact that his analysis included only 11 of the 75 studies encompassed by Rossell and Baker (1996). According to Greene, most of these were excluded because they: lacked adequate control groups, were separately released reports of the same programs by the same authors, or inadequately controlled for differences between treatment and control groups when randomized assignment was not employed.
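The "necessary calculations" mentioned in footnote 49 are effect-size estimates, the quantities that a meta-analysis such as Willig (1985) or Greene (1998) aggregates across program evaluations. As a rough sketch (not a reconstruction of either author's actual procedure, and using made-up numbers), a standardized mean difference can be computed from group means and standard deviations as follows:

    # Illustrative sketch: Cohen's d, a standardized mean-difference effect size.
    # The group sizes, means, and SDs below are hypothetical, not data from any study.
    import math

    def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        # Pool the standard deviations of the treatment and control groups
        pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
        return (mean_t - mean_c) / pooled_sd

    # e.g., a bilingual-program group vs. a comparison group on a reading measure
    d = cohens_d(52.0, 20.0, 60, 47.0, 21.0, 55)
    print(round(d, 2))  # a positive d favors the program group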
52 Figure 3:1 represents a synthesis of the test scores of 42,317 students who were tracked in overlapping 4 to 8 year longitudinal cohorts (Thomas & Collier, 1997).
53 The length of student participation in these programs varied according to program type. This could be a minimum of 2 years (e.g., ESL pullout) to a maximum of 7 years (e.g., late-exit bilingual education). The failure of Thomas and Collier (1997) to control for this variable is appropriate since the length of student participation is a defining characteristic of these programs. For example, rapid transition of students to mainstream instruction is a goal of early-exit bilingual education.
54 Experimental mortality is when the sample being studied shrinks through attrition (e.g., the students in a study's sample group are lost due to school transfer, death, etc.).
55 See Appendix A for an overview of Washington State's programs for the education of ELLs.
56 Not all studies in this synthesis focus on ELLs specifically, but instead address the broader category of language-minority students.
57 Though not as frequently cited as the attributes in the above list, some studies reviewed in this section also mention the importance of informing instruction through ongoing classroom-level assessment of students' progress and needs (Moll, 1988; Tharp, 1982; Thomas & Collier, 1997). (For an extended discussion of classroom-based assessment of ELLs see Genesee & Hamayan, 1994; see also Garcia, 1994.) In addition, studies by Berman, Minicucci, McLaughlin, Nelson, and Woodworth (1995), Slavin and Madden (1995) and Slavin and Yampolsky (1992) note the importance of collaboration between all of a school's teachers involved in educating language-minority students (e.g., mainstream classroom teachers, tutors, and bilingual/ESL staff). Though again not as routinely cited in the studies reviewed in this section, the need for such collaboration is often discussed throughout the literature on educating ELLs (e.g., Sakash & Rodriguez-Brown, 1995).
58 See August and Hakuta (1997) for a discussion of the relative strengths and weaknesses of these designs in the area of educational research.
59 These studies are limited to those that involve students attending elementary or middle school.
60 Lucas and Katz (1994) is an exception. In their discussion of effective Special Alternative Instructional Programs for ELLs, the isolation of these programs was prominent. In one district, each program "was housed at a school site but operated as an individual educational unit, physically separated from the rest of the school…" (Lucas & Katz, 1994, p.546). In another district, the program was housed in a central location, to which ELL students were bused and in which they spent half their school day.
61 For an expanded discussion of how teachers can modify their classrooms to be more compatible with the cultures of their students see Tikunoff (1983).
62 It is important to remember that what constitutes culturally compatible instructional approaches may vary significantly between different ethnic minority groups (Wong Fillmore, et al., 1985). Furthermore, it should also be remembered that within a group the variations among individuals are as great as their commonalties (Guild, 1998). Informing instruction through an understanding of cultural differences, though valuable, should of course not lead to a stereotyping of the needs and abilities of individual students.
63 Met (1994) states that appropriate modification of teacher speech includes: speaking more slowly; emphasizing key words or phrases; simplifying language by using more common vocabulary or simpler, high frequency grammatical structures; restating, repeating, and paraphrasing, since redundancy provides additional supports for meaning; providing definition through exemplification; the use of synonyms to link new vocabulary with known words; and the use of antonyms to provide counterexamples to meaning.
64 See Appendix B for a detailed discussion of how to structure a program in order to most effectively provide ELLs with cognitively complex, on-grade-level instruction.
65 Content-based ESL has also been shown to support second-language acquisition (Genesee, 1994b; Krashen, 1991). For an extended discussion of content-based ESL see Burkart and Sheppard (1995).
66 Berman, et al. (1995) and Moll (1988) are an exception. In these studies the successful academic achievement of language-minority students was correlated with an exclusive emphasis on holistic, meaning based instruction.
67 Studies have shown that cooperative learning supports the second-language acquisition process of students (Bejarano, 1987; Cohen, DeAvila, & Intiti, 1981, as cited in Kagan, 1986; Sharan, et al., 1984). Studies have also provided indirect evidence that peer tutoring (Flanigan, 1991) and instructional conversations (Goldenberg & Patthey-Chavez, 1995) do so as well; when contrasted with traditional teacher-fronted, one-way instruction, these techniques were shown to be richer in the types of linguistic interaction believed to support language acquisition. Finally, theorists have argued that the use of dialogue journals is also a superior means of providing ELLs with exposure to the meaning-focused use of English (e.g., Kreeft Peyton, 1986, 1987).
68 Researchers have noted that language-minority parents (especially recent immigrants) often face formidable social, cultural, linguistic and economic barriers to involvement in school activities. For a discussion of these barriers, as well as how schools can accommodate the needs of language-minority parents, see for example Bermudez and Marquez (1996), Coelho (1994), Finders and Lewis (1998), Miramontes, Nadeau, and Commins (1997) and Violand-Sanchez, Sutton, and Ware (1991).
69 Alternative assessment refers to any method of finding out what a student knows or can do that is intended to show growth and inform instruction, and is an alternative to traditional forms of testing (i.e., multiple-choice tests) (O'Malley & Valdez Pierce, 1998). Authentic assessment refers to methods for evaluating student learning, achievement, motivation, and attitudes in regard to instructionally relevant classroom activities (O'Malley & Valdez Pierce, 1998). Examples include performance-based assessment, portfolios, and student self-assessment.
70 These definitions are a synthesis of those provided by the Office of Superintendent of Public Instruction (1998a) and the Office of Superintendent of Public Instruction (1998c).
71 These definitions are a synthesis of those provided by the Office of Superintendent of Public Instruction (1998a) and the Office of Superintendent of Public Instruction (1998c).
72 For an expanded discussion of how to develop an instructional program for ELLs that maximizes resources and personnel see Miramontes, Nadeau, and Commins (1997).
73 The framework presented in this appendix for the development of effective programs for ELLs is also similar to the one endorsed by the Multilingual Education Department of the Dallas Public Schools (Dehart & Martinez, 1998).
74 Concurrent translation is a practice in which a teacher speaks in one language and then immediately translates what was said into a second language. The use of concurrent translation in instruction is criticized as failing to facilitate second-language acquisition, since children are not compelled to attend to what is being said in the language they are less fluent in (see for example Legaretta, 1979, and Wong Fillmore, 1985).
75 This table is a replication of the one presented in Krashen (1996).
76 Research suggests that two-way developmental bilingual programs benefit both native-English speakers and ELLs (see Christian, 1994, and Zanger, 1991, for reviews of the research on two-way programs; see also Thomas & Collier, 1997). For an expanded discussion of the features of a two-way developmental bilingual program see Christian (1994).
77 The breakdown of parent-child communication can result in: parents not being able to teach their children about ethical values, responsibility, morality, etc.; parents not being able to provide emotional and social support to their children; parents not being able to tell when their children are having trouble in school or are involved in potentially dangerous activities; and parents losing moral authority and control over their children (Gandara, 1997).
78 For a detailed overview of the studies conducted on Reading Recovery's effectiveness in the United States, see U.S. Department of Education (1997). | https://sckool.org/reading-and-second-language-learners.html?page=7 | 21 |
14 | History of Montenegro
The history of Montenegro begins in the Early Middle Ages, in the former Roman province of Dalmatia that forms present-day Montenegro. In the 9th century, there were three principalities on the territory of Montenegro: Duklja, roughly corresponding to the southern half, Travunia, the west, and Rascia, the north. In 1042, Stefan Vojislav led a revolt that resulted in the independence of Duklja and the establishment of the Vojislavljević dynasty. Duklja reached its zenith under Vojislav's son, Mihailo (1046–81), and his grandson Bodin (1081–1101). By the 13th century, Zeta had replaced Duklja when referring to the realm. In the late 14th century, southern Montenegro (Zeta) came under the rule of the Balšić noble family, then the Crnojević noble family, and by the 15th century, Zeta was more often referred to as Crna Gora (Venetian: monte negro). Large portions fell under the control of the Ottoman Empire from 1496 to 1878. Parts were controlled by the Republic of Venice. From 1515 until 1851 the prince-bishops (vladikas) of Cetinje were the rulers. The House of Petrović-Njegoš ruled until 1918. From 1918, it was a part of Yugoslavia. On the basis of an independence referendum held on 21 May 2006, Montenegro declared independence on 3 June of that year.
During the Bronze Age, the Illirii, probably the southernmost Illyrian tribe of that time and the one that gave its name to the entire group, lived near Lake Skadar on the border of Albania and Montenegro, neighboring the Greek tribes to the south. Along the seaboard of the Adriatic, the movement of peoples that was typical of the ancient Mediterranean world ensured the settlement of a mixture of colonists, traders, and those in search of territorial conquest. Substantial Greek colonies were established in the 6th and 7th centuries BC, and Celts are known to have settled there in the 4th century BC. During the 3rd century BC, an indigenous Illyrian kingdom emerged with its capital at Scutari. The Romans mounted several punitive expeditions against local pirates and finally conquered this Illyrian kingdom in the 2nd century BC, annexing it to the province of Illyricum.
The division of the Roman Empire between Roman and Byzantine rule – and subsequently between the Latin and Greek churches – was marked by a line that ran northward from Shkodra through modern Montenegro, symbolizing the status of this region as a perpetual marginal zone between the economic, cultural, and political worlds of the Mediterranean peoples. As Roman power declined, this part of the Dalmatian coast suffered from intermittent ravages by various semi-nomadic invaders, especially the Goths in the late 5th century and the Avars during the 6th century. These soon were supplanted by the Slavs, who became widely established in Dalmatia by the middle of the 7th century. Because the terrain was extremely rugged and lacked any major sources of wealth such as mineral riches, the area that is now Montenegro became a haven for residual groups of earlier settlers, including some tribes who had escaped Romanisation.
In the second half of the 6th century, Slavs migrated from the Bay of Kotor to the River Bojana and its hinterland, as well as the area around Lake Skadar. They formed the Principality of Doclea. Following the missions of Cyril and Methodius, the population was Christianised. The Slavic tribes organised into a semi-independent dukedom of Duklja (Doclea) by the 9th century.
After a period of Bulgarian domination, the realm fragmented as the Doclean brother-archonts divided the lands among themselves after 900. Prince Časlav Klonimirović of the Serbian Vlastimirović dynasty extended his influence over Doclea in the 10th century. After the fall of the Serbian Realm in 960, the Docleans faced a renewed Byzantine occupation that lasted into the 11th century. The local ruler, Jovan Vladimir Dukljanski, whose cult still remains in the Orthodox Christian tradition, was at the time struggling to secure independence.
Stefan Vojislav started an uprising against Byzantine domination and won a major victory against the army of several Byzantine strategs at Tudjemili (Bar) in 1042, which put an end to Byzantine influence over Doclea. In the Great Schism of 1054, Doclea fell on the side of the Catholic Church. Bar became a Bishopric in 1067. In 1077, Pope Gregory VII recognised Duklja as an independent state, acknowledging its King Mihailo (Michael, of the Vojislavljević dynasty founded by nobleman Stefan Vojislav) as Rex Doclea (King of Duklja). In 1072, Mihailo sent his troops, led by his son Bodin, to assist the uprising of the Slavs in Macedonia. In 1082, after numerous pleas, the Bishopric of Bar was upgraded to an Archbishopric.
The expansions of the kings of the Vojislavljević dynasty led to control over other Slavic lands, including Zahumlje, Bosnia and Rascia. Doclea's might then declined, and it generally became subject to the Grand Princes of Rascia in the 12th century. Stefan Nemanja was born in 1117 in Ribnica (today Podgorica). In 1168, as the Serbian Grand Zhupan, Stefan Nemanja took Doclea. In 14th-century charters of the Vranjina Monastery, the ethnic groups mentioned were Albanians (Arbanas), Vlachs, Latins (Catholic citizens) and Serbs.
Duklja (Zeta) within the Nemanjić State (1186–1360)
The region of Duklja (Zeta) was ruled by the Nemanjić dynasty from c. 1186 until c. 1360.
Zeta under the Balšići (1360–1421)
Zeta within the Serbian Despotate (1421–1451)
After the death of Balša III, the last representative of the House of Balšić, Zeta joined the Serbian Despotate in 1421.
Zeta under the Crnojevići (1451–1496)
The Venetian coastal Montenegro
After the dramatic fall of the Western Roman Empire (476), the Romanised Illyrians of the coast of Dalmatia survived the barbarian invasions of the Avars in the 6th century and were only nominally under the influence of the Slavs in the 7th and 8th centuries. In the last centuries of the first millennium, these Romanised Illyrians started to develop their own neo-Latin language, called the Dalmatian language, around their small coastal villages, which were growing through maritime commerce.
Venice started to take control of the southern Dalmatia around the 10th century, quickly assimilating the Dalmatian language with Venetian. By the 14th century the Republic of Venice was able to create a territorial continuity around the Bay of Kotor (Cattaro).
Early modern period
Struggle for maintaining independence (1496–1878)
Part of today's Montenegro, called Sandžak (which was not historically part of Montenegro until 1912), was under Ottoman control from 1498 to 1912, while the westernmost part of coastal Montenegro was under Venetian control and the rest of Montenegro was independent from 1516, when Vladika Vavila was elected as ruler of Montenegro by its clans and the land became a theocratic state. Only small town centers were controlled by the Ottomans, but the mountains and rural areas were de facto independent and controlled by several Montenegrin clans, which were warrior societies.
The Montenegrin people were divided into clans (Pleme). Every adult male from a clan was a warrior and took part in wars. Clans were ruled by chieftains, who also were military leaders of a clan. All clan leaders met up several times a year at a Zbor (assembly) in Cetinje, the Montenegrin capital, to make important decisions for the nation, to solve blood feuds, and to declare wars.
Independent Montenegro of that time was divided into three parts:
- Old Montenegro, which included the territory of the modern-day town of Cetinje and part of Danilovgrad. It was the core of Montenegro, and Cetinje was the capital. Montenegrin Prince-Bishops (Vladikas) lived and ruled from Cetinje.
- Brda ("The Hills") included the territories of northeastern Montenegro. This area was also known as "The Seven Hills" (Sedam Brda) because it was inhabited by seven Montenegrin clans: Vasojevići, Bjelopavlići, Piperi, Kuči, Bratonožići, Morača and Rovca. The clans were led by Vojvodas (dukes), either elective or hereditary ones.
- Old Herzegovina, an area in western Montenegro which was part of the short-lived medieval state of Herzegovina.
In 1514, the Ottoman-controlled territory of Montenegro was proclaimed a separate Sanjak of Montenegro by the order of Sultan Beyazid II. The first Sanjak-beg (governor) chosen was Ivan Crnojević's son Staniša (Skenderbeg Crnojević), who converted to Islam and governed until 1528. Despite Skenderbeg's pronounced cruelty, the Ottomans did not have real power in Montenegro. Vladika Vavila was elected in 1516 as Montenegrin prince-bishop by the Montenegrin people.
Elective Vladikas (1516–1696)
For 180 years after their first appointment, the Vladikas were elected by the clans and people — an arrangement which was ultimately abandoned in favour of the hereditary system in 1696. For most of this period the Montenegrin people were in constant struggle for existence against the Ottoman Empire.
A pretender to the Montenegrin throne, one of the Crnojević family who had converted to Islam, invaded Montenegro just as Staniša had thirty years before, and with the same result. Vukotić, the civil governor, repulsed the Turks' attack. The Montenegrins, encouraged by the victory, besieged Jajce in modern-day Bosnia and Herzegovina, where the Hungarian garrison was closely hemmed in by the Ottoman army. The Turks were too occupied with the Hungarian war to take revenge. The next Ottoman invasion of Montenegro took place in 1570.
The national historians are silent upon the subject of the Haraç (a tax in the Ottoman Empire), which the invaders are said to have exacted from the inhabitants of the free mountains. The refusal of the high-spirited Montenegrin clans to pay the tax any longer may have been the cause of the Pasha's invasion during the reign of Bishop Rufim, when the Turks were driven back with heavy loss at the Battle of Lješkopolje in 1604. During the night, about 1,500 Montenegrin warriors attacked the Turkish camp on the Lješkopolje field, which held 10,000 Ottoman soldiers.
In 1613 Arslan Pasha gathered an army of over 40,000 men to attack part of Old Montenegro. The Ottoman soldiers were twice as numerous as the whole population of Old Montenegro. On 10 September the Montenegrins met the Turkish army on the same spot where Skenderbeg Crnojević had been defeated nearly a century earlier. The Montenegrins, although assisted by some neighbouring tribes, numbered 4,000 and were completely outnumbered. However, the Montenegrins managed to defeat the Turks. Arslan Pasha was wounded, and the heads of his second-in-command and a hundred other Turkish officers were carried off and stuck on the ramparts of Cetinje. The Ottoman troops retreated in disorder; many drowned in the waters of the Morača. Others were killed by Montenegrin pursuers.
Much light is thrown upon the condition of Montenegro at this period, and the causes of its invariable success in war even against fearful odds are explained, by the accounts of a contemporary writer, Mariano Bolizza. This author, a patrician of Venice residing at Kotor in the early part of the seventeenth century, spent considerable time in Old Montenegro and published a description of Cetinje in 1614. At the time, the whole male population of Cetinje available for war consisted of 8,027 persons, distributed among ninety-three villages.
The condition of the country at this period was naturally unsettled. War was the chief occupation of its inhabitants from sheer necessity, and the arts of peace languished. The printing-press, so active a century earlier, had ceased to exist; the control of the Prince-Bishop over the five nahie, or districts, which then composed the principality, was weak; the capital itself consisted of only a few houses. Still, there was a system of local government. Each nahia was divided into tribes, or plemena, each presided over by a headman or kniez, who acted as a judge in disputes between the clansmen.
Petar Petrović Njegoš, an influential vladika, reigned in the first half of the 19th century. In 1851 Danilo Petrović Njegoš became vladika, but in 1852 he married and renounced his ecclesiastical character, assuming the title of knjaz (Prince) Danilo I, and transformed his land into a secular principality.
Following the assassination of Danilo by Todor Kadić in Kotor, in 1860, the Montenegrins proclaimed Nicholas I as his successor on August 14 of that year. In 1861–1862, Nicholas engaged in an unsuccessful war against the Ottoman Empire.
Following the Herzegovinian Uprising, partly initiated by his clandestine activities, he yet again declared war on Turkey. Serbia joined Montenegro, but it was defeated by Turkish forces that same year. Russia now joined in and decisively routed the Turks in 1877–78. The Treaty of San Stefano (March 1878) was highly advantageous to Montenegro, as well as to Russia, Serbia, Romania and Bulgaria. However, the gains were somewhat trimmed by the Treaty of Berlin (1878). In the end Montenegro was internationally recognized as an independent state; its territory was effectively doubled by the addition of 4,900 square kilometres (1,900 sq mi); the port of Bar and all the waters of Montenegro were closed to warships of all nations; and the administration of the maritime and sanitary police on the coast was placed in the hands of Austria.
Under Nicholas I the country was also granted its first constitution (1905) and was elevated to the rank of kingdom in 1910.
In the Balkan Wars (1912–1913), Montenegro made further territorial gains by splitting the Sanjak with Serbia. However, the captured city of Skadar had to be given up to the new state of Albania at the insistence of the Great Powers, despite the Montenegrins having sacrificed 10,000 lives in the conquest of the town from the Ottoman-Albanian forces of Essad Pasha Toptani.
World War I
Montenegro suffered severely in World War I. Shortly after Austria-Hungary declared war on Serbia (28 July 1914), Montenegro lost little time in declaring war on the Central Powers – on Austria-Hungary in the first instance – on 6 August 1914, despite Austrian diplomacy promising to cede Shkoder to Montenegro if it remained neutral. For purposes of coordination in the fight against the enemy army, Serbian General Bozidar Jankovic was named head of the High Command of both the Serbian and Montenegrin armies. Montenegro received 30 artillery pieces and financial help of 17 million dinars from Serbia. France contributed a colonial detachment of 200 men located in Cetinje at the beginning of the war, as well as two radio-stations – located on top of Mount Lovćen and in Podgorica. Until 1915 France supplied Montenegro with necessary war material and food through the port of Bar, which was blockaded by Austrian battleships and submarines. In 1915 Italy took over this role, running supplies unsuccessfully and irregularly across the line Shengjin-Bojana-Lake Skadar, an unsecured route because of constant attacks by Albanian irregulars organised by Austrian agents. Lack of vital materials eventually led Montenegro to surrender.
Austria-Hungary dispatched a separate army to invade Montenegro and to prevent a junction of the Serbian and Montenegrin armies. This force, however, was repulsed, and from the top of the strongly fortified Lovćen, the Montenegrins carried on the bombardment of Kotor held by the enemy. The Austro-Hungarian army managed to capture the town of Pljevlja while on the other hand the Montenegrins took Budva, then under Austrian control. The Serbian victory at the Battle of Cer (15–24 August 1914) diverted enemy forces from Sandjak, and Pljevlja came into Montenegrin hands again. On August 10, 1914, the Montenegrin infantry delivered a strong attack against the Austrian garrisons, but they did not succeed in making good the advantage they first gained. They successfully resisted the Austrians in the second invasion of Serbia (September 1914) and almost succeeded in seizing Sarajevo. With the beginning of the third Austro-Hungarian invasion, however, the Montenegrin army had to retire before greatly superior numbers, and Austro-Hungarian, Bulgarian and German armies finally overran Serbia (December 1915). However, the Serbian army survived, and led by King Peter I of Serbia, started retreating across Albania. In order to support the Serbian retreat, the Montenegrin army, led by Janko Vukotic, engaged in the Battle of Mojkovac (6–7 January 1916). Montenegro also suffered a large scale invasion (January 1916) and for the remainder of the war remained in the possession of the Central Powers. See Serbian Campaign (World War I) for details. The Austrian officer Viktor Weber Edler von Webenau served as the military governor of Montenegro between 1916 and 1917. Afterwards Heinrich Clam-Martinic filled this position.
King Nicholas fled to Italy (January 1916) and then to France; the government transferred its operations to Bordeaux. Eventually the Allies liberated Montenegro from the Austrians. A newly convened National Assembly of Podgorica (Podgorička skupština, Подгоричка скупштина) accused the King of seeking a separate peace with the enemy and consequently deposed him, banned his return and decided that Montenegro should join the Kingdom of Serbia on December 1, 1918. A part of the former Montenegrin military forces still loyal to the King started a rebellion against the amalgamation, the Christmas Uprising (7 January 1919).
In the period between the two World Wars, Nikola's grandson, King Alexander Karageorgevich, dominated the Yugoslav government. In 1922 Montenegro became part of the Zeta Area and later the Zeta Banate, whose administrative seat became the former Montenegrin capital, Cetinje. During this period, the Montenegrin people were still divided between the politics of the Greens and the Whites. The dominant political parties in Montenegro were the Democratic Party, the People's Radical Party, the Communist Party of Yugoslavia, the Alliance of Agrarians, the Montenegrin Federalist Party and the Yugoslav Republican Party. The two main problems in Montenegro during this period were the loss of sovereignty and the poor economic situation. All of the parties except the Federalists had the same attitude towards the first question, favouring centralism over federalism. The second question was more complex, but all of the parties agreed that the situation was far from good and that the government did nothing to improve life in the area. Devastated by war, Montenegro was never paid the reparations to which it had a right as one of the Allies in the Great War. Most of the population lived in rural areas, while the smaller urban population enjoyed a better standard of living. There was no infrastructure, and industry consisted of only a few companies.
The puppet "Kingdom of Montenegro" and World War II
During World War II, Italy under Benito Mussolini occupied Montenegro in 1941 and annexed to the Kingdom of Italy the area of Kotor (Cattaro), where there was a small Venetian-speaking population. (The Queen of Italy – Elena of Montenegro – was the daughter of the former king of Montenegro and was born in Cetinje.)
The English historian Denis Mack Smith wrote that the Queen of Italy (considered the most influential Montenegrin woman in history) convinced her husband the King of Italy Victor Emmanuel III to impose on Mussolini the creation of an independent Montenegro, against the wishes of the fascist Croats and Albanians (who wanted to enlarge their countries with the Montenegrin territories). Her nephew Prince Michael of Montenegro never accepted the offered crown, pledging loyalty to his nephew King Peter II of Yugoslavia.
The puppet Kingdom of Montenegro was created under fascist control while Krsto Zrnov Popović returned from his exile in Rome in 1941 to attempt to lead the Zelenaši ("Green" party), who supported the reinstatement of the Montenegrin monarchy. This militia was called the Lovćen Brigade. Montenegro was ravaged by a terrible guerrilla war, mainly after Nazi Germany replaced the defeated Italians in September 1943.
During World War II, as was the case in many other parts of Yugoslavia, Montenegro was involved in some sort of civil war. Besides the Montenegrin Greens, the two main factions were the Chetnik Yugoslav army, which swore allegiance to the government in exile and consisted mainly of Montenegrins who declared themselves as Serbs (many of its members were Montenegrin Whites), and the Yugoslav Partisans, whose aim was the creation of a Socialist Yugoslavia after the war. Since both factions shared some similarities in their goals, particularly those relating to a unified Yugoslavia and anti-Axis resistance, the two sides joined hands and in 1941 started the 13th July uprising, the first organised uprising in occupied Europe. This occurred just two months after Yugoslavia capitulated, and liberated most of Montenegrin territory, but the rebels were unable to regain control of major towns and cities. After the failed attempts to liberate the towns of Pljevlja and Kolasin, the Italians, reinforced by Germans, recaptured all insurgent territory. At the leadership level, disagreements regarding state policy (centralist monarchy vs. federal socialist republic) eventually led to a split between the two sides; they became enemies thereafter. Both factions constantly tried to gain support among the population. The monarchist Chetniks had influential scholars and revolutionaries among their supporters, such as Blažo Đukanović, Zaharije Ostojić, Radojica Perišić, Petar Baćović, Mirko Lalatovic, and Bajo Stanišić, the hero of the anti-fascist uprising. However, eventually the Chetniks in Montenegro lost support among the population, as did other Chetnik factions within Yugoslavia. The de facto leader of the Chetniks in Montenegro, Pavle Djurisic, along with other prominent figures of the movement like Dusan Arsovic and Đorđe Lašić, was held responsible for massacres of the Muslim population in eastern Bosnia and Sandzak during 1944. Their ideology of a homogeneous Serbia within Yugoslavia proved to be a major obstacle in recruiting liberals, minorities, and Montenegrins who regarded Montenegro as a nation with its own identity. These factors, in addition to the fact that some Chetniks were negotiating with the Axis, led to the Chetnik Yugoslav army losing support among the Allies in 1943. In the same year, Italy, which was until then in charge of the occupied zone, capitulated and was replaced by Germany, and the fighting continued.
Podgorica was liberated by the socialist Partisans on 19 December 1944, and the war of liberation had been won. Josip Broz Tito acknowledged Montenegro's massive contribution to the war against the Axis powers by establishing it as one of the six republics of Yugoslavia.
Montenegro within Socialist Yugoslavia
From 1945 to 1992, Montenegro was a constituent republic of the Socialist Federal Republic of Yugoslavia; it was the smallest republic in the federation and had the lowest population. Montenegro became economically stronger than ever, since it gained help from federal funds as an under-developed republic, and it became a tourist destination as well. The post-war years proved turbulent and were marked by political eliminations. Krsto Zrnov Popović, the leader of the Greens, was assassinated in 1947, and 10 years later, in 1957, the last Montenegrin Chetnik, Vladimir Šipčić, was also murdered. During this period Montenegrin Communists such as Veljko Vlahović, Svetozar Vukmanović-Tempo, Vladimir Popović and Jovo Kapicić held key positions in the federal government of Yugoslavia. In 1948 Yugoslavia faced the Tito-Stalin split, a period of high tensions between Yugoslavia and the USSR caused by disagreements about each country's influence on its neighbours and by the Informbiro resolution. Political turmoil began within both the communist party and the nation. Pro-Soviet communists faced prosecution and imprisonment in various prisons across Yugoslavia, notably Goli Otok. Many Montenegrins, due to their traditional allegiance with Russia, declared themselves as Soviet-orientated. This political split in the communist party saw the downfall of many important communist leaders, including the Montenegrins Arso Jovanović and Vlado Dapčević. Many of the people imprisoned during this period, regardless of nationality, were innocent – this was later recognised by the Yugoslav government. 1954 saw the expulsion of the prominent Montenegrin politician Milovan Đilas from the communist party, along with Peko Dapčević, for criticising party leaders for forming a "new ruling class" within Yugoslavia.
Through the second half of the 1940s and the whole of the 1950s, the country underwent infrastructural rejuvenation thanks to federal funding. Montenegro's historic capital Cetinje was replaced with Podgorica, which in the inter-war period became the biggest city in the Republic – although it was practically in ruins due to heavy bombing in the last stages of WW II. Podgorica had a more favorable geographical position within Montenegro, and in 1947 the seat of the Republic was moved to the city, now named Titograd in honor of Marshal Tito. Cetinje received the title of 'hero city' within Yugoslavia. Youth work actions built a railway between the two biggest cities of Titograd and Nikšić, as well as an embankment over Lake Skadar linking the capital with the major port of Bar. The port of Bar was also rebuilt after being mined during the German retreat in 1944. Other ports that faced infrastructural improvement were Kotor, Risan and Tivat. In 1947 Jugopetrol Kotor was founded. Montenegro's industrialisation was demonstrated through the founding of the electronic company Obod in Cetinje, a steel mill and Trebjesa brewery in Nikšić, and the Podgorica Aluminium Plant in 1969.
Breakup of Yugoslavia and Bosnian war
The breakup of communist Yugoslavia (1991–1992) and the introduction of a multi-party political system found Montenegro with a young leadership that had risen to office only a few years earlier in the late 1980s.
In effect, three men ran the republic: Milo Đukanović, Momir Bulatović and Svetozar Marović; all swept into power during the anti-bureaucratic revolution — an administrative coup of sorts within the Yugoslav Communist party, orchestrated by younger party members close to Slobodan Milošević.
All three appeared devout communists on the surface, but they also had sufficient skills and adaptability to understand the dangers of clinging to traditional rigid old-guard tactics in changing times. So when the old Yugoslavia effectively ceased to exist and the multi-party political system replaced it, they quickly repackaged the Montenegrin branch of the old Communist party and renamed it the Democratic Party of Socialists of Montenegro (DPS).
The inheritance of the entire infrastructure, resources and membership of the old Communist party gave the DPS a sizable head start on their opponents in the newly formed parties. It allowed them to win the first multi-party parliamentary election held on 9 and 16 December 1990, and presidential elections held on 9 and 23 December 1990. The party has ruled Montenegro ever since (either alone or as a leading member of different ruling coalitions).
During the early-to-mid-1990s Montenegro's leadership gave considerable support to Milošević's war-effort. Montenegrin reservists fought on the Dubrovnik front line, where Prime Minister Milo Đukanović visited them frequently.
During the 1991–1995 Bosnian War and Croatian War, Montenegro participated with its police and military forces in the attacks on Dubrovnik, Croatia and Bosnian towns along with Serbian troops, aggressive acts aimed at acquiring more territories by force, characterized by a consistent pattern of gross and systematic violations of human rights. Montenegrin General Pavle Strugar has since been convicted for his part in the bombing of Dubrovnik. Bosnian refugees were arrested by Montenegrin police and transported to Serb camps in Foča, where they were subjected to systematic torture and executed.
In May 1992, the United Nations imposed an embargo on FRY: this affected many aspects of life in the country.
Due to its favourable geographical location (access to the Adriatic Sea and a water-link to Albania across Lake Skadar), Montenegro became a hub for smuggling activity. Montenegrin industrial production had stopped entirely, and the republic's main economic activity became the smuggling of consumer goods – especially those in short supply, like petrol and cigarettes, both of which skyrocketed in price. The practice became de facto legalized and went on for years. At best, the Montenegrin government turned a blind eye to the illegal activity, but mostly it took an active part in it. Smuggling made millionaires out of all sorts of shady individuals, including senior government officials. Milo Đukanović continues to face actions in various Italian courts over his role in widespread smuggling during the 1990s and in providing safe haven in Montenegro for different Italian Mafia figures who also allegedly took part in the smuggling distribution chain.
Recent history (1996 to present)
In 1997 a bitter dispute over presidential election results took place. It ended with Milo Đukanović winning over Momir Bulatović in a second-round run-off plagued with irregularities. Nonetheless, the authorities allowed the results to stand. Former close allies had by this time become bitter foes, which resulted in a near-warlike atmosphere in Montenegro for months during the autumn of 1997. It also split the Democratic Party of Socialists of Montenegro. Bulatović and his followers broke away to form the Socialist People's Party of Montenegro (SNP), staying loyal to Milošević, whereas Đukanović began to distance himself from Serbia. This distance from the policies of Milošević played a role in sparing Montenegro from the heavy bombing that Serbia endured in the spring of 1999 during the NATO air-campaign.
Đukanović came out a clear winner from this political fight, as he never lost power for even a day. Bulatović, on the other hand, never held office again in Montenegro after 1997 and eventually retired from politics in 2001.
During the Kosovo War, ethnic Albanians took refuge in Montenegro, but were still under threat by Serbian soldiers, who were able to take refugees back into Serbian controlled areas and imprison them.
In the spring of 1999, at the height of the NATO offensives, 21 Albanians died in several separate and unexplained incidents in Montenegro, according to the republic's prosecutor. Another group of around 60 Albanian refugees was fired upon in Kaludjerski Laz by Yugoslav Army members, leading to the death of six people, including a woman aged 80 and a child, killed in crossfire that allegedly came from three machine-gun posts of the then Yugoslav Army. In all, 23 Albanians were killed in Kaludjerski Laz, and Montenegrin prosecutors have charged 8 soldiers, among them Predrag Strugar, son of convicted Montenegrin war criminal General Pavle Strugar, with "inhuman treatment against civilians". During the war Montenegro was bombed as part of NATO operations against Yugoslavia, though not as heavily as Serbia. The targets were mostly military ones such as Golubovci Airbase. According to some reports, the airport was attacked because of an operation Yugoslav pilots undertook on 26 April, when they (without the knowledge of the supreme command) flew over the border into Albania with 4 G-4 Super Galebs and bombed Rinas Airport, which housed 24 AH-64 Apache helicopters and parts of the 82nd Airborne Division. They ended up destroying nine Apaches and damaging the rest, while also destroying Kosovo Liberation Army training camps in the vicinity of the airport. Eight civilian casualties were reported during the course of the war. During the operation, allegedly 10 aircraft were shot down over Montenegro. The first was a Luftwaffe Tornado IDS, which eventually crashed in Lake Skadar, and the second was a Mirage 2000 of the French Air Force, whose pilot ejected before the plane crashed on Mount Rumija. Apparently both planes were shot down on 15 April 1999. The rest were unmanned aerial vehicles downed at various locations, including Valdanos, but the only model that has been identified is the IAI RQ-5 Hunter, downed in the Bay of Kotor on 28 May. However, this has never been confirmed.
In 2003, after years of wrangling and outside assistance, the Federal Republic of Yugoslavia renamed itself as "Serbia and Montenegro" and officially reconstituted itself as a loose union. The State Union had a parliament and an army in common, and for three years (until 2006), neither Serbia nor Montenegro held a referendum on the break-up of the union. However, a referendum was announced in Montenegro to decide the future of the republic. The ballots cast in the controversial 2006 independence referendum resulted in a 55.5% victory for independence supporters, just above the 55% borderline mark set by the EU. Montenegro declared independence on 3 June 2006.
According to the Montenegrin Special Prosecutor, a coup d'état against the government of Milo Đukanović had been prepared for 16 October 2016, the day of the parliamentary election. Fourteen people, including two Russian nationals and two Montenegrin opposition leaders, Andrija Mandić and Milan Knežević, were indicted for their alleged roles in the coup attempt on charges such as "preparing a conspiracy against the constitutional order and the security of Montenegro" and an "attempted terrorist act."
In June 2017, Montenegro formally became a member of NATO, an eventuality that had been rejected by about half of the country's population and had triggered a promise of retaliatory actions on the part of Russia's government.
- "Duklja, the first Montenegrin state". Montenegro.org. Archived from the original on 1997-01-16. Retrieved 2012-12-07.
- John Boardman. The prehistory of the Balkans and the Middle East and the Aegean world. Cambridge University Press, 1982. ISBN 978-0-521-22496-3, p. 629
- Wilkes John. The Illyrians. Wiley-Blackwell, 1995, ISBN 978-0-631-19807-9, p. 92
- Ćirković, Sima (2020). Živeti sa istorijom. Belgrade: Helsinški odbor za ljudska prava u Srbiji. p. 300.
- Stephen Clissold (1966). A short history of Yugoslavia from earliest times to 1966, chapter III
- Stephen Clissold (1966). A short history of Yugoslavia from earliest times to 1966
- William L. Langer, European Alliances and Alignments, 1871–1890 (2nd ed. 1950) pp 121-66
- For Montenegro's entry into the war, see E. Czega, "Die Mobilmachung Montenegros im Sommer 1914", Berliner Monatshefte 14 (1936): 3–23, and Alfred Rappaport, "Montenegros Eintritt in den Weltkrieg", Berliner Monatshefte 7 (1929): 941–66.
- "Bombing of Dubrovnik". Croatiatraveller.com. 1991-10-23. Retrieved 2016-01-07.
- "A/RES/47/121. The situation in Bosnia and Herzegovina". Un.org. Retrieved 2016-01-07.
- "Shedding Light on Fate of Missing Persons" (PDF). Yihr.org. Archived from the original (PDF) on 2015-04-03. Retrieved 2016-01-07.
- Crawshaw, Steve (1999-04-29). "War In The Balkans: Montenegro – Albanian refugees tortured by Serbs | News". The Independent. Retrieved 2016-01-07.
- "BIRN Kosovo Home :: BIRN". Kosovo.birn.eu.com. 2012-11-26. Archived from the original on 2011-11-14. Retrieved 2016-01-07.
- Reuters Editorial (2008-08-01). "Montenegro charges 8 over murder of 23 Albanians". Reuters. Retrieved 2016-01-07.
- Montenegrin Court Confirms Charges Against Alleged Coup Plotters Radio Liberty, 8 June 2017.
- Montenegro finds itself at heart of tensions with Russia as it joins Nato: Alliance that bombed country only 18 years ago welcomes it as 29th member in move that has left its citizens divided The Guardian, 25 May 2017.
- Russian Foreign Ministry: NATO's response to the Russian military's proposals is vague and unspecific // "NATO expansion", TASS, 6 October 2016.
- Commentary by the Information and Press Department of the Russian Foreign Ministry in connection with the vote in the Assembly of Montenegro on joining NATO, Russian Foreign Ministry's Statement, 28 April 2017.
- Secondary sources
- Ćirković, Sima (2004). The Serbs. Malden: Blackwell Publishing. ISBN 9781405142915.
- Curta, Florin (2006). Southeastern Europe in the Middle Ages, 500–1250. Cambridge: Cambridge University Press.
- Fine, John Van Antwerp Jr. (1991) . The Early Medieval Balkans: A Critical Survey from the Sixth to the Late Twelfth Century. Ann Arbor, Michigan: University of Michigan Press. ISBN 0472081497.
- Fine, John Van Antwerp Jr. (1994) . The Late Medieval Balkans: A Critical Survey from the Late Twelfth Century to the Ottoman Conquest. Ann Arbor, Michigan: University of Michigan Press. ISBN 0472082604.
- Hall, Richard C. ed. War in the Balkans: An Encyclopedic History from the Fall of the Ottoman Empire to the Breakup of Yugoslavia (2014)
- Jelavich, Barbara (1983a). History of the Balkans: Eighteenth and Nineteenth Centuries. 1. Cambridge University Press. ISBN 9780521274586.
- Jelavich, Barbara (1983b). History of the Balkans: Twentieth Century. 2. Cambridge University Press. ISBN 9780521274593.
- Miller, Nicholas (2005). "Serbia and Montenegro". Eastern Europe: An Introduction to the People, Lands, and Culture. 3. Santa Barbara, California: ABC-CLIO. pp. 529–581. ISBN 9781576078006.
- Rastoder, Šerbo. "A short review of the history of Montenegro." in Montenegro in Transition: Problems of Identity and Statehood (2003): 107–138. online
- Runciman, Steven (1988). The Emperor Romanus Lecapenus and His Reign: A Study of Tenth-Century Byzantium. Cambridge University Press. ISBN 9780521357227.
- Samardžić, Radovan; Duškov, Milan, eds. (1993). Serbs in European Civilization. Belgrade: Nova, Serbian Academy of Sciences and Arts, Institute for Balkan Studies. ISBN 9788675830153.
- Sedlar, Jean W. (1994). East Central Europe in the Middle Ages, 1000-1500. Seattle: University of Washington Press. ISBN 9780295800646.
- Soulis, George Christos (1984). The Serbs and Byzantium during the reign of Tsar Stephen Dušan (1331-1355) and his successors. Washington: Dumbarton Oaks Library and Collection. ISBN 9780884021377.
- Stanković, Vlada, ed. (2016). The Balkans and the Byzantine World before and after the Captures of Constantinople, 1204 and 1453. Lanham, Maryland: Lexington Books. ISBN 9781498513265.
- Stephenson, Paul (2003). The Legend of Basil the Bulgar-Slayer. Cambridge: Cambridge University Press. ISBN 9780521815307.
- Tomasevich, Jozo (2001). War and Revolution in Yugoslavia, 1941-1945: Occupation and Collaboration. Stanford: Stanford University Press. ISBN 9780804779241.
- Živković, Tibor (2008). Forging unity: The South Slavs between East and West 550-1150. Belgrade: The Institute of History, Čigoja štampa. ISBN 9788675585732.
- Živković, Tibor (2011). "The Origin of the Royal Frankish Annalist's Information about the Serbs in Dalmatia". Homage to Academician Sima Ćirković. Belgrade: The Institute for History. pp. 381–398. ISBN 9788677430917.
- Živković, Tibor (2012). De conversione Croatorum et Serborum: A Lost Source. Belgrade: The Institute of History.
- Thomas Graham Jackson (1887), "Montenegro", Dalmatia, Oxford: Clarendon Press, OL 23292286M
- "Montenegro", Austria-Hungary, Including Dalmatia and Bosnia, Leipzig: Karl Baedeker, 1905, OCLC 344268, OL 20498317M
- Primary sources
- Moravcsik, Gyula, ed. (1967) . Constantine Porphyrogenitus: De Administrando Imperio (2nd revised ed.). Washington D.C.: Dumbarton Oaks Center for Byzantine Studies. ISBN 9780884020219.
- Pertz, Georg Heinrich, ed. (1845). Einhardi Annales. Hanover.
- Scholz, Bernhard Walter, ed. (1970). Carolingian Chronicles: Royal Frankish Annals and Nithard's Histories. University of Michigan Press. ISBN 0472061860.
- Thurn, Hans, ed. (1973). Ioannis Scylitzae Synopsis historiarum. Berlin-New York: De Gruyter. ISBN 9783110022858.
- Шишић, Фердо, ed. (1928). Летопис Попа Дукљанина (Chronicle of the Priest of Duklja). Београд-Загреб: Српска краљевска академија.
- Кунчер, Драгана (2009). Gesta Regum Sclavorum. 1. Београд-Никшић: Историјски институт, Манастир Острог.
- Живковић, Тибор (2009). Gesta Regum Sclavorum. 2. Београд-Никшић: Историјски институт, Манастир Острог.
|Wikimedia Commons has media related to History of Montenegro.|
- (in English and Serbian) Serb Land of Montenegro: History of Montenegro as it is
- King Nicholas of Montenegro and Essad Pasha of Albania: The Black Mountain Folk vs. the Sons of the Eagle
- The Njegos Network
- Montenegrin Government official History page
- The national Museum of Montenegro
- Jovan Stefanov Balević : Short historic-geographical description of Montenegro from 1757
- Herbermann, Charles, ed. (1913). Catholic Encyclopedia. New York: Robert Appleton Company. . | https://en.wikipedia.org/wiki/History_of_Montenegro | 21 |
14 | Does most of your classroom talk consist of students recalling or reproducing facts? Classroom discussion, dialogue, and discourse are the principal means of exchanging ideas, evaluating mastery, developing thinking processes, and reflecting on content and shared thoughts. Engaging students in effective classroom talk begins by creating a discourse-rich classroom culture. When you create a classroom culture rife with intellectually safe spaces and an emphasis on processes of strategic thinking rather than the production of right answers, you invite instructional episodes of rich discourse. Classroom talk is not only a means of students supporting each other, but also of holding each other accountable by helping clarify, restate, and challenge ideas. It makes their thinking visible and helps you determine the most effective subsequent instructional moves.

A few questions may help you self-assess the quality of discourse in your class: Is the emphasis on giving the right answers rather than on processes and strategies? Does your lack of comfort with content lead you to pose more close-ended questions? Do you model and insist that wait-time be used as a key component of dialogue? Do you send non-verbal signals to students based on your perception of their ability to give a quick or correct response? In the average classroom, as much as 70% of instructional time consists of verbal exchanges between you and students or among students: teacher initiation, student response, teacher evaluation of the response/feedback.

To introduce student-led discourse, explicitly model the talk. Role-playing appropriate and inappropriate actions can give students a better understanding of their expected role during classroom talk. Releasing the instructional reins to your students can make you uneasy, and some students may not participate if their thoughts are ridiculed, devalued, or ignored. A few suggestions for bringing reticent students into the fold of rich discourse: invite them to discuss a topic that is important to them, have them lead discourse about a topic many are passionate about, such as social media rights for young people, and when a student provides a substantive contribution, name the strategy after that student. A third central element of developing a culture that fosters rich discourse is helping students appreciate the processes used to get to an answer rather than simply the production of right answers; another key element is embedding the spirit of collaboration versus competition. Webb's Depth of Knowledge (DOK) model (recall, skill/concept, strategic thinking, extended thinking) is a powerful tool that can be used to plan and assess the complexity of thinking as well as the presence of rigor; a comprehensive graphic is available at http://static.pdesas.org/content/documents/M1-Slide_19_DOK_Wheel_Slide.pdf. For example, if students have a good grasp of phonemic awareness (sounds that build words), spelling long words may be difficult, but not complex.

The term classroom discourse refers to the language that teachers and students use to communicate with each other in the classroom. Among different types of discourse, classroom discourse is a special type that occurs between teacher and students and among the students in classrooms (Nunan, 1993). Classroom discourse, broadly defined, refers to all of those forms of talk that one may find within a classroom or other educational setting; it encompasses different types of written and spoken communication that happen in the classroom, such as discussion, asking and answering questions, storytelling, and debate. Talking, or conversation, is the medium through which most teaching takes place, so the study of classroom discourse is the study of the process of face-to-face classroom communication. Teachers and students construct an understanding of their roles and relationships, and the expectations for their involvement in the classroom. At least 35 years ago, an important direction in applied linguistics and education research sought to understand the nature and implications of classroom interactions, or what is commonly referred to as «classroom discourse». Classroom discourse can be divided into four structures: initiation-response-evaluation (IRE), instructions, probing questions, and argumentation. To address the shortcomings of the traditional IRE pattern, Wells (1993) described another type of classroom discourse, "IRF", as a contrast with "IRE". Researchers have also examined how mathematics classroom discourse structures authority relations in subtle, pervasive, and hegemonic ways (Herbel-Eisenmann & Wagner, 2010). One study set out to determine whether higher frequencies of referential questions have an effect on adult ESL classroom discourse; four experienced ESL teachers and 24 non-native speakers (NNSs) participated, and discussion topics were designed around issues of interest for this age group. In short, classroom discourse analysis enables us to realize education in action. Discourse is a useful tool in both native and second language classrooms; in a classroom setting it is best used to complement explicit instruction.

A discourse community is a group of people who share a set of discourses, understood as basic values and assumptions, and ways of communicating about those goals. Linguist John Swales defined discourse communities as "groups that have goals or purposes, and use communication to achieve these goals." A simpler, more familiar definition may be that a discourse community is a heterogeneous group of like-minded people working towards a specified goal, using various devices to enhance communication and participation. Dawson and Taylor (1998) have documented the logistics of open and critical discourse in the discourse community of the science classroom.

Open Discourse is a technical term used in discourse analysis and sociolinguistics and is commonly contrasted with Closed Discourse. The nature of the channel, signal, code, replicability, recording, transmissibility, cataloguing, recall, or other variables of a communication event, together with its information control and context of transmission-as-event, affects its relative position along the continuum between open and closed discourse. Channel and signal of a communication event and register of communication together control discourse and therefore determine the degree of social inclusion and social exclusion and, by extension, the relative efficiency of that communication event. Open and closed discourse operate along a continuum where absolute closure and complete openness are theoretically untenable due to noise in the channel, and open discourse is not owned by any single group, government, company, or person. Open discourse as living document may also be understood as open-endedness in both a communication event and the inability to collapse a communication event into definitives, as Graham (2000: p. 5) states: "I understand the play on multiplicity of interpretation and open-endedness that ambiguity signifies; however, the term ambiguous is itself ambiguous – it not only means 'open to various interpretations' but also 'of doubtful and uncertain nature; difficult to understand' and 'lacking clearness or definiteness, obscure' (Macquarie Essential Dictionary, 1999: 23)." The Web – and in particular the rise of the so-called blogosphere – has led to a resurgence of open public discourse that is unparalleled since the emergence of independent newspapers and pamphleteers at the outset of the Industrial Revolution. | https://har-sinai.co.il/crime-rate-zlnun/page.php?13f1aa=classroom-discourse-wikipedia | 21
15 | The American buffalo population declined gradually through much of the 19th century; for example, they were almost entirely gone from the area east of the Mississippi River by the 1830s. But the near-extinction of the buffalo happened in a rush of about a decade, with a decline from 10-15 million in the early 1870s to only a few hundred by the late 1880s. Economic research from a few years ago suggests that the driving force was an 1872 innovation in tanning technology which happened in Europe, and an associated strong demand in Europe for buffalo hides. The 19th century buffalo herds were endangered by many factors, but it was pressure from globalization that drove them to near-extinction.
The decline of the buffalo also had strong effects on the welfare of the Native American population, as explored in "The Slaughter of the Bison and Reversal of Fortunes on the Great Plains," by Donna Feir, Rob Gillezeau, and Maggie E.C. Jones, a working paper at the Center for Indian Country Development at the Federal Reserve Bank of Minneapolis (posted January 14, 2019).
Basically, their research strategy was to compare areas where buffalo disappeared more gradually over time with areas where the disappearance was more abrupt, and to compare Native American tribes that had greater or lesser reliance on the buffalo herds. They found data on the height, gender, and age of over 15,000 Native Americans collected between 1889 and 1903 by a physical anthropologist named Franz Boas.
They suggest that the disappearance of the bison as a meaningful economic resource had both medium-term and longer-term effects. The medium-term effect was a reduction in height. As they write in the abstract: "We show that the bison's slaughter led to a reversal of fortunes for the Native Americans who relied on them. Once the tallest people in the world, the generations of bison-reliant people born after the slaughter were among the shortest." Changes in the height of a population are often correlated with other measures of health and well-being. (For an overview of research on using health as a measure of well-being, see "Biological Measures of the Standard of Living," by Richard H. Steckel, in the Winter 2008 issue of the Journal of Economic Perspectives.)
But the near-extinction of the buffalo also meant that a well-developed body of human capital became worthless. The authors write (citations omitted):
For many tribes, the bison was used in almost every facet of life, not only as a source of food, but also skin for clothing, lodging, and blankets, and bones for tools. This array of uses for the bison was facilitated by generations of specialized human capital, which was accumulated partly in response to the plentiful and reliable nature of the animal. Historical and anthropometric evidence suggests that these bison-reliant societies were once the richest in North America, with living standards comparable to or better than their average European contemporaries. When the bison were eliminated, the resource that underpinned these societies vanished in an historical blink of the eye. ... Arguably, the decline of the bison was one of the largest devaluations of human capital in North American history ...
The effects of this shift appear to be long-run. The authors point out: "Today, formerly bison-reliant societies have between 20-40% less income per capita than the average Native American nation." What are some possible reasons that events from the late 19th century could still have such powerful effects more than a century later? The authors suggest three hypotheses: 1) Native Americans were often limited in their ability to move to new areas that would have allowed greater economic opportunity; 2) some bison-reliant communities had also engaged in agriculture and built up human capital in that area, which made a shift to other production easier, but some did not; and 3) some historical traumas seem to echo through generations, and the authors show that modern suicide rates continue to be "higher among previously bison-reliant nations, and particularly so for those who were affected by the rapid slaughter."
A version of this article first appeared on Conversable Economist.
Timothy Taylor is an American economist. He is managing editor of the Journal of Economic Perspectives, a quarterly academic journal produced at Macalester College and published by the American Economic Association. Taylor received his Bachelor of Arts degree from Haverford College and a master's degree in economics from Stanford University. At Stanford, he was winner of the award for excellent teaching in a large class (more than 30 students) given by the Associated Students of Stanford University. At Minnesota, he was named a Distinguished Lecturer by the Department of Economics and voted Teacher of the Year by the master's degree students at the Hubert H. Humphrey Institute of Public Affairs. Taylor has been a guest speaker for groups of teachers of high school economics, visiting diplomats from eastern Europe, talk-radio shows, and community groups. From 1989 to 1997, Professor Taylor wrote an economics opinion column for the San Jose Mercury-News. He has published multiple lectures on economics through The Teaching Company. With Rudolph Penner and Isabel Sawhill, he is co-author of Updating America's Social Contract (2000), whose first chapter provided an early radical centrist perspective, "An Agenda for the Radical Middle". Taylor is also the author of The Instant Economist: Everything You Need to Know About How the Economy Works, published by the Penguin Group in 2012. The fourth edition of Taylor's Principles of Economics textbook was published by Textbook Media in 2017. | https://www.bbntimes.com/global-economy/some-economic-consequences-of-the-near-extinction-of-the-buffalo | 21 |
Is the Treaty of Versailles to blame for World War Two? Yes, the Treaty of Versailles did cause World War Two, as it caused Germany to lose land, made Germany pay reparations, had Germany take the blame for the war, and restricted Germany's army. The first way the Treaty of Versailles caused World War Two is that Germany lost land. As shown in Doc A, Germany lost Alsace and Lorraine, and with the lost land Germany also lost forty percent
The significance that the Treaty of Versailles had on Germany was that, first off, Germany was blamed for starting the war by the other countries involved in World War I. France, Russia, and Italy all agreed that Germany was to blame for starting the war. Therefore, they made Germany pay reparations. These reparations affected Germany greatly. Not only did the Treaty of Versailles blame Germany for starting the war, but the Treaty of Versailles also led to a great depression and to the rise of Adolf Hitler. The Treaty of Versailles had a huge effect on Germany.
The treaty was registered by the Secretariat of the League of Nations on 21 October 1919. Finally, on 11 November 1918, after four years of war, an armistice based on United States’ President Woodrow Wilson’s “Fourteen Points” was agreed to by Germany. The Fourteen Points was a statement of principles for world peace that was to be used for peace negotiations in order to end World War I. The principles were outlined in a January 8, 1918 speech on war aims and peace terms to the United States Congress by President Woodrow Wilson. Europeans generally welcomed Wilson 's points but his main Allied colleagues were skeptical of Wilson 's idealism.
A. The Treaty of Versailles was created as an agreement that Germany would pay for the damage that was produced during World War I. However, it might have been the most important cause of World War II. Many of the leaders saw it coming, yet they just ignored it. B.
This war was very taxing on Britain—we are in great debt—and the King is merely trying to help us get out of the debt. There is no other way than to tax, my friends, for that is how the government makes its money. Why, for what reason do we sit here and complain like spoiled children, that our parents did something out of our own best interest? What else can they do but
Americans initially favored neutrality, but events like the sinking of the Lusitania and the Zimmermann telegram provoked the U.S. to join the war in support of the Allies (Shi and Tindall 754-757). Less obvious factors, such as nationalism, imperialism, and business opportunity, also contributed to the war. The war ended in 1918 after immense bloodshed, but President Wilson failed to get the Treaty of Versailles ratified by the Senate (Shi and Tindall 773). As a result of the war, Europe was significantly weakened, harsh punishments were imposed on Germany that later led to WWII, and America emerged with a strong economy as a dominant world power (Shi and Tindall
The U.S.S.R. had more casualties in World War II, but things were not necessarily looking great in America either. U.S. citizens were afraid that the Great Depression could return. Many Americans were tired of helping out other nations and just wanted the war to be over completely. John Lewis Gaddis, the author of The Cold War: A New History, is talking about the fact that just because the war was over, Americans were not necessarily at peace. There were many different economic and social factors that the United States had to deal with in the post World War II years.
The Treaty of Versailles was far from perfect, but some of the biggest faults were forcing Germany to take the blame for the whole war, demanding they give up all of their colonies and decrease the size of their military, and paying reparations to the Allies. This flawed treaty also attributed to the start of World War II. In part eight of the treaty the blame of World War I is discussed. “Part VIII – Reparations – Section I: General Provisions – Article 231. The Allied and Associated Governments affirm and Germany accepts the responsibility of Germany and her allies for causing all the loss and damage to which the Allied and Associated Governments and their nationals have been subjected as a consequence of the war imposed upon them by the aggression of Germany and her allies” (Kirchberger 365).
The Treaty of Versailles was the Treaty signed by Germany, France, Britain, and the USA in 1919 on June 28th. The “Big Three” all had their personal aggressions towards Germany and as a result the Treaty was rather harsh. The Treaty of Versailles was significant to some extent to Hitler’s rise to power in 1933 because it left the people of Germany vulnerable and confused which made Hitler’s extreme ideas easier to appeal to. Economically, it left Germany’s economy in tatters due to the reparations. Socially, there was the war guilt clause which caused an outrage amongst the German people.
The most controversial part of the treaty was Part VIII, which established Germany's liability for the war and the damages of the Allies. It set Germany's reparations. It had Article 231, in which Germany accepted its responsibility for the Allied damages during the war. Article 231, or the War Guilt Clause, raised negative sentiments among Germany's population, giving rise to and emboldening the right-wing German parties. It was a precursor
How did the Versailles Treaty, which was formed months after the end of the First World War, help cause the Second World War? This treaty contributed by treating Germany harshly through the following ways: territorial losses, military restrictions, economic reparations, and war guilt. One way the Treaty of Versailles had
One could argue that the origins of the Cold War should be traced to World War II and the breakdown of the wartime alliance between the U.S. and the Soviet Union. This all started with one act of betrayal. For example, in Document C, Soviet Ambassador Nikolai Novikov states that "The foreign policy of the United States, which reflects the imperialist tendencies of American monopolistic capital, is characterized in the postwar period by striving for world supremacy." The belief that freedom and democracy would die under communist rule caused the United States to start a problem or feud that would last for a long time. The decisions made by the United States in W.W.II caused tensions to start between the U.S. and the Soviet Union. Communism spread through the nation.
“The U.S. economy could have potentially collapsed if debts were not paid back. France and Great Britain were using loans from the U.S. to pay for their war. Also, they were purchasing vast amounts of arms from the United States, all of which on credit” (“Why Did the U.S. Enter World War I”). Ideologically speaking, President Woodrow Wilson wanted to “make the world safe for
World War I ended in 1918 with the victorious Allied powers, and the peace-promising Treaty of Versailles. However, this treaty's peace did not last long, as its unrealistic demands caused strong resentment within the Central powers against the Allied powers. Territorial losses, reparation payments, and inflation all left Europe in economic ruins. The damage and destruction that resulted from World War I paved a clear path that allowed World War II to occur. It began in 1933 when Adolf Hitler gained power and, with the help of the Nazi Party, turned Germany into a totalitarian dictatorship.
One of the main origins of the war was the Treaty of Versailles and the harsh and demanding conditions that it set on Germany. The Treaty of Versailles was formed after World War I in an effort to create peace. The treaty was mainly between the Allied Powers of World | https://www.ipl.org/essay/How-Harsh-Were-The-Terms-Of-The-F3AYYCB74AJP6 | 21 |
The Caribbean is considered one of the first regions to lose its indigeneity due to the massive immigration of people from Europe, North America, Africa, and the Far East since the early 1600s1. The erosion of the indigenous Caribbean culture emerged from the intrusion of the Spanish and British authorities who colonized the Greater Antilles, which form the current states of Jamaica, Puerto Rico, and Cuba, among others2. Initially, the Caribbean natives engaged in subsistence farming, planting crops like cassava and cotton. The indentured servants from Europe adopted the indigenous lifestyle as they also participated in farming activities together with the local communities3.
In the early 1700s, most of the Caribbean communities saw the need for economic development that required the intensification of agriculture production activities. Therefore, the need for mobilization of resources arose to facilitate the commercialization of agricultural production4. For instance, a 1761 decree passed by the Spanish rule in Trinidad required the Spaniards along with the Americans and the Indians to come out of the countryside woods and occupy the urban centers. Therefore, such developments underlined the need for a revolution that would strengthen the influence of the colonial powers in the region.
For this reason, the need for adequate labor prompted the slavery trade trend as seen in the case of the Sugar Revolution in the Caribbean. In this respect, this paper holds that before the introduction of sugar production in the Caribbean, indentured labor was common in the plantation of cotton and tobacco. The natives practiced subsistence farming, which required communal work. However, the Sugar Revolution required mass farming of sugarcanes and due to the shortage of labor, enslaved Africans were brought in to fill the gap.
The Forms of Labor in the Caribbean before the Introduction of Sugar Production
The Caribbean region has a rich history that depicts how the early settlers and the natives engaged in farming activities. Notably, the early government systems engaged indigenous and European servants in the production of food to sustain the growing population. However, before the intrusion of the Europeans led by the Spanish and English authorities, Caribbean communities provided labor communally for agricultural activities that focused on staple food production.5
The forested and hilly mainland provided favorable landscapes for farming besides fishing activities. Organized into chiefdoms since the onset of the 16th century, the Caribbean communities showed some organization that aimed at improving the well-being of the people like in the case of Hispaniola before Spain occupied it6. Mainly, able men and women engaged in the cultivation of the land as they predicted the weather patterns.
Before 1500, the establishments in the Caribbean chiefdoms opposed the trade of slaves to work in various parts of the world, including North America, as seen in the case of Queen Isabel's order to return the slave voyage to Castile7. However, the entry of Spanish rule in the region from 1493 onwards saw intermarriages with the local communities to tap the labor market in the region. For this reason, inter-ethnic relationships developed that sought to join hands in economic activities. As such, the interactions prompted the development of labor divisions to facilitate agricultural activities.
Therefore, the royal authorities in places such as Hispaniola embraced the economic approaches underpinned by the foreigners by the year 1500. As a result, informal divisions of labor took effect to replace the traditional approach to agricultural activities. Notably, men engaged in irrigation farming and mining activities as women focused on cotton weaving and wood carving as seen in the case of Barbados in the early 1630s8.
However, the small Caribbean population jeopardized the sustainability of the Spanish and English rule in the region owing to the upsurging labor demands. For this reason, the authorities encouraged the collaboration of the Europeans and the local communities to work together in the various farms and mines to meet the market demands. At some point, the Spaniards organized the importation of Indians to supplement the labor needs in the Caribbean, leading to the early forms of the slave trade in the region.
Towards the end of the 1500s, the Spaniards that occupied extensive regions of the Caribbean noted a problem with labor inadequacy as mining and farming activities developed into large-scale production. As such, the development of the formal division of labor systems emerged as the Spanish rule gained popularity. Furthermore, the unfair treatment of Caribbean people by the Spanish administrators in the farms and mines weakened the working relationship that existed earlier.9
Consequently, the trend triggered a series of rebellions from the natives, thereby prompting the need for the introduction of the slave trade as sugar production activities in the region increased to meet the global demand for the product.
The Sugar Revolution and African Slaves
The labor shortage issue in the late 1500s created a substantial hindrance to the growth and development of sugar production activities in the Caribbean. By 1632, large sugar plantations characterized the sugar revolution in the Caribbean as the Spanish and English administrators sourced labor from the cotton and tobacco fields to work in the new plantations10. However, the inadequacy of the local labor lingered as pressure from the natives in the form of revolts necessitated the establishment of a slave market. Furthermore, the skills acquired by the indentured servants from Europe prompted the development of a slave market in several parts of the Caribbean including West Indies, and Jamaica11.
In the 1640s, the European servants in the plantations had gained adequate skills and knowledge of managing agricultural activities on a large scale. Therefore, for the smooth progress of the sugar revolution, the acquisition of Black labor proved strategic to the Europeans managing the farming activities12. Importantly, the new market trends that necessitated the production of sugar on a large scale required the European servants to employ their experience in the fields to adapt to the changes. Notably, the English rule structures in the West Indies had created a system that allowed the use of local and indentured servants while that labor was sufficient, but the onset of new market forces now urged consideration of African labor, considered cheap and readily available.
Therefore, the colonial rule in the Caribbean had already established structures that favored the adoption of African slaves in the long-term since they envisioned market trends in the wake of the sugar revolution. In 1623, for instance, the English and Spaniard systems in the Caribbean intensified their competition for a substantial share of the tobacco market in Europe thereby, necessitating capital injections in the industry for their sustainability13. The trends implied a similar approach to the newly developing sugar plantations in the region. In this light, the shift towards the establishment of a slave market was a response to the market forces. Additionally, the need for the deployment of slaves in the market aimed at stabilizing the capital aspect of sugar production owing to its growth14.
Furthermore, the decline of the tobacco and cotton industries in the mid-1640s favored the full concentration on sugar production, as it proved profitable since it had a ready market in the Americas and Europe. Thus, the need for a sustainable economic system required a stable labor supply to ensure that the European colonies exercise their authority in the Caribbean smoothly. In 1645, for instance, the falling prices of indigo in the global market prompted the English administration to diverge its focus on the sugar industry that fetched greater returns compared to other products from the Americas. Likewise, the Leeward planters in the Spanish settlements proved inadequate to sustain the labor needs of the large-scale sugar production activities in Barbados thereby, necessitating the additional slave laborers from Africa.
The 17th century saw the Spanish and English authorities increasing their hunt for African slaves as their efforts of seizing the Caribs and other groups failed. Amid the dominance of the slave market by the Dutch and the Iberians, they managed to secure considerable slave populations from Africa to stabilize their labor supply in the sugar plantations. The 18th century experienced more slave trade activities in places such as Jamaica that had the largest demand for African slaves by then15.
The early invasion of the Europeans in the Caribbean did not prompt the employment of slave trade in the various agricultural activities until the development of the sugar plantations in the 16th century. Initially, the English and Spanish immigrants focused on integrating foreign servitude and the local labor in agriculture production. However, changing market trends that prompted the sugar revolution initiated the slave trade trend in the Caribbean to provide an adequate supply of labor.
A. Diptee, [Early European Settlement & Indigenous Resistance], HIST 2710.
A. Diptee, [Early European Settlement, Part II], HIST 2710.
A. Diptee, [Indigenous Peoples of the Caribbean: Identity, Memory, and the Politics of History], HIST 2710.
Altman, Ida. “The revolt of Enrique and the historiography of early Spanish America.” The Americas 63, no. 4 (2007): 587- 614.
Beckles, Hilary. “Plantation Production and Proto–White Slavery: White Indentured Servants and the Colonization of the English West Indies.” Americas 41, no. 1 (1985): 21- 45.
Burnard, Trevor, and Kenneth Morgan. “The Dynamics of the Slave Market and Slave Purchasing Patterns in Jamaica, 1655 -1788.” William and Mary Quarterly 58, no. 1 (2001): 205 -228.
Burnard, Trevor. “European Migration to Jamaica, 1655 -1780,” William and Mary Quarterly 53, no. 4 (1996): 769 – 796.
Forte, Maximilian. “Extinction: Ideologies against Indigeneity in the Caribbean.” Southern Quarterly 43, no. 4 (2006): 46-69.
Pons, Frank. History of the Caribbean: plantations, trade, and war in the Atlantic world. Princeton: Markus Wiener Pub., 2007.
- Maximilian Forte, “Extinction: Ideologies against Indigeneity in the Caribbean,” Southern Quarterly 43, no. 4 (2006): 47.
- A. Diptee, [Early European Settlement & Indigenous Resistance], HIST 2710.
- Hilary Beckles, “Plantation Production and Proto–White Slavery: White Indentured Servants and the Colonization of the English West Indies,” Americas 41, no. 1, (1985): 21.
- Trevor Burnard, “European Migration to Jamaica, 1655 -1780,” William and Mary Quarterly 53, no. 4, 790-792.
- A. Diptee, [Indigenous Peoples of the Caribbean: Identity, Memory, and the Politics of History], HIST 2710.
- Ida Altman, “The revolt of Enrique and the historiography of early Spanish America,” The Americas 63, no. 4 (2007): 590.
- Burnard, 790.
- Beckles, 25.
- A. Diptee, [Early European Settlement, Part II], HIST 2710.
- Frank Pons, History of the Caribbean: plantations, trade, and war in the Atlantic world (Princeton: Markus Wiener Pub., 2007), 123.
- Trevor Burnard and Kenneth Morgan, “The Dynamics of the Slave Market and Slave Purchasing Patterns in Jamaica, 1655 -1788,” William and Mary Quarterly 58, no. 1 (2001): 208.
- Beckles, 23.
- Pons, 125.
- Beckles, 23.
- Burnard and Morgan, 205. | https://studycorgi.com/european-invasion-and-agriculture-in-the-caribbean/ | 21 |
An indirect tax is a tax that is collected and paid to the government by another party, then passed along to the taxpayer. Taxes are always paid to some government entity, usually the IRS for federal taxes or the state where the transaction takes place. But in many cases, the consumer isn't aware that the tax is being paid, which is why indirect taxes are sometimes called hidden taxes. It's easier to explain indirect taxes by comparing them to direct taxes and giving you some examples.
Indirect taxes are placed on goods and services which raise the price so that the consumer pays more for the item. You might want to think of indirect taxes as hidden taxes. Here's a simple example of an indirect tax: the gasoline tax. The gasoline tax rate is set by states. If you buy gasoline in Texas, the gasoline tax there is 20 cents per gallon. The tax is added to the price of gas. The producer pays the tax to the state, and it's built into the price you pay for gas.
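To see how a hidden per-gallon tax works out in dollars, here is a minimal sketch in Python. Only the 20-cent Texas rate comes from this article; the pre-tax price and the size of the fill-up are assumed numbers used purely for illustration.

```python
# Sketch: how a per-gallon excise tax is folded into the pump price.
# TEXAS_EXCISE_TAX comes from the article; PRE_TAX_PRICE and GALLONS are assumptions.

PRE_TAX_PRICE = 2.30     # assumed wholesale cost plus margin, dollars per gallon
TEXAS_EXCISE_TAX = 0.20  # dollars per gallon, remitted by the producer to the state
GALLONS = 12             # assumed size of a fill-up

posted_price = PRE_TAX_PRICE + TEXAS_EXCISE_TAX  # tax is built into the posted price
hidden_tax = TEXAS_EXCISE_TAX * GALLONS

print(f"Posted price per gallon: ${posted_price:.2f}")
print(f"Tax paid indirectly on {GALLONS} gallons: ${hidden_tax:.2f}")
# The receipt shows only the posted price; the $2.40 in tax never appears as a line item.
```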
How Direct Taxes Differ From Indirect Taxes
The best example of a direct tax is the income tax, including both personal income taxes and business income taxes. The tax is paid directly on the income of the person or business to the IRS and to the state (if it has income taxes).
Other direct taxes are:
The estate tax or wealth tax, which is paid based on the value of everything owned by the deceased at the time of death.
Capital gains taxes are directly imposed on investors when they sell an investment for a gain.
Sales taxes are also considered direct taxes because they are imposed on individual customers at the time of purchase.
5 Examples of Indirect Taxes
Import Duties or Tariffs: Import duties are a type of indirect tax because they are imposed on goods when they come into the country. The customer ultimately pays this tax as an increased price for the goods. Tariffs are imposed by countries on each other's goods, and they are usually managed and agreed on through free trade agreements.
Excise Tax: Excise taxes are use taxes; you pay a tax for using or buying a product. But you don't see the tax because it is paid by the producer or manufacturer and included in the price of the product. Excise taxes are sometimes called sin taxes because they are on products considered unnecessary or "sinful," like tobacco, alcohol, or gambling. As mentioned above, gasoline taxes are excise taxes.
Businesses also pay excise taxes on their use of specific products. For example, fuel taxes are excise taxes, as are taxes on environmental products, such as domestic petroleum oil spills and ozone-depleting chemicals. Transportation companies pay excise taxes in the form of airport fees and ship passenger taxes; car manufacturers pay an excise fee (especially for cars with lower fuel efficiency). Hotel fees might be considered excise taxes, but these are usually directly passed on to customers in their bills.
VAT Tax: VAT taxes are common in Europe and other countries but aren't used in the U.S. A VAT tax or value-added tax is a series of taxes imposed on the production of products all through the process, with the customer paying the final tax. A VAT is different from sales tax because the only one paying sales tax is the consumer.
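A small worked sketch in Python (hypothetical prices and a 10% rate, chosen only for illustration) shows the practical difference: under a VAT, each business in the chain remits tax on the value it adds, while a sales tax is collected once from the final consumer, even though the totals can come out the same.

rate = 0.10                                                  # hypothetical 10% rate
stages = [("farmer", 100), ("miller", 150), ("baker", 220)]  # made-up sale price at each stage

vat_collected = 0.0
previous_price = 0
for name, price in stages:
    value_added = price - previous_price
    vat_collected += value_added * rate                      # each seller remits tax on its value added
    previous_price = price

sales_tax = stages[-1][1] * rate                             # a sales tax is charged once, at the final sale

print(round(vat_collected, 2), round(sales_tax, 2))          # both come to 22.0 on these numbers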
By the way, when you shop at a "duty-free" store at an airport, it's the VAT tax you are avoiding. That doesn't mean you get a bargain, because the price may be higher.
Communications Service Tax: Service taxes are determined by each state and they include taxes on cable and satellite TV services, phone services, and mobile communications. In some states, the charges are passed on to the customers.
Stamp Tax: Stamp taxes are imposed by states on documents (the stamp in these cases is like a notary stamp, not a postage stamp). For example, stamp taxes are often required on public documents for the transfer of property, like a mortgage. The stamp tax may be included in the cost of the document, so it would be an indirect tax.
Are Indirect Taxes Regressive Taxation?
Regressive taxes are taxes that take a larger share of income from lower-income individuals than from higher-income individuals. Lower-income taxpayers spend a greater percentage of their income on the items they need or choose to buy, so flat taxes on those items hit them harder. Tariffs can be regressive because they raise the price of food and other everyday items. Sales tax is also regressive because it hits lower-income people harder on things like clothing and household goods.
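A quick illustration with made-up figures shows why flat consumption taxes are considered regressive: two households that buy the same taxed goods pay the same dollar amount, but it is a very different share of their incomes. Assuming a hypothetical 8% sales tax:

sales_tax_rate = 0.08                          # hypothetical 8% sales tax
taxed_purchases = 10_000                       # both households buy the same $10,000 of taxed goods
tax_paid = taxed_purchases * sales_tax_rate    # $800 for each household

for income in (25_000, 100_000):
    share = tax_paid / income * 100
    print(f"${income:,} income: sales tax equals {share:.1f}% of income")
# $25,000 income: sales tax equals 3.2% of income
# $100,000 income: sales tax equals 0.8% of income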
The combination of tariffs (an indirect tax) and sales tax (a direct tax) hits the poor with a double tax hit, raising the prices of goods they need to buy and increasing the sales tax they must pay. | https://www.thebalancesmb.com/what-is-an-indirect-tax-give-me-some-examples-4172136 | 21 |
36 | History of Northwest Territories capital cities
The history of Northwest Territories capital cities begins with the purchase of the Territories by Canada from the Hudson's Bay Company in 1869, and includes a varied and often difficult evolution. The Northwest Territories is unique among Canada's provinces and territories in having had seven capital cities in its history. The territory has changed the seat of government for numerous reasons, including civil conflict, development of infrastructure, and a history of significant revisions to its territorial boundaries.
The result of these changes has been a long and complex road to responsible government. Effectively providing services and representation for the population has been a particular challenge for the Territories' government, a task often complicated by the region's vast and changing geographic area. A small number of communities in Northwest Territories have unsuccessfully tried to become the capital over the years. The territory has had the seat of government outside of its territorial boundaries twice in its history. The only other political division in Canada without a seat of government inside its own boundaries was the defunct District of Keewatin that existed from 1876 until 1905.
The term "capital" refers to cities that have served as home for the Legislative Assembly of Northwest Territories, the legislative branch of the territorial government. In Canada, it is customary for provincial and territorial governments to keep the administrative centre of the civil service in the same city as the legislative branch. The Northwest Territories, however, officially maintained separate administrative and legislative capitals between 1911 and 1967, the only province or territory in Canadian history to have had such an arrangement.
Fort Garry, Manitoba (1870–1876)
The Government of Canada purchased the North-Western Territory and Rupert's Land from the Hudson's Bay Company in 1868, under the terms of the Rupert's Land Act 1868, for £300,000. Both purchased territories were largely unsettled, consisting mostly of uncharted wilderness. After the purchase, the Government decided to merge the two properties into a single jurisdiction and appoint a single territorial government to run both. The purchase added a sizable portion of the current Canadian landmass.
In 1869, Ontario Member of Parliament William McDougall was appointed as the first Lieutenant Governor of the Northwest Territories and sent to Fort Garry to establish formal governance for Canada. Before his party arrived at the settlement, a small group led by Louis Riel intercepted him near the United States border and forced him to turn back because they opposed the transfer to the Canadian government. The inhabitants of the Red River Valley began the Red River Rebellion, delaying formal governance until their demands for provincial status were met.
The rebellion resulted in the creation of the Province of Manitoba (inclusive of Fort Garry) and a delay in establishing governance in the Territories. In 1870, the Northwest Territories and Manitoba formally entered the Canadian confederation. The two jurisdictions remained partially conjoined under the Temporary Government Act, 1870. The Temporary North-West Council was appointed in 1872, mainly from members of the new Manitoba Legislative Assembly, with the Lieutenant Governor of Manitoba serving as the leader of the territorial government. The Governor and Council were mandated to govern the Territories through the Manitoba Act and did so from outside of the Northwest Territories. Fort Garry served as the first seat of government for both jurisdictions.
The temporary government sat for the first time in 1872. It was renewed by federal legislation each year until a permanent solution for governance was decided upon. The federal government renewed the Temporary Council for the last time in 1875 and chose a new location, within the boundaries of the Northwest Territories, to form a new government. Along with the new seat of power, a greatly reduced council was appointed, together with a new Lieutenant Governor to lead the Territories without also governing Manitoba.
In the 1870s, Fort Garry consisted of two distinct settlements. The first site was named Upper Fort Garry, and the secondary site was named Lower Fort Garry, 32 kilometres (20 mi) downstream on the Red River. After the territorial government moved, Fort Garry continued to be the seat of government for Manitoba, and for the now defunct District of Keewatin territory between 1876 and 1905. Fort Garry evolved to become modern-day Winnipeg, still the capital of Manitoba, with Lower Fort Garry being declared a national historical site.
Fort Livingstone, North-West Territories (1876–1877)
The North-West Territories Act, 1875 dissolved the Temporary North-West Council and appointed a permanent government to take effect on October 7, 1876. The new council governed from Fort Livingstone, an outpost constructed west of the Manitoba border, in modern-day Saskatchewan. Fort Livingstone served as a small frontier outpost and not as a bona fide capital city. The location was chosen by the federal government as a temporary site to establish the new territorial government until the route of the railway was determined.
Fort Livingstone was founded in 1875 by the newly created North-West Mounted Police, the predecessor of the Royal Canadian Mounted Police, Canada's national police force. The Swan River North-West Mounted Police Barracks, inside Fort Livingstone, became the temporary assembly building for legislative-council sessions as well as the office for the Lieutenant Governor.
The bulk of the police forces moved out to Fort Macleod in 1876, to crack down on the whisky trade. A year later, Lieutenant Governor David Laird moved the seat of government to Battleford. The decision was based upon the original plans of constructing the Canadian Pacific Railway (CPR) through Battleford.
Fort Livingstone continued to serve as a small outpost until being totally destroyed by a prairie grass fire in 1884. The nearest modern settlement to the original Fort Livingstone site is Pelly, Saskatchewan, four kilometres (2.5 mi) to the south. The fort is sometimes referred to as Fort Pelly or Swan River. The Fort Livingstone site is marked with a plaque, was declared a Saskatchewan provincial heritage site, and contains no resident population.
Battleford, District of Saskatchewan, North-West Territories (1877–1883)
The Northwest Territories government moved to Battleford in 1877 on the order of the Lieutenant Governor. Battleford was supposed to be the permanent capital of the Territories. The town was chosen because it was expected to be linked with the Canadian Pacific Railway.
The government in Battleford would see significant milestones towards attaining responsible government for the Northwest Territories. For the first time, the territory had democratically elected members join the appointed members in the assembly. Elections in the territory became a reality after the passage of the Northwest Territories election ordinance of 1880. The first election took place in 1881, after electoral districts were created by royal proclamations issued on the order of the Lieutenant Governor. Battleford hosted the first official royal visit in western Canada, when the Marquis of Lorne and Princess Louise Caroline Alberta toured the territories in 1881.
The first Northwest Territories legislature building, which also served as the residence of the Lieutenant Governor and was named "NWT Government House", was completed and used by the territorial government until 1883. After the government moved, the building stood as a historical site until it was destroyed in a fire in 2003.
After consultation with Canadian Pacific Railway officials, Lieutenant Governor Edgar Dewdney made the decision to move the capital to Regina, also in present-day Saskatchewan, in June 1882. The decision was controversial with the public because Dewdney owned real estate in Regina, and he was accused of a conflict of interest between his private affairs and the needs of the government.
Regina, District of Assiniboia, North-West Territories (1883–1905)
After Edgar Dewdney ordered that the government be moved south to meet the railway in Regina, it was confirmed as the new territorial capital on March 27, 1883. Construction of a new legislature began. In Regina, the government continued to grow as the size of the settlement increased rapidly. The legislature had the most sitting members in Northwest Territories history after the fifth general election in 1902.
The government in Regina struggled to deliver services to the vast territory. The influx of settlers and responsibility for the Klondike, as well as constant fighting with the federal government over limited legislative powers and minimal revenue collection, hampered the effectiveness of government. The government during this period slowly released powers to the elected members. In 1897, after control of the executive council was ceded by the Lieutenant Governor to the elected members, a short-lived period of party politics evolved that challenged the consensus model of government that had been used since 1870.
The territorial government under the leadership of Premier Frederick Haultain struck a deal with the federal Government of Canada in early 1905 to bring provincial powers to the territories. This led to the creation of the provinces of Saskatchewan and Alberta from the southernmost and most populous areas of the territory. The Northwest Territories, reduced to its northern, lightly populated hinterland, continued to exist under the 1870s constitutional status under control of the federal government. A new council was convened in Ottawa, Ontario to deal with the region.
The Territorial Administration Building was declared a historical site by the Saskatchewan government after it was restored in 1979, and the building remains standing to this day. The territorial government would not have another permanent legislature of its own design until 1993. After 1905, Regina continued to serve as capital for the province of Saskatchewan.
Ottawa, Ontario as legislative capital (1905–1967)
In 1905, under the direction of Wilfrid Laurier, the Northwest Territories seat of government was moved to Ottawa, Ontario, the capital of Canada. This change was made when Northwest Territories defaulted back to the 1870 constitutional status after Alberta and Saskatchewan were sectioned off from the territory on September 1, 1905. After the populated regions of the territory were made into their own jurisdictions, there were very few settlements left in the territory with any significant population or infrastructure. The non-Inuit population was estimated to total around 1,000. Inuit were not counted at the time because they had no status under Canadian law, and were not yet settled in towns or villages.
In the period without a sitting council from 1905 to 1921, the government of the Territories was small but still active. A small civil service force was sent to Fort Smith to set the town up as the new administrative capital in 1911. A budget to provide minimal services was still given by the federal government. Commissioner Frederick D. White administered the territory's day-to-day operations during that period. During this 16-year lapse in legislative government, no new laws were created, and the Territories and its population were severely neglected even with the services provided at the time.
The first session of the new council was called to order in 1921, a full 16 years after the government was dissolved in Regina. This new government contained no serving member who was resident in the Territories. The council during this period was primarily composed of high-level civil servants who lived and worked in Ottawa. The first person to sit on the council since 1905 who actually resided within the Territories was John G. McNiven, who was appointed in 1947.
The Ottawa-based council eventually grew sensitive to the needs of the territory's residents. Democracy returned to the territories in the sixth general election in 1951. After the election, the council was something of a vagabond body, with alternating sittings in Ottawa and various communities in Northwest Territories. The council held meetings in school gymnasiums, community halls, board rooms, or any suitable infrastructure. The council even transported the ceremonial implements needed to conduct meetings, such as the speaker's chair and mace. Both are traditional artifacts common to Westminster-style parliaments.
Legislative sessions held in Ottawa were conducted in an office building on Sparks Street. The Northwest Territories government continues to hold an office in Ottawa on Sparks Street to this day. In 1965, a federal government commission was set up to determine a new home for the government and the future of the territory. The seat of government was moved back inside the territories to Yellowknife, after it was selected capital in 1967.
Fort Smith, District of Mackenzie, Northwest Territories as administrative capital (1911–1967)
Fort Smith became the official administration and transportation hub for the Northwest Territories in 1911. This marked the first services provided by the territorial government in six years. The first services included an agent from the Department of Indian Affairs, a medical doctor, and a Royal Canadian Mounted Police station.
Fort Smith was chosen to house the civil service because of its geographical location and state of development. The community was one of the few that had steamboat service from the railheads in Alberta and access to the vast waterways in the territory. It was the most developed community in the territory and the easiest for the government in Ottawa to access.
Fort Smith housed the civil service working in the Territories officially until 1967. The town continued to host the civil service for many years after Yellowknife was picked as capital, because the infrastructure was not yet in place in the new capital city at the time.
Carrothers Commission examines Self-government for the North (1965-1967)
The "Advisory Commission on the Development of Government in the Northwest Territories", commonly called the Carrothers Commission for its chair, Alfred Carrothers, was struck by the Government of Canada in 1965. The Carrothers Commission marked a significant turning point in modern Northwest Territories history. The Carrothers Commission was tasked to evaluate and recommend changes to the Northwest Territories to deal with an array of outstanding issues regarding self-government in the north. One of the more visible and lasting effects of the Carrothers Commission was to choose a new capital city for the territorial government.
The Carrothers Commission, for the first time, gave some voice to residents in the Northwest Territories through extensive consultations with the territorial population. In prior years, the decision to change the seat of government had always been made without consulting Northwest Territories residents. Edgar Dewdney, for example, who made the decision to change the capital from Battleford to Regina, faced controversy because he owned property in Regina. After the territorial government moved to Ottawa, the government was often resented for being so far away.
The Carrothers Commission spent two years visiting nearly every community in the territory and consulting with residents, community leaders, business people, and territorial politicians. The Carrothers Commission investigated and considered five communities for the capital: Hay River, Fort Simpson, Fort Smith, Inuvik and Yellowknife.
Yellowknife, District of Mackenzie (until 1999), North Slave Region (1999–present), Northwest Territories, current capital (1967–present)
Yellowknife officially became the capital on September 18, 1967, after the Carrothers commission chose it for its central location, transportation links, industrial base and residents' preferences.
In 1967, Yellowknife was not yet ready to serve as home for the government. While the capital's infrastructure slowly developed, most of the civil service remained in Fort Smith and the governing Council continued its practice of holding legislative sessions all over the territory.
The Northwest Territories marked a new era when the legislative council moved into a newly constructed legislature building on November 17, 1993. The new legislature was the first building built specifically for the Northwest Territories government since the government sat in Regina 88 years earlier. The legislature building was constructed to feature themes derived from the Inuit culture, which signaled that the government was sensitive to the ethnicity of the resident population.
The modern territorial government has matured in Yellowknife to become effective and responsible. It has largely regained powers on par with those of the pre-1905 government that was dissolved during the creation of Alberta and Saskatchewan. The civil service has been consolidated in the city of Yellowknife, and the territory has taken over administering its own elections from Elections Canada. Education is now under the jurisdiction of the territorial government, and the territory holds most of the powers afforded to the provinces. There has even been talk by the federal government of the territory gaining provincial status in the future.
Lessons learned for Nunavut capital (1995 vote)
As chronicled above, all seven capitals throughout the history of the Northwest Territories were chosen by some form of external government decision, though the Carrothers Commission did consult with the territorial population to guide its decision.
After the selection of Yellowknife as the capital in 1967, many residents in the eastern Arctic continued to feel unrepresented by the territorial government, and many movements and groups were formed to remedy the situation. Lessons had been learned from the historical changes in the Northwest Territories' seat of power, resulting in a number of territorial democratic processes leading to the creation of the new territory of Nunavut in 1999, formed from the eastern half of the Northwest Territories.
In 1976, as part of the land claims negotiations between Inuit Tapiriit Kanatami and the Government of Canada, the parties discussed division of the Northwest Territories to provide a separate territory for the Inuit. In 1982, a plebiscite on division was held throughout the Northwest Territories, in which a majority of the residents voted in favour of division.
The land claims agreement was completed in September 1992 and ratified by a majority of voters. On July 9, 1993, the Nunavut Land Claims Agreement Act and the Nunavut Act were passed by the Canadian Parliament.
In December, 1995, the Nunavut capital plebiscite was held, and the voters in the future Nunavut territory chose Iqaluit as their capital city, defeating Rankin Inlet. Iqaluit became the official capital on April 1, 1999, when Nunavut separated from the Northwest Territories.
- Commissioners of the Northwest Territories
- List of Northwest Territories general elections
- List of premiers of the Northwest Territories
- List of Northwest Territories Legislative Assemblies
- "Rupert's Land and North-Western Territory - Enactment No. 3". Department of Justice Canada. Archived from the original on 2006-10-08. Retrieved 2007-11-23.
- "History of Northwest Territories in confederation". Library and Archives Canada. Archived from the original on February 11, 2006. Retrieved 2006-04-13.
- Robert Drislane and Gary Parkinson (September 26, 2002). "Red River Rebellion". Athabasca University. Retrieved 2006-07-30.
- "Manitoba Act 1870". Solon Law Archives. Retrieved 2006-07-30.
- "An Act for the temporary Government of Rupert's Land and the North-Western Territory when united with Canada". Indian and Northern Affairs Canada. Archived from the original on June 26, 2006. Retrieved 2006-07-30.
- "Lower Fort Garry National Historical Site". Parks Canada. Archived from the original on 2006-07-14. Retrieved 2006-07-30.
- Northwest Territories appointments and election results 1876–1905 (PDF). Saskatchewan Archives Board. p. 7. Archived from the original (PDF) on 2006-01-13. Retrieved 2006-07-30.
- "Seats of Government of the Northwest Territories". Legislative Assembly of Alberta. Archived from the original on 2000-06-01. Retrieved 2007-08-28.
- "Battleford". The Encyclopedia of Saskatchewan. Archived from the original on 2007-12-31. Retrieved 2007-11-23.
- "Historic Fort Livingstone". Village of Pelly Saskatchewan. Archived from the original on 2007-03-12. Retrieved 2006-07-30.
- "The Honourable David Laird, 1876-81". Alberta Legislative Assembly. Archived from the original on 2000-05-17. Retrieved 2007-11-23.
- "The North-West Mounted Police 1874 - 1904". Museum of the North-West Mounted Police. Archived from the original on October 12, 2007. Retrieved 2007-11-23.
- Provincial Heritage Property: Celebrating Saskatchewan's Centennial: (PDF). Government of Saskatchewan. Government House Battleford. Retrieved 2007-08-28.
- "Writs Issued for six by-elections, New Government Sworn in". Calgary Herald. October 8, 1897. p. 1. Archived from the original on August 28, 2006. Retrieved 2007-10-27.
- "Territorial Administration Building". Government of Saskatchewan Culture Youth and Recreation. Retrieved 2007-10-27.
- Edwin Welch (1981). Records of the Northwest Territories Council 1921-1951. Northwest Territories Department of Culture and Communications.
- "NWT Historical Timeline John G. McNiven". Prince of Wales Northern Heritage Centre. Archived from the original on 2011-07-06. Retrieved 2007-10-27.
- Cloutier, Edmond (1952). Report of the Chief Electoral Officer. Queen's Printer.
- "First Municipal Voted Slated for N.W.T.". Vol 58 No. 237. Winnipeg Free Press. July 3, 1951. p. 4.
- "Old Speakers Chair". Northwest Territories Legislative Assembly. Archived from the original on September 26, 2007. Retrieved 2006-07-31.
- "Government of Northwest Territories Ottawa office". Government of the Northwest Territories. Archived from the original on 2008-08-28. Retrieved 2006-04-13.
- "Carrothers Commission Archives" (PDF). Prince of Wales Northern Heritage Centre. Archived from the original (PDF) on 2008-12-06. Retrieved 2006-04-13.
- "Communities - A Guide to Mineral Exploration Fort Smith". Indian and Northern Affairs Canada. Archived from the original on 2007-06-11. Retrieved 2007-10-26.
- On the banks of the Slave: a history of the community of Fort Smith, Northwest Territories. Fort Smith (N.W.T.). Tourism Committee. 1974.
- "How they chose the capital—in 1967". Nunatsiaq News. December 1, 1995. Archived from the original on 2007-06-30. Retrieved 2006-06-13.
- "Fort Smith History". Fort Smith Municipal Government. Archived from the original on 2007-08-22. Retrieved 2007-10-26.
- "Canada Provinces". Statoids. Retrieved 26 October 2007.
- "Our Building". Northwest Territories Legislative Assembly. Retrieved 2007-10-26.
- "Legislative Reports - Northwest Territories". Canadian Parliamentary Review Vol 10 no 3 1987. Parliament of Canada. Retrieved 2007-10-26.
- Campbell Clark (November 23, 2004). "Martin Signals provincial status". Globe and Mail. Archived from the original on 2008-05-18. Retrieved 2006-04-23.
- Peter Jull. "Building Nunavut: A Story of Inuit SelfGovernment". The Northern Review #1 (Summer 1988). Yukon College. pp. 59–72. Retrieved 2009-02-16.
- Justice Canada (1993). "Nunavut Act". Retrieved 2007-04-26.
- "Nunavut Capital Plebiscite: How we got this far". Nunatsiaq News. December 1, 1995. Archived from the original on June 30, 2007. Retrieved 2006-07-30.
- "Iqaluit Wins the Capital Plebiscite". Nunatsiaq News. December 15, 1995. Archived from the original on March 11, 2007. Retrieved 2006-07-30.
- CBC Digital Archives (2006). "Creation of Nunavut". CBC News. Retrieved 2007-04-26. | https://library.kiwix.org/wikipedia_en_top_maxi/A/History_of_Northwest_Territories_capital_cities | 21 |
22 | - This article describes historical escape routes for American slaves. See Public transportation for underground rail systems in the literal sense.
The Underground Railroad is a network of disparate historical routes used by African-American slaves to escape the United States and slavery by reaching freedom in Canada or other foreign territories. Today many of the stations along the "railroads" serve as museums and memorials to the former slaves' journey north.
- See also: Early history of the United States
From its birth as an independent nation in 1776 until the outbreak of Civil War over the issue in 1861, the United States was a nation where the institution of slavery caused bitter divisions. In the South, slavery was the linchpin of an agrarian economy fueled by massive plantations of cotton and other labor-intensive crops. Meanwhile, to the north lay states such as Illinois, Indiana, Michigan, Ohio, Pennsylvania, New York, New Jersey and all of New England, where slavery was illegal and an abolitionist movement morally (and economically) opposed to slavery thrived. Between them lay what were called the "border states", sprawled west to east across the middle of the country from Missouri through Kentucky, West Virginia, Maryland and the District of Columbia to Delaware, where slavery was legal but controversially so, with abolitionist sympathies not unknown among the population.
By the mid-19th century, the fragile stalemate that had characterized North-South relations in earlier decades had given way to increasing tensions. A major flashpoint was the Fugitive Slave Act of 1850, a federal law which allowed escaped slaves discovered in free states to be forcibly transported back to enslavement in the South. In the Northern states, which had already ended slavery within their own borders, the new law was perceived as a massive affront — all the more so as tales of violent abductions by professional slavecatchers began to spread among the public. As federal law could be applied to otherwise-free states over local objections, any escaped slaves who reached northern states suddenly had good reason to continue toward Canada, where slavery had long been outlawed — and various groups quickly found motivation, as a matter of principle or religious belief, to take substantial risks to assist their northward exodus.
Various routes were used by black slaves to escape to freedom. Some fled south from Texas to Mexico or from Florida to various points in the Caribbean, but the vast majority of routes headed north through free states into Canada or other British territories. A few fled across New Brunswick to Nova Scotia (an Africville ghetto existed in Halifax until the 1960s) but the shortest, most popular routes crossed Ohio, which separated slavery in Kentucky from freedom across Lake Erie in Upper Canada.
This exodus coincided with a huge speculative boom in construction of passenger rail as new technology (the Grand Trunk mainline from Montreal through Toronto opened in 1856), so this loosely-knit intermodal network readily adopted rail terminology. Those recruiting slaves to seek freedom were "agents", the hiding or resting places along the way were "stations" with their homeowners "stationmasters", and those funding the efforts were "stockholders". Abolitionist leaders were the "conductors", of whom the most famous was former slave Harriet Tubman, lauded for her efforts in leading some three hundred people from Maryland and Delaware through Philadelphia and northward across New York State to freedom in Canada. In some sections, "passengers" travelled by foot or concealed in horse carts heading north on dark winter nights; in others they travelled by boat or by conventional rail. Religious groups (such as the Quakers, the Society of Friends) were prominent in the abolitionist movement, and songs popular among slaves referenced the biblical Exodus from Egypt. Effectively, Tubman was "Moses", and the Big Dipper and north star Polaris pointed to the promised land.
The Underground Railroad was relatively short-lived: the outbreak of the American Civil War in 1861 made a war zone out of much of the border states, rendering the already dangerous passage even more so while largely eliminating the need for an onward exodus from northern states to Canada; by 1865, the war was over and slavery had been eliminated nationwide. Still, it's remembered as a pivotal chapter in American history in general and African-American history in particular, with many former stations and other sites preserved as museums or historical attractions.
While there are various routes and substantial variation in distance, the exodus following the path of Harriet Tubman covers more than 500 mi (800 km) from Maryland and Delaware through Pennsylvania and New York to Ontario, Canada.
Historically, it was possible and relatively easy for citizens of either country to cross the U.S.-Canada border without a passport. In the 21st century, this is largely no longer true; border security has become more strict in the post-September 11, 2001 era.
Today, US nationals require a passport, U.S. passport card, Trusted Traveler Program card, or an enhanced driver’s license in order to return to the United States from Canada. Additional requirements apply to US permanent residents and third-country nationals; see the individual country articles (Canada#Get in and United States of America#Get in) or check the Canadian rules and U.S. rules for the documents required.
While the routes described here may be completed mostly overland, a historically-accurate portrayal of transport in the steam era would find road travel lagging dismally far behind the steam railways and ships which were the marvels of their day. The roads, such as they were, were little more than muddy dirt trails fit at best for a horse and cart; it was often more rapid to sail along the Atlantic Seaboard instead of attempting an equivalent overland route. A historically true Underground Railroad trip would be a bizarre intermodal mix of everything from horse carts to river barges to primitive freight trains to fleeing on foot or swimming across the Mississippi. At some points where routes historically crossed the Great Lakes, there is no scheduled ferry today.
The various books written after the Civil War (such as Wilbur Henry Siebert's The Underground Railroad from Slavery to Freedom: A comprehensive history) describe hundreds of parallel routes and countless old homes which might have housed a "station" in the heyday of the exodus northward, but there is inherently no complete list of everything. As the network operated clandestinely, few contemporaneous records indicate with any certitude what exact role each individual figure or venue played — if any — in the antebellum era. Most of the original "stations" are merely old houses which look like any other home of the era; of those still standing, many are no longer preserved in a historically-accurate manner or are private residences which are no longer open to the voyager. A local or national historic register may list a dozen properties in a single county, but only a small minority are historic churches, museums, monuments or landmarks which invite visitors to do anything more than drive by and glimpse briefly from the outside.
This article lists many of the highlights but will inherently never be comprehensive.
The most common points of entry to the Underground Railroad network were border states which represented the division between free and slave: Maryland; Virginia, including what's now West Virginia; and Kentucky. Much of this territory is easily reached from Washington, D.C.. Tubman's journey, for instance, begins in Dorchester County, on the Eastern Shore of Maryland and leads northward through Wilmington and Philadelphia.
“I'll meet you in the morning. I'm bound for the promised land.”
There are multiple routes and multiple points of departure to board this train; those listed here are merely notable examples.
Tubman's Pennsylvania, Auburn and Niagara Railroad
This route leads through Pennsylvania and New York, through various sites associated with Underground Rail "conductor" Harriet Tubman (escaped 1849, active until 1860) and her contemporaries. Born a slave in Dorchester County, Maryland, Tubman was beaten and whipped by her childhood masters; she escaped to Philadelphia in 1849. Returning to Maryland to rescue her family, she ultimately guided dozens of other slaves to freedom, travelling by night in extreme secrecy.
Cambridge, Maryland — Tubman's birthplace, and the starting point of her route — is separated from Washington, D.C. by Chesapeake Bay and is approximately 90 mi (140 km) southeast of the capital via US 50:
- 1 Harriet Tubman Underground Railroad National Monument, 4068 Golden Hill Rd., Church Creek (10.7 miles/17.2 km south of Cambridge via State Routes 16 and 335), ☏ . Daily 9AM-5PM. 17-acre (7 ha) national monument with a visitor center containing exhibits on Tubman's early life and exploits as an Underground Railroad conductor. Adjacent to the Blackwater National Wildlife Refuge, this landscape has changed little from the days of the Underground Railroad. Free.
- 2 Harriet Tubman Organization, 424 Race St., Cambridge, ☏ . Situated in a period building in downtown Cambridge is this museum of historic memorabilia open by appointment. There's also an attached community center with a full slate of cultural and educational programming regarding Harriet Tubman and the Underground Railroad.
As described to Wilbur Siebert in 1897, the portion of Tubman's path from 1 Cambridge north to Philadelphia appears to be a 120 mi (190 km) overland journey by road via 2 East New Market and 3 Poplar Neck to the Delaware state line, then via 4 Sandtown, 5 Willow Grove, 6 Camden, 7 Dover, 8 Smyrna, 9 Blackbird, 10 Odessa, 11 New Castle, and 12 Wilmington. An additional 30 mi (48 km) was required to reach 13 Philadelphia. The Delaware portion of the route is traced by the signed Harriet Tubman Underground Railroad Scenic Byway, where various Underground Railroad sites are highlighted.
- 3 Appoquinimink Friends Meetinghouse, 624 Main St., Odessa. Open for services 1st & 3rd Su of each month, 10AM. 1785 brick Quaker house of prayer which served as a station on the Underground Railroad under John Hunn and Thomas Garrett. A second story had a removable panel leading to spaces under the eaves; a cellar was reached by a small side opening at ground level.
- 4 [formerly dead link] Old New Castle Court House, 211 Delaware St., New Castle, ☏ . Tu-Sa 10AM-4:30PM, Su 1:30-4:30PM. One of the oldest surviving courthouses in the United States, built as meeting place of Delaware's colonial and first State Assembly (when New Castle was Delaware's capital, 1732-1777). Underground Railroad conductors Thomas Garrett and John Hunn were tried and convicted here in 1848 for violating the Fugitive Slave Act, bankrupting them with fines which only served to harden the feelings over slavery of all involved. Donation.
The dividing line between slave and free states was the Mason-Dixon line:
- 5 Mason-Dixon Line, Mason-Dixon Farm Market, 18166 Susquehanna Trail South, Shrewsbury, Pennsylvania. A concrete post marks the border between Maryland and Pennsylvania in Shrewsbury, where slaves were made free after crossing into Pennsylvania during the American Civil War. Farm market owners have stories to share about Underground Railroad houses and other slave stops between Maryland and Pennsylvania. Free to stand and take a picture with the concrete post marker.
The first "free" state on the route, Pennsylvania, completed its gradual abolition of slavery in 1847.
Philadelphia, the federal capital during much of George Washington's era, was a hotbed of abolitionism, and the Act for the Gradual Abolition of Slavery, passed by the state government in March 1780, was the first to prohibit further importation of slaves into a state. While a loophole exempted members of Congress at Philadelphia, George and Martha Washington (as slave owners) scrupulously avoided spending six months or more in Pennsylvania lest they be forced to give their slaves freedom. Ona Judge, the daughter of a slave inherited by Martha Washington, feared being taken forcibly back to Virginia at the end of Washington's presidency; with the aid of local free blacks and abolitionists she was put onto a ship to New Hampshire and liberty.
In 1849, Henry Brown (1815-1897) escaped Virginia slavery by arranging to have himself mailed in a wooden crate to abolitionists in Philadelphia. From there, he moved to England from 1850-1875 to escape the Fugitive Slave Act, becoming a magician, showman and outspoken abolitionist.
- 6 Johnson House Historical Site, 6306 Germantown Ave., Philadelphia, ☏ . Sa 1PM-5PM year-round, Th-F 10AM-4PM from Feb 2-Jun 9 and Sep 7-Nov 24, M-W by appointment only. Tours leave every 60 minutes at 15 minutes past the hour, and the last tour departs at 3:15PM. Former safe house and tavern in the Germantown area, frequented by Harriet Tubman and William Still, one of 17 Underground Railroad stations in Pennsylvania listed in the local guide Underground Railroad: Trail to Freedom. Still was an African-American abolitionist, clerk and member of the Pennsylvania Anti-Slavery Society. Hour-long guided tours are offered. $8, seniors 55+ $6, children 12 and under $4.
- 7 Belmont Mansion, 2000 Belmont Mansion Dr., Philadelphia, ☏ . Tu-F 11AM-5PM, summer weekends by appointment. Historic Philadelphia mansion with Underground Railroad museum. $7, student/senior $5.
- 8 Christiana Underground Railroad Center, 11 Green St., Christiana, ☏ . M-F 9AM-4PM. In 1851, a group of 38 local African-Americans and white abolitionists attacked and killed Edward Gorsuch, a slaveowner from Maryland who had arrived in town pursuing four of his escaped slaves, and wounded two of his companions. They were charged with treason for violating the Fugitive Slave Law, and Zercher's Hotel is where the trial took place. Today, the former hotel is home to a museum recounting the history of what came to be known as the Resistance at Christiana. Free.
- 9 [dead link] Central Pennsylvania African American Museum, 119 N. 10th St., Reading, ☏ , fax: . W & F 10:30AM-1:30PM, Su closed, all other days by appointment. The former Bethel AME Church in Reading was once a station on the Underground Railroad, now it's a museum detailing the history of the black community and the Underground Railroad in Central Pennsylvania. $8, senior citizens and students with ID $6, children 5-12 $4, children 4 and under free. Guided tours $10.
- 10 William Goodridge House and Museum, 123 E. Philadelphia St., York, ☏ . First F of each month 4PM-8PM, and by appointment. Born into slavery in Maryland, William C. Goodridge became a prominent businessman who is suspected to have hidden fugitive slaves in one of the freight cars of his railcar Reliance Line. His handsome two-and-a-half story brick row house on the outskirts of downtown York is now a museum dedicated to his life story.
While Pennsylvania does border Canada across Lake Erie in its northwesternmost corner, freedom seekers arriving from eastern cities generally continued overland through New York State to Canada. While Harriet Tubman would have fled directly north from Philadelphia, many other passengers were crossing into Pennsylvania at multiple points along the Mason-Dixon line where the state bordered Maryland and a portion of Virginia (now West Virginia). This created many parallel lines which led north through central and western Pennsylvania into New York State's Southern Tier.
- 1 Fairfield Inn 1757, 15 W. Main St., Fairfield (8 miles/13 km west of Gettysburg via Route 116), ☏ . The oldest continuously operated inn in the Gettysburg area, dating to 1757. Slaves would hide on the third floor after crawling through openings and trap doors. Today, a window is cut out to reveal where the slaves hid when the inn was a "safe station" on the Underground Railroad. $160/night.
- 11 Old Jail, 175 E. King St., Chambersburg, ☏ . Tu-Sa (May-Oct), Th-Sa (year-round): 10AM-4PM, last tour 3PM. Built in 1818, the jail survived an attack in which Chambersburg was burned by the Confederates in 1864. Five domed dungeons in the cellar had rings in the walls and floors to shackle recalcitrant prisoners; these cells may also have been secretly used to shelter runaway slaves enroute to freedom in the north. $5, children 6 and over $4, families $10.
- 12 Blairsville Underground Railroad History Center, 214 E. South Ln., Blairsville (17 miles/27 km south of Indiana, Pennsylvania via Route 119), ☏ . May-Oct by appointment. The Second Baptist Church building post-dates the Underground Railroad by more than half a century — it was built in 1917 — but it is the oldest black-owned structure in the town of Blairsville, and today it serves as a historical museum with two exhibits related to slavery and emancipation: "Freedom in the Air" tells the story of the abolitionists of Indiana County and their efforts to assist fugitive slaves, while the title of "A Day in the Life of an Enslaved Child" is self-explanatory.
- 13 Freedom Road Cemetery, Freedom Rd., Loyalsock Township (1.5 miles/2.4 km north of Williamsport via Market Street and Bloomingrove Road). Daniel Hughes (1804-1880) was a raftsman who transported lumber from Williamsport to Havre de Grace, Maryland on the West Branch of the Susquehanna River, hiding runaway slaves in the hold of his barge on the return trip. His farm is now a tiny Civil War cemetery, the final resting spot of nine African-American soldiers. While there is a historic marker, this spot (renamed from Nigger Hollow to Freedom Road in 1936) is small and easy to miss.
New York State
Escaped slaves were on friendly turf in Upstate New York, one of the most staunchly abolitionist regions of the country.
- 14 [dead link] Stephen and Harriet Myers Residence, 194 Livingston Ave., Albany, ☏ . Tours M-F 5-8PM, Sa noon-4PM or by appointment. Stephen Myers was a former slave turned freedman and abolitionist who was a central figure in the local Underground Railroad goings-on, and of all the several houses he inhabited in Albany's Arbor Hill neighborhood in the mid-19th century, this is the only one that's still extant. The then-dilapidated house was saved from the wrecking ball in the 1970s and restoration work is ongoing, but for now, visitors can enjoy guided tours of the house and a small but worthwhile slate of museum exhibits on Myers, Dr. Thomas Elkins, and other prominent members of the Albany Vigilance Committee of abolitionists. $10, seniors $8, children 5-12 $5.
At Albany, multiple options existed. Fugitives could continue northward to Montreal or Quebec's Eastern Townships via Lake Champlain, or (more commonly) they could turn west along the Erie Canal line through Syracuse to Oswego, Rochester, Buffalo, or Niagara Falls.
- 15 Gerrit Smith Estate and Land Office, 5304 Oxbow Rd., Peterboro (9.1 miles/15.1 km east of Cazenovia via County Routes 28 and 25), ☏ . Museum Sa-Su 1-5PM, late May-late Aug, grounds daily dawn-dusk. Smith was president of the New York Anti-Slavery Society (1836-1839) and a "station master" on the Underground Railroad in the 1840s and 1850s. The sprawling estate where he lived throughout his life is now a museum complex with interior and exterior exhibits on freedom seekers, Gerrit Smith's wealth, philanthropy and family, and the Underground Railroad.
Syracuse was an abolitionist stronghold whose central location made it a "great central depot on the Underground Railroad" through which many slaves passed on their way to liberty.
- 16 Jerry Rescue Monument, Clinton Square, Syracuse. During the 1851 state convention of the anti-slavery Liberty Party, an angry mob of several hundred abolitionists busted escaped slave William "Jerry" Henry out of jail; from there he was clandestinely transported to the town of Mexico, New York and concealed there until he could be taken aboard a British-Canadian lumber ship one dark night for transport across Lake Ontario to Kingston. Nine of those who aided in the escape (including two ministers of religion) fled to Canada; of the twenty-nine who were put on trial in Syracuse, all but one was acquitted. The jail no longer stands, but there is a monument on Clinton Square commemorating these momentous events.
In this area, passengers arriving from Pennsylvania across the Southern Tier travelled through Ithaca and Cayuga Lake to join the main route at Auburn, a town west of Syracuse on US 20. Harriet Tubman lived here starting in 1859, establishing a home for the aged.
- 17 St. James AME Zion Church, 116 Cleveland Ave., Ithaca, ☏ . M-Sa 9AM-5PM or by appointment. The African Methodist Episcopal Zion Church was established in the early 1800s in New York City as an offshoot of the Methodist Episcopal Church to serve black parishioners who at the time encountered overt racism in existing churches. St. James, founded 1836, was a station on the Underground Railroad, hosted services attended by such 19th-century African-American luminaries as Harriet Tubman and Frederick Douglass, and in 1906 hosted a group of students founding Alpha Phi Alpha, the nation's oldest official black fraternity.
- 18 Harriet Tubman Home, 180 South St., Auburn, ☏ . Tu-F 10AM-4PM, Sa 10AM-3PM. Known as "The Moses of Her People," Tubman settled in Auburn after the Civil War in this modest but handsome brick house, where she also operated a home for aged and indigent African-Americans. Today it's a museum that houses a collection of historical memorabilia. $4.50, seniors (60+) and college students $3, children 6-17 $1.50.
- 19 Thompson AME Zion Church, 33 Parker St., Auburn. Closed for restorations. An 1891-era African Methodist Episcopal Zion Church where Harriet Tubman attended services; she later deeded the aforementioned Home for the Aged to the church to manage after her death.
- 20 Fort Hill Cemetery, 19 Fort St., Auburn, ☏ . M-F 9AM-1PM. Set on a hill overlooking Auburn, this site was used for burial mounds by Native Americans as early as 1100 AD. It includes the burial sites of Harriet Tubman as well as a variety of other local historic luminaries. The website includes a printable map and self-guided walking tour.
The main route continues westward toward Buffalo and Niagara Falls, which remains the busiest set of crossings on the Ontario-New York border today. (Alternate routes involved crossing Lake Ontario from Oswego or Rochester.)
- 21 Palmyra Historical Museum, 132 Market St., Palmyra, ☏ . Tu-Th 10AM-5PM year-round, Tu-Sa 11AM-4PM in high season. One of five separate museums in the Historic Palmyra Museum Complex; each presents a different aspect of life in old Palmyra. The flagship museum houses various permanent exhibits on local history, including the Underground Railroad. $3, seniors $2, kids under 12 free.
Rochester, home to Frederick Douglass and a bevy of other abolitionists, also afforded escapees passage to Canada, if they were able to make their way to Kelsey's Landing just north of the Lower Falls of the Genesee. There were a number of safehouses in the city, including Douglass' own home.
- 22 Rochester Museum and Science Center, 657 East Ave., Rochester, ☏ . M-Sa 9AM-5PM, Su 11AM-5PM. Rochester's interactive science museum has a semi-permanent exhibit called Flight to Freedom: Rochester’s Underground Railroad. It lets kids get a glimpse of the story of the Railroad through the eyes of a fictional child escaping to Canada. Adults $15, seniors/college $14, ages 3-18 $13, under 3 free.
Ontario's entire international boundary is water. There were a few ferries in places like Buffalo, but infrastructure was sparse. Niagara Falls had an 825 ft (251 m) railway suspension bridge joining the Canadian and U.S. twin towns below the falls.
- 23 Castellani Art Museum, 5795 Lewiston Rd., Niagara Falls, ☏ , fax: . Tu-Sa 11AM-5PM, Su 1PM-5PM. Part of the permanent collection of Niagara University's campus art gallery is "Freedom Crossing: The Underground Railroad in Greater Niagara", telling the story of the Underground Railroad movement on the Niagara Frontier.
- 24 [dead link] Niagara Falls Underground Railroad Interpretive Center, 2245 Whirlpool St., Niagara Falls (next to the Whirlpool Bridge and the Amtrak station). Tu-W & F-Sa 10AM-6PM, Th 10AM-8PM, Su 10AM-4PM. The former U.S. custom house (1863-1962) is now a museum dedicated to the Niagara Frontier's Underground Railroad history. Exhibits include a recreation of the "Cataract House", one of the largest hotels in Niagara Falls at the time whose largely African-American waitstaff was instrumental in helping escaped slaves on the last leg of their journey. $10, high school and college students with ID $8, children 6-12 $6.
- 25 Niagara Falls Suspension Bridge site. Built in 1848, this first suspension bridge across the Niagara River was the last leg in Harriet Tubman's own journey from slavery in Maryland to freedom in Canada, and she would return many times over the next decade as a "conductor" for other escapees. After 1855, when it was repurposed as a railroad bridge, slaves would be smuggled across the border in cattle or baggage cars. The site is now the Whirlpool Bridge.
- 26 [dead link] First Presbyterian Church and Village Cemetery, 505 Cayuga St., Lewiston, ☏ . Open for services Su 11:15AM. A sculpture in front of Lewiston's oldest church (erected 1835) commemorates the prominent role it played in the Underground Railroad.
- 27 Freedom Crossing Monument (At Lewiston Landing Park, on the west side of N. Water St. between Center and Onondaga Sts.). An outdoor sculpture on the bank of the Niagara River depicting local Underground Railroad stationmaster Josiah Tryon spiriting away a family of freedom seekers on their final approach to Canada. Tryon operated his station out of the House of the Seven Cellars, his brother's residence just north of the village center (still extant but not open to the public) where a series of steps led from a multi-level network of interconnecting basements to the riverbank, from whence Tryon would ferry the escapees across the river as depicted in the sculpture.
To the south is Buffalo, opposite Fort Erie in Ontario:
- 28 Michigan Street Baptist Church, 511 Michigan Ave., Buffalo, ☏ . The oldest property continuously owned, operated, and occupied by African-Americans in Buffalo, this historic church served as a station on the Underground Railroad. Historical tours are offered by appointment. $5.
- 29 Broderick Park (on the Niagara River at the end of West Ferry Street), ☏ . Many years before the Peace Bridge was constructed to the south, the connection between Buffalo and Fort Erie was by ferry, and many fugitive slaves crossed the river to Canada in this way. There is a memorial and historic plaques onsite illustrating the site's significance, as well as historical reenactments from time to time.
As mentioned earlier, some escapees instead approached from the south, passing from western Pennsylvania through the Southern Tier toward the border.
- 30 [formerly dead link] Howe-Prescott Pioneer House, 3031 Route 98 South, Franklinville, ☏ . Su Jun-Aug by appointment. Built circa 1814 by a family of prominent abolitionists, this house served as a station on the Underground Railroad in the years before the Civil War. The Ischua Valley Historical Society has restored the site as a pioneer homestead, with exhibits and demonstrations illustrating life in the early days of white settlement in Western New York.
- 31 British Methodist Episcopal Church, Salem Chapel, 92 Geneva St., St. Catharines, ☏ . Services Su 11AM, guided tours by appointment. St. Catharines was one of the principal Canadian cities to be settled by escaped American slaves — Harriet Tubman and her family lived there for about ten years before returning to the U.S. and settling in Auburn, New York — and this simple but handsome wooden church was constructed in 1851 to serve as their place of worship. It's now listed as a National Historic Site of Canada, and several plaques are placed outside the building explaining its history.
- 32 Negro Burial Ground, Niagara-on-the-Lake (east side of Mississauga St. between John and Mary Sts.), ☏ . The Niagara Baptist Church — the house of worship of Niagara-on-the-Lake's community of Underground Railroad escapees — is long gone, but the cemetery on its former site, where many of its congregants were buried, remains.
- 33 Griffin House, 733 Mineral Springs Rd., Ancaster, ☏ . Su 1-4PM, Jul-Sep. Fugitive Virginia slave Enerals Griffin escaped to Canada in 1834 and settled in the town of Ancaster as a farmer; his rough-hewn log farmhouse has now been restored to its original period appearance. Walking trails out back lead into the lovely Dundas Valley and a series of waterfalls. Donation.
The Niagara Region is now part of the Golden Horseshoe, the most densely-populated portion of the province. Further afield, the Toronto Transit Commission (☏ ) has run an annual Underground Freedom Train Ride to commemorate Emancipation Day. The train leaves Union Station in downtown Toronto in time to reach Sheppard West (the former northwest end of the line) just after midnight on August 1. Celebrations include singing, poetry readings and drum playing.
The Ohio Line
Kentucky, a slave state, is separated from Indiana and Ohio by the Ohio River. Because of Ohio's location (which borders the southernmost point in Canada across Lake Erie), multiple parallel lines led north across the state to freedom in Upper Canada. Some passed through Indiana to Ohio, while others entered Ohio directly from Kentucky.
- 34 Town Clock Church, 300 E. Main St., New Albany, ☏ . Services Su 11AM, tours by appointment. This restored 1852 Greek Revival church used to house the Second Presbyterian Church, a station of the Underground Railroad whose distinctive clock tower signaled New Albany's location to the Ohio River boatmen. Now home to an African-American congregation and the subject of fundraising efforts aimed at restoring the building to its original splendor after years of neglect, the church hosts regular services, guided tours by appointment, and occasional historical commemorations and other events.
- 35 Conner Prairie Museum, 13400 Allisonville Rd., Fishers, ☏ , toll-free: . Check website for schedule. Home of the "Follow the North Star" theatrical program-cum-historical reenactment, where participants travel back to the year 1836 and assume the role of fugitive slaves seeking freedom on the Underground Railroad. Learn by becoming a fugitive slave in an interactive encounter where museum staff become the slave hunters, friendly Quakers, freed slaves and railroad conductors that decide your fate. $20.
Westfield is a great town for walking tours; the Westfield-Washington Historical Society (see below) can provide background information. Historic Indiana Ghost Walks & Tours (☏ ) also covers "ghosts of the Underground Railroad" on one of its Westfield tours (reservations required, check schedule).
- 36 Westfield-Washington Historical Society & Museum, 130 Penn St., Westfield, ☏ . Sa 10AM-2PM, or by appointment. Settled by staunchly abolitionist Quakers, Westfield was unsurprisingly one of Indiana's Underground Railroad hotbeds. Learn all about this and other elements of local history in this museum. Donation.
From the Indianapolis area, the route splits: you can either head east into Ohio or north into Michigan.
- 37 Levi and Catharine Coffin State Historic Site, 201 US Route 27 North, Fountain City (9.2 miles/14.8 km north of Richmond via US 27), ☏ . Tu-Su 10AM-5PM. The "Grand Central Station" of the Underground Railroad, where three escape routes to the North converged, is where Levi and Catharine Coffin lived and harbored more than 2,000 freedom seekers on their way to safety. The Coffins, a family of well-to-do Quakers, lived in an ample Federal-style brick home that's been restored to its period appearance and opened for guided tours. A full calendar of events also takes place. $10, seniors (60+) $8, children 3-17 $5.
Another option is to head north from Kentucky directly into Ohio.
The stations listed here form a meandering line through Ohio's major cities — Cincinnati to Dayton to Columbus to Cleveland to Toledo — and around Lake Erie to Detroit, a journey of approximately 800 mi (1,300 km). In practice, Underground Railroad passengers would head due north and cross Lake Erie at the first possible opportunity via any of multiple parallel routes.
From Lexington, Kentucky, you head north 85 mi (137 km) on this freedom train to Covington. Directly across the Ohio River and the state line is Cincinnati, one of many points at which thousands crossed into the North in search of freedom.
- 38 National Underground Railroad Freedom Center, 50 East Freedom Way, Cincinnati, ☏ . Tu-Su 11AM-5PM. Among the most comprehensive resources of Underground Railroad-related information anywhere in the country, the National Underground Railroad Freedom Center should be at the top of the list for any history buff retracing the escapees' perilous journey. The experience at this "museum of conscience" includes everything from genuine historical artifacts (including a "slave pen" built c. 1830, the only known extant one of these small log cabins once used to house slaves prior to auction) to films and theatrical performances to archival research materials, relating not only the story of the Underground Railroad but the entirety of the African-American struggle for freedom from the Colonial era through the Civil War, Jim Crow, and the modern day. $12, seniors $10, children $8.
- 39 Springboro Historical Society Museum, 110 S. Main St., Springboro, ☏ . F-Sa 11AM-3PM. This small museum details Springboro's storied past as a vital stop on the Underground Railroad. While you're in town, stop by the Chamber of Commerce (325 S. Main St.) and pick up a brochure with a self-guided walking tour of 27 local "stations" on the Railroad, the most of any city in Ohio, many of which still stand today.
East of Dayton, one former station is now a tavern.
- 1 Ye Olde Trail Tavern, 228 Xenia Ave., Yellow Springs, ☏ . Su-Th 11AM-10PM, F-Sa 11AM-11PM; closes an hour early Oct-Mar. Kick back with a cold beer and nosh on bar snacks with an upscale twist in this 1844 log cabin that was once a stop on the Underground Railroad. Mains $8-12.
Continue east 110 mi (180 km) through Columbus and onward to Zanesville, then detour north via Route 60.
- 40 Prospect Place, 12150 Main St., Trinway (16 miles/26 km north of Zanesville via Route 60), ☏ . Sa-Su noon-4PM, Mar 17-Nov 4. An ongoing historic renovation aims to bring this 29-room Italianate-style mansion back to its appearance in the 1850s and '60s, when it served as the home of railroad baron, local politico, and abolitionist George Willison Adams — not to mention one of the area's most important Underground Railroad stations. The restored portions of the mansion are open for self-guided tours (weather-dependent; the building is not air-conditioned and the upper floors can get stifling in summer, so call ahead on hot days to make sure they're open), and Prospect Place is also home to the G. W. Adams Educational Center, which hosts a full calendar of events.
Another 110 mi (180 km) to the northeast is Alliance.
- 41 Haines House Underground Railroad Museum, 186 W. Market St., Alliance, ☏ . Open for tours the first weekend of each month: Sa 10AM-noon; Su 1PM-3PM. Sarah and Ridgeway Haines, daughter and son-in-law of one of the town's first settlers, operated an Underground Railroad station out of their stately Federal-style home, now fully restored and open to the public as a museum. Tour the lovely Victorian parlor, the early 19th century kitchen, the bedrooms, and the attic where fugitive slaves were hidden. Check out exhibits related to local Underground Railroad history and the preservation of the house. $3.
The next town to the north is 42 Kent, the home of Kent State University, which was a waypoint on the Underground Railroad back when the village was still named Franklin Mills. 36 mi (58 km) further north is the Lake Erie shoreline, east of Cleveland. From there, travellers had a few possible options: attempt to cross Lake Erie directly into Canada, head east through western Pennsylvania and onward to Buffalo...
- 43 Hubbard House Underground Railroad Museum, 1603 Walnut Blvd., Ashtabula, ☏ . F-Su 1PM-5PM, Memorial Day through Labor Day, or by appointment. William and Catharine Hubbard's circa-1841 farmhouse was one of the Underground Railroad's northern termini — directly behind the house is Lake Erie, and directly across the lake is Canada — and today it's the only one that's open to the public for tours. Peruse exhibits on local Underground Railroad and Civil War history set in environs restored to their 1840s appearance, complete with authentic antique furnishings. $5, seniors $4, children 6 and over $3.
...or turn west.
- 44 Lorain Underground Railroad Station 100 Monument, 100 Black River Ln., Lorain (At Black River Landing), ☏ . Not a station, but rather a historic monument that honors Lee Howard Dobbins, a 4-year-old escaped slave, orphaned en route to freedom with his mother, who later died in the home of the local family who took him in. A large relief sculpture, inscribed with an inspirational poem read at the child's funeral (which was attended by a thousand people), is surrounded by a contemplative garden.
West of Lorain is Sandusky, one of the foremost jumping-off points for escaped slaves on the final stage of their journey to freedom. Among those who set off across Lake Erie from here toward Canada was Josiah Henson, whose autobiography served as inspiration for Harriet Beecher Stowe's famous novel, Uncle Tom's Cabin. Modern-day voyagers can retrace that journey via the MV Jiimaan, a seasonal ferry plying the route from Sandusky to Leamington and Kingsville, Ontario, or else stop in at the Lake Erie Shores & Islands Welcome Center at 4424 Milan Rd. and pick up a brochure with a free self-guided walking tour of Sandusky-area Underground Railroad sites.
- 45 Maritime Museum of Sandusky, 125 Meigs St., Sandusky, ☏ . Year-round F-Sa 10AM-4PM, Su noon-4PM; also Tu-Th 10AM-4PM Jun-Aug. This museum interprets Sandusky's prominent history as a lake port and transportation nexus through interactive exhibits and educational programs on a number of topics, including the passenger steamers whose owners were among the large number of locals active in the Underground Railroad. $6, seniors 62+ and children under 12 $5, families $14.
- 46 Path to Freedom Sculpture, Facer Park, 255 E. Water St., Sandusky, ☏ . In the center of a small harborfront park in downtown Sandusky stands this life-size sculpture of an African-American man, woman and child bounding with arms outstretched toward the waterfront and freedom, fashioned symbolically out of 800 ft (240 m) of iron chains.
As an alternative to crossing the lake here, voyagers could continue westward through Toledo to Detroit.
Detroit was the last American stop for travellers on this route: directly across the river lies Windsor, Ontario.
- 47 First Living Museum, 33 E. Forest Ave., Detroit, ☏ . Call museum for schedule of tours. The museum housed in the First Congregational Church of Detroit features a 90-minute "Flight to Freedom" reenactment that simulates an escape from slavery on the Underground Railroad: visitors are first shackled with wrist bands, then led to freedom by a "conductor" while hiding from bounty hunters, crossing the Ohio River to seek refuge in Levi Coffin's abolitionist safe house in Indiana, and finally arriving at "Midnight" — the code name for Detroit in Railroad parlance. $15, youth and seniors $12.
- 48 Mariners' Church, 170 E. Jefferson Ave., Detroit, ☏ . Services Su 8:30AM & 11AM. An 1849 limestone church known primarily for serving Great Lakes sailors and memorializing crew lost at sea. In 1955, while moving the church to make room for a new civic center, workers discovered an Underground Railroad tunnel under the building.
If Detroit was "Midnight" on the Underground Railroad, Windsor was "Dawn". A literal underground railroad does stretch across the river from Detroit to Windsor, along with another one to the north between Port Huron and Sarnia, but since 2004 the tunnels have served only freight. A ferry crosses here for large trucks only. An underground road tunnel also runs to Windsor, complete with a municipal Tunnel Bus service (C$4/person, one way).
- Gateway to Freedom International Memorial. Historians estimate that as many as 45,000 runaway slaves passed through Detroit-Windsor on the Underground Railroad, and this pair of monuments spanning both sides of the riverfront pays homage to the local citizens who defied the law to provide safety to the fugitives. Sculpted by Ed Dwight, Jr. (the first African-American accepted into the U.S. astronaut training program), the 49 Gateway to Freedom Memorial at Hart Plaza in Detroit depicts eight larger-than-life figures — including George DeBaptiste, an African-American conductor of local prominence — gazing toward the promised land of Canada. On the Windsor side, at the Civic Esplanade, the 50 Tower of Freedom depicts four more bronze figures with arms upraised in relief, backed by a 20 ft (6.1 m) marble monolith.
- 51 Loren Andrus Octagon House, 57500 Van Dyke Ave., Washington Township, ☏ . 1-4PM on 3rd Sunday of month (Mar-Oct) or by appointment. Erected in 1860, the historic home of canal and railroad surveyor Loren Andrus served as a community meeting place and station during the latter days of the Underground Railroad, its architecture capturing attention with its unusual symmetry and serving as a metaphor for a community that bridges yesterday and tomorrow. One-hour guided tours lead through the house's restored interior and include exhibits relevant to its history. $5.
The most period-appropriate way to replicate the crossing into Canada used to be the Bluewater Ferry across the St. Clair River between Marine City, Michigan and Sombra, Ontario. The ferry no longer operates. Instead, cross from Detroit to Windsor via the Ambassador Bridge or the aforementioned tunnel, or else detour north to the Blue Water Bridge between Port Huron and Sarnia.
- 52 Sandwich First Baptist Church, 3652 Peter St., Windsor, ☏ . Services Su 11AM, tours by appointment. The oldest existing majority-Black church in Canada, erected in 1847 by early Underground Railroad refugees, Sandwich First Baptist was often the first Canadian stop for escapees after crossing the river from Detroit: a series of hidden tunnels and passageways led from the riverbank to the church to keep folks away from the prying eyes of slave catchers, the more daring of whom would cross the border in violation of Canadian law in pursuit of escaped slaves. Modern-day visitors can still see the trapdoor in the floor of the church.
- 1 Emancipation Day Celebration, Lanspeary Park, Windsor. Held annually on the first Saturday and Sunday in August from 2-10PM, "The Greatest Freedom Show on Earth" commemorates the Emancipation Act of 1833, which abolished slavery in Canada as well as throughout the British Empire. Live music, yummy food, and family-friendly entertainment abound. Admission free, "entertainment area" with live music $5 per person/$20 per family.
Amherstburg, just south of Windsor, was also a destination for runaway slaves.
- 53 Amherstburg Freedom Museum, 277 King St., Amherstburg, ☏ , toll-free: . Tu-F noon-5PM, Sa Su 1-5PM. Telling the story of the African-Canadian experience in Essex County is not only the museum itself, with a wealth of historic artifacts and educational exhibits, but also the Taylor Log Cabin, the home of an early black resident restored to its mid-19th century appearance, and the Nazrey AME Church, a National Historic Site of Canada. A wealth of events takes place in the onsite cultural centre. Adult $7.50, seniors & students $6.50.
- 54 John Freeman Walls Historic Site and Underground Railroad Museum, 859 Puce Rd., Lakeshore (29 km/18 miles east of downtown Windsor via Highway 401), ☏ , fax: . Tu-Sa 10:30AM-3PM in summer, by appointment other times. Historical museum situated in the 1846 log-cabin home of John Freeman Walls, a fugitive slave from North Carolina turned Underground Railroad stationmaster and pillar of the small community of Puce, Ontario (now part of the Town of Lakeshore). Dr. Bryan Walls, the museum's curator and a descendant of the owner, wrote a book entitled The Road that Leads to Somewhere detailing the history of his family and others in the community.
Following the signed African-Canadian Heritage Tour eastward from Windsor, you soon come to the so-called "Black Mecca" of Chatham, which after the Underground Railroad began quickly became — and to a certain extent remains — a bustling centre of African-Canadian life.
- 55 Chatham-Kent Museum, 75 William St. N., Chatham, ☏ . W-F 1-7PM, Sa Su 11AM-4PM. Among the highlights of the collection at this all-purpose local history museum are some of the personal effects of American abolitionist John Brown, whose failed 1859 raid on the federal arsenal at Harpers Ferry, Virginia was contemporaneous with the height of the Underground Railroad era and stoked tensions on both sides of the slavery divide in the run-up to the Civil War.
- 56 Black Mecca Museum, 177 King St. E., Chatham, ☏ . M-F 10AM-3PM, till 4PM Jul-Aug. Researchers, take note: the Black Mecca Museum was founded as a home for the expansive archives of the Chatham-Kent Black Historical Society detailing Chatham's rich African-Canadian history. If that doesn't sound like your thing, there are also engaging exhibits of historic artifacts, as well as guided walking tours (call to schedule) that take in points of interest relevant to local black history. Self-guided tours free, guided tours $6.
- 57 Uncle Tom's Cabin Historic Site, 29251 Uncle Tom's Rd., Dresden (27 km/17 miles north of Chatham via County Roads 2 and 28), ☏ . Tu-Sa 10AM-4PM, Su noon-4PM, May 19-Oct 27; also Mon 10AM-4PM Jul-Aug; Oct 28-May 18 by appointment. This sprawling open-air museum complex is centred on the restored home of Josiah Henson, a former slave turned author, abolitionist, and minister whose autobiography was the inspiration for the title character in Harriet Beecher Stowe's novel Uncle Tom's Cabin. But that's not the end of the story: a restored sawmill, smokehouse, the circa-1850 Pioneer Church, and the Henson family cemetery are just some of the authentic period buildings preserved from the Dawn Settlement of former slaves. Historical artifacts, educational exhibits, multimedia presentations, and special events abound.
- 58 Buxton National Historic Site & Museum, 21975 A.D. Shadd Rd., North Buxton (16 km/10 miles south of Chatham via County Roads 2, 27, and 14), ☏ , fax: . Daily 10AM-4:30PM, Jul-Aug; W-Su 1PM-4:30PM, May & Sep; M-F 1PM-4:30PM, Oct-Apr. The Elgin Settlement was a haven for fugitive slaves and free blacks founded in 1849, and this museum complex — along with the annual Buxton Homecoming cultural festival in September — pays homage to the community that planted its roots here. In addition to the main museum building (containing historical exhibits) are authentic restored buildings from the former settlement: a log cabin, a barn, and a schoolhouse. $6, seniors and students $5, families $20.
Across the Land of LincolnEdit
Though Illinois was de jure a free state, Southern cultural influence and sympathy for the institution of slavery was very strong in its southernmost reaches (even to this day, the local culture in Cairo and other far-downstate communities bears more resemblance to the Mississippi Delta than Chicago). Thus, the fate of fugitive slaves passing through Illinois was variable: near the borders of Missouri and Kentucky the danger of being abducted and forcibly transported back to slavery was very high, while those who made it further north would notice a marked decrease in the local tolerance for slave catchers.
The Mississippi River was a popular Underground Railroad route in this part of the country; a voyager travelling north from Memphis would pass between the slave-holding states of Missouri and Kentucky to arrive 175 mi (282 km) later at Cairo, a fork in the river. From there, the Mississippi continued northward through St. Louis while the Ohio River ran along the Ohio-Kentucky border to Cincinnati and beyond.
- 59 Slave Haven Underground Railroad Museum, 826 N. Second St., Memphis, Tennessee, ☏ . Daily 10AM-4PM, till 5PM Jun-Aug. Built in 1849 by Jacob Burkle, a livestock trader and baker originally from Germany, this modest yet handsome house was long suspected to be a waypoint for Underground Railroad fugitives boarding Mississippi river boats. Now a museum, the house has been restored with period furnishings and contains interpretive exhibits. Make sure to go down into the basement, where the trap doors, tunnels and passages used to hide escaped slaves have been preserved. A three-hour historical sightseeing tour of thirty local sites is also offered for $45. $12.
Placing fugitives onto vessels on the Mississippi was a monumental risk that figured prominently in the literature of the era. There was even a "Reverse Underground Railroad" used by antebellum slave catchers to kidnap free blacks and fugitives from free states to sell them back into slavery.
Because of its location on the Mississippi River, St. Louis was directly on the boundary between slaveholding Missouri and free-state Illinois.
- 2 Mary Meachum Freedom Crossing, 28 E Grand Ave., St. Louis, Missouri, ☏ . The Rev. John Berry Meachum of St. Louis' First African Baptist Church arrived in St. Louis in 1815 after purchasing his freedom from slavery. Ordered to stop holding classes in his church under an 1847 Missouri law prohibiting education of people of color, he instead circumvented the restriction by teaching on a Mississippi riverboat. His widow Mary Meachum was arrested early in the morning of May 21, 1855 with a small group of runaway slaves and their guides who were attempting to cross the Mississippi River to freedom. These events are commemorated each May with a historical reenactment of the attempted crossing by actors in period costume, along with poetry, music, dance, and dramatic performances. Even if you're not in town for the festival, you can still stop by the rest area alongside the St. Louis Riverfront Trail and take in the colorful wall mural and historic plaques.
Author Mark Twain, whose iconic novel The Adventures of Huckleberry Finn (1884) describes a freedom-seeking raft voyage down the Mississippi, grew up in Hannibal, Missouri, a short distance upriver from St. Louis. Hannibal, in turn, is not far from Quincy, Illinois, where freedom seekers would often attempt to cross the Mississippi directly.
- 60 Dr. Richard Eells House, 415 Jersey St., Quincy, ☏ . Sa 1PM-4PM, group tours by appointment. Connecticut-born Dr. Eells was active in the abolitionist movement and is credited with helping several hundred slaves flee from Missouri. In 1842, while providing aid to a fugitive swimming the river, Dr. Eells was spotted by a posse of slave hunters. Eells escaped, but was later arrested and charged with harboring a fugitive slave. His case (with a $400 fine) was unsuccessfully appealed as far as the U.S. Supreme Court, with the final appeal made by his estate after his demise. His 1835 Greek Revival-style house, four blocks from the Mississippi, has been restored to its original appearance and contains museum exhibits regarding the Eells case in particular and the Underground Railroad in general. $3.
70 mi (110 km) east of Quincy is Jacksonville, once a major crossroads of at least three different Underground Railroad routes, many of which carried passengers fleeing from St. Louis. Several of the former stations still stand. The Morgan County Historical Society runs a Sunday afternoon bus tour twice annually (spring and fall) from Illinois College to Woodlawn Farm with guides in period costume.
- 61 Beecher Hall, Illinois College, 1101 W. College Ave., Jacksonville, ☏ . Founded in 1829, Illinois College was the first institution of postsecondary education in the state, and it quickly became a nexus of the local abolitionist community. The original building was renamed Beecher Hall in honor of the college's first president, Edward Beecher, brother of Uncle Tom's Cabin author Harriet Beecher Stowe. Tours of the campus are offered during the summer months (see website for schedule); while geared toward prospective students, they're open to all and offer an introduction to the history of the college.
- 62 Woodlawn Farm, 1463 Gierkie Ln., Jacksonville, ☏ , ✉ email@example.com. W & Sa-Su 1PM-4PM, late May-late Sep, or by appointment. Pioneer settler Michael Huffaker built this handsome Greek Revival farmhouse circa 1840, and according to local tradition hid fugitive slaves there during the Underground Railroad era by disguising them as resident farmhands. Nowadays it's a living history museum where docents in period attire give guided tours of the restored interior, furnished with authentic antiques and family heirlooms. $4 suggested donation.
50 mi (80 km) further east is the state capital of 63 Springfield, the longtime home and final resting place of Abraham Lincoln. During the time of the Underground Railroad, he was a local attorney and rising star in the fledgling Republican Party who was most famous as Senator Stephen Douglas' sparring partner in an 1858 statewide debate tour where slavery and other matters were discussed. However, Lincoln was soon catapulted from relative obscurity onto the national stage with his win in the 1860 Presidential election, going on to shepherd the nation through the Civil War and issue the 1863 Emancipation Proclamation, which declared freedom for the slaves held in the rebelling states.
- 64 Owen Lovejoy Homestead, 905 E. Peru St., Princeton (21 miles/34 km west of Peru via US 6 or I-80), ☏ . F-Sa 1PM-4PM, May-Sep or by appointment. The Rev. Owen Lovejoy (1811-1864) was one of the most prominent abolitionists in the state of Illinois and, along with Lincoln, a founding father of the Republican Party, not to mention the brother of newspaper editor Elijah Parish Lovejoy, whose anti-slavery writings in the Alton Observer led to his 1837 lynching. It was more or less an open secret around Princeton that his modest farmhouse on the outskirts of town was a station on the so-called "Liberty Line" of the Underground Railroad. The house is now a city-owned museum restored to its period appearance (including the "hidey-holes" where fugitive slaves were concealed) and opened to tours in season. Onsite also is a one-room schoolhouse with exhibits that further delve into the pioneer history of the area.
- 65 Graue Mill and Museum, 3800 York Rd., Oak Brook, ☏ . Tu-Su 10AM-4:30PM, mid Apr-mid Nov. German immigrant Frederick Graue housed fugitive slaves in the basement of his gristmill on Salt Creek, which was a favorite stopover for future President Abraham Lincoln during his travels across the state. Today, the building has been restored to its period appearance and functions as a museum where, among other exhibits, "Graue Mill and the Road to Freedom" elucidates the role played by the mill and the surrounding community in the Underground Railroad. $4.50, children 4-12 $2.
Into the Maritime ProvincesEdit
Another route, less used but still significant, led from New England through New Brunswick to Nova Scotia, mainly from Boston to Halifax. Though the modern-day Maritime Provinces did not become part of Canada until 1867, they were within the British Empire, and thus slavery was illegal there too.
One possible route (following the coast from Philadelphia through Boston to Halifax) would be to head north through New Jersey, New York, Connecticut, Rhode Island, Massachusetts and Maine to reach New Brunswick and Nova Scotia.
- 2 Wedgwood Inn, 111 W. Bridge St., New Hope, Pennsylvania, ☏ . Located in Bucks County some 40 mi (64 km) northeast of Philadelphia, New Hope is a town whose importance on the Underground Railroad came thanks to its ferry across the Delaware River, which escaped slaves would use to pass into New Jersey on their northward journey — and this Victorian bed and breakfast was one of the stations where they'd spend the night beforehand. Of course, modern-day travellers sleep in one of eight quaintly-decorated guest rooms, but if you like, your hosts will show you the trapdoor that leads to the subterranean tunnel system where slaves once hid. Standard rooms with fireplace $120-250/night, Jacuzzi suites $200-350/night.
With its densely wooded landscape, abundant population of Quaker abolitionists, and regularly spaced towns, South Jersey was a popular east-coast Underground Railroad stopover. Swedesboro, with a sizable admixture of free black settlers to go along with the Quakers, was a particular hub.
- 66 Mount Zion AME Church, 172 Garwin Rd., Woolwich Township, New Jersey (1.5 miles/2.4 km northeast of Swedesboro via Kings Hwy.). Services Su 10:30AM. Founded by a congregation of free black settlers and still an active church today, Mount Zion was a reliable safe haven for fugitive slaves making their way from Virginia and Maryland via Philadelphia, providing them with shelter, supplies, and guidance as they continued north. Stop into this handsome 1834 clapboard church and you'll see a trapdoor in the floor of the vestibule leading to a crawl space where slaves hid.
New York City occupied a mixed role in the history of American slavery: while New York was a free state, many in the city's financial community had dealings with the southern states, and Tammany Hall, the political machine that effectively controlled the city's Democratic Party, was notoriously sympathetic to slaveholding interests. It was a different story in what are now the outer boroughs, which were home to a thriving free black population and many churches and religious groups that held strong abolitionist beliefs.
- 67 227 Abolitionist Place, 227 Duffield St., Brooklyn, New York. In the mid-19th century, Thomas and Harriet Truesdell were among the foremost members of Brooklyn's abolitionist community, and their Duffield Street residence was a station on the Underground Railroad. The original house is no longer extant, but residents of the brick rowhouse that stands on the site today discovered the trapdoors and tunnels in the basement in time to prevent the building from being demolished for a massive redevelopment project. The building is now owned by a neighborhood not-for-profit with hopes of turning it into a museum and heritage center focusing on New York City's contribution to abolitionism and the Underground Railroad; in the meantime, it plays host to a range of educational events and programs.
- 68 Harriet Beecher Stowe Center, 77 Forest St., Hartford, Connecticut, ☏ . The author of the famous antislavery novel Uncle Tom's Cabin lived in this delightful Gothic-style cottage in Hartford (right next door to Mark Twain!) from 1873 until her death in 1896. The house is now a museum that not only preserves its historic interior as it appeared during Stowe's lifetime, but also offers an interactive, "non-traditional museum experience" that allows visitors to really dig deep and discuss the issues that inspired and informed her work, including women's rights, immigration, criminal justice reform, and — of course — abolitionism. There's also a research library covering topics related to 19th-century literature, arts, and social history.
- 69 Greenmanville Historic District, 75 Greenmanville Ave., Stonington, Connecticut (At the Mystic Seaport Museum), ☏ . Daily 9AM-5PM. The Greenman brothers — George, Clark, and Thomas — came in 1837 from Rhode Island to a site at the mouth of the Mystic River where they built a shipyard, and in due time a buzzing industrial village had coalesced around it. Staunch abolitionists, the Greenmans operated a station on the Underground Railroad and supported a local Seventh-Day Baptist church (c. 1851) which denounced slavery and regularly hosted speakers who supported abolitionism and women’s rights. Today, the grounds of the Mystic Seaport Museum include ten of the original buildings of the Greenmanville settlement, including the former textile mill, the church, and the Thomas Greenman House. Exhibits cover the history of the settlement and its importance to the Underground Railroad and the abolitionist movement. Museum admission $28.95, seniors $26.95, children $19.95.
- 70 Pawtuxet Village, between Warwick and Cranston, Rhode Island. This historically preserved neighborhood represents the center of one of the oldest villages in New England, dating back to 1638. Flash forward a couple of centuries and it was a prominent stop on the Underground Railroad for runaway slaves. Walking tours of the village are available.
- 71 Jackson Homestead and Museum, 527 Washington St., Newton, Massachusetts, ☏ . W-F 11AM-5PM, Sa-Su 10AM-5PM. This Federal-style farmhouse was built in 1809 as the home of Timothy Jackson, a Revolutionary War veteran, factory owner, state legislator, and abolitionist who operated an Underground Railroad station in it. Deeded to the City of Newton by one of his descendants, it's now a local history museum with exhibits on the local abolitionist community as well as paintings, photographs, historic artifacts, and other curiosities. $6, seniors and children 6-12 $5, students with ID $2.
Boston was a major seaport and an abolitionist stronghold. Some freedom seekers arrived overland, others as stowaways aboard coastal trading vessels from the South. The Boston Vigilance Committee (1841-1861), an abolitionist organization founded by the city's free black population to protect their people from abduction into slavery, spread the word when slave catchers came to town. They worked closely with Underground Railroad conductors to provide freedom seekers with transportation, shelter, medical attention and legal counsel. Hundreds of escapees stayed a short time before moving on to Canada, England or other British territories.
The National Park Service offers a ranger-led 1.6 mi (2.6 km) Boston Black Heritage Trail tour through Boston's Beacon Hill district, near the Massachusetts State House and Boston Common. Several old houses in this district were stations on the line, but are not open to the public.
A museum is open in a former meeting house and school:
- 72 Museum of African-American History, 46 Joy St., Boston, Massachusetts, ☏ . M-Sa 10AM-4PM. The African Meeting House (a church built in 1806) and Abiel Smith School (the nation's oldest public school for black children, founded 1835) have been restored to the 1855 era for use as a museum and event space with exhibit galleries, education programs, caterers' kitchen and museum store.
Some freedom seekers turned inland from Boston, heading north through Vermont toward Canada...
- 73 Rokeby Museum, 4334 US Route 7, Ferrisburgh, Vermont (11 miles/18 km south of Shelburne), ☏ . 10AM-5PM, mid-May to late Oct; house only open by scheduled guided tour. Rowland T. Robinson, a Quaker and ardent abolitionist, openly sheltered escaped slaves on his family's sheep farm in the quiet town of Ferrisburgh. The farm is now a museum complex where visitors can tour nine historic farm buildings furnished in period style and full of interpretive exhibits covering Vermont's contribution to the Underground Railroad effort, or walk a network of hiking trails that cover more than 50 acres (20 ha) of the property. $10, seniors $9, students $8.
...while others continued to follow the coastal routes overland into Maine.
- 74 Abyssinian Meeting House, 75 Newbury St., Portland, Maine, ☏ . Maine's oldest African-American church was erected in 1831 by a community of free blacks and headed up for many years by Reverend Amos Noé Freeman (1810-93), a known Underground Railroad agent who hosted and organized anti-slavery speakers, Negro conventions, and testimonies from runaway slaves. But by 1998, when the building was purchased from the city by a consortium of community leaders, it had fallen into disrepair. The Committee to Restore the Abyssinian plans to convert the former church to a museum dedicated to tracing the story of Maine's African-American community, and also hosts a variety of events, classes, and performances at a variety of venues around Portland.
- 75 Chamberlain Freedom Park, Corner of State and N. Main Sts., Brewer, Maine (Directly across the river from Bangor via the State Street bridge). In his day, John Holyoke — shipping magnate, factory owner, abolitionist — was one of the wealthiest citizens in the city of Brewer, Maine. When his former home was demolished in 1995 as part of a road widening project, a hand-stitched "slave-style shirt" was found tucked in the eaves of the attic along with a stone-lined tunnel in the basement leading to the bank of the Penobscot River, finally confirming the local rumors that claimed he was an Underground Railroad stationmaster. Today, there's a small park on the site, the only official Underground Railroad memorial in the state of Maine, that's centered on a statue entitled North to Freedom: a sculpted figure of an escaped slave hoisting himself out of the preserved tunnel entrance. Nearby is a statue of local Civil War hero Col. Joshua Chamberlain, for whom the park is named.
- 76 Maple Grove Friends Church, Route 1A near Up Country Rd., Fort Fairfield, Maine (9 miles/14.5 km east of Presque Isle via Route 163). 2 mi (3.2 km) from the border, this historic Quaker church was the last stop for many escaped slaves headed for freedom in New Brunswick, where a few African-Canadian communities had gathered in the Saint John River Valley. Historical renovations in 1995 revealed a hiding place concealed beneath a raised platform in the main meeting room. The building was rededicated as a house of worship in 2000 and still holds occasional services.
New Brunswick and Nova ScotiaEdit
Once across the border, a few black families settled in places like Upper Kent along the Saint John River in New Brunswick. Many more continued onward to Nova Scotia, then a separate British colony but now part of Canada's Maritime Provinces.
- 3 Tomlinson Lake Hike to Freedom, Glenburn Rd., Carlingford, New Brunswick (7.2 km/4.5 miles west of Perth-Andover via Route 190). first Sa in Oct. After successfully crossing the border into New Brunswick, the first order of business for many escaped slaves on this route was to seek out the home of Sgt. William Tomlinson, a British-born lumberjack and farmer who was well known for welcoming slaves who came through this area. Every year, the fugitives' cross-border trek to Tomlinson Lake is retraced with a 2.5 km (1.6 mi) family-friendly, all-ages-and-skill-levels "hike to freedom" in the midst of the beautiful autumn foliage the region boasts. Gather at the well-signed trailhead on Glenburn Road, and at the end of the line you can sit down to a hearty traditional meal, peruse the exhibits at an Underground Railroad pop-up museum, or do some more hiking on a series of nature trails around the lake. There's even a contest for the best 1850s-period costumes. Free.
- 77 King's Landing, 5804 Route 102, Prince William, New Brunswick (40 km/25 miles west of downtown Fredericton via the Trans-Canada Highway), ☏ . Daily 10AM-5PM, early June-mid Oct. Set up as a pioneer village, this living-history museum is devoted primarily to United Empire Loyalist communities in 19th century New Brunswick. However, one building, the Gordon House, is a replica of a house built by manumitted slave James Gordon in nearby Fredericton and contains exhibits relative to the Underground Railroad and the African-Canadian experience, including old runaway slave ads and quilts containing secret messages for fugitives. Onsite also is a restaurant, pub and Peddler's Market. $18, seniors $16, youth (6-15) $12.40.
Halifax, the final destination of most fugitive slaves passing out of Boston, still has a substantial mostly-black district populated by descendants of Underground Railroad passengers.
- 78 Black Cultural Centre for Nova Scotia, 10 Cherry Brook Rd., Dartmouth, Nova Scotia (20 km/12 miles east of downtown Halifax via Highway 111 and Trunk 7), ☏ , toll-free: , fax: . M-F 10AM-4PM, also Sa noon-3PM Jun-Sep. Situated in the midst of one of Metro Halifax's largest African-Canadian neighbourhoods, the Black Cultural Centre for Nova Scotia is a museum and cultural centre that traces the history of the Black Nova Scotian community not only during the Underground Railroad era, but before (exhibits tell the story of Black Loyalist settlers and locally-held slaves prior to the Emancipation Act of 1833) and afterward (the African-Canadian contribution to World War I and the destruction of Africville) as well. $6.
- 79 Africville Museum, 5795 Africville Rd., Halifax, Nova Scotia, ☏ , fax: . Tu-Sa 10AM-4PM. Africville was an African-Canadian neighbourhood that stood on the shores of Bedford Basin from the 1860s; it was condemned and destroyed a century later for bridge and industrial development. This museum, situated on the east side of Seaview Memorial Park in a replica of Africville’s Seaview United Baptist Church, was established as part of the city government's belated apology and restitution to Halifax's black community, and tells its story through historic artifacts, photographs, and interpretive displays that inspire the visitor to consider the corrosive effects of racism on society and to recognize the strength that comes through diversity. An "Africville Reunion" is held in the park each July. $5.75, students and seniors $4.75, children under 5 free.
With the passage of the Fugitive Slave Act by Congress in 1850, slaves who had escaped to the northern states were in immediate danger of being forcibly abducted and brought back to southern slavery. Slave catchers from the south operated openly in the northern states, where their brutality quickly alienated the locals. Federal officials were also best avoided, as the influence of Southern plantation owners was powerful in Washington at the time.
Slaves therefore had to lie low during the day — hiding, sleeping or pretending to be working for local masters — and move north by night. The further north, the longer and colder those winter nights became. The danger of encountering U.S. federal marshals would end once the Canadian border had been crossed, but the passengers of the Underground Railroad would need to remain in Canada (and keep a watchful eye for slave catchers crossing the border in violation of Canadian law) until slavery was ended via the American Civil War of the 1860s.
Even after the end of slavery, racial struggles would continue for at least another century, and "travelling while black" continued to be something of a dangerous proposition. The Negro Motorist Green Book, which listed businesses safe for African-American travellers in (nominally) every part of the country, remained in print in New York City from 1936 to 1966; nonetheless, in more than a few "sundown towns" there was nowhere for a traveller of color to stay for the night.
Today, the slave catchers are gone and the federal authorities now stand against various forms of racial segregation in interstate commerce. While an ordinary degree of caution remains advisable on this journey, the primary modern risk is crime when traveling through major cities, not slavery or segregation.
Only a small minority of successful escapees stayed in Canada for the long haul. Despite the fact that slavery was illegal there, racism and nativism were as much a problem as anywhere else. As time went on and more and more escaped slaves poured across the border, they began to wear out their welcome among white Canadians. The refugees usually arrived with only the clothes on their backs, unprepared for the harsh Canadian winters, and lived a destitute existence isolated from their new neighbors. In time, some African-Canadians prospered in farming or business and ended up staying behind in their new home, but at the outbreak of the American Civil War in 1861, many former fugitives jumped at the chance to join the Union army and play a role in the liberation of the compatriots they'd left behind in the South. Even Harriet Tubman herself left her home in St. Catharines to enlist as a cook, medic, and scout. Others simply drifted back to the U.S. because they were sick of living in an unfamiliar place far from their friends and loved ones.
- The end of the Ohio Line coincides with the beginning of the Windsor-Quebec corridor, the most populous and heavily urbanized area of Canada. Head east along Highway 401 toward the majestic Niagara Falls, cosmopolitan Toronto, the lovely Thousand Islands, and the French-flavoured Montreal and Quebec City.
- If you've gone the East Coast route, you'd be remiss not to explore the Atlantic Provinces, where whale-watching, gorgeous seaside scenery, tasty seafood, and Celtic cultural influences abound.
English Language Learning
- Understand the concept of learner empowerment;
- Identify resources that can help develop learner autonomy and multiliteracies.
The purpose of this chapter is to introduce the notion of learner empowerment and provide resources for empowering English language learners by integrating technology into your instruction. Under this broad concept of empowerment, we focus on two key aspects: developing learner autonomy and employing a multiliteracies perspective in the classroom. We further narrow the scope of each aspect by discussing what learning opportunities it affords. We follow this discussion with a list of technology tools that will help you put these suggestions into practice, and provide a scenario-based example using some of the listed tools.
- English Language Learners (ELL)
- students who often come from families where languages other than English are spoken and whose English proficiency may be defined as limited at least at some point during formal schooling; often required to fulfill certain language requirements, such as passing language assessments or taking specialized language courses
- Learner Autonomy
- the ability to take charge of and responsibility for one's own learning in order to pursue topics that are relevant and interesting to the learner
- Learner Empowerment
- raising learners' awareness of the control they can have over their own learning process, which often goes hand in hand with the concept of learner autonomy (e.g., when language learners are empowered, they are given the power and ownership of their own learning and are allowed to negotiate identities in the learning process)
- Multiliteracies
- emphasizes that language use is context-specific and multimodal; values the differences between communication modes
- Ownership
- like learner autonomy, this concept hands more learning responsibility to students; moreover, it emphasizes the importance of making connections between learners and the language they are learning at different levels as a way to strengthen that bond; promoting ownership is considered a strategy for enhancing learner autonomy
What do we mean by learner empowerment?
English language learners (ELLs), who often come from families where languages other than English are spoken, are a rapidly growing but oftentimes underprivileged population of students in U.S. schools. These students sometimes have negative labels or stigmas attached to them because of language proficiency or cultural stereotypes. As a result of the negative labels and stigmas they are exposed to, ELL students may also hold negative beliefs about their own identities and competence.
In the classroom, too often these learners' voices go unheard and their diverse identities are underappreciated. A simple definition of empowering learners is giving them power and ownership of their own learning and allowing them to negotiate their identities in the language learning process. Teachers have used various strategies to allow ELLs to voice their learning needs in the classroom. These include incorporating students' home culture, home language, or prior experiences into the instruction, emphasizing diversity or multiliteracies, involving students in making learning-related decisions, and creating opportunities for students to express themselves in a multimodal manner.
To empower ELLs through technology integration into our instruction, in this chapter, we focus on two aspects under the broad concept: learner autonomy and multiliteracies. To promote learner autonomy in your classroom, you can start by creating collaborative and reflective opportunities for your students; to raise awareness of multiliteracies, you can provide spaces for students to express their multiple identities in various forms. All of these approaches have become much more accessible for both teachers and learners with the availability of new technologies.
Handing responsibility over to students, whether by encouraging them to control the learning process or by allowing them to choose topics that match their interests, can promote learner autonomy. The concept of learner autonomy is closely associated with self-directed learning and is seen as an important element of learner empowerment. Fortunately, with the emergence of new technologies, learners no longer have to rely solely on teachers for access to input and learning resources. They now have more freedom to decide for themselves how, what, and when they want to learn.
We know that there are different ways to define learner autonomy, but in general, it can be defined as "the ability to take charge of one's own learning" (Little, 2007, p. 15), and it concerns whether or not "learners are able to pursue topics and questions that are interesting and relevant to them" (Cennamo, Ross, & Ertmer, 2013, p. 58). In other words, through shifting responsibility from teachers to learners, we give learners the power to take charge of their own learning process.
Empowering ELLs by developing them into autonomous learners can happen within and outside the classroom. For example, in the classroom, as teachers we can include collaborative projects, review our assessment methods to ensure learner autonomy is considered, allow our students the chance and time for reflection, or give them opportunities to monitor and assess their learning as well as to provide us with feedback. Outside the classroom, there are other practices we can encourage our students to adopt to promote learner autonomy. For example, students can make use of digital learning technologies to pace their own learning, find support through distance learning, or seek other learning opportunities, such as language exchanges or study abroad experiences. These approaches shift learning responsibility from teachers to learners and engage our students in a learning process where they possess more ownership.
While the backgrounds and needs of English language learners may vary profoundly, one thing they share is that most of them come from homes where languages other than English are spoken. This gives them multiliteracies and a sense of multiple identities and cultures, but it can also leave gaps in their English language competence and in their cultural understanding of the U.S. education system. For a teacher, recognizing these differences is an important first step; to go further and empower students by embracing and showcasing those differences, new technologies bring a wide range of possibilities for the acceptance and enactment of multiliteracies in your classroom.
One aspect of empowering ELLs is to challenge the dominance of the English language and the cultural values imposed by mainstream groups. In other words, as English as a second language or content area teachers, we should celebrate ELLs' home cultures and languages and incorporate them into our instruction. In the process, we help ELLs develop bilingual and bicultural (or even multilingual and multicultural) identities instead of forcing them into an English-only mentality in which they are treated as "inferior" or "disabled" individuals.
The other method of empowering ELLs is to give them opportunities to develop their competency as fluent and critical English speakers, readers, and writers. The notion of multiliteracies, or new literacies, comes in here, as it recognizes that communication goes beyond written or oral language. People communicate with one another through modes beyond language (e.g., gestures, interpersonal distance, sound, images). Therefore, aside from the traditional language competences (reading, writing, speaking, listening, grammar), a pedagogy of multiliteracies also emphasizes cultivating multiliterate individuals who are "flexible and strategic and can understand and use literacy and literate practices with a range of texts and technologies; in socially responsible ways; in a socially, culturally, and linguistically diverse world; and to fully participate in life as an active and informed citizen" (Anstey & Bull, 2006, p. 55). Oftentimes, in participating in such activities, ELLs are also given the space to reflect on their multiple identities more critically.
Enhancing learner autonomy and endorsing multiliteracies in the classroom are both important, and neither should be overlooked. Being a teacher committed to empowering ELLs therefore means not only deliberately creating opportunities for learners to take charge of their learning, but also honoring their home cultures and languages and striving to cultivate both traditional and new literacy competences.
How can technology support learner empowerment?
Integrating technology into our ESL teaching can provide a good variety of ways to develop ELLs' autonomy and multiliteracies. With proper instructional design, technology can help teachers enrich learning environments, differentiate learning tasks, and give students ownership of their learning. It also gives learners ways to express themselves through different channels and modes of communication. For example, communication tools such as instant messaging offer students who are less confident about speaking an alternative way to express their thoughts in conversational contexts. Or, with photo or video cameras, students are able to express themselves with both language and visual representations.
Furthermore, one of the biggest challenges in teaching a multilingual/multicultural classroom is that the teacher may not share the same home language as the students and their families. Here, technology resources such as Internet search engines, online dictionaries, and translation services all play a crucial role in understanding students' home cultures and languages and incorporating them into our instruction. Thus, not only can technology potentially enhance the effectiveness of our ESL instruction, it can also be key to realizing a transformative educational experience for ELLs.
What technology tools are available?
Technology for autonomy
- Collaborative learning tools: Students develop autonomy when they take responsibility for their own learning, individually or collaboratively with their peers. The use of collaborative learning tools strengthens learner autonomy because it creates authentic language activities that are engaging, involves learners in decision-making processes where they direct their own learning with their peers, and extends the learning experience outside of the classroom. These activities improve students' language skills and autonomous learning skills at the same time. For example, many online collaborative writing tools allow students to compose a story together. Students can use collaborative learning tools to write with their classmates for a course project, or to do creative storytelling online with other writers they have never met. Many of the websites also offer a space for writers to publish their work online, which gives students a real audience to write for. Or, if students are producing a digital project collaboratively, they can share and assemble ideas and multimedia resources in a shared digital space, which not only stores the information but also helps them sort out their ideas through the decision-making process.
- Google Drive [http://drive.google.com/]: Google Drive may be seen as just cloud storage, but it is more than that and is very easy to use for collaboration and resource sharing. Plus, if you or your students are already using Microsoft Office, the tools on Google Drive work very similarly to Microsoft Office tools, making file exporting and importing between the two straightforward. For more information on how to use Google Drive in e-learning, this article provides some directions: 6 Effective Ways To Use Google Drive in eLearning [https://edtechbooks.org/-Er].
- Padlet [https://padlet.com/]: Padlet enables students to organize and arrange ideas freely on a blank board. It makes sharing multimedia resources such as audio, video, images, and documents easy and fast. There is a lot of flexibility in terms of how to use this tool. You can create a shared board for your class, or your students can create one for their own group. The tool allows anonymous editing and sharing, so be mindful: if you prefer students' contributions to be identifiable, you will want to require student login. Otherwise, there will be no way to trace who made which changes.
- FoldingStory [http://foldingstory.com/]: This is a great tool to motivate students to write creatively together and to turn writing into a game. Your students can do collaborative storytelling with others. What makes this more exciting is that each writer only gets to contribute 120 words or fewer within 3 minutes to an open story. When a line gets more likes from readers, the writer moves up the leaderboard. If your students don't feel motivated to write, FoldingStory may bring some change. The site also keeps all finished stories for future readers.
- Piazza [https://piazza.com/]: This tool helps you build an online learning community for your course and has features that can encourage extended discussion outside of class. It differs from many other learning management systems in that anonymous postings are allowed, which may be especially beneficial to encouraging different forms of participation from ELLs. The website also provides subject-specific features so that you and your students can expand the discussion with the availability of specific textual and multimedia editing tools. According to their user testimonials, students tended to feel more comfortable discussing and asking questions on this platform.
- Audio recording and editing tools: No matter where your students share their work or collaborate, if they want to create an audio recording and embed it into their project, these free tools are great to use:
- Audacity (https://edtechbooks.org/-Tj): To watch tutorials for how to record with Audacity, you can check out Lynda.com, or read this blog article from Jake Ludington’s Digital Lifestyle [https://edtechbooks.org/-xTF].
- GarageBand for Mac - https://edtechbooks.org/-jdT
- VoiceThread [https://voicethread.com/]
- Self-directed learning tools: As mentioned above, students develop autonomy when they are in charge of their own learning, and self-directed learning has been considered a critical process in developing autonomy. When students are involved in self-directed learning, they are usually engaged in activities including diagnosing learning needs, setting learning goals, implementing learning strategies, evaluating their own learning, or searching for different approaches or resources to support and pace their learning more effectively. In addition, particularly for those ELLs who are struggling or unmotivated, creating learning experiences they can relate to may help turn their learning outcomes around.
- Self-paced learning tools:
- Duolingo [https://www.duolingo.com/]: A favorite of many language learners. Learners can set daily goals for themselves and use different features to motivate them.
- CourseWorld [http://www.courseworld.org/]: A huge collection of online talks and classes can be found on this site, making the search for resources much easier.
- Khan Academy [https://www.khanacademy.org/]: The site is very well designed and offers a lot of amazing courses for learning different subject areas, and, so far, an English grammar section.
- NoRedInk [https://www.noredink.com/]: A great site designed for teaching and learning grammar and writing skills. It not only saves you a lot of time creating quizzes and assignments but also aligns with the Common Core State Standards.
- Reflective learning tools:
- Formative assessment/feedback tools: As mentioned earlier, allowing students to reflect on what you teach and to give you feedback is a great way to empower them. These tools will help you collect student responses in an efficient way:
- Student self-reflection tools: Beyond reflecting on what you teach, students also need to reflect on their own learning process. With the following tools, students can record and capture a moment in their learning, add reflection to the image or video of that moment, and even share it with others:
- E-portfolios: The following are tools your students can safely use to create e-portfolios to record, share, and reflect on their learning, while you (and their parents) monitor their progress and online activities.
- Audio publishing tools: When your students create a digital project or an e-portfolio using the sites above, they can upload a podcast or an audio show they make to those sites. To give them another option, these sites are made for publishing audio shows:
- Podbean [https://podbean.com/]: This site also has a section for publishing education podcasts (https://edtechbooks.org/-bfT), where you will find online lessons, student projects, etc.
- Podomatic [https://www.podomatic.com/]
- BuzzSprout [https://www.buzzsprout.com/]
- Blubrry [https://www.blubrry.com/]
- Spreaker [https://www.spreaker.com/]
- YouTube [https://www.youtube.com/]
- iTunes [https://www.apple.com/itunes/]
Technology for multiliteracies
- Multimedia ESL lessons: ELL teachers are blessed with a great variety of resources available online for enhancing and enriching instruction. In particular, the following websites offer great multimedia materials for developing language lessons that help students improve their traditional literacy skills (speaking, listening, reading, writing, and grammar) in an integrative way. These multimedia lessons offer visuals, audio, and hands-on activity recommendations that can meet learners’ various learning styles. They incorporate authentic materials (e.g., TED talks, movies, online YouTube videos) that introduce a wide range of knowledge to empower ELLs with the cultural capital they need. Some of the websites also allow teachers to adjust the language difficulty level to fit their ELLs’ needs. In addition, teachers can use these multimedia materials to introduce students to multiliteracies (new literacies) skills, getting students to start paying attention to the meanings conveyed through modes other than written and oral language (visual representations, ambient sounds, music, accents, etc.). These multimedia materials can also serve as examples for students to consider how they might communicate in a multimodal way.
- BBC Learning English (https://edtechbooks.org/-zA): BBC offers free language lessons and listening practice based on current news reports. Their archived site also has a lot of great multimedia materials (https://edtechbooks.org/-gq).
- Breaking News English (https://edtechbooks.org/-cY): it’s free and it’s amazing. As simple as that.
- TEDxESL [http://tedxesl.com/]: It really is a pity that this site is no longer updated, but all the available TED-talks-based lessons on this site are well-designed and engaging.
- ESLnotes (https://edtechbooks.org/-SB): Who doesn’t like watching movies? ESLnotes offers movie-watching guides and discussion questions for some classic American movies.
- Viralelt (https://edtechbooks.org/-qN): The author of this blog, Ian, developed ESL lessons for intermediate to advanced adult ESL learners based on YouTube videos that had gone viral on the internet.
- ESL Pod (https://edtechbooks.org/-Pi): ESL Pod not only offers podcast lessons for ESL learners, but also provides blog posts, videos, and other kinds of resources for ESL learners and teachers.
- BrainPOP ESL [https://esl.brainpop.com/]: BrainPOP ESL offers lessons specifically designed for ESL learning. With all the animations and games, this is a great resource for younger ELLs. In addition, with captions for all the videos, lessons hosted on BrainPOP Jr. [https://jr.brainpop.com/] are great resources for elementary ESL teachers, too.
- Starfall [http://more.starfall.com/]: Starfall has interactive games and lessons for emerging readers. Preschool and kindergarten teachers as well as elementary ESL teachers have been using this site to engage young kids.
- Storyline Online (https://edtechbooks.org/-YF): Elementary teachers, if you have never visited this website before, you have to visit it. This is one of the best websites for children’s literacy and storytelling. The storytelling videos are all captioned, so they are appropriate for ESL learning as well. In addition to the videos, the website also provides activity guides for teachers.
- Multimodal composing and digital storytelling: From the multiliteracies perspective, we need to give students opportunities to learn and practice using different modes and technologies. ESL teachers have been engaging ELLs in multimodal composing and digital storytelling to empower them with symbolic competence. Through multimodal composing or digital storytelling, ELLs rely on different modes of communication to express themselves. ESL teachers can further encourage ELLs to tell their own stories, express their emotions, or introduce their home cultures and languages through digital storytelling.
- The following sites offer tips for using digital storytelling in teaching and also examples of digital storytelling videos:
- Story Center - https://edtechbooks.org/-nC
- Story Circle - https://edtechbooks.org/-uk
- Story Corps [https://storycorps.org/]
- Video in the classroom - https://edtechbooks.org/-so
- Lang Witches- Digital storytelling (https://edtechbooks.org/-mx): what it is and what it is not
- Larry Ferlazzo’s blog post on digital storytelling - https://edtechbooks.org/-Ya
- U of Houston’s Educational Uses of Digital Storytelling - https://edtechbooks.org/-uf
- The following websites or apps are excellent tools for multimodal composing or digital storytelling:
- Storybird [https://storybird.com/]
- My Storybook [https://www.mystorybook.com/]
- Storify [https://storify.com/]
- Toondoo [http://www.toondoo.com/]
- Pixton [https://www.pixton.com/]
- Make Belief Comix - https://edtechbooks.org/-rNh
- Storyboard That [http://www.storyboardthat.com/]
- VoiceThread [https://voicethread.com/]
- Tika Tok [https://www.tikatok.com/]
- Zimmer Twins [http://www.zimmertwins.com/]
- Toontastic 3D - https://edtechbooks.org/-ri
- Green Screen [https://edtechbooks.org/-JCd]
- Stop Motion Studio [https://edtechbooks.org/-YP]
- Powtoon [https://www.powtoon.com/]
- WeVideo [https://www.wevideo.com/]
- Shadow Puppet [http://get-puppet.co/]
- Haiku Deck [https://www.haikudeck.com/]
- Trading Cards Creator1 - https://edtechbooks.org/-Bf
- Trading Cards Creator 2 [https://edtechbooks.org/-gV]
Example of using technology to empower ELLs
Miss Caroline is an ESL teacher at the Flower Elementary School. One third of the student population at this school are ELLs whose home languages include Spanish, Chinese, Korean, Arabic, Turkish, and Swahili. Also, one third of the student population are enrolled in free or reduced-price meal plans. Miss Caroline speaks English as her first language and can speak a little Spanish. Each ESL class Miss Caroline teaches has around 18-20 students. She has a teaching assistant, and community volunteers come into her classes to help her on a regular basis, too. The Flower Elementary School has 1:1 technology access, where kindergarten through 3rd grade students have access to iPads and 4th grade and higher have Chromebooks.
In this lesson unit, Miss Caroline engaged the 4th-5th grade ESL students in learning about their home cultures and introducing them to one another through storytelling. The lesson started with Miss Caroline leading the students to discuss what culture means and why cultural understanding is important. Miss Caroline asked the students which culture(s) they felt they were affiliated with and what they knew about those cultures.
Next, Miss Caroline mentioned how holidays have significant cultural and historical meanings behind them. She used Thanksgiving in the US as an example. By introducing Thanksgiving, Miss Caroline taught students vocabulary words related to Thanksgiving such as parade, pilgrim, gravy, mashed potatoes, turkey, and harvest. She also used a BrainPOP lesson [https://edtechbooks.org/-Fy] to teach students the past simple tense, which is important grammatical knowledge for telling stories. She showed a cartoon that tells the story of Thanksgiving [https://youtu.be/Yh_0t4EcsjE], and asked students to retell the story of Thanksgiving.
She then introduced some Thanksgiving traditions in the US such as the Thanksgiving dinner or the Macy’s parade in New York. Miss Caroline also brought photos of her family celebrating Thanksgiving together and shared her thanksgiving stories.
Afterwards, Miss Caroline announced the digital storytelling project. She told students they were to pick an important holiday in their home cultures and create a digital story about how their family celebrated the holiday. Prior to making the digital story, Miss Caroline assigned three mini tasks to the students:
- Conduct online research on the holiday you are going to introduce (in either English or your home language); write a brief introduction of the holiday in English.
- Interview your parents or grandparents to learn about how they celebrated this holiday; take notes on the stories they shared and collect photos if possible.
- Choose three words relevant to the holiday in your home language, create trading cards [https://edtechbooks.org/-Bf] to introduce them to the class.
Then, Miss Caroline taught the elements of good stories and how to write personal narratives. She also introduced action words and adjectives that are useful for writing stories. She then engaged students in creative story writing: each student randomly picked 3 trading cards other students had created and used them to make a story.
Next, Miss Caroline asked students to write the script for the story about how their family celebrated a holiday in their home culture. She prompted the students to think about whose point of view they were going to write from, what events occurred, how they would sequence the events, and what problems, dramas, or emotions were involved. After the story had been structured, she also guided the students to pay attention to grammar and word choice.
Miss Caroline then provided various activities to teach students how to create a good digital story. She told students that a digital story uses things beyond language to convey meaning to the audience, including images, sounds, music, and even a dramatic tone. She showed students a few digital story examples she found on Story Center [https://youtu.be/GZ0ouK6xBBA] and Storyline Online [https://edtechbooks.org/-frq].
She also adopted a storyboard template [https://edtechbooks.org/-rN] she found online to guide students to create different scenes for their digital story. Meanwhile, students went online to search for royalty-free music and images they needed for their digital stories, and included that information in the storyboard [https://edtechbooks.org/-rN].
Finally, Miss Caroline instructed students on how to use WeVideo [https://www.wevideo.com/] to build and edit their digital story videos. Students worked on creating videos to tell the stories of their families. When they were done, they published the videos and shared them with their families.
Suggested Citation: Hung, J. H. R. & Ding, A. (2018). English Language Learning: Empowering ELLs through technology integration. In A. Ottenbreit-Leftwich & R. Kimmons, The K-12 Educational Technology Handbook. EdTech Books. Retrieved from https://edtechbooks.org/k12handbook/ell
| https://edtechbooks.org/k12handbook/ell | 21 |
57 | Hepatitis C is an infectious disease caused by the hepatitis C virus (HCV) that primarily affects the liver; it is a type of viral hepatitis. During the initial infection people often have mild or no symptoms. Occasionally a fever, dark urine, abdominal pain, and yellow tinged skin occurs. The virus persists in the liver in about 75% to 85% of those initially infected. Early on chronic infection typically has no symptoms. Over many years however, it often leads to liver disease and occasionally cirrhosis. In some cases, those with cirrhosis will develop serious complications such as liver failure, liver cancer, or dilated blood vessels in the esophagus and stomach.
Electron micrograph of hepatitis C virus from cell culture (scale = 50 nanometers)
- Specialty: Gastroenterology, infectious disease
- Complications: Liver failure, liver cancer, esophageal and gastric varices
- Duration: Long term (80%)
- Causes: Hepatitis C virus, usually spread by blood-to-blood contact
- Diagnostic method: Blood testing for antibodies or viral RNA
- Prevention: Sterile needles, testing donated blood
- Treatment: Medications, liver transplant
- Medication: Antivirals (sofosbuvir, simeprevir, others)
- Frequency: 71 million (2017)
HCV is spread primarily by blood-to-blood contact associated with injection drug use, poorly sterilized medical equipment, needlestick injuries in healthcare, and transfusions. Using blood screening, the risk from a transfusion is less than one per two million. It may also be spread from an infected mother to her baby during birth. It is not spread by superficial contact. It is one of five known hepatitis viruses: A, B, C, D, and E.
There is no vaccine against hepatitis C. Prevention includes harm reduction efforts among people who inject drugs, testing donated blood, and treatment of people with chronic infection. Chronic infection can be cured more than 95% of the time with antiviral medications such as sofosbuvir or simeprevir. Peginterferon and ribavirin were earlier generation treatments that had a cure rate of less than 50% and greater side effects. Getting access to the newer treatments however can be expensive. Those who develop cirrhosis or liver cancer may require a liver transplant. Hepatitis C is the leading reason for liver transplantation, though the virus usually recurs after transplantation.
An estimated 71 million people (1%) worldwide are infected with hepatitis C as of 2015. 80% of the health burden is concentrated in low- and middle-income countries, with the highest levels of prevalence in Africa and Central and East Asia. About 167,000 deaths due to liver cancer and 326,000 deaths due to cirrhosis occurred in 2015 due to hepatitis C. The existence of hepatitis C – originally identifiable only as a type of non-A non-B hepatitis – was suggested in the 1970s and proven in 1989. Hepatitis C infects only humans and chimpanzees.
Signs and symptoms
Acute symptoms develop in some 20–30% of those infected. When this occurs, it is generally 4–12 weeks following infection (but it may take from 2 weeks to 6 months for acute symptoms to appear).
Symptoms are generally mild and vague, and may include fatigue, nausea and vomiting, fever, muscle or joint pains, abdominal pain, decreased appetite and weight loss, jaundice (occurs in ~25% of those infected), dark urine, and clay-coloured stools. There is no evidence that acute hepatitis C can alone cause acute liver failure, though liver injury and elevated liver enzymes may occur. Symptoms and laboratory findings suggestive of liver disease should prompt further tests and can thus help establish a diagnosis of hepatitis C infection early on.
Following the acute phase, the infection may resolve spontaneously in 10–50% of affected people; this occurs more frequently in young people, and females.
About 80% of those exposed to the virus develop a chronic infection. This is defined as the presence of detectable viral replication for at least six months. Most experience minimal or no symptoms during the initial few decades of the infection. Chronic hepatitis C can be associated with fatigue and mild cognitive problems. Chronic infection after several years may cause cirrhosis or liver cancer. The liver enzymes measured from blood samples are normal in 7–53%. (Elevated levels indicate liver cells are being damaged by the virus or other disease.) Late relapses after apparent cure have been reported, but these can be difficult to distinguish from reinfection.
Fatty changes to the liver occur in about half of those infected and are usually present before cirrhosis develops. Usually (80% of the time) this change affects less than a third of the liver. Worldwide hepatitis C is the cause of 27% of cirrhosis cases and 25% of hepatocellular carcinoma. About 10–30% of those infected develop cirrhosis over 30 years. Cirrhosis is more common in those also infected with hepatitis B, schistosoma, or HIV, in alcoholics and in those of male sex. In those with hepatitis C, excess alcohol increases the risk of developing cirrhosis 5-fold. Those who develop cirrhosis have a 20-fold greater risk of hepatocellular carcinoma. This transformation occurs at a rate of 1–3% per year. Being infected with hepatitis B in addition to hepatitis C increases this risk further.
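As a rough illustration of what a constant per-year rate implies over longer periods (a simplification, since the real annual risk is neither constant nor independent from year to year), an annual transformation rate r gives a cumulative probability of roughly 1 − (1 − r)^n over n years; at r = 2% per year, this works out to about 1 − 0.98^10 ≈ 18% over 10 years and about 1 − 0.98^20 ≈ 33% over 20 years.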
Liver cirrhosis may lead to portal hypertension, ascites (accumulation of fluid in the abdomen), easy bruising or bleeding, varices (enlarged veins, especially in the stomach and esophagus), jaundice, and a syndrome of cognitive impairment known as hepatic encephalopathy. Ascites occurs at some stage in more than half of those who have a chronic infection.
The most common problem due to hepatitis C but not involving the liver is mixed cryoglobulinemia (usually the type II form) – an inflammation of small and medium-sized blood vessels. Hepatitis C is also associated with autoimmune disorders such as Sjögren's syndrome, lichen planus, a low platelet count, porphyria cutanea tarda, necrolytic acral erythema, insulin resistance, diabetes mellitus, diabetic nephropathy, autoimmune thyroiditis, and B-cell lymphoproliferative disorders. 20–30% of people infected have rheumatoid factor – a type of antibody. Possible associations include Hyde's prurigo nodularis and membranoproliferative glomerulonephritis. Cardiomyopathy with associated abnormal heart rhythms has also been reported. A variety of central nervous system disorders has been reported. Chronic infection seems to be associated with an increased risk of pancreatic cancer. People may experience other issues in the mouth such as dryness, salivary duct stones, and crusted lesions around the mouth.
Persons who have been infected with hepatitis C may appear to clear the virus but remain infected. The virus is not detectable with conventional testing but can be found with ultra-sensitive tests. The original method of detection was by demonstrating the viral genome within liver biopsies, but newer methods include an antibody test for the virus' core protein and the detection of the viral genome after first concentrating the viral particles by ultracentrifugation. A form of infection with persistently moderately elevated serum liver enzymes but without antibodies to hepatitis C has also been reported. This form is known as cryptogenic occult infection.
Several clinical pictures have been associated with this type of infection. It may be found in people with anti-hepatitis-C antibodies but with normal serum levels of liver enzymes; in antibody-negative people with ongoing elevated liver enzymes of unknown cause; in healthy populations without evidence of liver disease; and in groups at risk for HCV infection including those on hemodialysis or family members of people with occult HCV. The clinical relevance of this form of infection is under investigation. The consequences of occult infection appear to be less severe than with chronic infection but can vary from minimal to hepatocellular carcinoma.
The rate of occult infection in those apparently cured is controversial but appears to be low. 40% of those with hepatitis but with both negative hepatitis C serology and the absence of detectable viral genome in the serum have hepatitis C virus in the liver on biopsy. How commonly this occurs in children is unknown.
The hepatitis C virus (HCV) is a small, enveloped, single-stranded, positive-sense RNA virus. It is a member of the genus Hepacivirus in the family Flaviviridae. There are seven major genotypes of HCV, which are known as genotypes one to seven. The genotypes are divided into several subtypes with the number of subtypes depending on the genotype. In the United States, about 70% of cases are caused by genotype 1, 20% by genotype 2 and about 1% by each of the other genotypes. Genotype 1 is also the most common in South America and Europe.
The half-life of the virus particles in the serum is around 3 hours and may be as short as 45 minutes. In an infected person, about 10¹² virus particles are produced each day. In addition to replicating in the liver, the virus can multiply in lymphocytes.
Generally, percutaneous contact with contaminated blood is responsible for most infections; however, the method of transmission is strongly dependent on both a country's geography and economic status. Indeed, the primary route of transmission in the developed world is injection drug use, while in the developing world the main methods are blood transfusions and unsafe medical procedures. The cause of transmission remains unknown in 20% of cases; however, many of these are believed to be accounted for by injection drug use.
Injection drug use (IDU) is a major risk factor for hepatitis C in many parts of the world. Of 77 countries reviewed, 25 (including the United States) were found to have a prevalence of hepatitis C of between 60% and 80% among people who use injection drugs. Twelve countries had rates greater than 80%. It is believed that ten million intravenous drug users are infected with hepatitis C; China (1.6 million), the United States (1.5 million), and Russia (1.3 million) have the highest absolute totals. The occurrence of hepatitis C among prison inmates in the United States is 10 to 20 times that observed in the general population; this has been attributed to high-risk behavior in prisons such as IDU and tattooing with nonsterile equipment. Shared intranasal drug use may also be a risk factor.
Blood transfusion, transfusion of blood products, or organ transplants without HCV screening carry significant risks of infection. The United States instituted universal screening in 1992 and Canada instituted universal screening in 1990. This decreased the risk from one in 200 units to between one in 10,000 and one in 10,000,000 per unit of blood. This low risk remains because there is a period of about 11–70 days between the potential blood donor's acquiring hepatitis C and the blood's testing positive, depending on the method. Some countries do not screen for hepatitis C due to the cost.
Those who have experienced a needle stick injury from someone who was HCV positive have about a 1.8% chance of subsequently contracting the disease themselves. The risk is greater if the needle in question is hollow and the puncture wound is deep. There is a risk from mucosal exposures to blood, but this risk is low, and there is no risk if blood exposure occurs on intact skin.
Hospital equipment has also been documented as a method of transmission of hepatitis C, including reuse of needles and syringes, multiple-use medication vials, infusion bags, and improperly sterilized surgical equipment, among others. Limitations in the implementation and enforcement of stringent standard precautions in public and private medical and dental facilities are known to have been the primary cause of the spread of HCV in Egypt, the country that had the highest rate of infection in the world as of 2012 and one of the lowest as of 2021.
Sexual transmission of hepatitis C is uncommon. Studies examining the risk of HCV transmission between heterosexual partners, when one is infected and the other is not, have found very low risks. Sexual practices that involve higher levels of trauma to the anogenital mucosa, such as anal penetrative sex, or that occur when there is a concurrent sexually transmitted infection, including HIV or genital ulceration, present greater risks. The United States Department of Veterans Affairs recommends condom use to prevent hepatitis C transmission in those with multiple partners, but not those in relationships that involve only a single partner.
Tattooing is associated with two to threefold increased risk of hepatitis C. This can be due to either improperly sterilized equipment or contamination of the dyes being used. Tattoos or piercings performed either before the mid-1980s, "underground", or nonprofessionally are of particular concern, since sterile techniques in such settings may be lacking. The risk also appears to be greater for larger tattoos. It is estimated that nearly half of prison inmates share unsterilized tattooing equipment. It is rare for tattoos in a licensed facility to be directly associated with HCV infection.
Personal-care items such as razors, toothbrushes, and manicuring or pedicuring equipment can be contaminated with blood. Sharing such items can potentially lead to exposure to HCV. Appropriate caution should be taken regarding any medical condition that results in bleeding, such as cuts and sores. HCV is not spread through casual contact, such as hugging, kissing, or sharing eating or cooking utensils, nor is it transmitted through food or water.
Mother-to-child transmission of hepatitis C occurs in fewer than 10% of pregnancies. There are no measures that alter this risk. It is not clear when transmission occurs during pregnancy, but it may occur both during gestation and at delivery. A long labor is associated with a greater risk of transmission. There is no evidence that breastfeeding spreads HCV; however, to be cautious, an infected mother is advised to avoid breastfeeding if her nipples are cracked and bleeding, or if her viral loads are high.
There are a number of diagnostic tests for hepatitis C, including HCV antibody enzyme immunoassay or ELISA, recombinant immunoblot assay, and quantitative HCV RNA polymerase chain reaction (PCR). HCV RNA can be detected by PCR typically one to two weeks after infection, while antibodies can take substantially longer to form and thus be detected.
Diagnosis is generally a challenge, as patients with acute illness typically present with mild, non-specific flu-like symptoms, while the transition from acute to chronic infection is sub-clinical. Chronic hepatitis C is defined as infection with the hepatitis C virus persisting for more than six months based on the presence of its RNA. Chronic infections are typically asymptomatic during the first few decades, and thus are most commonly discovered following the investigation of elevated liver enzyme levels or during a routine screening of high-risk individuals. Testing is not able to distinguish between acute and chronic infections. Diagnosis in the infant is difficult as maternal antibodies may persist for up to 18 months.
Hepatitis C testing typically begins with blood testing to detect the presence of antibodies to HCV, using an enzyme immunoassay. If this test is positive, a confirmatory test is then performed to verify the immunoassay and to determine the viral load. A recombinant immunoblot assay is used to verify the immunoassay, and the viral load is determined by an HCV RNA polymerase chain reaction. If there is no RNA and the immunoblot is positive, it means that the person tested had a previous infection but cleared it either with treatment or spontaneously; if the immunoblot is negative, it means that the immunoassay was a false positive. It takes about 6–8 weeks following infection before the immunoassay will test positive. A number of tests are available as point-of-care tests, which means that results are available within 30 minutes.
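The interpretation of these test combinations amounts to a small decision procedure. The sketch below is only a simplified illustration of the logic described in the preceding paragraph – the function and parameter names are invented for the example, and it is not a clinical algorithm:

```python
def interpret_hcv_screening(antibody_positive: bool,
                            immunoblot_positive: bool = False,
                            rna_detected: bool = False) -> str:
    """Simplified sketch of the screening logic described above (illustration only)."""
    if not antibody_positive:
        # Antibodies take roughly 6-8 weeks to appear, so an early
        # negative screen does not rule out a recent infection.
        return "screen negative (consider RNA testing if exposure was recent)"
    if rna_detected:
        return "current infection (viral load reported from the RNA test)"
    if immunoblot_positive:
        return "past infection, cleared spontaneously or with treatment"
    return "screening immunoassay was likely a false positive"
```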
Liver enzymes are variable during the initial part of the infection and on average begin to rise at seven weeks after infection. The elevation of liver enzymes does not closely follow disease severity.
Liver biopsies are used to determine the degree of liver damage present; however, there are risks from the procedure. The typical changes seen are lymphocytes within the parenchyma, lymphoid follicles in portal triad, and changes to the bile ducts. There are a number of blood tests available that try to determine the degree of hepatic fibrosis and alleviate the need for biopsy.
It is believed that only 5–50% of those infected in the United States and Canada are aware of their status. Routine screening for those between the ages of 18 and 79 was recommended by the United States Preventive Services Task Force in 2020. Previously testing was recommended for those at high risk, which includes injection drug users, those who have received blood transfusions before 1992, those who have been in jail, those on long term hemodialysis, and those with tattoos. Screening is also recommended in those with elevated liver enzymes, as this is frequently the only sign of chronic hepatitis. As of 2012, the U.S. Centers for Disease Control and Prevention (CDC) recommends a single screening test for those born between 1945 and 1965. In Canada one time screening is recommended for those born between 1945 and 1975.
As of 2016, no approved vaccine protects against contracting hepatitis C. A combination of harm reduction strategies, such as the provision of new needles and syringes and treatment of substance use, decreases the risk of hepatitis C in people using injection drugs by about 75%. The screening of blood donors is important at a national level, as is adhering to universal precautions within healthcare facilities. In countries where there is an insufficient supply of sterile syringes, medications should be given orally rather than via injection (when possible). Recent research also suggests that treating people with active infection, thereby reducing the potential for transmission, may be an effective preventive measure.
Those with chronic hepatitis C are advised to avoid alcohol and medications toxic to the liver. They should also be vaccinated against hepatitis A and hepatitis B due to the increased risk if also infected. Use of acetaminophen is generally considered safe at reduced doses. Nonsteroidal anti-inflammatory drugs (NSAIDs) are not recommended in those with advanced liver disease due to an increased risk of bleeding. Ultrasound surveillance for hepatocellular carcinoma is recommended in those with accompanying cirrhosis. Coffee consumption has been associated with a slower rate of liver scarring in those infected with HCV.
Approximately 90% of chronic cases clear with treatment. Treatment with antiviral medication is recommended in all people with proven chronic hepatitis C who are not at high risk of dying from other causes. People with the highest complication risk should be treated first, with the risk of complications based on the degree of liver scarring. The initial recommended treatment depends on the type of hepatitis C virus, whether the person has received previous hepatitis C treatment, and whether or not the person has cirrhosis. Direct-acting antivirals (DAAs) may reduce the number of infected people.
No prior treatment
- HCV genotype 1a (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or ledipasvir/sofosbuvir (the latter for people who do not have HIV/AIDS, are not African-American, and have less than 6 million HCV viral copies per milliliter of blood) or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. Sofosbuvir with either daclatasvir or simeprevir may also be used.
- HCV genotype 1a (with compensated cirrhosis): 12 weeks of elbasvir/grazoprevir, glecaprevir/pibrentasvir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. An alternative treatment regimen of elbasvir/grazoprevir with weight-based ribavirin for 16 weeks can be used if the HCV is found to have antiviral resistance mutations against NS5A protease inhibitors.
- HCV genotype 1b (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or ledipasvir/sofosbuvir (with the aforementioned limitations for the latter as above) or 12 weeks of elbasvir/grazoprevir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. Alternative regimens include 12 weeks of ombitasvir/paritaprevir/ritonavir with dasabuvir or 12 weeks of sofosbuvir with either daclatasvir or simeprevir.
- HCV genotype 1b (with compensated cirrhosis): 12 weeks of elbasvir/grazoprevir, glecaprevir/pibrentasvir, ledipasvir/sofosbuvir, or sofosbuvir/velpatasvir. A 12-week course of paritaprevir/ritonavir/ombitasvir with dasabuvir may also be used.
- HCV genotype 2 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir. Alternatively, 12 weeks of sofosbuvir/daclatasvir can be used.
- HCV genotype 2 (with compensated cirrhosis): 12 weeks of sofosbuvir/velpatasvir or glecaprevir/pibrentasvir. An alternative regimen of sofosbuvir/daclatasvir can be used for 16–24 weeks.
- HCV genotype 3 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir or sofosbuvir and daclatasvir.
- HCV genotype 3 (with compensated cirrhosis): 12 weeks of glecaprevir/pibrentasvir or sofosbuvir/velpatasvir; 12 weeks of sofosbuvir/velpatasvir/voxilaprevir if certain antiviral resistance mutations are present; or 24 weeks of sofosbuvir and daclatasvir.
- HCV genotype 4 (no cirrhosis): 8 weeks of glecaprevir/pibrentasvir or 12 weeks of sofosbuvir/velpatasvir, elbasvir/grazoprevir, or ledipasvir/sofosbuvir. A 12-week regimen of ombitasvir/paritaprevir/ritonavir is also acceptable in combination with weight-based ribavirin.
- HCV genotype 4 (with compensated cirrhosis): A 12-week regimen of sofosbuvir/velpatasvir, glecaprevir/pibrentasvir, elbasvir/grazoprevir, or ledipasvir/sofosbuvir is recommended. A 12-week course of ombitasvir/paritaprevir/ritonavir with weight-based ribavirin is an acceptable alternative.
- HCV genotype 5 or 6 (with or without compensated cirrhosis): If no cirrhosis is present, then 8 weeks of glecaprevir/pibrentasvir is recommended. If cirrhosis is present, then a 12-week course of glecaprevir/pibrentasvir, sofosbuvir/velpatasvir, or ledipasvir/sofosbuvir is warranted.
Chronic infection can be cured in more than 90% of people with medications. Access to these treatments, however, can be expensive. The combination of sofosbuvir, velpatasvir, and voxilaprevir may be used in those who have previously been treated with sofosbuvir or other drugs that inhibit NS5A and were not cured.
Prior to 2011, treatments consisted of a combination of pegylated interferon alpha and ribavirin for a period of 24 or 48 weeks, depending on HCV genotype. This produced cure rates of between 70 and 80% for genotypes 2 and 3, respectively, and 45 to 70% for genotypes 1 and 4. Adverse effects with these treatments were common, with half of people getting flu-like symptoms and a third experiencing emotional problems. Treatment during the first six months is more effective than once hepatitis C has become chronic. In those with chronic hepatitis B, treatment for hepatitis C results in reactivation of hepatitis B in about 25%.
Cirrhosis due to hepatitis C is a common reason for liver transplantation though the virus usually (80–90% of cases) recurs afterwards. Infection of the graft leads to 10–30% of people developing cirrhosis within five years. Treatment with pegylated interferon and ribavirin post-transplant decreases the risk of recurrence to 70%. A 2013 review found unclear evidence regarding if antiviral medication was useful if the graft became reinfected.
Several alternative therapies are claimed by their proponents to be helpful for hepatitis C including milk thistle, ginseng, and colloidal silver. However, no alternative therapy has been shown to improve outcomes in hepatitis C, and no evidence exists that alternative therapies have any effect on the virus at all.
The response to treatment is measured by sustained viral response (SVR), defined as the absence of detectable RNA of the hepatitis C virus in blood serum for at least 24 weeks after discontinuing the treatment, and rapid virological response (RVR), defined as undetectable levels achieved within four weeks of treatment. Successful treatment decreases the future risk of hepatocellular carcinoma by 75%.
Prior to 2012, a sustained response occurred in about 40–50% of people with HCV genotype 1 given 48 weeks of treatment. A sustained response was seen in 70–80% of people with HCV genotypes 2 and 3 with 24 weeks of treatment. A sustained response occurred in about 65% of those with genotype 4 after 48 weeks of treatment. Finally, for genotype 6, 48 weeks of treatment with pegylated interferon and ribavirin resulted in a higher rate of sustained response than for genotype 1 (86% vs. 52%). Further studies are needed on 24-week treatment and lower dosages.
About 15–45% of those infected spontaneously clear the virus within 6 months, with the rest going on to develop chronic infection. Spontaneous resolution following acute infection appears more common in females and younger persons, and appears to also be influenced by genetic factors. Chronic infection may also resolve spontaneously months or years after the acute phase, though this is unusual.
WHO estimated that 71 million (1%) people globally were living with chronic hepatitis C in its 2017 Global Hepatitis Report. About 1.75 million people are infected per year, and about 400,000 people die yearly from hepatitis C-related diseases. In 2010, an estimated 16,000 people died from acute infections, while 196,000 deaths occurred from liver cancer secondary to the infection. Rates increased substantially in the 20th century due to a combination of intravenous drug abuse and reused but poorly sterilized medical equipment.
Rates are high (>3.5% of the population infected) in Central and East Asia, North Africa, and the Middle East; intermediate (1.5–3.5%) in South and Southeast Asia, sub-Saharan Africa, Andean, Central and Southern Latin America, the Caribbean, Oceania, Australasia, and Central, Eastern, and Western Europe; and low (<1.5%) in Asia-Pacific, Tropical Latin America, and North America.
Among those chronically infected, the risk of cirrhosis after 20 years varies between studies but has been estimated at ~10–15% for men and ~1–5% for women. The reason for this difference is not known. Once cirrhosis is established, the rate of developing hepatocellular carcinoma is ~1–4% per year. Rates of new infections have decreased in the Western world since the 1990s due to improved screening of blood before transfusion.
In Egypt, following Egypt's 2030 Vision, the country managed to bring down the rate of hepatitis C infection from 22% in 2011 to just 2% in 2021. The previously high prevalence in Egypt is believed to have been linked to a discontinued mass-treatment campaign for schistosomiasis that used improperly sterilized glass syringes.
In the United States, about 2% of people have chronic hepatitis C. In 2014, an estimated 30,500 new acute hepatitis C cases occurred (0.7 per 100,000 population), an increase over 2010 to 2012. The number of deaths from hepatitis C increased to 15,800 in 2008, having overtaken HIV/AIDS as a cause of death in the US in 2007. In 2014 it was the single greatest cause of infectious death in the United States. This mortality rate is expected to increase, as those infected by transfusion before HCV testing become apparent. In Europe the percentage of people with chronic infections has been estimated to be between 0.13 and 3.26%.
In England about 160,000 people are chronically infected. Between 2006 and 2011, 28,000 (about 3%) received treatment. About half of people using a needle exchange in London in 2017/8 tested positive for hepatitis C, of whom half were unaware that they had it. As part of a bid to eradicate hepatitis C by 2025, NHS England conducted a large procurement exercise in 2019. Merck Sharp & Dohme, Gilead Sciences, and Abbvie were awarded contracts, which, together, are worth up to £1 billion over five years.
Since 2014, highly effective medications have been available that can eradicate the disease in 8–12 weeks in most people. In 2015 about 950,000 people were treated while 1.7 million new infections occurred, meaning that overall the number of people with HCV increased. These numbers differ by country and improved in 2016, with some countries (mostly high-income countries) achieving higher cure rates than new infection rates. By 2018, twelve countries were on track to achieve HCV elimination. While antiviral agents will curb new infections, it is less clear whether they affect overall deaths and morbidity. Furthermore, for them to be effective, people need to be aware of their infection – it is estimated that worldwide only 20% of infected people are aware of their infection (in the US fewer than half were aware).
In the mid-1970s, Harvey J. Alter, Chief of the Infectious Disease Section in the Department of Transfusion Medicine at the National Institutes of Health, and his research team demonstrated how most post-transfusion hepatitis cases were not due to hepatitis A or B viruses. Despite this discovery, international research efforts to identify the virus, initially called non-A, non-B hepatitis (NANBH), failed for the next decade. In 1987, Michael Houghton, Qui-Lim Choo, and George Kuo at Chiron Corporation, collaborating with Daniel W. Bradley at the Centers for Disease Control and Prevention, used a novel molecular cloning approach to identify the unknown organism and develop a diagnostic test. In 1988, Alter confirmed the virus by verifying its presence in a panel of NANBH specimens. In April 1989, the discovery of HCV was published in two articles in the journal Science. The discovery led to significant improvements in diagnosis and improved antiviral treatment. In 2000, Alter and Houghton were honored with the Lasker Award for Clinical Medical Research for "pioneering work leading to the discovery of the virus that causes hepatitis C and the development of screening methods that reduced the risk of blood transfusion-associated hepatitis in the U.S. from 30% in 1970 to virtually zero in 2000."
Chiron filed for several patents on the virus and its diagnosis. A competing patent application by the CDC was dropped in 1990 after Chiron paid $1.9 million to the CDC and $337,500 to Bradley. In 1994, Bradley sued Chiron, seeking to invalidate the patent, have himself included as a coinventor, and receive damages and royalty income. He dropped the suit in 1998 after losing before an appeals court.
Society and culture
World Hepatitis Day, held on July 28, is coordinated by the World Hepatitis Alliance. The economic costs of hepatitis C are significant both to the individual and to society. In the United States the average lifetime cost of the disease was estimated at US$33,407 in 2003, with a liver transplant as of 2011 costing approximately US$200,000. In Canada the cost of a course of antiviral treatment was as high as CAD 30,000 in 2003, while in the United States costs were between US$9,200 and US$17,600 in 1998. In many areas of the world, people are unable to afford treatment with antivirals as they either lack insurance coverage or the insurance they have will not pay for antivirals. In the English National Health Service, treatment rates for hepatitis C were higher among wealthier groups per 2010–2012 data. Spanish anaesthetist Juan Maeso infected 275 patients between 1988 and 1997 by using the same needles to give both himself and the patients opioids. For this he was jailed.
Children and pregnancy
Compared with adults, infection in children is much less well understood. Worldwide the prevalence of hepatitis C virus infection in pregnant women and children has been estimated at 1–8% and 0.05–5%, respectively. The vertical transmission rate has been estimated at 3–5%, and there is a high rate of spontaneous clearance (25–50%) in children. Higher rates have been reported for both vertical transmission (18%, 6–36% and 41%) and prevalence in children (15%).
In developed countries transmission around the time of birth is now the leading cause of HCV infection. In the absence of virus in the mother's blood, transmission seems to be rare. Factors associated with an increased rate of infection include membrane rupture more than 6 hours before delivery and procedures exposing the infant to maternal blood. Cesarean sections are not recommended. Breastfeeding is considered safe if the nipples are not damaged. Infection around the time of birth in one child does not increase the risk in a subsequent pregnancy. All genotypes appear to have the same risk of transmission.
HCV infection is frequently found in children who have previously been presumed to have non-A, non-B hepatitis and cryptogenic liver disease. The presentation in childhood may be asymptomatic or with elevated liver function tests. While infection is commonly asymptomatic both cirrhosis with liver failure and hepatocellular carcinoma may occur in childhood.
The rate of hepatitis C in immunosuppressed people is higher. This is particularly true in those with human immunodeficiency virus infection, recipients of organ transplants, and those with hypogammaglobulinemia. Infection in these people is associated with an unusually rapid progression to cirrhosis. People with stable HIV who have never received medication for HCV may be treated with a combination of peginterferon plus ribavirin, with caution regarding possible side effects.
As of 2011, there were about one hundred medications in development for hepatitis C. These include vaccines to treat hepatitis, immunomodulators, and cyclophilin inhibitors, among others. These potential new treatments have come about due to a better understanding of the hepatitis C virus. There are a number of vaccines under development and some have shown encouraging results.
The combination of sofosbuvir and velpatasvir in one trial (reported in 2015) resulted in cure rates of 99%. More studies are needed to investigate the role of preventive antiviral medication against HCV recurrence after transplantation.
One barrier to finding treatments for hepatitis C is the lack of a suitable animal model. Despite moderate success, research highlights the need for pre-clinical testing in mammalian systems such as the mouse, particularly for the development of vaccines in poorer communities. Chimpanzees remain the only available living system to study, yet their use raises ethical concerns and regulatory restrictions. While scientists have made use of human cell culture systems such as hepatocytes, questions have been raised about their accuracy in reflecting the body's response to infection.
One aspect of hepatitis research is to reproduce infections in mammalian models. One strategy is to introduce human liver tissue into mice, a technique known as xenotransplantation. This is done by generating chimeric mice and exposing the mice to HCV infection. This engineering process creates humanized mice and provides opportunities to study hepatitis C within the 3D architecture of the liver and to evaluate antiviral compounds. Alternatively, generating inbred mice with susceptibility to HCV would simplify the process of studying mouse models.
- "Q&A for Health Professionals". Viral Hepatitis. Centers for Disease Control and Prevention. Retrieved 28 September 2020.
- Ryan KJ, Ray CG, eds. (2004). Sherris Medical Microbiology (4th ed.). McGraw Hill. pp. 551–52. ISBN 978-0-8385-8529-0.
- Maheshwari A, Thuluvath PJ (February 2010). "Management of acute hepatitis C". Clinics in Liver Disease. 14 (1): 169–76, x. doi:10.1016/j.cld.2009.11.007. PMID 20123448.
- "Hepatitis C Fact sheet N°164". WHO. July 2015. Archived from the original on 31 January 2016. Retrieved 4 February 2016.
- Rosen HR (June 2011). "Clinical practice. Chronic hepatitis C infection". The New England Journal of Medicine. 364 (25): 2429–38. doi:10.1056/NEJMcp1006613. PMID 21696309. S2CID 19755395.
- "Hepatitis C". World Health Organization. 9 July 2019. Archived from the original on 2020-05-26. Retrieved 2020-05-26.
- "Hepatitis MedlinePlus". U.S. National Library of Medicine. Retrieved 2020-06-19.
- "Viral Hepatitis: A through E and Beyond". National Institute of Diabetes and Digestive and Kidney Diseases. April 2012. Archived from the original on 2 February 2016. Retrieved 4 February 2016.
- Owens DK, Davidson KW, Krist AH, Barry MJ, Cabana M, Caughey AB, et al. (March 2020). "Screening for Hepatitis C Virus Infection in Adolescents and Adults: US Preventive Services Task Force Recommendation Statement". JAMA. 323 (10): 970. doi:10.1001/jama.2020.1123. PMID 32119076.
- Webster DP, Klenerman P, Dusheiko GM (March 2015). "Hepatitis C". Lancet. 385 (9973): 1124–35. doi:10.1016/S0140-6736(14)62401-6. PMC 4878852. PMID 25687730.
- Zelenev A, Li J, Mazhnaya A, Basu S, Altice FL (February 2018). "Hepatitis C virus treatment as prevention in an extended network of people who inject drugs in the USA: a modelling study". The Lancet. Infectious Diseases. 18 (2): 215–224. doi:10.1016/S1473-3099(17)30676-X. PMC 5860640. PMID 29153265.
- Kim A (September 2016). "Hepatitis C Virus". Annals of Internal Medicine (Review). 165 (5): ITC33–ITC48. doi:10.7326/AITC201609060. PMID 27595226. S2CID 95756.
- Global Hepatitis Report 2017 (Report). World Health Organisation. 2017. Retrieved 5 December 2020.
- Graham CS, Swan T (July 2015). "A path to eradication of hepatitis C in low- and middle-income countries". Antiviral Research. 119: 89–96. doi:10.1016/j.antiviral.2015.01.004. PMID 25615583.
- Wang, Haidong; Naghavi, Mohsen; Allen, Christine; Barber, Ryan M.; Bhutta, Zulfiqar A.; Carter, Austin; Casey, Daniel C.; Charlson, Fiona J.; Chen, Alan Zian; Coates, Matthew M.; Coggeshall, Megan; Dandona, Lalit; Dicker, Daniel J.; Erskine, Holly E.; Ferrari, Alize J.; Fitzmaurice, Christina; Foreman, Kyle; Forouzanfar, Mohammad H.; Fraser, Maya S.; Fullman, Nancy; Gething, Peter W.; Goldberg, Ellen M.; Graetz, Nicholas; Haagsma, Juanita A.; Hay, Simon I.; Huynh, Chantal; Johnson, Catherine O.; Kassebaum, Nicholas J.; Kinfu, Yohannes; et al. (October 2016). "Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980-2015: a systematic analysis for the Global Burden of Disease Study 2015". Lancet. 388 (10053): 1459–1544. doi:10.1016/S0140-6736(16)31012-1. PMC 5388903. PMID 27733281.
- Houghton M (November 2009). "The long and winding road leading to the identification of the hepatitis C virus". Journal of Hepatology. 51 (5): 939–48. doi:10.1016/j.jhep.2009.08.004. PMID 19781804.
- Shors T (2011). Understanding viruses (2nd ed.). Burlington, MA: Jones & Bartlett Learning. p. 535. ISBN 978-0-7637-8553-6. Archived from the original on 2016-05-15.
- Chronic Hepatitis C Virus Advances in Treatment, Promise for the Future. Springer Verlag. 2011. p. 14. ISBN 978-1-4614-1191-8. Archived from the original on 2016-06-17.
- Wilkins T, Malcolm JK, Raina D, Schade RR (June 2010). "Hepatitis C: diagnosis and treatment" (PDF). American Family Physician. 81 (11): 1351–7. PMID 20521755. Archived (PDF) from the original on 2013-05-21.
- Manka P, Verheyen J, Gerken G, Canbay A (April 2016). "Liver Failure due to Acute Viral Hepatitis (A-E)". Visceral Medicine. 32 (2): 80–5. doi:10.1159/000444915. PMC 4926881. PMID 27413724.
- Nelson PK, Mathers BM, Cowie B, Hagan H, Des Jarlais D, Horyniak D, Degenhardt L (August 2011). "Global epidemiology of hepatitis B and hepatitis C in people who inject drugs: results of systematic reviews". Lancet. 378 (9791): 571–83. doi:10.1016/S0140-6736(11)61097-0. PMC 3285467. PMID 21802134.
- Kanwal F, Bacon BR (2011). "Does Treatment Alter the Natural History of Chronic HCV?". In Schiffman ML (ed.). Chronic Hepatitis C Virus Advances in Treatment, Promise for the Future. Springer Verlag. pp. 103–04. ISBN 978-1-4614-1191-8.
- Ray SC, Thomas DL (2009). "Chapter 154: Hepatitis C". In Mandell GL, Bennett, Dolin R (eds.). Mandell, Douglas, and Bennett's principles and practice of infectious diseases (7th ed.). Philadelphia, PA: Churchill Livingstone. ISBN 978-0-443-06839-3.
- Forton DM, Allsop JM, Cox IJ, Hamilton G, Wesnes K, Thomas HC, Taylor-Robinson SD (October 2005). "A review of cognitive impairment and cerebral metabolite abnormalities in patients with hepatitis C infection". AIDS. 19 Suppl 3 (Suppl 3): S53-63. doi:10.1097/01.aids.0000192071.72948.77. PMID 16251829.
- Nicot F (2004). "Chapter 19. Liver biopsy in modern medicine.". Occult hepatitis C virus infection: Where are we now?. ISBN 978-953-307-883-0.
- El-Zayadi AR (July 2008). "Hepatic steatosis: a benign disease or a silent killer". World Journal of Gastroenterology. 14 (26): 4120–6. doi:10.3748/wjg.14.4120. PMC 2725370. PMID 18636654.
- Paradis V, Bedossa P (December 2008). "Definition and natural history of metabolic steatosis: histology and cellular aspects". Diabetes & Metabolism. 34 (6 Pt 2): 638–42. doi:10.1016/S1262-3636(08)74598-1. PMID 19195624.
- Alter MJ (May 2007). "Epidemiology of hepatitis C virus infection". World Journal of Gastroenterology. 13 (17): 2436–41. doi:10.3748/wjg.v13.i17.2436. PMC 4146761. PMID 17552026.
- Mueller S, Millonig G, Seitz HK (July 2009). "Alcoholic liver disease and hepatitis C: a frequently underestimated combination". World Journal of Gastroenterology. 15 (28): 3462–71. doi:10.3748/wjg.15.3462. PMC 2715970. PMID 19630099. Retrieved 10 July 2020.
- Fattovich G, Stroffolini T, Zagni I, Donato F (November 2004). "Hepatocellular carcinoma in cirrhosis: incidence and risk factors". Gastroenterology. 127 (5 Suppl 1): S35-50. doi:10.1053/j.gastro.2004.09.014. PMID 15508101.
- Ozaras R, Tahan V (April 2009). "Acute hepatitis C: prevention and treatment". Expert Review of Anti-Infective Therapy. 7 (3): 351–61. doi:10.1586/eri.09.8. PMID 19344247. S2CID 25574917.
- Zaltron S, Spinetti A, Biasi L, Baiguera C, Castelli F (2012). "Chronic HCV infection: epidemiological and clinical relevance". BMC Infectious Diseases. 12 Suppl 2: S2. doi:10.1186/1471-2334-12-S2-S2. PMC 3495628. PMID 23173556.
- Dammacco F, Sansonno D (September 2013). "Therapy for hepatitis C virus-related cryoglobulinemic vasculitis". The New England Journal of Medicine. 369 (11): 1035–45. doi:10.1056/NEJMra1208642. PMID 24024840. S2CID 205116488.
- Iannuzzella F, Vaglio A, Garini G (May 2010). "Management of hepatitis C virus-related mixed cryoglobulinemia". The American Journal of Medicine. 123 (5): 400–8. doi:10.1016/j.amjmed.2009.09.038. PMID 20399313.
- Zignego AL, Ferri C, Pileri SA, Caini P, Bianchi FB (January 2007). "Extrahepatic manifestations of Hepatitis C Virus infection: a general overview and guidelines for a clinical approach". Digestive and Liver Disease. 39 (1): 2–17. doi:10.1016/j.dld.2006.06.008. PMID 16884964.
- Ko HM, Hernandez-Prera JC, Zhu H, Dikman SH, Sidhu HK, Ward SC, Thung SN (2012). "Morphologic features of extrahepatic manifestations of hepatitis C virus infection". Clinical & Developmental Immunology. 2012: 740138. doi:10.1155/2012/740138. PMC 3420144. PMID 22919404.
- Dammacco F, Sansonno D, Piccoli C, Racanelli V, D'Amore FP, Lauletta G (2000). "The lymphoid system in hepatitis C virus infection: autoimmunity, mixed cryoglobulinemia, and Overt B-cell malignancy". Seminars in Liver Disease. 20 (2): 143–57. doi:10.1055/s-2000-9613. PMID 10946420.
- Lee MR, Shumack S (November 2005). "Prurigo nodularis: a review". The Australasian Journal of Dermatology. 46 (4): 211–18, quiz 219–20. doi:10.1111/j.1440-0960.2005.00187.x. PMID 16197418. S2CID 30087432.
- Matsumori A (2006). Role of hepatitis C virus in cardiomyopathies. Ernst Schering Research Foundation Workshop. 55. pp. 99–120. doi:10.1007/3-540-30822-9_7. ISBN 978-3-540-23971-0. PMID 16329660.
- Monaco S, Ferrari S, Gajofatto A, Zanusso G, Mariotto S (2012). "HCV-related nervous system disorders". Clinical & Developmental Immunology. 2012: 236148. doi:10.1155/2012/236148. PMC 3414089. PMID 22899946.
- Xu JH, Fu JJ, Wang XL, Zhu JY, Ye XH, Chen SD (July 2013). "Hepatitis B or C viral infection and risk of pancreatic cancer: a meta-analysis of observational studies". World Journal of Gastroenterology. 19 (26): 4234–41. doi:10.3748/wjg.v19.i26.4234. PMC 3710428. PMID 23864789.
- Lodi G, Porter SR, Scully C (July 1998). "Hepatitis C virus infection: Review and implications for the dentist". Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontics. 86 (1): 8–22. CiteSeerX 10.1.1.852.7880. doi:10.1016/S1079-2104(98)90143-3. PMID 9690239.
- Carrozzo M, Gandolfo S (2003-03-01). "Oral diseases possibly associated with hepatitis C virus". Critical Reviews in Oral Biology and Medicine. 14 (2): 115–27. doi:10.1177/154411130301400205. PMID 12764074.
- Little JW, Falace DA, Miller C, Rhodus NL (2013). Dental Management of the Medically Compromised Patient. p. 151. ISBN 978-0323080286.
- Sugden PB, Cameron B, Bull R, White PA, Lloyd AR (September 2012). "Occult infection with hepatitis C virus: friend or foe?". Immunology and Cell Biology. 90 (8): 763–73. doi:10.1038/icb.2012.20. PMID 22546735. S2CID 23845868.
- Carreño V (November 2006). "Occult hepatitis C virus infection: a new form of hepatitis C". World Journal of Gastroenterology. 12 (43): 6922–5. doi:10.3748/wjg.12.6922. PMC 4087333. PMID 17109511.
- Carreño García V, Nebreda JB, Aguilar IC, Quiroga Estévez JA (March 2011). "[Occult hepatitis C virus infection]". Enfermedades Infecciosas y Microbiologia Clinica. 29 Suppl 3: 14–9. doi:10.1016/S0213-005X(11)70022-2. PMID 21458706.
- Pham TN, Coffin CS, Michalak TI (April 2010). "Occult hepatitis C virus infection: what does it mean?". Liver International. 30 (4): 502–11. doi:10.1111/j.1478-3231.2009.02193.x. PMID 20070513. S2CID 205651069.
- Carreño V, Bartolomé J, Castillo I, Quiroga JA (June 2012). "New perspectives in occult hepatitis C virus infection". World Journal of Gastroenterology. 18 (23): 2887–94. doi:10.3748/wjg.v18.i23.2887. PMC 3380315. PMID 22736911.
- Carreño V, Bartolomé J, Castillo I, Quiroga JA (May–June 2008). "Occult hepatitis B virus and hepatitis C virus infections". Reviews in Medical Virology. 18 (3): 139–57. doi:10.1002/rmv.569. PMID 18265423. S2CID 12331754.
- Scott JD, Gretch DR (February 2007). "Molecular diagnostics of hepatitis C virus infection: a systematic review". JAMA. 297 (7): 724–32. doi:10.1001/jama.297.7.724. PMID 17312292.
- Robinson, JL (July 2008). "Vertical transmission of the hepatitis C virus: Current knowledge and issues". Paediatrics & Child Health. 13 (6): 529–41. doi:10.1093/pch/13.6.529. PMC 2532905. PMID 19436425.
- Nakano T, Lau GM, Lau GM, Sugiyama M, Mizokami M (February 2012). "An updated analysis of hepatitis C virus genotypes and subtypes based on the complete coding region". Liver International. 32 (2): 339–45. doi:10.1111/j.1478-3231.2011.02684.x. PMID 22142261. S2CID 23271017.
- Lerat H, Hollinger FB (January 2004). "Hepatitis C virus (HCV) occult infection or occult HCV RNA detection?". The Journal of Infectious Diseases. 189 (1): 3–6. doi:10.1086/380203. PMID 14702146.
- Pockros P (2011). Novel and Combination Therapies for Hepatitis C Virus, An Issue of Clinics in Liver Disease. p. 47. ISBN 978-1-4557-7198-1. Archived from the original on 2016-05-21.
- Zignego AL, Giannini C, Gragnani L, Piluso A, Fognani E (August 2012). "Hepatitis C virus infection in the immunocompromised host: a complex scenario with variable clinical impact". Journal of Translational Medicine. 10 (1): 158. doi:10.1186/1479-5876-10-158. PMC 3441205. PMID 22863056.
- Hagan LM, Schinazi RF (February 2013). "Best strategies for global HCV eradication". Liver International. 33 Suppl 1 (s1): 68–79. doi:10.1111/liv.12063. PMC 4110680. PMID 23286849.
- Pondé RA (February 2011). "Hidden hazards of HCV transmission". Medical Microbiology and Immunology. 200 (1): 7–11. doi:10.1007/s00430-010-0159-9. PMID 20461405. S2CID 664199.
- Xia X, Luo J, Bai J, Yu R (October 2008). "Epidemiology of hepatitis C virus infection among injection drug users in China: systematic review and meta-analysis". Public Health. 122 (10): 990–1003. doi:10.1016/j.puhe.2008.01.014. PMID 18486955.
- Imperial JC (June 2010). "Chronic hepatitis C in the state prison system: insights into the problems and possible solutions". Expert Review of Gastroenterology & Hepatology. 4 (3): 355–64. doi:10.1586/egh.10.26. PMID 20528122. S2CID 7931472.
- Vescio MF, Longo B, Babudieri S, Starnini G, Carbonara S, Rezza G, Monarca R (April 2008). "Correlates of hepatitis C virus seropositivity in prison inmates: a meta-analysis". Journal of Epidemiology and Community Health. 62 (4): 305–13. doi:10.1136/jech.2006.051599. PMID 18339822. S2CID 206989111.
- Moyer VA (September 2013). "Screening for hepatitis C virus infection in adults: U.S. Preventive Services Task Force recommendation statement". Annals of Internal Medicine. 159 (5): 349–57. doi:10.7326/0003-4819-159-5-201309030-00672. PMID 23798026. S2CID 8563203.
- Marx J (2010). Rosen's emergency medicine: concepts and clinical practice (7th ed.). Philadelphia, PA: Mosby/Elsevier. p. 1154. ISBN 978-0-323-05472-0.
- Day RA, Paul P, Williams B (2009). Brunner & Suddarth's textbook of Canadian medical-surgical nursing (Canadian 2nd ed.). Philadelphia, PA: Lippincott Williams & Wilkins. p. 1237. ISBN 978-0-7817-9989-8. Archived from the original on 2016-04-25.
- "Hepatitis C prevalence in Egypt drops from 7% to 2% thanks to Sisi's initiative". EgyptToday. 2021-02-06. Retrieved 2021-03-06.
- "Highest Rates of Hepatitis C Virus Transmission Found in Egypt". Al Bawaaba. 2010-08-09. Archived from the original on 2012-05-15. Retrieved 2010-08-27.
- Tohme RA, Holmberg SD (October 2010). "Is sexual contact a major mode of hepatitis C virus transmission?". Hepatology. 52 (4): 1497–505. doi:10.1002/hep.23808. PMID 20635398. S2CID 5592006.
- "Hepatitis C Group Education Class". United States Department of Veteran Affairs. Archived from the original on 2011-11-09. Retrieved 2011-11-20.
- Jafari S, Copes R, Baharlou S, Etminan M, Buxton J (November 2010). "Tattooing and the risk of transmission of hepatitis C: a systematic review and meta-analysis" (PDF). International Journal of Infectious Diseases. 14 (11): e928-40. doi:10.1016/j.ijid.2010.03.019. PMID 20678951. Archived from the original (PDF) on 2012-04-26. Retrieved 2012-01-02.
- "Hepatitis C" (PDF). Centers for Disease Control and Prevention (CDC). Archived (PDF) from the original on 5 January 2012. Retrieved 2 January 2012.
- Lock G, Dirscherl M, Obermeier F, Gelbmann CM, Hellerbrand C, Knöll A, et al. (September 2006). "Hepatitis C - contamination of toothbrushes: myth or reality?". Journal of Viral Hepatitis. 13 (9): 571–3. doi:10.1111/j.1365-2893.2006.00735.x. PMID 16907842. S2CID 24264376.
- "Hepatitis C FAQs for Health Professionals". Centers for Disease Control and Prevention (CDC). Archived from the original on 4 January 2012. Retrieved 2 January 2012.
- Wong T, Lee SS (February 2006). "Hepatitis C: a review for primary care physicians". CMAJ. 174 (5): 649–59. doi:10.1503/cmaj.1030034. PMC 1389829. PMID 16505462.
- Lam NC, Gotsch PB, Langan RC (November 2010). "Caring for pregnant women and newborns with hepatitis B or C" (PDF). American Family Physician. 82 (10): 1225–9. PMID 21121533. Archived (PDF) from the original on 2013-05-21.
- Mast EE (2004). "Mother-to-infant hepatitis C virus transmission and breastfeeding". Protecting Infants through Human Milk. Advances in Experimental Medicine and Biology. 554. pp. 211–16. doi:10.1007/978-1-4757-4242-8_18. ISBN 978-1-4419-3461-1. PMID 15384578.
- Westbrook RH, Dusheiko G (November 2014). "Natural history of hepatitis C". Journal of Hepatology. 61 (1 Suppl): S58-68. doi:10.1016/j.jhep.2014.07.012. PMID 25443346.
- Patel K, Muir AJ, McHutchison JG (April 2006). "Diagnosis and treatment of chronic hepatitis C infection". BMJ. 332 (7548): 1013–7. doi:10.1136/bmj.332.7548.1013. PMC 1450048. PMID 16644828.
- Shivkumar S, Peeling R, Jafari Y, Joseph L, Pant Pai N (October 2012). "Accuracy of rapid and point-of-care screening tests for hepatitis C: a systematic review and meta-analysis". Annals of Internal Medicine. 157 (8): 558–66. doi:10.7326/0003-4819-157-8-201210160-00006. PMID 23070489. S2CID 5650682.
- Senadhi V (July 2011). "A paradigm shift in the outpatient approach to liver function tests". Southern Medical Journal. 104 (7): 521–5. doi:10.1097/SMJ.0b013e31821e8ff5. PMID 21886053. S2CID 26462106.
- Smith BD, Morgan RL, Beckett GA, Falck-Ytter Y, Holtzman D, Teo CG, et al. (August 2012). "Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945-1965" (PDF). MMWR. Recommendations and Reports. 61 (RR-4): 1–32. PMID 22895429.
- "Testing Recommendations for Hepatitis C Virus Infection – HCV – Division of Viral Hepatitis". U.S. Centers for Disease Control and Prevention (CDC). 12 June 2019. Retrieved 11 January 2020.
- "People Born 1945–1965 (Baby Boomers) – Populations and Settings – Division of Viral Hepatitis". U.S. Centers for Disease Control and Prevention (CDC). 26 July 2019. Archived from the original on 22 October 2019. Retrieved 11 January 2020.
- "Final Update Summary: Hepatitis C: Screening". US Preventive Services Task Force. Retrieved 11 January 2020.
- Shah H, Bilodeau M, Burak KW, Cooper C, Klein M, Ramji A, et al. (June 2018). "The management of chronic hepatitis C: 2018 guideline update from the Canadian Association for the Study of the Liver". CMAJ. 190 (22): E677–E687. doi:10.1503/cmaj.170453. PMC 5988519. PMID 29866893.
- Abdelwahab KS, Ahmed Said ZN (January 2016). "Status of hepatitis C virus vaccination: Recent update". World Journal of Gastroenterology. 22 (2): 862–73. doi:10.3748/wjg.v22.i2.862. PMC 4716084. PMID 26811632.
- Hagan H, Pouget ER, Des Jarlais DC (July 2011). "A systematic review and meta-analysis of interventions to prevent hepatitis C virus infection in people who inject drugs". The Journal of Infectious Diseases. 204 (1): 74–83. doi:10.1093/infdis/jir196. PMC 3105033. PMID 21628661.
- AASLD/IDSA HCV Guidance Panel (September 2015). "Hepatitis C guidance: AASLD-IDSA recommendations for testing, managing, and treating adults infected with hepatitis C virus". Hepatology. 62 (3): 932–54. doi:10.1002/hep.27950. PMID 26111063.
- "HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C" (PDF). 12 April 2017. Archived from the original (PDF) on 2017-07-10. Retrieved 28 July 2017.
- Jakobsen JC, Nielsen EE, Feinberg J, Katakam KK, Fobian K, Hauser G, et al. (September 2017). "Direct-acting antivirals for chronic hepatitis C". The Cochrane Database of Systematic Reviews. 9: CD012143. doi:10.1002/14651858.CD012143.pub3. PMC 6484376. PMID 28922704.
- "Treatment: Naive Genotype 1a Without Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 1a With Compensated Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 1b Without Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 1b With Compensated Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment; Naive Genotype 2 Without Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment – Naive Genotype 2 With Compensated Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 3 Without Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 3 With Compensated Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 4 Without Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 4 With Compensated Cirrhosis". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Treatment: Naive Genotype 5 or 6". HCV Guidance: Recommendations for Testing, Managing, and Treating Hepatitis C. www.hcvguidelines.org. American Association for the Study of Liver Diseases. Retrieved 26 April 2017.
- "Hepatitis C Questions and Answers for Health Professionals". www.cdc.gov. 2 July 2019. Retrieved 23 July 2019.
- "FDA approves Vosevi for Hepatitis C". U.S. Food and Drug Administration (FDA) (Press release). 18 July 2017. Archived from the original on 23 July 2017. Retrieved 27 July 2017.
- Liang TJ, Ghany MG (May 2013). "Current and future therapies for hepatitis C virus infection". The New England Journal of Medicine. 368 (20): 1907–17. doi:10.1056/NEJMra1213651. PMC 3893124. PMID 23675659.
- Mücke MM, Backus LI, Mücke VT, Coppola N, Preda CM, Yeh ML, et al. (March 2018). "Hepatitis B virus reactivation during direct-acting antiviral therapy for hepatitis C: a systematic review and meta-analysis". The Lancet. Gastroenterology & Hepatology. 3 (3): 172–180. doi:10.1016/S2468-1253(18)30002-5. PMID 29371017.
- Sanders M (2011). Mosby's Paramedic Textbook. Jones & Bartlett Publishers. p. 839. ISBN 978-0-323-07275-5. Archived from the original on 2016-05-11.
- Ciria R, Pleguezuelo M, Khorsandi SE, Davila D, Suddle A, Vilca-Melendez H, et al. (May 2013). "Strategies to reduce hepatitis C virus recurrence after liver transplantation". World Journal of Hepatology. 5 (5): 237–50. doi:10.4254/wjh.v5.i5.237. PMC 3664282. PMID 23717735.
- Coilly A, Roche B, Samuel D (February 2013). "Current management and perspectives for HCV recurrence after liver transplantation". Liver International. 33 Suppl 1: 56–62. doi:10.1111/liv.12062. PMID 23286847. S2CID 23601091.
- Gurusamy KS, Tsochatzis E, Toon CD, Xirouchakis E, Burroughs AK, Davidson BR (December 2013). "Antiviral interventions for liver transplant patients with recurrent graft infection due to hepatitis C virus". The Cochrane Database of Systematic Reviews (12): CD006803. doi:10.1002/14651858.CD006803.pub4. PMID 24307460.
- Hepatitis C and CAM: What the Science Says Archived 2011-03-20 at the Wayback Machine. National Center for Complementary and Alternative Medicine (NCCAM). March 2011. (Retrieved 7 March 2011)
- Liu J, Manheimer E, Tsutani K, Gluud C (March 2003). "Medicinal herbs for hepatitis C virus infection: a Cochrane hepatobiliary systematic review of randomized trials". The American Journal of Gastroenterology. 98 (3): 538–44. PMID 12650784.
- Rambaldi A, Jacobs BP, Gluud C (October 2007). "Milk thistle for alcoholic and/or hepatitis B or C virus liver diseases". The Cochrane Database of Systematic Reviews (4): CD003620. doi:10.1002/14651858.CD003620.pub3. PMID 17943794.
- Helms RA, Quan DJ, eds. (2006). Textbook of Therapeutics: Drug and Disease Management (8th ed.). Philadelphia, PA [u.a.]: Lippincott Williams & Wilkins. p. 1340. ISBN 978-0-7817-5734-8. Archived from the original on 5 December 2015. Retrieved 7 November 2014.
- Morgan RL, Baack B, Smith BD, Yartel A, Pitasi M, Falck-Ytter Y (March 2013). "Eradication of hepatitis C virus infection and the development of hepatocellular carcinoma: a meta-analysis of observational studies". Annals of Internal Medicine. 158 (5 Pt 1): 329–37. doi:10.7326/0003-4819-158-5-201303050-00005. PMID 23460056.
- Fung J, Lai CL, Hung I, Young J, Cheng C, Wong D, Yuen MF (September 2008). "Chronic hepatitis C virus genotype 6 infection: response to pegylated interferon and ribavirin". The Journal of Infectious Diseases. 198 (6): 808–12. doi:10.1086/591252. PMID 18657036.
- Lozano R, Naghavi M, Foreman K, Lim S, Shibuya K, Aboyans V, et al. (December 2012). "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010". Lancet. 380 (9859): 2095–128. doi:10.1016/S0140-6736(12)61728-0. hdl:10536/DRO/DU:30050819. PMID 23245604. S2CID 1541253.
- Mohd Hanafiah K, Groeger J, Flaxman AD, Wiersma ST (April 2013). "Global epidemiology of hepatitis C virus infection: new estimates of age-specific antibody to HCV seroprevalence". Hepatology. 57 (4): 1333–42. doi:10.1002/hep.26141. PMID 23172780. S2CID 16265266.
- Yu ML, Chuang WL (March 2009). "Treatment of chronic hepatitis C in Asia: when East meets West". Journal of Gastroenterology and Hepatology. 24 (3): 336–45. doi:10.1111/j.1440-1746.2009.05789.x. PMID 19335784. S2CID 27333980.
- "U.S. 2014 Surveillance Data for Viral Hepatitis, Statistics & Surveillance, Division of Viral Hepatitis". CDC. Archived from the original on 2016-08-08. Retrieved 2016-08-04.
- Table 4.5. "Number and rate of deaths with hepatitis C listed as a cause of death, by demographic characteristic and year – United States, 2004–2008". Viral Hepatitis on the CDC web site. Centers for Disease Control and Prevention, Atlanta, GA. Archived from the original on 9 March 2014. Retrieved 28 July 2013.
- "Hepatitis Death Rate Creeps past AIDS". New York Times. 27 February 2012. Archived from the original on 30 June 2017. Retrieved 28 July 2013.
- "Hepatitis C Kills More Americans than Any Other Infectious Disease". Centers for Disease Control and Prevention. May 4, 2016. Archived from the original on 9 August 2016. Retrieved 3 August 2016.
- Blatt LM, Tong M (2004). Colacino JM, Heinz BA (eds.). Hepatitis prevention and treatment. Basel: Birkhäuser. p. 32. ISBN 978-3-7643-5956-0. Archived from the original on 2016-06-24.
- Blachier M, Leleu H, Peck-Radosavljevic M, Valla DC, Roudot-Thoraval F (March 2013). "The burden of liver disease in Europe: a review of available epidemiological data". Journal of Hepatology. 58 (3): 593–608. doi:10.1016/j.jhep.2012.12.005. PMID 23419824.
- "Commissioning supplement: Health inequalities tell a tale of data neglect". Health Service Journal. 19 March 2015. Archived from the original on 28 July 2015. Retrieved 30 April 2015.
- "More than half of patients using needle exchange pilot tested positive for Hepatitis C". Pharmaceutical Journal. 17 May 2018. Retrieved 15 August 2018.
- "Legal action firm among winners for largest medicines procurement". Health Service Journal. 30 April 2019. Retrieved 9 June 2019.
- Holmberg S (2011-05-12). Brunette GW, Kozarsky PE, Magill AJ, Shlim DR, Whatley AD (eds.). CDC Health Information for International Travel 2012. New York: Oxford University Press. p. 231. ISBN 978-0-19-976901-8.
- "Hepatitis C". World Health Organization (WHO). June 2011. Archived from the original on 2011-07-12. Retrieved 2011-07-13.
- Lombardi A, Mondelli MU (March 2019). "Hepatitis C: Is eradication possible?". Liver International. 39 (3): 416–426. doi:10.1111/liv.14011. PMID 30472772.
- Boyer JL (2001). Liver cirrhosis and its development: proceedings of the Falk Symposium 115. Springer. pp. 344. ISBN 978-0-7923-8760-2.
- Choo QL, Kuo G, Weiner AJ, Overby LR, Bradley DW, Houghton M (April 1989). "Isolation of a cDNA clone derived from a blood-borne non-A, non-B viral hepatitis genome" (PDF). Science. 244 (4902): 359–62. Bibcode:1989Sci...244..359C. CiteSeerX 10.1.1.469.3592. doi:10.1126/science.2523562. PMID 2523562.
- Kuo G, Choo QL, Alter HJ, Gitnick GL, Redeker AG, Purcell RH, et al. (April 1989). "An assay for circulating antibodies to a major etiologic virus of human non-A, non-B hepatitis". Science. 244 (4902): 362–4. Bibcode:1989Sci...244..362K. doi:10.1126/science.2496467. PMID 2496467.
- "2000 Winners Albert Lasker Award for Clinical Medical Research". Archived from the original on February 25, 2008. Retrieved 2006-04-21.CS1 maint: bot: original URL status unknown (link). Retrieved 20 February 2008.
- EP patent 0318216, Houghton M, Choo QL, Kuo G, "NANBV diagnostics", issued 1989-05-31, assigned to Chiron
- Wilken. "United States Court of Appeals for the Federal Circuit". United States Court of Appeals for the Federal Circuit. Archived from the original on 19 November 2009. Retrieved 11 January 2012.
- Gallagher J (2020-10-05). "Hepatitis C discovery wins the Nobel Prize". BBC News. Retrieved 2020-10-05.
- "The unsung heroes of the Nobel-winning hepatitis C discovery". nature.com. 19 October 2020. Retrieved 20 October 2020.
- Eurosurveillance editorial team (July 2011). "World Hepatitis Day 2011" (PDF). Euro Surveillance. 16 (30). PMID 21813077. Archived (PDF) from the original on 2011-11-25.
- Wong JB (2006). "Hepatitis C: cost of illness and considerations for the economic evaluation of antiviral therapies". PharmacoEconomics. 24 (7): 661–72. doi:10.2165/00019053-200624070-00005. PMID 16802842. S2CID 6713508.
- El Khoury AC, Klimack WK, Wallace C, Razavi H (March 2012). "Economic burden of hepatitis C-associated diseases in the United States". Journal of Viral Hepatitis. 19 (3): 153–60. doi:10.1111/j.1365-2893.2011.01563.x. PMID 22329369. S2CID 27409621.
- "Hepatitis C Prevention, Support and Research ProgramHealth Canada". Public Health Agency of Canada. Nov 2003. Archived from the original on 22 March 2011. Retrieved 10 January 2012.
- Thomas H, Lemon S, Zuckerman A, eds. (2008). Viral Hepatitis (3rd ed.). Oxford: John Wiley & Sons. p. 532. ISBN 978-1-4051-4388-2. Archived from the original on 2016-06-17.
- "Spanish Anesthetist Infected Patients". The Washington Post. 15 May 2007. Archived from the original on 22 August 2016. Retrieved 13 July 2016.
- "Spanish Hep C anaesthetist jailed". BBC. 15 May 2007. Archived from the original on 23 October 2007. Retrieved 13 July 2016.
- Arshad M, El-Kamary SS, Jhaveri R (April 2011). "Hepatitis C virus infection during pregnancy and the newborn period--are they opportunities for treatment?". Journal of Viral Hepatitis. 18 (4): 229–36. doi:10.1111/j.1365-2893.2010.01413.x. PMID 21392169. S2CID 35515919.
- Hunt CM, Carson KL, Sharara AI (May 1997). "Hepatitis C in pregnancy". Obstetrics and Gynecology. 89 (5 Pt 2): 883–90. doi:10.1016/S0029-7844(97)81434-2. PMID 9166361. S2CID 23182340.
- Thomas SL, Newell ML, Peckham CS, Ades AE, Hall AJ (February 1998). "A review of hepatitis C virus (HCV) vertical transmission: risks of transmission to infants born to mothers with and without HCV viraemia or human immunodeficiency virus infection". International Journal of Epidemiology. 27 (1): 108–17. doi:10.1093/ije/27.1.108. PMID 9563703.
- Fischler B (June 2007). "Hepatitis C virus infection". Seminars in Fetal & Neonatal Medicine. 12 (3): 168–73. CiteSeerX 10.1.1.852.7880. doi:10.1016/j.siny.2007.01.008. PMID 17320495.
- Indolfi G, Resti M (May 2009). "Perinatal transmission of hepatitis C virus infection". Journal of Medical Virology. 81 (5): 836–43. doi:10.1002/jmv.21437. PMID 19319981. S2CID 21207996.
- González-Peralta RP (November 1997). "Hepatitis C virus infection in pediatric patients". Clinics in Liver Disease. 1 (3): 691–705, ix. doi:10.1016/s1089-3261(05)70329-9. PMID 15560066.
- Suskind DL, Rosenthal P (February 2004). "Chronic viral hepatitis". Adolescent Medicine Clinics. 15 (1): 145–58, x–xi. doi:10.1016/j.admecli.2003.11.001. PMID 15272262.
- Iorio A, Marchesini E, Awad T, Gluud LL (January 2010). "Antiviral treatment for chronic hepatitis C in patients with human immunodeficiency virus". The Cochrane Database of Systematic Reviews (1): CD004888. doi:10.1002/14651858.CD004888.pub2. PMID 20091566.
- Ahn J, Flamm SL (August 2011). "Hepatitis C therapy: other players in the game". Clinics in Liver Disease. 15 (3): 641–56. doi:10.1016/j.cld.2011.05.008. PMID 21867942.
- Vermehren J, Sarrazin C (February 2011). "New HCV therapies on the horizon". Clinical Microbiology and Infection. 17 (2): 122–34. doi:10.1111/j.1469-0691.2010.03430.x. PMID 21087349.
- Feld JJ, Jacobson IM, Hézode C, Asselah T, Ruane PJ, Gruener N, et al. (December 2015). "Sofosbuvir and Velpatasvir for HCV Genotype 1, 2, 4, 5, and 6 Infection". The New England Journal of Medicine. 373 (27): 2599–607. doi:10.1056/NEJMoa1512610. hdl:10722/226358. PMID 26571066.
- Gurusamy KS, Tsochatzis E, Toon CD, Davidson BR, Burroughs AK (December 2013). "Antiviral prophylaxis for the prevention of chronic hepatitis C virus in patients undergoing liver transplantation". The Cochrane Database of Systematic Reviews (12): CD006573. doi:10.1002/14651858.CD006573.pub3. PMC 6599865. PMID 24297303.
- Sandmann L, Ploss A (January 2013). "Barriers of hepatitis C virus interspecies transmission". Virology. 435 (1): 70–80. doi:10.1016/j.virol.2012.09.044. PMC 3523278. PMID 23217617.
|Wikimedia Commons has media related to Hepatitis C.|
- Hepatitis C at Curlie
- "Recommendations for Testing, Managing, and Treating Hepatitis C". www.hcvguidelines.org. IDSA/AASLD. Retrieved 28 July 2017.
- "Hepatitis C". MedlinePlus. U.S. National Library of Medicine. | https://en.m.wikipedia.org/wiki/Hepatitis_C | 21 |
22 | Cholesterol plaques can be the cause of heart disease. Plaques begin in artery walls and grow over years. The growth of cholesterol plaques slowly blocks blood flow in the arteries. Worse, a cholesterol plaque can rupture. The sudden blood clot that forms over the rupture then causes a heart attack or stroke.
Blocked arteries caused by plaque buildup and blood clots are the leading cause of death in the U.S. Reducing cholesterol and other risk factors can help prevent cholesterol plaques from forming. Occasionally, it can even reverse some plaque buildup.
Cholesterol Plaques and Atherosclerosis
Cholesterol plaques form by a process called atherosclerosis. It’s also called "hardening of the arteries." LDL, or "bad cholesterol," is the raw material of cholesterol plaques. It can damage the arteries that carry blood from your heart to the rest of your body. Then, once the damage has started, LDL keeps on building up in the artery walls. Progressive and painless, atherosclerosis grows cholesterol plaques silently and slowly.
The cholesterol plaques of atherosclerosis are the usual cause of heart attacks, strokes, and peripheral arterial disease. These conditions together are major contributors to cardiovascular disease. Cardiovascular disease is the No. 1 killer in America, causing about 650,000 deaths each year.
Understanding Cholesterol Plaque
Cholesterol plaques start developing in the walls of arteries. Long before they can be called plaques, hints of atherosclerosis can be found in the arteries. Even some teens have these "fatty streaks" of cholesterol in their artery walls. These streaks are early precursors of cholesterol plaques. They can't be easily spotted by tests. But researchers have found them during autopsies of young victims of accidents and violence.
Atherosclerosis develops over years. It happens through a complicated process that involves:
- Damaged endothelium. The smooth, delicate lining of blood vessels is called the endothelium. High cholesterol, smoking, high blood pressure, or diabetes can damage the endothelium, creating a place for cholesterol to enter the artery's wall.
- Cholesterol invasion. "Bad" cholesterol (LDL cholesterol) circulating in the blood crosses the damaged endothelium. LDL cholesterol starts to gather in the wall of the artery.
- Plaque formation. White blood cells stream in to digest the LDL cholesterol. Over years, the toxic mess of cholesterol and cells becomes a cholesterol plaque in the wall of the artery.
How Cholesterol Plaque Attacks
Once established, cholesterol plaques can behave in different ways.
- They can stay within the artery wall. The cholesterol plaque may stop growing or may grow into the wall, out of the path of blood.
- Plaques can grow in a slow, controlled way into the path of blood flow. Slow-growing cholesterol plaques may or may not ever cause any symptoms, even with severely blocked arteries.
- Cholesterol plaques can suddenly rupture, the worst-case scenario. This will allow blood to clot inside an artery. In the heart, this causes a heart attack. In the brain, it causes a stroke.
Cholesterol plaques from atherosclerosis cause the three main kinds of cardiovascular disease:
- Coronary artery disease. Stable cholesterol plaques in the heart's arteries can cause no symptoms or can cause chest pain called angina. Sudden cholesterol plaque rupture and clotting cause blocked arteries. When that happens, heart muscle dies. This is a heart attack, also called myocardial infarction.
- Cerebrovascular disease. Cholesterol plaque can rupture in one of the brain's arteries. This causes a stroke, leading to permanent brain damage. Blockages can also cause transient ischemic attacks, or TIAs. A TIA has symptoms like those of stroke. But they are temporary, and there is no permanent brain damage. But patients who have a TIA are at a much higher risk of a later stroke, so medical attention is essential.
- Peripheral arterial disease (PAD). Blocked arteries in the legs can cause pain when you walk and poor wound healing because of poor circulation. Severe disease may lead to amputations.
Preventing Cholesterol Plaques
Atherosclerosis and cholesterol plaques are progressive, meaning they get worse with time. They are also preventable. Nine risk factors are to blame for up to 90% of all heart attacks:
- Smoking
- High cholesterol
- High blood pressure
- Diabetes
- Abdominal obesity ("spare tire")
- Stress
- Not eating many fruits and vegetables
- Drinking too much alcohol: more than one drink per day for women or more than one or two drinks per day for men
- Not getting regular physical activity
You may notice that almost all of these have something in common: You can do something about them. Experts agree that reducing your risk factors leads to a lower risk of heart disease.
For people at higher risk from cholesterol plaques, taking a baby aspirin a day can be important. Aspirin helps prevent clots from forming. Ask your doctor before starting aspirin, as it can have side effects.
Shrinking Cholesterol Plaques
Once a cholesterol plaque is there, it's generally there to stay. With treatment, though, plaque buildup may slow or stop.
Some evidence shows that with strong treatment, cholesterol plaques can even shrink slightly. In one major study, cholesterol plaques shrank 10% in size after a 50% reduction in blood cholesterol levels.
The best way to treat cholesterol plaques is to keep them from forming or progressing. That can be done with lifestyle changes and, if needed, medication.
Drugs and Lifestyle Changes to Cut the Chance of Having Atherosclerosis
Reducing the risk factors that lead to atherosclerosis will slow or stop the process. Ways to reduce those risk factors include taking cholesterol and blood pressure medications, eating a healthy diet, getting frequent exercise, and not smoking. These treatments won't unclog arteries. But they do lower the risk of heart attacks and strokes.
Here is some advice that can help you improve your cholesterol level and reduce the risks that come with atherosclerosis:
- Exercise, with or without weight loss, increases "good" HDL cholesterol and reduces the risk of heart attacks and strokes.
- A diet high in fiber and low in fats can lower "bad" LDL cholesterol.
- Oily fish and other foods high in omega-3 fatty acids can raise “good” HDL cholesterol.
- If you know or think your cholesterol is high, or if you have a family history of high cholesterol, talk to your doctor about ways you can lower it.
Certain drugs can lower cholesterol levels.
Statins are the most frequently prescribed cholesterol-lowering drugs. They can dramatically lower "bad" LDL cholesterol, by 60% or more. They can also increase HDL. Studies have shown that statins can reduce the rates of heart attacks, strokes, and death from atherosclerosis.
Taking a statin for a year or longer can even slightly shrink plaques that cause atherosclerosis. This reversal of atherosclerosis surprised many experts who believed it couldn’t be done.
Completely reversing it isn't possible yet. But taking a statin can reduce the risk of complications from atherosclerosis. It fights inflammation, which stabilizes the plaque. For this reason, statins are often key to treating atherosclerosis.
To be effective, statins need to be part of a larger personalized strategy that you and your doctor work out together. Among other things, that strategy will be based on your level of risk for heart attack and stroke as well as your own lifestyle choices.
- Fibrates are drugs that reduce triglyceride levels. Fibrates also slightly increase HDL. There are two fibrates used in the U.S.:
- Gemfibrozil (Lopid)
- Fenofibrate (Tricor)
Niacin
- Niacin (nicotinic acid) is a B vitamin that can raise "good" HDL cholesterol and help lower triglycerides.
Many people have uncomfortable skin flushing that prevents them from taking niacin. (Be wary of "no-flush" over-the-counter preparations; many lack the active form of niacin.) Niacin also can increase blood sugar levels. This is a problem especially for people with diabetes.
Because of its side effects, niacin is much less often prescribed than statins or fibrates.
Bile acid sequestrants
- Bile acid sequestrants bind to bile acids in the intestines. This leads to a lower bile acid level. You need bile, so when that happens, cholesterol must be used to make more. This lowers blood cholesterol levels. They include:
- Cholestyramine (Questran)
- Colestipol (Colestid)
- Colesevelam (Welchol)
Other drugs to lower cholesterol
Ezetimibe (Zetia). This drug works by reducing absorption of cholesterol in the intestines. It can lower LDL levels. But it doesn’t work as well as statins. This drug is usually used in addition to a statin to further lower bad cholesterol. There is no evidence that it reduces the risk of heart attacks or strokes.
Plant sterols. These are taken as supplements in pill form or in foods like margarine. Getting plant sterols every day can reduce cholesterol modestly, about 10%.
Epanova, Lovaza, Omtryg, and Vascepa. These prescription drugs have omega-3s and can be used with diet to lower high levels of triglycerides.
Alirocumab (Praluent) and evolocumab (Repatha). These are included in a new class of drugs called proprotein convertase subtilisin kexin type 9 (PCSK9) inhibitors. They are for use by people who can’t control their cholesterol through diet and statin treatments. For those with established cardiovascular disease, evolocumab has also proved to be effective in significantly reducing the risk of heart attacks and strokes.
Drugs to reduce high blood pressure
Lowering blood pressure lowers the risk of atherosclerosis and its complications. Diet and exercise alone don't usually bring high blood pressure down to the safe range. Most people with high blood pressure will need medications (usually at least two) to do the job.
There are many classes of high blood pressure drugs that work in a variety of ways. The choice of medicine isn't as important as the result: getting blood pressure down. Guidelines released in 2017 state that normal blood pressure should be less than 120/80. Blood pressure goals for people being treated for high blood pressure vary according to their other health concerns.
Drugs to reduce the risk of blood clots
Antiplatelets. These blood thinners make blood less likely to clot, which can help prevent heart attacks and strokes. But antiplatelets don't slow down or reverse atherosclerosis.
Aspirin. Plain old aspirin is actually a powerful blood thinner. A baby aspirin a day can reduce the risk of first heart attacks and strokes by about 25%.
Ticagrelor (Brilinta). Ticagrelor is similar to clopidogrel. This drug is less effective if patients take more than 100 milligrams a day of aspirin. A "baby aspirin" has 81 milligrams of aspirin. An FDA "black box" warning tells doctors about the risk of using higher doses of aspirin along with ticagrelor.
Prasugrel (Effient). You take this medicine by mouth with or without food, usually once a day or as directed by your doctor. Your doctor may tell you to take it with a low dose of aspirin.
Warfarin (Coumadin). This powerful blood thinner is an anticoagulant. It is not generally used to treat atherosclerosis. Warfarin is used for other medical conditions that involve blood clots, such as atrial fibrillation and deep vein thrombosis. It has not been shown to be better than aspirin in preventing heart attacks.
The benefits of blood thinners come at the price of an increased risk of bleeding. For most people at risk from atherosclerosis, the benefits of antiplatelets outweigh the risks. Speak with your doctor before you start using aspirin or any other heart medication.
A daily dose of colchicine (0.5 or 0.6 mg) has proved effective in helping prevent atherosclerosis in some patients. Inflammation plays a pivotal role in coronary disease, and this medication, normally used for treating gout, has shown some success because of its anti-inflammatory properties.
There are no proven cures for atherosclerosis. But medication and lifestyle changes can reduce the risk of complications.
Procedures to Unclog Arteries
Using invasive procedures, doctors can see and unclog arteries, or provide a path for blood to go around blocked arteries. Treatments include:
- Angiography, angioplasty, and stenting. Using a catheter put into an artery in the leg or arm, doctors can enter diseased arteries. This procedure is called cardiac catheterization. Blocked arteries are visible on a live X-ray screen. A tiny balloon on the catheter can be inflated to compress cholesterol plaque in the blocked arteries. Placing small tubes called stents helps to keep open blocked arteries. The stent is usually made of metal and is permanent. Some stents have medicine that helps keep the artery from getting blocked again.
- Bypass surgery. Surgeons harvest a healthy blood vessel from the leg or chest. They use the healthy vessel to bypass blocked arteries.
These procedures involve a risk of complications. They are usually saved for people with significant symptoms or limits caused by the cholesterol plaques of atherosclerosis. | https://www.webmd.com/cholesterol-management/cholesterol-and-artery-plaque-buildup | 21 |
30 | Empathy is the capacity to understand or feel what another person is experiencing from within their frame of reference, that is, the capacity to place oneself in another's position. Definitions of empathy encompass a broad range of emotional states. Types of empathy include cognitive empathy, emotional (or affective) empathy, somatic, and spiritual empathy.
The English word empathy is derived from the Ancient Greek ἐμπάθεια (empatheia, meaning "physical affection or passion"). This, in turn, comes from ἐν (en, "in, at") and πάθος (pathos, "passion" or "suffering"). Hermann Lotze and Robert Vischer adapted the term to create the German Einfühlung ("feeling into"). Edward B. Titchener translated Einfühlung into English as "empathy" in 1909. In modern Greek: εμπάθεια may mean, depending on context, prejudice, malevolence, malice, or hatred.
Empathy definitions encompass a broad range of phenomena, including caring for other people and having a desire to help them; experiencing emotions that match another person's emotions; discerning what another person is thinking or feeling; and making less distinct the differences between the self and the other.
Having empathy can include understanding that many factors go into decision making and cognitive thought processes. Past experiences influence the decisions a person makes today. Understanding this allows a person to have empathy for individuals who sometimes make illogical decisions in situations where most people would see an obvious response. Broken homes, childhood trauma, lack of parenting, and many other factors can influence the connections in the brain that a person uses to make decisions in the future. According to Martin Hoffman, everyone is born with the capability of feeling empathy.
Since empathy involves understanding the emotional states of other people, the way it is characterized is derived from the way emotions themselves are characterized. If, for example, emotions are taken to be centrally characterized by bodily feelings, then grasping the bodily feelings of another will be central to empathy. On the other hand, if emotions are more centrally characterized by a combination of beliefs and desires, then grasping these beliefs and desires will be more essential to empathy. The ability to imagine oneself as another person is a sophisticated imaginative process. However, the basic capacity to recognize emotions is probably innate and may be achieved unconsciously. Yet it can be trained and achieved with various degrees of intensity or accuracy.
Empathy necessarily has a "more or less" quality. The paradigm case of an empathic interaction, however, involves a person communicating an accurate recognition of the significance of another person's ongoing intentional actions, associated emotional states, and personal characteristics in a manner that the recognized person can tolerate. Recognitions that are both accurate and tolerable are central features of empathy.
The human capacity to recognize the bodily feelings of another is related to one's imitative capacities, and seems to be grounded in an innate capacity to associate the bodily movements and facial expressions one sees in another with the proprioceptive feelings of producing those corresponding movements or expressions oneself. Humans seem to make the same immediate connection between the tone of voice and other vocal expressions and inner feeling.
Compassion and sympathy are terms associated with empathy. Definitions vary, contributing to the challenge of defining empathy. Compassion is often defined as an emotion people feel when others are in need, which motivates people to help them. Sympathy is a feeling of care and understanding for someone in need. Some include in sympathy an empathic concern, a feeling of concern for another, in which some scholars include the wish to see them better off or happier.
Empathy is distinct also from pity and emotional contagion. Pity is a feeling that one feels towards others that might be in trouble or in need of help as they cannot fix their problems themselves, often described as "feeling sorry" for someone. Emotional contagion is when a person (especially an infant or a member of a mob) imitatively "catches" the emotions that others are showing without necessarily recognizing this is happening.
Empathy is generally divided into two major components:
Affective empathy, also called emotional empathy: the capacity to respond with an appropriate emotion to another's mental states. Our ability to empathize emotionally is based on emotional contagion: being affected by another's emotional or arousal state.
- Empathic concern: sympathy and compassion for others in response to their suffering.
- Personal distress: self-centered feelings of discomfort and anxiety in response to another's suffering. There is no consensus regarding whether personal distress is a basic form of empathy or instead does not constitute empathy. There may be a developmental aspect to this subdivision. Infants respond to the distress of others by getting distressed themselves; only when they are 2 years old do they start to respond in other-oriented ways, trying to help, comfort and share.
Cognitive empathy: the capacity to understand another's perspective or mental state. The terms social cognition, perspective-taking, theory of mind, and mentalizing are often used synonymously, but due to a lack of studies comparing theory of mind with types of empathy, it is unclear whether these are equivalent.
Although science has not yet agreed upon a precise definition of these constructs, there is consensus about this distinction. Affective and cognitive empathy are also independent from one another; someone who strongly empathizes emotionally is not necessarily good in understanding another's perspective.
- Perspective-taking: the tendency to spontaneously adopt others' psychological perspectives.
- Fantasy: the tendency to identify with fictional characters.
- Tactical (or "strategic") empathy: the deliberate use of perspective-taking to achieve certain desired ends.
Although measures of cognitive empathy include self-report questionnaires and behavioral measures, a 2019 meta-analysis found only a negligible association between self-report and behavioral measures, suggesting that people are generally not able to accurately assess their own cognitive empathy abilities.
Evolution across species
An increasing number of studies in animal behavior and neuroscience indicate that empathy is not restricted to humans, and is in fact as old as the mammals, or perhaps older. Examples include dolphins saving humans from drowning or from shark attacks. Professor Tom White suggests that reports of cetaceans having three times as many spindle cells—the nerve cells that convey empathy—in their brains as we do might mean these highly-social animals have a great awareness of one another's feelings.
A multitude of behaviors has been observed in primates, both in captivity and in the wild, and in particular in bonobos, which are reported as the most empathetic of all the primates. A recent study has demonstrated prosocial behavior elicited by empathy in rodents.
Rodents have been shown to demonstrate empathy for cagemates (but not strangers) in pain. One of the most widely read studies on the evolution of empathy, which discusses a neural perception-action mechanism (PAM), is the one by Stephanie Preston and de Waal. This review postulates a bottom-up model of empathy that ties together all levels, from state matching to perspective-taking. For University of Chicago neurobiologist Jean Decety, [empathy] is not specific to humans. He argues that there is strong evidence that empathy has deep evolutionary, biochemical, and neurological underpinnings, and that even the most advanced forms of empathy in humans are built on more basic forms and remain connected to core mechanisms associated with affective communication, social attachment, and parental care. Core neural circuits that are involved in empathy and caring include the brainstem, the amygdala, hypothalamus, basal ganglia, insula and orbitofrontal cortex.
Since all definitions of empathy involve an element of caring for others, any distinction between egoism and empathy fails, at least for beings lacking self-awareness. Because the first mammals lacked a self-aware distinction between self and other, as shown by most mammals failing mirror tests, they and anything more evolutionarily primitive cannot have had a context of default egoism that an empathy mechanism would need to transcend. Numerous examples in artificial intelligence research show that simple reactions can carry out de facto functions the agents have no concept of, so this does not contradict evolutionary explanations of parental care. Such mechanisms, however, would be unadapted to a self-other distinction: beings already dependent on behavior that benefits one another or their offspring could not later evolve a self-other distinction that required specialized mechanisms, which had not and could not have evolved beforehand, to retain empathic behavior. On this argument, a fundamental neurological distinction between egoism and empathy cannot exist in any species.
By the age of two years, children normally begin to display the fundamental behaviors of empathy by having an emotional response that corresponds with another person's emotional state. Even earlier, at one year of age, infants have some rudiments of empathy, in the sense that they understand that, just like their own actions, other people's actions have goals. Sometimes, toddlers will comfort others or show concern for them at as early an age as two. Also during the second year, toddlers will play games of falsehood or "pretend" in an effort to fool others, and this requires that the child know what others believe before he or she can manipulate those beliefs. In order to develop these traits, children need to be exposed to face-to-face interactions and social opportunities and steered away from a sedentary lifestyle.
According to researchers at the University of Chicago who used functional magnetic resonance imaging (fMRI), children between the ages of 7 and 12 years appear to be naturally inclined to feel empathy for others in pain. Their findings are consistent with previous fMRI studies of pain empathy with adults. The research also found additional aspects of the brain were activated when youngsters saw another person intentionally hurt by another individual, including regions involved in moral reasoning.
Despite being able to show some signs of empathy, including attempting to comfort a crying baby, from as early as 18 months to two years, most children do not show a fully fledged theory of mind until around the age of four. Theory of mind involves the ability to understand that other people may have beliefs that are different from one's own, and is thought to involve the cognitive component of empathy. Children usually become capable of passing "false belief" tasks, considered to be a test for a theory of mind, around the age of four. Individuals with autism often find using a theory of mind very difficult (e.g. the Sally–Anne test).
Empathetic maturity is a cognitive structural theory developed at the Yale University School of Nursing and addresses how adults conceive or understand the personhood of patients. The theory, first applied to nurses and since applied to other professions, postulates three levels that have the properties of cognitive structures. The third and highest level is held to be a meta-ethical theory of the moral structure of care. Those adults operating with level-III understanding synthesize systems of justice and care-based ethics.
Empathy in the broadest sense refers to a reaction of one individual to another's emotional state. Recent years have seen increased movement toward the idea that empathy occurs from motor neuron imitation. Empathy is not a single unipolar construct but rather a set of constructs. In essence, not every individual responds in the same way to the same circumstances. The Empathic Concern scale assesses "other-oriented" feelings of sympathy and concern, and the Personal Distress scale measures "self-oriented" feelings of personal anxiety and unease. The combination of these scales helps reveal those who might not be classified as empathetic and expands the narrow definition of empathy. Using this approach, we can enlarge the basis of what it means to possess empathetic qualities and create a multi-faceted definition.
Behavioral and neuroimaging research show that two underlying facets of the personality dimensions Extraversion and Agreeableness (the Warmth-Altruistic personality profile) are associated with empathic accuracy and increased brain activity in two brain regions important for empathic processing (medial prefrontal cortex and temporoparietal junction).
The literature commonly indicates that females tend to have more cognitive empathy than males. Reviews, meta-analysis and studies of physiological measures, behavioral tests, and brain neuroimaging, however, have revealed some mixed findings. Whereas some experimental and neuropsychological measures show no reliable sex effect, self-report data consistently indicates greater empathy in females. On average, female subjects score higher than males on the Empathy Quotient (EQ), while males tend to score higher on the Systemizing Quotient (SQ). Both males and females with autistic spectrum disorders usually score lower on the EQ and higher on SQ (see below for more detail on autism and empathy). However, a series of studies, using a variety of neurophysiological measures, including MEG, spinal reflex excitability, electroencephalography and N400 paradigm have documented the presence of an overall gender difference in the human mirror neuron system, with female participants tending to exhibit stronger motor resonance than male participants. In addition, these aforementioned studies found that female participants tended to score higher on empathy self-report dispositional measures and that these measures positively correlated with the physiological response. Other studies show no significant difference, and instead suggest that gender differences are the result of motivational differences.
A review published in the journal Neuropsychologia found that women tended to be better at recognizing facial affect, expression processing, and emotions in general. Men tended to be better only at recognizing specific behaviors, including anger, aggression, and threatening cues. A 2006 meta-analysis by researcher Rena A. Kirkland in the North American Journal of Psychology found small but significant sex differences favoring females in the "Reading of the mind" test. The "Reading of the mind" test is an advanced ability measure of cognitive empathy; Kirkland's analysis involved 259 studies across 10 countries. Another 2014 meta-analysis, in the journal Cognition and Emotion, found a small overall female advantage in non-verbal emotional recognition across 215 samples.
Using fMRI, neuroscientist Tania Singer showed that empathy-related neural responses tended to be significantly lower in males when observing an "unfair" person experiencing pain. An analysis in the journal Neuroscience & Biobehavioral Reviews also found that, overall, there are sex differences in empathy from birth, which grow larger with age and remain consistent and stable across the lifespan. Females, on average, were found to have higher empathy than males, while children with higher empathy, regardless of gender, continue to be higher in empathy throughout development. Further analysis using brain tools such as event-related potentials found that females who saw human suffering tended to have higher ERP waveforms than males. Another investigation using similar brain tools, such as N400 amplitudes, found, on average, higher N400 amplitudes in females in response to social situations, which positively correlated with self-reported empathy. Structural fMRI studies also found females to have larger grey matter volumes in posterior inferior frontal and anterior inferior parietal cortex areas, which are correlated with mirror neurons in the fMRI literature. Females also tended to have a stronger link between emotional and cognitive empathy. The researchers found that the stability of these sex differences in development is unlikely to be explained by environmental influences, and might instead have some roots in human evolution and inheritance. Throughout prehistory, females were the primary nurturers and caretakers of children, which might have led to an evolved neurological adaptation for women to be more aware of and responsive to non-verbal expressions. According to the Primary Caretaker Hypothesis, prehistoric males did not face the same selective pressure as primary caretakers, which might explain modern-day sex differences in emotion recognition and empathy.
Environmental influences on empathy are another topic of study. Many theorize that environmental factors, such as parenting style and relationships, play a significant role in the development of empathy in children. Empathy promotes prosocial relationships, helps mediate aggression, and allows individuals to relate to others, all of which make it an important emotion among children.
A study by Caroline Tisot looked at how a variety of environmental factors affected the development of empathy in young children. Parenting style, parental empathy, and prior social experiences were examined. The children participating in the study were asked to complete an affective empathy measure, while their parents completed the Parenting Practices Questionnaire, which assesses parenting style, and the Balanced Emotional Empathy Scale. This study found that a few parenting practices – as opposed to parenting style as a whole – contributed to the development of empathy in children. These practices include encouraging the child to imagine the perspectives of others and teaching the child to reflect on his or her own feelings. The results also show that the development of empathy varied based on the gender of the child and parent. Paternal warmth was found to be significantly important and was positively related to empathy in children, especially in boys, whereas maternal warmth was negatively related to empathy in children, especially in girls.
Some research has also found that empathy can be disrupted by trauma to the brain, such as a stroke. In most cases, empathy is impaired if a lesion or stroke occurs on the right side of the brain. In addition, damage to the frontal lobe, which is primarily responsible for emotional regulation, can profoundly impair a person's capacity to experience empathy toward another individual. People who have suffered an acquired brain injury also show lower levels of empathy according to previous studies; in fact, more than 50% of people who suffer a traumatic brain injury self-report a deficit in their empathic capacity. Linking this back to the early developmental stages of emotion, if emotional growth has been stunted at an early age due to various factors, empathy will struggle to establish itself in that individual's mind-set as a natural feeling, as they will struggle to come to terms with their own thoughts and emotions. This again suggests that understanding one's own emotions is key to being able to identify with another individual's emotional state.
Empathic anger and distress
Empathic anger is an emotion, a form of empathic distress, felt in situations where someone else is being hurt by another person or thing. Although there is no research on empathic anger's contribution to pro-social action, it seems likely that, since anger at threats to oneself "mobilizes energy and makes one capable of defending oneself with vigor", empathic anger may play a similar energizing role when another person is threatened.
Empathic anger has direct effects on both helping and punishing desires. Empathic anger can be divided into two sub-categories: trait empathic anger and state empathic anger.
The relationship between empathy and anger response towards another person has also been investigated, with two studies finding that the higher a person's perspective-taking ability, the less angry they were in response to a provocation. Empathic concern did not, however, significantly predict anger response, and higher personal distress was associated with increased anger.
Empathic distress is feeling the perceived pain of another person. This feeling can be transformed into empathic anger, feelings of injustice, or guilt. These emotions can be perceived as pro-social; however, views differ as to whether they serve as motives for moral behavior.
Influence on helping behavior
Emotions motivate individual behavior that aids in solving communal challenges as well as guiding group decisions about social exchange. Additionally, recent research has shown that individuals who report regular experiences of gratitude engage more frequently in prosocial behaviors. Positive emotions like empathy or gratitude are linked to a more positive continual state, and people experiencing them are far more likely to help others than those not in a positive emotional state. Thus, empathy's influence extends beyond relating to others' emotions; it correlates with an increased positive state and likelihood of aiding others. Likewise, research has shown that people with high levels of empathy are also more likely than average to assume that others will comply with a request for help. Measures of empathy show that mirror neurons are activated during arousal of sympathetic responses, and prolonged activation shows an increased probability of helping others.
Research investigating the social response to natural disasters looked at the characteristics associated with individuals who help victims. Researchers found that cognitive empathy, rather than emotional empathy, predicted helping behavior towards victims. Others have posited that taking on the perspectives of others (cognitive empathy) allows these individuals to better empathize with victims without as much discomfort, whereas sharing the emotions of the victims (emotional empathy) can cause emotional distress, helplessness, victim-blaming, and ultimately can lead to avoidance rather than helping.
Despite this evidence for empathy-induced altruistic motivation, egoistic explanations may still be possible. For example, one alternative explanation for the problem-specific helping pattern may be that the sequence of events in the same-problem condition first made subjects sad when they empathized with the problem and then maintained or enhanced their sadness when they were later exposed to the same plight. Consequently, the negative-state relief model would predict substantial helping among imagine-set subjects in that condition, which is what occurred. An intriguing question arising from such findings is whether it is possible to have mixed motivations for helping. If so, simultaneous egoistic and altruistic motivations would occur, and a stronger sadness-based motivation could obscure the effects of an empathic-concern-based altruistic motivation; in the observed study, the sadness would then have been less intense than the more salient altruistic motivation. Consequently, the relative strengths of different emotional reactions, systematically related to the need situation, may moderate the predominance of egoistic or altruistic motivation. It has been shown, however, that researchers in this area who have used very similar procedures sometimes obtain apparently contradictory results. Superficial procedural differences, such as precisely when a manipulation is introduced, could also lead to divergent results and conclusions. It is therefore vital for future research to move toward even greater standardization of measurement. An important step in resolving the current theoretical debate over the existence of altruism may thus involve reaching common methodological ground.
Research suggests that empathy is also partly genetically determined. Carriers of the deletion variant of ADRA2B show more activation of the amygdala when viewing emotionally arousing images. The gene 5-HTTLPR seems to determine sensitivity to negative emotional information and is also attenuated by the deletion variant of ADRA2b. Carriers of the double G variant of the OXTR gene were found to have better social skills and higher self-esteem. A gene located near LRRN1 on chromosome 3 then again controls the human ability to read, understand and respond to emotions in others.
Neuroscientific basis of empathy
Contemporary neuroscience has allowed us to understand the neural basis of the human mind's ability to understand and process emotion. Studies today enable us to see the activation of mirror neurons and attempt to explain the basic processes of empathy. By isolating these mirror neurons and measuring the neural basis for human mind-reading and emotion-sharing abilities, science has come one step closer to finding the reason for reactions like empathy. Neuroscientists have found that people scoring high on empathy tests have particularly active mirror neuron systems in their brains. Empathy is a spontaneous sharing of affect, provoked by witnessing and sympathizing with another's emotional state. In a way we mirror or mimic the emotional response that we would expect to feel in that condition or context, much like sympathy. Unlike personal distress, empathy is not characterized by aversion to another's emotional response. Additionally, empathizing with someone requires a distinctly sympathetic reaction, whereas personal distress demands avoidance of distressing matters. This distinction is vital because empathy is associated with the moral emotion sympathy, or empathic concern, and consequently also with prosocial or altruistic action. Empathy leads to sympathy by definition, unlike the over-aroused emotional response that turns into personal distress and causes a turning-away from another's distress.
In empathy, people feel what we believe are the emotions of another, which is why most psychologists consider empathy both affective and cognitive. In this sense, arousal and empathy promote prosocial behavior as we accommodate each other to feel similar emotions. For social beings, negotiating interpersonal decisions is as important to survival as being able to navigate the physical landscape.
A meta-analysis of recent fMRI studies of empathy confirmed that different brain areas are activated during affective–perceptual empathy and cognitive–evaluative empathy. Also, a study with patients with different types of brain damage confirmed the distinction between emotional and cognitive empathy. Specifically, the inferior frontal gyrus appears to be responsible for emotional empathy, and the ventromedial prefrontal gyrus seems to mediate cognitive empathy.
Research in recent years has focused on possible brain processes underlying the experience of empathy. For instance, functional magnetic resonance imaging (fMRI) has been employed to investigate the functional anatomy of empathy. These studies have shown that observing another person's emotional state activates parts of the neuronal network involved in processing that same state in oneself, whether it is disgust, touch, or pain. The study of the neural underpinnings of empathy received increased interest following a target paper published by Preston and Frans de Waal, after the discovery of mirror neurons in monkeys that fire both when the creature watches another perform an action and when it performs the action itself.
In their paper, they argue that attended perception of the object's state automatically activates neural representations, and that this activation automatically primes or generates the associated autonomic and somatic responses (idea of perception-action-coupling), unless inhibited. This mechanism is similar to the common coding theory between perception and action. Another recent study provides evidence of separate neural pathways activating reciprocal suppression in different regions of the brain associated with the performance of "social" and "mechanical" tasks. These findings suggest that the cognition associated with reasoning about the "state of another person's mind" and "causal/mechanical properties of inanimate objects" are neurally suppressed from occurring at the same time.
A recent meta-analysis of 40 fMRI studies found that affective empathy is correlated with increased activity in the insula while cognitive empathy is correlated with activity in the mid cingulate cortex and adjacent dorsomedial prefrontal cortex.
It has been suggested that mirroring-behavior in motor neurons during empathy may help duplicate feelings. Such sympathetic action may afford access to sympathetic feelings for another and, perhaps, trigger emotions of kindness and forgiveness.
A difference in distribution between affective and cognitive empathy has been observed in various conditions. Psychopathy and narcissism have been associated with impairments in affective but not cognitive empathy, whereas bipolar disorder and borderline traits have been associated with deficits in cognitive but not affective empathy. Autism spectrum disorders have been associated with various combinations, including deficits in cognitive empathy as well as deficits in both cognitive and affective empathy. Schizophrenia, too, has been associated with deficits in both types of empathy. However, even in people without conditions such as these, the balance between affective and cognitive empathy varies.
Atypical empathic responses have been associated with autism and particular personality disorders such as psychopathy, borderline, narcissistic, and schizoid personality disorders; conduct disorder; schizophrenia; bipolar disorder; and depersonalization. Lack of affective empathy has also been associated with sex offenders; it has been found that offenders who were raised in an environment where they were shown a lack of empathy, and who had endured the same type of abuse, felt less affective empathy for their victims.
The interaction between empathy and autism is a complex and ongoing field of research. Several different factors are proposed to be at play.
A study of high-functioning adults with autistic spectrum disorders found an increased prevalence of alexithymia, a personality construct characterized by the inability to recognize and articulate emotional arousal in oneself or others. fMRI studies suggest that alexithymia accounts for this lack of empathy. The lack of empathic attunement inherent to alexithymic states may reduce the quality of, and satisfaction with, relationships. Recently, a study has shown that high-functioning autistic adults appear to have a range of responses to music similar to that of neurotypical individuals, including the deliberate use of music for mood management. Clinical treatment of alexithymia could involve using a simple associative learning process between musically induced emotions and their cognitive correlates. One study has suggested that the empathy deficits associated with the autism spectrum may be due to significant comorbidity between alexithymia and autism spectrum conditions rather than a result of social impairment.
One study found that, relative to typically developing children, high-functioning autistic children showed reduced mirror neuron activity in the brain's inferior frontal gyrus (pars opercularis) while imitating and observing emotional expressions. EEG evidence revealed that there was significantly greater mu suppression in the sensorimotor cortex of autistic individuals. Activity in this area was inversely related to symptom severity in the social domain, suggesting that a dysfunctional mirror neuron system may underlie social and communication deficits observed in autism, including impaired theory of mind and cognitive empathy. The mirror neuron system is essential for emotional empathy.
Previous studies have suggested that autistic individuals have an impaired theory of mind. Theory of mind is the ability to understand the perspectives of others. The terms cognitive empathy and theory of mind are often used synonymously, but due to a lack of studies comparing theory of mind with types of empathy, it is unclear whether these are equivalent. Theory of mind relies on structures of the temporal lobe and the pre-frontal cortex, and empathy, i.e. the ability to share the feelings of others, relies on the sensorimotor cortices as well as limbic and para-limbic structures. The lack of clear distinctions between theory of mind and cognitive empathy may have resulted in an incomplete understanding of the empathic abilities of those with Asperger syndrome; many reports on the empathic deficits of individuals with Asperger syndrome are actually based on impairments in theory of mind.
Studies have found that individuals on the autistic spectrum self-report lower levels of empathic concern, show less or absent comforting responses toward someone who is suffering, and report equal or higher levels of personal distress compared to controls. The combination of reduced empathic concern and increased personal distress in those on the autism spectrum may lead to an overall reduction of empathy. Professor Simon Baron-Cohen suggests that those with classic autism often lack both cognitive and affective empathy. However, other research has found no evidence of impairment in autistic individuals' ability to understand other people's basic intentions or goals; instead, data suggest that impairments are found in understanding more complex social emotions or in considering others' viewpoints. Research also suggests that people with Asperger syndrome may have problems understanding others' perspectives in terms of theory of mind, but that the average person with the condition demonstrates equal empathic concern to, and higher personal distress than, controls. The existence of individuals with heightened personal distress on the autism spectrum has been offered as an explanation for why at least some people with autism would appear to have heightened emotional empathy, although increased personal distress may be an effect of heightened egocentrism; emotional empathy depends on mirror neuron activity (which, as described previously, has been found to be reduced in those with autism), and empathy in people on the autism spectrum is generally reduced. The empathy deficits present in autism spectrum disorders may be more indicative of impairments in the ability to take the perspective of others, while the empathy deficits in psychopathy may be more indicative of impairments in responsiveness to others' emotions. These "disorders of empathy" further highlight the importance of the ability to empathize by illustrating some of the consequences of disrupted empathy development.
The empathizing–systemizing theory (E-S) suggests that people may be classified on the basis of their capabilities along two independent dimensions, empathizing (E) and systemizing (S). These capabilities may be inferred through tests that measure someone's Empathy Quotient (EQ) and Systemizing Quotient (SQ). Five different "brain types" can be observed among the population based on the scores, which should correlate with differences at the neural level. In the E-S theory, autism and Asperger syndrome are associated with below-average empathy and average or above-average systemizing. The E-S theory has been extended into the Extreme Male Brain theory, which suggests that people with an autism spectrum condition are more likely to have an "Extreme Type S" brain type, corresponding with above-average systemizing but challenged empathy.
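A minimal sketch may help make the scoring idea concrete. The Python snippet below combines standardized EQ and SQ scores into a difference score and maps it onto the five brain types described above; the cut-off values and the function name are illustrative assumptions for this sketch, not the published scoring procedure.

```python
# Illustrative sketch only: combines standardized empathizing (EQ) and
# systemizing (SQ) scores into a difference score and assigns one of the
# five "brain types" described by the empathizing-systemizing theory.
# The +/-0.5 and +/-1.5 thresholds are made-up cut-offs, not published norms.

def brain_type(eq_z: float, sq_z: float) -> str:
    """Classify a respondent from standardized EQ and SQ scores."""
    d = sq_z - eq_z  # positive: systemizing dominates; negative: empathizing dominates
    if d <= -1.5:
        return "Extreme Type E"
    if d <= -0.5:
        return "Type E"
    if d < 0.5:
        return "Type B (balanced)"
    if d < 1.5:
        return "Type S"
    return "Extreme Type S"

# Example: above-average systemizing combined with below-average empathizing
print(brain_type(eq_z=-1.2, sq_z=1.0))  # -> "Extreme Type S"
```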
It has been shown that males are generally less empathetic than females. The Extreme Male Brain (EMB) theory proposes that individuals on the autistic spectrum are characterized by impairments in empathy due to sex differences in the brain: specifically, people with autism spectrum conditions show an exaggerated male profile. A study showed that some aspects of autistic neuroanatomy seem to be extremes of typical male neuroanatomy, which may be influenced by elevated levels of fetal testosterone rather than gender itself. Another study involving brain scans of 120 men and women suggested that autism affects male and female brains differently; females with autism had brains that appeared to be closer to those of non-autistic males than females, yet the same kind of difference was not observed in males with autism.
A higher incidence of diagnosed autism in some groups of second-generation immigrant children was initially explained as the result of too little vitamin D during pregnancy in dark-skinned people living further from the equator. That explanation, however, did not hold up against the later finding that diagnosed autism was most frequent in children of newly immigrated parents and decreased the longer ago the parents had immigrated, even though a longer stay would further deplete the body's store of vitamin D. Nor could it explain a similar effect on diagnosed autism among some European migrants to America in the 1940s, reviewed in the 2010s, since a shortage of vitamin D was never a problem for these light-skinned immigrants. The decrease of diagnosed autism with the number of years the parents had lived in their new country also cannot be explained by the theory that the cause is genetic, whether through actual ethnic differences in the prevalence of autism genes or through selective migration of individuals predisposed to autism, since such genes, if present, would not go away over time. It has therefore been suggested that autism is not caused by an innate deficit in a specific social circuitry in the brain (citing other research suggesting that specialized social brain mechanisms may not exist even in neurotypical people), but that particular features of appearance or minor details of behavior are met with exclusion from socialization, which shows up as apparently reduced social ability.
Psychopathy is a personality disorder partly characterized by antisocial and aggressive behaviors, as well as emotional and interpersonal deficits including shallow emotions and a lack of remorse and empathy. The Diagnostic and Statistical Manual of Mental Disorders (DSM) and International Classification of Diseases (ICD) list antisocial personality disorder (ASPD) and dissocial personality disorder, stating that these have been referred to or include what is referred to as psychopathy.
A large body of research suggests that psychopathy is associated with atypical responses to distress cues (e.g. facial and vocal expressions of fear and sadness), including decreased activation of the fusiform and extrastriate cortical regions, which may partly account for impaired recognition of, and reduced autonomic responsiveness to, expressions of fear, and for impairments of empathy. Studies on children with psychopathic tendencies have also shown such associations. The underlying biological structures for processing expressions of happiness are functionally intact in psychopaths, although less responsive than those of controls. The neuroimaging literature is unclear as to whether deficits are specific to particular emotions such as fear. Some recent fMRI studies have reported that emotion perception deficits in psychopathy are pervasive across emotions, both positive and negative.
A recent study on psychopaths found that, under certain circumstances, they could willfully empathize with others, and that their empathic reaction initiated the same way it does for controls. Psychopathic criminals were brain-scanned while watching videos of a person harming another individual. The psychopaths' empathic reaction initiated the same way it did for controls when they were instructed to empathize with the harmed individual, and the area of the brain relating to pain was activated when the psychopaths were asked to imagine how the harmed individual felt. The research suggests how psychopaths could switch empathy on at will, which would enable them to be both callous and charming. The team who conducted the study say it is still unknown how to transform this willful empathy into the spontaneous empathy most people have, though they propose it could be possible to bring psychopaths closer to rehabilitation by helping them to activate their "empathy switch". Others suggested that despite the results of the study, it remained unclear whether psychopaths' experience of empathy was the same as that of controls, and also questioned the possibility of devising therapeutic interventions that would make the empathic reactions more automatic.
Work conducted by Professor Jean Decety with large samples of incarcerated psychopaths offers additional insights. In one study, psychopaths were scanned while viewing video clips depicting people being intentionally hurt. They were also tested on their responses to seeing short videos of facial expressions of pain. The participants in the high-psychopathy group exhibited significantly less activation in the ventromedial prefrontal cortex, amygdala and periaqueductal gray parts of the brain, but more activity in the striatum and the insula when compared to control participants. In a second study, individuals with psychopathy exhibited a strong response in pain-affective brain regions when taking an imagine-self perspective, but failed to recruit the neural circuits that were activated in controls during an imagine-other perspective—in particular the ventromedial prefrontal cortex and amygdala—which may contribute to their lack of empathic concern.
It was predicted that people with high levels of psychopathy would have sufficient levels of cognitive empathy but would lack the ability to use affective empathy. People who scored highly on psychopathy measures were less likely to display affective empathy; a strong negative correlation showed that psychopathy and affective empathy are strongly inversely related. The DANVA-2 showed that those who scored highly on the psychopathy scale do not lack the ability to recognise emotion in facial expressions. Therefore, individuals with high psychopathy scores do not lack perspective-taking ability but do lack compassion for the negative events that happen to others.
Despite studies suggesting deficits in emotion perception and imagining others in pain, professor Simon Baron-Cohen claims psychopathy is associated with intact cognitive empathy, which would imply an intact ability to read and respond to behaviors, social cues and what others are feeling. Psychopathy is, however, associated with impairment in the other major component of empathy—affective (emotional) empathy—which includes the ability to feel the suffering and emotions of others (what scientists would term as emotional contagion), and those with the condition are therefore not distressed by the suffering of their victims. Such a dissociation of affective and cognitive empathy has indeed been demonstrated for aggressive offenders. Those with autism, on the other hand, are claimed to be often impaired in both affective and cognitive empathy.
One problem with the theory that the ability to turn empathy on and off constitutes psychopathy is that such a theory would classify socially sanctioned violence and punishment as psychopathy, as it means suspending empathy towards certain individuals and/or groups. The attempt to get around this by standardizing tests of psychopathy for cultures with different norms of punishment is criticized in this context for being based on the assumption that people can be classified in discrete cultures while cultural influences are in reality mixed and every person encounters a mosaic of influences (e.g. non-shared environment having more influence than family environment). It is suggested that psychopathy may be an artefact of psychiatry's standardization along imaginary sharp lines between cultures, as opposed to an actual difference in the brain.
Research indicates atypical empathic responses are also correlated with a variety of other conditions.
Borderline personality disorder is characterized by extensive behavioral and interpersonal difficulties that arise from emotional and cognitive dysfunction. Dysfunctional social and interpersonal behavior has been shown to play a crucial role in the emotionally intense way people with borderline personality disorder react. While individuals with borderline personality disorder may show their emotions too much, several authors have suggested that they might have a compromised ability to reflect upon mental states (impaired cognitive empathy), as well as an impaired theory of mind. People with borderline personality disorder have also been shown to be very good at recognizing emotions in people's faces, suggesting increased empathic capacities. It is, therefore, possible that impaired cognitive empathy (the capacity for understanding another person's experience and perspective) may account for these individuals' tendency toward interpersonal dysfunction, while "hyper-emotional empathy" may account for the emotional over-reactivity observed in them. One primary study confirmed that patients with borderline personality disorder were significantly impaired in cognitive empathy, yet there was no sign of impairment in affective empathy.
Characteristics of schizoid personality disorder include emotional coldness, detachment, and impaired affect corresponding with an inability to be empathetic and sensitive towards others.
A study conducted by Jean Decety and colleagues at the University of Chicago demonstrated that subjects with aggressive conduct disorder exhibit atypical empathic responses to viewing others in pain. Subjects with conduct disorder were at least as responsive as controls to the pain of others but, unlike controls, subjects with conduct disorder showed strong and specific activation of the amygdala and ventral striatum (areas that enable a general arousing effect of reward), yet impaired activation of the neural regions involved in self-regulation and metacognition (including moral reasoning), in addition to diminished processing between the amygdala and the prefrontal cortex.
Schizophrenia is characterized by impaired affective empathy, as well as severe impairments in cognitive empathy as measured by the Empathy Quotient (EQ). These empathy impairments are also associated with impairments in social cognitive tasks.
Bipolar individuals have been observed to have impaired cognitive empathy and theory of mind, but increased affective empathy. Despite cognitive flexibility being impaired, planning behavior is intact. It has been suggested that dysfunctions in the prefrontal cortex could result in the impaired cognitive empathy, since impaired cognitive empathy has been related with neurocognitive task performance involving cognitive flexibility.
Lieutenant Colonel Dave Grossman, in his book On Killing, suggests that military training artificially creates depersonalization in soldiers, suppressing empathy and making it easier for them to kill other human beings.
In educational contexts
Another growing focus of investigation is how empathy manifests in education between teachers and learners. Although there is general agreement that empathy is essential in educational settings, research has found that it is difficult to develop empathy in trainee teachers. According to one theory, there are seven components involved in the effectiveness of intercultural communication; empathy was found to be one of the seven. This theory also states that empathy is learnable. However, research also shows that it is more difficult to empathize when there are differences between people including status, culture, religion, language, skin colour, gender, age and so on.
An important target of the method Learning by teaching (LbT) is to train empathy systematically in each lesson. Students have to transmit new content to their classmates, so they have to reflect continuously on the mental processes of the other students in the classroom. This way it is possible to develop step-by-step the students' feeling for group reactions and networking. Carl R. Rogers pioneered research in effective psychotherapy and teaching which espoused that empathy, coupled with unconditional positive regard or caring for students, and authenticity or congruence, were the most important traits for a therapist or teacher to have. Other research and publications by Tausch, Aspy, Roebuck, Lyon, and meta-analyses by Cornelius-White, corroborated the importance of these person-centered traits.
In intercultural contexts
To achieve intercultural empathy, psychologists have employed empathy training. One study hypothesized that empathy training would increase the measured level of relational empathy among individuals in the experimental group when compared to the control group. The study also hypothesized that empathy training would increase communication among the experimental group, and that perceived satisfaction with group dialogue would also increase. To test this, the experimenters used the Hogan Empathy Scale, the Barrett-Lennard Relationship Inventory, and questionnaires. Using these measures, the study found that empathy training was not successful in increasing relational empathy, and communication and satisfaction among groups did not increase as a result of the training. While there did not seem to be a clear relationship between empathy and relational empathy training, the study did report that "relational empathy training appeared to foster greater expectations for a deep dialogic process resulting in treatment differences in perceived depth of communication".
US researchers William Weeks, Paul Pedersen et al. state that developing intercultural empathy enables the interpretation of experiences or perspectives from more than one worldview. Intercultural empathy can also improve self-awareness and critical awareness of one's own interaction style as conditioned by one's cultural views and promote a view of self-as-process.
The empathy-altruism relationship also has broad and practical implications. Knowledge of the power of empathic feeling to evoke altruistic motivation may lead to strategies for learning to suppress or avoid these feelings; such numbing, or loss of the capacity to feel empathy for clients, has been suggested as a factor in the experience of burnout among case workers in helping professions. Awareness of this impending futile effort, as with nurses caring for terminal patients or pedestrians walking by the homeless, may make individuals try to avoid feelings of empathy in order to avoid the resulting altruistic motivation. Promoting an understanding of the mechanisms by which altruistic behavior is driven, whether it is the minimizing of sadness or the arousal of mirror neurons, allows people to better cognitively control their actions. However, empathy-induced altruism may not always produce pro-social effects. It could lead one to increase the welfare of those for whom empathy is felt at the expense of other potential pro-social goals, thus inducing a type of bias. Researchers suggest that individuals are willing to act against the greater collective good or to violate their own moral principles of fairness and justice if doing so will benefit a person for whom empathy is felt.
On a more positive note, individuals aroused in an empathetic manner may focus on the long-term rather than just the short-term welfare of those in need. Empathy-based socialization is very different from current practices directed toward inhibition of egoistic impulses through shaping, modeling and internalized guilt. Therapeutic programs built around facilitating altruistic impulses by encouraging perspective taking and empathetic feelings might enable individuals to develop more satisfactory interpersonal relations, especially in the long term. At a societal level, experiments have indicated that empathy-induced altruism can be used to improve attitudes toward stigmatized groups, including improving racial attitudes and actions toward people with AIDS, the homeless and even convicts. Such resulting altruism has also been found to increase cooperation in competitive situations.
In the field of positive psychology, empathy has also been compared with altruism and egotism. Altruism is behavior that is aimed at benefitting another person, while egotism is behavior acted out for personal gain. Sometimes, when someone is feeling empathetic towards another person, acts of altruism occur. However, many question whether or not these acts of altruism are motivated by egotistical gains. According to positive psychologists, people can be adequately moved by their empathy to be altruistic, while others argue that, combined with the wrong moral perspectives, having empathy can lead to polarization, ignite violence and motivate dysfunctional behavior in relationships.
The capacity to empathize is a revered trait in society. Empathy is considered a motivating factor for unselfish, prosocial behavior, whereas a lack of empathy is related to antisocial behavior.
Empathic engagement helps an individual understand and anticipate the behavior of another. Apart from the automatic tendency to recognize the emotions of others, one may also deliberately engage in empathic reasoning. Two general methods have been identified here. An individual may simulate fictitious versions of the beliefs, desires, character traits and context of another individual to see what emotional feelings it provokes. Alternatively, an individual may simulate an emotional feeling and then assess the environment for a suitable reason for that emotional feeling to be appropriate in that specific environment.
Some research suggests that people are more able and willing to empathize with those most similar to themselves. In particular, empathy increases with similarities in culture and living conditions, and empathy is more likely to occur between individuals whose interaction is more frequent. A measure of how well a person can infer the specific content of another person's thoughts and feelings has been developed by William Ickes. In 2010, a team led by Grit Hein and Tania Singer gave two groups of men wristbands according to which football team they supported. Each participant received a mild electric shock, then watched another go through the same pain. When the wristbands matched, the observer's brain showed both pain and empathic pain; when the men supported opposing teams, the observer showed little empathy. Paul Bloom argues that improper use of empathy and social intelligence as tools can lead to shortsighted actions and parochialism, and he dismisses much of the conventional supportive research as resting on biased standards. He characterizes empathy as an exhausting process that limits morality and notes that, if low empathy made for bad people, that unsavoury group would include many people who have Asperger's or autism; he reveals that his own brother is severely autistic. Early indicators of a lack of empathy include:
- Frequently finding oneself in prolonged arguments
- Forming opinions early and defending them vigorously
- Thinking that other people are overly sensitive
- Refusing to listen to other points of view
- Blaming others for mistakes
- Not listening when spoken to
- Holding grudges and having difficulty forgiving
- Inability to work in a team
There are concerns that the empathizer's own emotional background may affect or distort what emotions they perceive in others. There is evidence that societies that promote individualism show a lower ability for empathy. Empathy is not a process that is likely to deliver certain judgments about the emotional states of others. It is a skill that is gradually developed throughout life, and which improves the more contact one has with the person with whom one empathizes. Empathizers report finding it easier to take the perspective of another person when they have experienced a similar situation, and they report greater empathic understanding in such cases. Research on whether similar past experience makes the empathizer more accurate is mixed.
The extent to which a person's emotions are publicly observable, or mutually recognized as such, has significant social consequences. Empathic recognition may or may not be welcomed or socially desirable. This is particularly the case where we recognize the emotions that someone has towards us during real-time interactions. Based on a metaphorical affinity with touch, philosopher Edith Wyschogrod claims that the proximity entailed by empathy increases the potential vulnerability of either party. The appropriate role of empathy in our dealings with others is highly dependent on the circumstances. For instance, Tania Singer says that clinicians or caregivers must remain objective about the emotions of others and not over-invest their own emotions, at the risk of draining away their own resourcefulness. Furthermore, an awareness of the limitations of empathic accuracy is prudent in a caregiving situation.
Empathic distress fatigue
Excessive empathy can lead to empathic distress fatigue, especially if it is associated with pathological altruism. The medical risks are fatigue, occupational burnout, guilt, shame, anxiety, and depression.
In his 2008 book, How to Make Good Decisions and Be Right All the Time: Solving the Riddle of Right and Wrong, writer Iain King presents two reasons why empathy is the "essence" or "DNA" of right and wrong. First, he argues that empathy uniquely has all the characteristics we can know about an ethical viewpoint – including that it is "partly self-standing", and so provides a source of motivation that is partly within us and partly outside, as moral motivations seem to be. This allows empathy-based judgements to have sufficient distance from a personal opinion to count as "moral". His second argument is more practical: he argues, "Empathy for others really is the route to value in life", and so the means by which a selfish attitude can become a moral one. By using empathy as the basis for a system of ethics, King is able to reconcile ethics based on consequences with virtue-ethics and act-based accounts of right and wrong. His empathy-based system has been taken up by some Buddhists, and is used to address some practical problems, such as when to tell lies, and how to develop culturally-neutral rules for romance.
In the 2007 book The Ethics of Care and Empathy, philosopher Michael Slote introduces a theory of care-based ethics that is grounded in empathy. His claim is that moral motivation does, and should, stem from a basis of empathic response. He claims that our natural reactions to situations of moral significance are explained by empathy, and that the limits and obligations of empathy, and in turn of morality, are natural. These natural obligations include a greater empathic and moral obligation to family and friends, along with an account of temporal and physical distance. In situations of close temporal and physical distance, and with family or friends, our moral obligation naturally seems stronger to us than it does toward strangers at a distance. Slote explains that this is due to empathy and our natural empathic ties. He further adds that actions are wrong if and only if they reflect or exhibit a deficiency of fully developed empathic concern for others on the part of the agent.
In phenomenology, empathy describes the experience of something from the other's viewpoint, without confusion between self and other. This draws on the sense of agency. In the most basic sense, this is the experience of the other's body and, in this sense, it is an experience of "my body over there". In most other respects, however, the experience is modified so that what is experienced is experienced as being the other's experience; in experiencing empathy, what is experienced is not "my" experience, even though I experience it. Empathy is also considered to be the condition of intersubjectivity and, as such, the source of the constitution of objectivity.
Some postmodern historians, such as Keith Jenkins, have debated in recent years whether or not it is possible to empathize with people from the past. Jenkins argues that empathy only enjoys such a privileged position in the present because it corresponds harmoniously with the dominant liberal discourse of modern society and can be connected to John Stuart Mill's concept of reciprocal freedom. Jenkins argues that the past is a foreign country, and as we do not have access to the epistemological conditions of bygone ages, we are unable to empathize.
It is impossible to forecast the effect of empathy on the future. A past subject may take part in the present by means of the so-called historic present. If we look back from a fictitious past, we can tell the present in the future tense, as happens with the trick of the false prophecy. There is no way of telling the present with the means of the past.
Heinz Kohut is the main introducer of the principle of empathy in psychoanalysis. His principle applies to the method of gathering unconscious material. The possibility of not applying the principle is granted in the cure, for instance when one must reckon with another principle, that of reality.
In evolutionary psychology, attempts at explaining pro-social behavior often mention the presence of empathy in the individual as a possible variable. While exact motives behind complex social behaviors are difficult to distinguish, the "ability to put oneself in the shoes of another person and experience events and emotions the way that person experienced them" is the definitive factor for truly altruistic behavior according to Batson's empathy-altruism hypothesis. If empathy is not felt, social exchange (what's in it for me?) supersedes pure altruism, but if empathy is felt, an individual will help by actions or by word, regardless of whether it is in their self-interest to do so and even if the costs outweigh potential rewards.
Business and management
In the 2009 book Wired to Care, strategy consultant Dev Patnaik argues that a major flaw in contemporary business practice is a lack of empathy inside large corporations. He states that lacking any sense of empathy, people inside companies struggle to make intuitive decisions and often get fooled into believing they understand their business if they have quantitative research to rely upon. Patnaik claims that the real opportunity for companies doing business in the 21st century is to create a widely held sense of empathy for customers, pointing to Nike, Harley-Davidson, and IBM as examples of "Open Empathy Organizations". Such institutions, he claims, see new opportunities more quickly than competitors, adapt to change more easily, and create workplaces that offer employees a greater sense of mission in their jobs. In the 2011 book The Empathy Factor, organizational consultant Marie Miyashiro similarly argues the value of bringing empathy to the workplace, and offers Nonviolent Communication as an effective mechanism for achieving this. In studies by the Management Research Group, empathy was found to be the strongest predictor of ethical leadership behavior out of 22 competencies in its management model, and empathy was one of the three strongest predictors of senior executive effectiveness. A study by the Center for Creative Leadership found empathy to be positively correlated to job performance amongst employees as well.
Evolution of cooperation
Empathetic perspective taking plays an important role in sustaining cooperation in human societies, as studied by evolutionary game theory. In game-theoretical models, indirect reciprocity refers to a mechanism of cooperation based on moral reputations, which are assigned to individuals according to a set of moral rules called social norms. It has been shown that if reputations are relative and individuals disagree on the moral standing of others (for example, because they use different moral evaluation rules or make errors of judgement), then cooperation will not be sustained. However, when individuals have the capacity for empathetic perspective taking, altruistic behavior can once again evolve. Moreover, evolutionary models also reveal that empathetic perspective taking itself can evolve, promoting prosocial behavior in human populations.
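The role of perspective taking in reputation-based cooperation can be illustrated with a toy simulation. The Python sketch below is an assumption-laden illustration inspired by such indirect-reciprocity models, not a reproduction of any specific published model: each agent holds private opinions of the others, donors help only recipients they consider good, and an observer judges the donor either from the observer's own opinion of the recipient (egocentric assessment) or from the donor's opinion (empathetic perspective taking).

```python
# Toy illustration (assumption-laden, not a published model): private
# reputations under indirect reciprocity, with and without empathetic
# perspective taking. Each agent keeps its own opinion (True = good) of
# every other agent. A donor cooperates only with recipients it deems good.
import random

N, ROUNDS, ERROR = 30, 20000, 0.05   # agents, interactions, judgement-error rate
random.seed(0)

def run(empathetic: bool) -> float:
    # opinions[i][j]: does agent i currently consider agent j good?
    opinions = [[True] * N for _ in range(N)]
    cooperations = 0
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(N), 2)
        act_cooperate = opinions[donor][recipient]   # donor helps only "good" recipients
        cooperations += act_cooperate
        observer = random.choice([k for k in range(N) if k != donor])
        # Reference opinion used to judge the donor's action:
        reference = opinions[donor][recipient] if empathetic else opinions[observer][recipient]
        # Simple norm: cooperating with a "good" recipient (or refusing a "bad"
        # one) earns a good reputation; otherwise a bad one.
        judged_good = (act_cooperate == reference)
        if random.random() < ERROR:                  # occasional judgement error
            judged_good = not judged_good
        opinions[observer][donor] = judged_good
    return cooperations / ROUNDS

print(f"cooperation rate, egocentric assessment:  {run(False):.2f}")
print(f"cooperation rate, empathetic assessment:  {run(True):.2f}")
```

With these made-up parameters, the egocentric condition typically settles at a noticeably lower cooperation rate than the empathetic condition, which is the qualitative point of the models discussed above.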
Research into the measurement of empathy has sought to answer a number of questions: who should carry out the measurement? What should count as empathy and what should be discounted? What unit of measure (UOM) should be adopted, and to what degree should each occurrence precisely match that UOM?
Researchers have approached the measurement of empathy from a number of perspectives.
Behavioral measures normally involve raters assessing the presence or absence of certain either predetermined or ad hoc behaviors in the subjects they are monitoring. Both verbal and non-verbal behaviors have been captured on video by experimenters such as Truax. Other experimenters, including Mehrabian and Epstein, have required subjects to comment upon their own feelings and behaviors, or those of other people involved in the experiment, as indirect ways of signaling their level of empathic functioning to the raters.
Physiological responses tend to be captured by elaborate electronic equipment that has been physically connected to the subject's body. Researchers then draw inferences about that person's empathic reactions from the electronic readings produced.
Bodily or "somatic" measures can be looked upon as behavioral measures at a micro level. Their focus is upon measuring empathy through facial and other non-verbally expressed reactions in the empathizer. These changes are presumably underpinned by physiological changes brought about by some form of "emotional contagion" or mirroring. These reactions, whilst appearing to reflect the internal emotional state of the empathizer, could also, if the stimulus incident lasted more than the briefest period, be reflecting the results of emotional reactions that are based upon more pieces of thinking through (cognitions) associated with role-taking ("if I were him I would feel ...").
For the very young, picture or puppet-story indices for empathy have been adopted to enable even very young, pre-school subjects to respond without needing to read questions and write answers. Dependent variables (variables that are monitored for any change by the experimenter) for younger subjects have included self reporting on a 7-point smiley face scale and filmed facial reactions.
Paper-based indices involve one or more of a variety of methods of responding. In some experiments, subjects are required to watch video scenarios (either staged or authentic) and to make written responses which are then assessed for their levels of empathy; scenarios are sometimes also depicted in printed form.
Measures of empathy also frequently require subjects to self-report upon their own ability or capacity for empathy, using Likert-style numerical responses to a printed questionnaire that may have been designed to tap into the affective, cognitive-affective or largely cognitive substrates of empathic functioning. Some questionnaires claim to have been able to tap into both cognitive and affective substrates. However, a 2019 meta-analysis questions the validity of self-report measures of cognitive empathy in particular, finding that such self-report measures have negligibly small correlations with corresponding behavioral measures.
In the field of medicine, a measurement tool for carers is the Jefferson Scale of Physician Empathy, Health Professional Version (JSPE-HP).
The Interpersonal Reactivity Index (IRI) is among the oldest published measurement tools (first published in 1983) that provides a multi-dimensional assessment of empathy. It comprises a self-report questionnaire of 28 items, divided into four 7-item scales covering the above subdivisions of affective and cognitive empathy. More recent self-report tools include The Empathy Quotient (EQ) created by Baron-Cohen and Wheelwright which comprises a self-report questionnaire consisting of 60 items. Also among more recent multi-dimensional scales is the Questionnaire of Cognitive and Affective Empathy (QCAE, first published in 2011).
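As an illustration of how such a multidimensional self-report instrument is typically scored, the sketch below sums Likert responses into four subscales in the style of the IRI. The item-to-subscale mapping, the reverse-keyed items and the 0-4 response range are placeholders chosen for illustration, not the published IRI scoring key.

```python
# Minimal sketch of scoring a multidimensional self-report empathy measure
# in the style of the Interpersonal Reactivity Index: 28 Likert items (0-4),
# four 7-item subscales, some items reverse-keyed. The item assignments and
# reverse-keyed items below are PLACEHOLDERS, not the published scoring key.
from typing import Dict, List

SCALE_MAX = 4  # items answered on a 0-4 Likert scale

# Hypothetical item-to-subscale mapping (item indices 0-27), 7 items each.
SUBSCALES: Dict[str, List[int]] = {
    "perspective_taking": list(range(0, 7)),
    "fantasy":            list(range(7, 14)),
    "empathic_concern":   list(range(14, 21)),
    "personal_distress":  list(range(21, 28)),
}
REVERSE_KEYED = {3, 9, 16, 24}  # hypothetical reverse-scored items

def score(responses: List[int]) -> Dict[str, int]:
    """Return one summed score per subscale for a single respondent."""
    if len(responses) != 28:
        raise ValueError("expected 28 item responses")
    adjusted = [SCALE_MAX - r if i in REVERSE_KEYED else r
                for i, r in enumerate(responses)]
    return {name: sum(adjusted[i] for i in items)
            for name, items in SUBSCALES.items()}

# Example with made-up answers:
answers = [3, 2, 4, 1, 3, 2, 3,  4, 4, 0, 3, 2, 1, 3,
           4, 3, 2, 4, 4, 3, 4,  1, 2, 3, 1, 2, 2, 1]
print(score(answers))
```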
The Empathic Experience Scale is a 30-item questionnaire that was developed to cover the measurement of empathy from a phenomenological perspective on intersubjectivity, which provides a common basis for the perceptual experience (vicarious experience dimension) and a basic cognitive awareness (intuitive understanding dimension) of others' emotional states.
International comparison of country-wide empathy
In a 2016 study by a US research team, self-report data from the Interpersonal Reactivity Index (see Measurement) were compared across countries. Of the surveyed nations, the five highest empathy scores were found (in descending order) in Ecuador, Saudi Arabia, Peru, Denmark and the United Arab Emirates, while Bulgaria, Poland, Estonia, Venezuela and Lithuania had the lowest empathy scores.
Other animals and empathy between species
Researchers Zanna Clay and Frans de Waal studied the socio-emotional development of the bonobo. They focused on the interplay of numerous skills such as empathy-related responding, and on how the different rearing backgrounds of juvenile bonobos affected their responses to stressful events related to themselves (losing a fight) and to stressful events affecting others. It was found that the bonobos sought out body contact with one another as a coping mechanism. A finding of this study was that the bonobos sought out more body contact after watching a distressing event happen to other bonobos than after a stressful event they had experienced themselves. Mother-reared bonobos, as opposed to orphaned bonobos, sought out more physical contact after a stressful event happened to another. This finding shows the importance of mother-child attachment and bonding, and how it may be crucial to successful socio-emotional development, such as empathic-like behaviors.
Empathic-like responding has been observed in chimpanzees in various different aspects of their natural behaviors. For example, chimpanzees are known to spontaneously contribute comforting behaviors to victims of aggressive behavior in natural and unnatural settings, a behavior recognized as consolation. Researchers Teresa Romero and co-workers observed these empathic and sympathetic-like behaviors in chimpanzees at two separate outdoor housed groups. The act of consolation was observed in both of the groups of chimpanzees. This behavior is found in humans, and particularly in human infants. Another similarity found between chimpanzees and humans is that empathic-like responding was disproportionately provided to individuals of kin. Although comforting towards non-family chimpanzees was also observed, as with humans, chimpanzees showed the majority of comfort and concern to close/loved ones. Another similarity between chimpanzee and human expression of empathy is that females provided more comfort than males on average. The only exception to this discovery was that high-ranking males showed as much empathy-like behavior as their female counterparts. This is believed to be because of policing-like behavior and the authoritative status of high-ranking male chimpanzees.
It is thought that species possessing a more intricate and developed prefrontal cortex have a greater ability to experience empathy. However, empathic and altruistic responses have also been observed in sand-dwelling Mediterranean ants. Researcher Hollis studied the sand-dwelling Mediterranean ant Cataglyphis cursor and its rescue behaviors by ensnaring ants from a nest in nylon threads and partially burying them beneath the sand. The ants not ensnared in the nylon thread attempted to rescue their nest mates by digging in the sand, pulling at their limbs and transporting sand away from the trapped ant and, when these efforts remained unfruitful, began to attack the nylon thread itself, biting and pulling apart the threads. Similar rescue behavior was found in other sand-dwelling Mediterranean ants, but only the species Cataglyphis floricola and Lasius grandis showed the same rescue behaviors of transporting sand away from the trapped victim and directing attention towards the nylon thread. It was observed in all ant species that rescue behavior was only directed towards nest mates. Ants of the same species from different nests were treated with aggression and were continually attacked and pursued, which speaks to the depth of ants' discriminative abilities. This study raises the possibility that if ants have the capacity for empathy and/or altruism, these complex processes may be derived from primitive and simpler mechanisms.
Canines have been hypothesized to share empathic-like responding towards humans. Researchers Custance and Mayer put individual dogs in an enclosure with their owner and a stranger. When the participants were talking or humming, the dogs showed no behavioral changes; however, when the participants pretended to cry, the dogs oriented their behavior toward the person in distress, whether it was the owner or the stranger. The dogs approached the crying participants in a submissive fashion, sniffing, licking and nuzzling the distressed person, and did not approach them in the usual form of excitement, tail wagging or panting. Since the dogs did not direct their empathic-like responses only towards their owners, it is hypothesized that dogs generally seek out humans showing distressed body behavior. Although this could suggest that dogs have the cognitive capacity for empathy, it could also mean that domesticated dogs have learned to comfort distressed humans through generations of being rewarded for that specific behavior.
When witnessing chicks in distress, domesticated hens (Gallus gallus domesticus) show emotional and physiological responding. Researchers Edgar, Paul, and Nicol found that in conditions where the chick was susceptible to danger, the mother hen's heart rate increased, vocal alarms were sounded, personal preening decreased, and body temperature increased. This responding happened whether or not the chick itself felt it was in danger. Mother hens experienced stress-induced hyperthermia only when the chick's behavior correlated with the perceived threat. Animal maternal behavior may be perceived as empathy; however, it could be guided by the evolutionary principles of survival rather than by emotionality.
At the same time, humans can empathize with other species. A study by Miralles et al. (2019) showed that human empathic perceptions (and compassionate reactions) toward an extended sampling of organisms are strongly negatively correlated with the divergence time separating them from us. In other words, the more phylogenetically close a species is to us, the more likely we are to feel empathy and compassion towards it.
- Against Empathy: The Case for Rational Compassion (book by Paul Bloom)
- Artificial empathy
- Attribution (psychology)
- Digital empathy
- Emotional contagion
- Emotional intelligence
- Emotional literacy
- Empathic concern
- Empathizing–systemizing theory
- Ethnocultural empathy
- Fake empathy
- Grounding in communication
- Highly sensitive person
- Humanistic coefficient
- Identification (psychology)
- Life skills
- Moral emotions
- Nonviolent Communication
- People skills
- Philip K. Dick's Do Androids Dream of Electric Sheep?
- Schema (psychology)
- Self-conscious emotions
- Simulation theory of empathy
- Social emotions
- Soft skills
- Theory of mind in animals
- Vicarious embarrassment
- Bellet PS, Maloney MJ (October 1991). "The importance of empathy as an interviewing skill in medicine". JAMA. 266 (13): 1831–2. doi:10.1001/jama.1991.03470130111039. PMID 1909761.
- Rothschild, B. (with Rand, M. L.). (2006). Help for the Helper: The psychophysiology of compassion fatigue and vicarious trauma.
- Read H (August 22, 2019). "A typology of empathy and its many moral forms". Philosophy Compass. 14 (10). doi:10.1111/phc3.12623.
- "The relationship of nursing students' spiritual care perspectives to their expressions of spiritual empathy" Chism, Lisa Astalos ; Magnan, Morris A. The Journal of nursing education, 2009-11, Vol.48 (11), p.597-605; United States
- Harper D. "empathy". Online Etymology Dictionary.
- ἐμπάθεια. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
- Titchener EB (2014). "Introspection and empathy" (PDF). Dialogues in Philosophy, Mental and Neuro Sciences. 7: 25–30. Archived from the original (PDF) on July 26, 2014.
- Gallese V (2003). "The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity". Psychopathology. 36 (4): 171–80. CiteSeerX 10.1.1.143.2396. doi:10.1159/000072786. PMID 14504450. S2CID 9422028.
- Koss J (March 2006). "On the Limits of Empathy". The Art Bulletin. 88 (1): 139–157. doi:10.1080/00043079.2006.10786282. JSTOR 25067229. S2CID 194079190.
- "εμπάθεια". Glosbe. Glosbe dictionary. Retrieved April 26, 2019.
- Pijnenborg GH, Spikman JM, Jeronimus BF, Aleman A (June 2013). "Insight in schizophrenia: associations with empathy". European Archives of Psychiatry and Clinical Neuroscience. 263 (4): 299–307. doi:10.1007/s00406-012-0373-0. PMID 23076736. S2CID 25194328.
- Hodges SD, Klein KJ (September 2001). "Regulating the costs of empathy: the price of being human" (PDF). The Journal of Socio-economics. 30 (5): 437–52. doi:10.1016/S1053-5357(01)00112-3.
- Dietrich C. "Decision Making: Factors that Influence Decision Making, Heuristics Used, and Decision Outcomes". Inquiries Journal. Inquiries Journal/Student Pulse LLC. Archived from the original on October 3, 2017. Retrieved February 6, 2017.
- Roth-Hanania R, Davidov M, Zahn-Waxler C (June 2011). "Empathy development from 8 to 16 months: early signs of concern for others". Infant Behavior & Development. 34 (3): 447–58. doi:10.1016/j.infbeh.2011.04.007. PMID 21600660.
- Baird JD, Nadel L (April 2010). Happiness Genes: Unlock the Positive Potential Hidden in Your DNA. New Page Books. ISBN 978-1-60163-105-3.
- O'Malley WJ (1999). "Teaching Empathy". America. 180 (12): 22–26.
- Schwartz W (2002). "From passivity to competence: a conceptualization of knowledge, skill, tolerance, and empathy". Psychiatry. 65 (4): 339–45. doi:10.1521/psyc.65.4.338.20239. PMID 12530337. S2CID 35496086.
- Schwartz W (2013). "The parameters of empathy: Core considerations for psychotherapy and supervision". Advances in Descriptive Psychology. 10. doi:10.2139/ssrn.2393689.
- Meltzoff AN, Decety J (March 2003). "What imitation tells us about social cognition: a rapprochement between developmental psychology and cognitive neuroscience". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 358 (1431): 491–500. doi:10.1098/rstb.2002.1261. PMC 1351349. PMID 12689375.
- Batson CD (2009). "These things called empathy: Eight related but distinct phenomena.". In Decety J, Ickes W (eds.). The Social Neuroscience of Empathy. Cambridge: MIT Press. pp. 3–15.
- Hatfield E, Cacioppo JL, Rapson RL (1993). "Emotional contagion" (PDF). Current Directions in Psychological Science. 2 (3): 96–99. doi:10.1111/1467-8721.ep10770953. S2CID 220533081. Archived from the original (PDF) on November 19, 2012.
- Bar-On RE, Parker JD (2000). The Handbook of Emotional Intelligence: Theory, Development, Assessment, and Application at Home, School, and in the Workplace. San Francisco, California: Jossey-Bass. ISBN 0-7879-4984-1.
- Rogers K, Dziobek I, Hassenstab J, Wolf OT, Convit A (April 2007). "Who cares? Revisiting empathy in Asperger syndrome" (PDF). Journal of Autism and Developmental Disorders. 37 (4): 709–15. doi:10.1007/s10803-006-0197-8. PMID 16906462. S2CID 13999363. Archived (PDF) from the original on July 16, 2015.
- Shamay-Tsoory SG, Aharon-Peretz J, Perry D (March 2009). "Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions". Brain. 132 (Pt 3): 617–27. doi:10.1093/brain/awn279. PMID 18971202.
- de Waal FB (2008). "Putting the altruism back into altruism: the evolution of empathy" (PDF). Annual Review of Psychology. 59 (1): 279–300. doi:10.1146/annurev.psych.59.103006.093625. PMID 17550343. Archived (PDF) from the original on April 17, 2012.
- Davis M (1983). "Measuring individual differences in empathy: evidence for a multidimensional approach". Journal of Personality and Social Psychology. 44 (1): 113–126. doi:10.1037/0022-35220.127.116.11.
- Minio-Paluello I, Lombardo MV, Chakrabarti B, Wheelwright S, Baron-Cohen S (December 2009). "Response to Smith's Letter to the Editor "Emotional Empathy in Autism Spectrum Conditions: Weak, Intact, or Heightened?"". Journal of Autism and Developmental Disorders. 39 (12): 1749. doi:10.1007/s10803-009-0800-x. S2CID 42834991. Pdf. Archived March 4, 2016, at the Wayback Machine
- Lamm C, Batson CD, Decety J (January 2007). "The neural substrate of human empathy: effects of perspective-taking and cognitive appraisal". Journal of Cognitive Neuroscience. 19 (1): 42–58. CiteSeerX 10.1.1.511.3950. doi:10.1162/jocn.2007.19.1.42. PMID 17214562. S2CID 2828843.
- Baron-Cohen S (2003). The Essential Difference: The Truth about the Male and Female Brain. Basic Books. ISBN 9780738208442.
- Gerace A, Day A, Casey S, Mohr P (2013). "An exploratory investigation of the process of perspective taking in interpersonal situations". Journal of Relationships Research. 4: e6, 1–12. doi:10.1017/jrr.2013.6.
- Rogers K, Dziobek I, Hassenstab J, Wolf OT, Convit A (April 2007). "Who cares? Revisiting empathy in Asperger syndrome" (PDF). Journal of Autism and Developmental Disorders. 37 (4): 709–15. doi:10.1007/s10803-006-0197-8. PMID 16906462. S2CID 13999363. Archived (PDF) from the original on July 16, 2015.
- Cox CL, Uddin LQ, Di Martino A, Castellanos FX, Milham MP, Kelly C (August 2012). "The balance between feeling and knowing: affective and cognitive empathy are reflected in the brain's intrinsic functional dynamics". Social Cognitive and Affective Neuroscience. 7 (6): 727–37. doi:10.1093/scan/nsr051. PMC 3427869. PMID 21896497.
- Winczewski LA, Bowen JD, Collins NL (March 2016). "Is Empathic Accuracy Enough to Facilitate Responsive Behavior in Dyadic Interaction? Distinguishing Ability From Motivation". Psychological Science. 27 (3): 394–404. doi:10.1177/0956797615624491. PMID 26847609. S2CID 206588127.
- Kanske P, Böckler A, Trautwein FM, Parianen Lesemann FH, Singer T (September 2016). "Are strong empathizers better mentalizers? Evidence for independence and interaction between the routes of social cognition". Social Cognitive and Affective Neuroscience. 11 (9): 1383–92. doi:10.1093/scan/nsw052. PMC 5015801. PMID 27129794.
- Kanske P, Böckler A, Trautwein FM, Singer T (November 2015). "Dissecting the social brain: Introducing the EmpaToM to reveal distinct neural networks and brain-behavior relations for empathy and Theory of Mind". NeuroImage. 122: 6–19. doi:10.1016/j.neuroimage.2015.07.082. PMID 26254589. S2CID 20614006.
- Radzvilavicius AL, Stewart AJ, Plotkin JB (April 2019). "Evolution of empathetic moral evaluation". eLife. 8: e44269. doi:10.7554/eLife.44269. PMC 6488294. PMID 30964002.
- "The Tao of Doing Good (SSIR)". ssir.org. Archived from the original on February 13, 2017. Retrieved February 13, 2017.
- Murphy BA, Lilienfeld SO (August 2019). "Are self-report cognitive empathy ratings valid proxies for cognitive empathy ability? Negligible meta-analytic relations with behavioral task performance". Psychological Assessment. 31 (8): 1062–1072. doi:10.1037/pas0000732. PMID 31120296.
- White TI (2007). In defense of dolphins: the new moral frontier. Malden, MA: Blackwell Pub.
- Sandin J (2007). Bonobos: Encounters in Empathy. Milwaukee: Zoological Society of Milwaukee & The Foundation for Wildlife Conservation, Inc. p. 109. ISBN 978-0-9794151-0-4.
- de Waal FB (2009). The age of empathy: nature's lessons for a kinder society. Harmony Books.
- Ben-Ami Bartal I, Decety J, Mason P (December 2011). "Empathy and pro-social behavior in rats". Science. 334 (6061): 1427–30. Bibcode:2011Sci...334.1427B. doi:10.1126/science.1210789. PMC 3760221. PMID 22158823.
- Langford DJ, Crager SE, Shehzad Z, Smith SB, Sotocinal SG, Levenstadt JS, et al. (June 2006). "Social modulation of pain as evidence for empathy in mice". Science. 312 (5782): 1967–70. Bibcode:2006Sci...312.1967L. doi:10.1126/science.1128322. PMID 16809545. S2CID 26027821.
- de Waal FB (2008). "Putting the altruism back into altruism: the evolution of empathy". Annual Review of Psychology. 59 (1): 279–300. doi:10.1146/annurev.psych.59.103006.093625. PMID 17550343.
- Decety J (August 2011). "The neuroevolution of empathy". Annals of the New York Academy of Sciences. 1231 (1): 35–45. Bibcode:2011NYASA1231...35D. doi:10.1111/j.1749-6632.2011.06027.x. PMID 21651564. S2CID 9895828.
- Decety J, Svetlova M (January 2012). "Putting together phylogenetic and ontogenetic perspectives on empathy". Developmental Cognitive Neuroscience. 2 (1): 1–24. doi:10.1016/j.dcn.2011.05.003. PMC 6987713. PMID 22682726.
- Pfeifer R, Bongard J (October 2006). How the Body Shapes the Way We Think: A New View of Intelligence. MIT Press.
- Rumbaugh DM, Washburn DA (October 2008). Intelligence of apes and other rational beings. Yale University Press.
- Hoffman ML (2000). Empathy and Moral Development. Cambridge: Cambridge University Press.
- Decety J, Meyer M (2008). "From emotion resonance to empathic understanding: a social developmental neuroscience account". Development and Psychopathology. 20 (4): 1053–80. doi:10.1017/S0954579408000503. PMID 18838031. S2CID 8508693.
- Eisenberg N, Spinrad TL, Sadovsky A (2006). "Empathy-related responding in children.". In Killen M, Smetana J (eds.). Handbook of Moral Development. Mahwah, New Jersey: Lawrence Erlbaum Associates. pp. 517–549.
- Falck-Ytter T, Gredebäck G, von Hofsten C (July 2006). "Infants predict other people's action goals". Nature Neuroscience. 9 (7): 878–9. doi:10.1038/nn1729. PMID 16783366. S2CID 2409686.
- Zahn-Waxler C, Radke-Yarrow M (1990). "The origins of empathic concern". Motivation and Emotion. 14 (2): 107–130. doi:10.1007/BF00991639. S2CID 143436918.
- Decety J, Michalska KJ, Akitsuki Y (September 2008). "Who caused the pain? An fMRI investigation of empathy and intentionality in children". Neuropsychologia. 46 (11): 2607–14. doi:10.1016/j.neuropsychologia.2008.05.026. PMID 18573266. S2CID 19428145.
- Brain Scans Show Children Naturally Prone to Empathy Archived January 2, 2009, at the Wayback Machine Newswise. Retrieved July 13, 2008.
- Wimmer H, Perner J (January 1983). "Beliefs about beliefs: representation and constraining function of wrong beliefs in young children's understanding of deception". Cognition. 13 (1): 103–28. doi:10.1016/0010-0277(83)90004-5. PMID 6681741. S2CID 17014009.
- Baron-Cohen S, Leslie AM, Frith U (October 1985). "Does the autistic child have a "theory of mind"?". Cognition. 21 (1): 37–46. doi:10.1016/0010-0277(85)90022-8. PMID 2934210. S2CID 14955234.
- Leslie AM, Frith U (November 1988). "Autistic children's understanding of seeing, knowing and believing". British Journal of Developmental Psychology. 6 (4): 315–324. doi:10.1111/j.2044-835X.1988.tb01104.x.
- Olsen DP (September 2001). "Empathetic maturity: theory of moral point of view in clinical relations". ANS. Advances in Nursing Science. 24 (1): 36–46. doi:10.1097/00012272-200109000-00006. PMID 11554532. Archived from the original on September 7, 2009.
- Davis MH (1983). "Measuring Individual Differences in Empathy: Evidence for a Multidimensional Approach". Journal of Personality and Social Psychology. 44 (1): 113–26. doi:10.1037/0022-3518.104.22.168.
- Haas BW, Brook M, Remillard L, Ishak A, Anderson IW, Filkowski MM (2015). "I know how you feel: the warm-altruistic personality profile and the empathic brain". PLOS ONE. 10 (3): e0120639. Bibcode:2015PLoSO..1020639H. doi:10.1371/journal.pone.0120639. PMC 4359130. PMID 25769028.
- Joseph DL, Newman DA (January 2010). "Emotional intelligence: an integrative meta-analysis and cascading model". The Journal of Applied Psychology. 95 (1): 54–78. doi:10.1037/a0017286. PMID 20085406.
- Christov-Moore L, Simpson EA, Coudé G, Grigaityte K, Iacoboni M, Ferrari PF (October 2014). "Empathy: gender effects in brain and behavior". Neuroscience and Biobehavioral Reviews. 46 Pt 4 (4): 604–27. doi:10.1016/j.neubiorev.2014.09.001. PMC 5110041. PMID 25236781.
- Cheng YW, Tzeng OJ, Decety J, Imada T, Hsieh JC (July 2006). "Gender differences in the human mirror system: a magnetoencephalography study". NeuroReport. 17 (11): 1115–9. doi:10.1097/01.wnr.0000223393.59328.21. PMID 16837838. S2CID 18811017.
- Cheng Y, Decety J, Lin CP, Hsieh JC, Hung D, Tzeng OJ (June 2007). "Sex differences in spinal excitability during observation of bipedal locomotion". NeuroReport. 18 (9): 887–90. doi:10.1097/WNR.0b013e3280ebb486. PMID 17515795. S2CID 16295878.
- Yang CY, Decety J, Lee S, Chen C, Cheng Y (January 2009). "Gender differences in the mu rhythm during empathy for pain: an electroencephalographic study". Brain Research. 1251: 176–84. doi:10.1016/j.brainres.2008.11.062. PMID 19083993. S2CID 40145972.
- Cheng Y, Lee PL, Yang CY, Lin CP, Hung D, Decety J (May 2008). Rustichini A (ed.). "Gender differences in the mu rhythm of the human mirror-neuron system". PLOS ONE. 3 (5): e2113. Bibcode:2008PLoSO...3.2113C. doi:10.1371/journal.pone.0002113. PMC 2361218. PMID 18461176.
- Proverbio AM, Riva F, Zani A (April 2010). "When neurons do not mirror the agent's intentions: sex differences in neural coding of goal-directed actions". Neuropsychologia. 48 (5): 1454–63. doi:10.1016/j.neuropsychologia.2010.01.015. PMID 20117123. S2CID 207236007. Archived from the original on September 8, 2017.
- Ickes W (1997). Empathic accuracy. New York: The Guilford Press.
- Klein K, Hodges S (2001). "Gender Differences, Motivation, and Empathic Accuracy: When it Pays to Understand". Personality and Social Psychology Bulletin. 27 (6): 720–730. doi:10.1177/0146167201276007. S2CID 14361887.
- Kret ME, De Gelder B (June 2012). "A review on sex differences in processing emotional signals". Neuropsychologia. 50 (7): 1211–21. doi:10.1016/j.neuropsychologia.2011.12.022. PMID 22245006. S2CID 11695245.
- "Meta-analysis reveals adult female superiority in "Reading the Mind in the Eyes Test"". ResearchGate. Archived from the original on December 8, 2015. Retrieved December 1, 2015.
- Thompson AE, Voyer D (January 1, 2014). "Sex differences in the ability to recognise non-verbal displays of emotion: a meta-analysis". Cognition & Emotion. 28 (7): 1164–95. doi:10.1080/02699931.2013.875889. PMID 24400860. S2CID 5402395.
- Singer T, Seymour B, O'Doherty JP, Stephan KE, Dolan RJ, Frith CD (January 2006). "Empathic neural responses are modulated by the perceived fairness of others". Nature. 439 (7075): 466–9. Bibcode:2006Natur.439..466S. doi:10.1038/nature04271. PMC 2636868. PMID 16421576.
- Christov-Moore L, Simpson EA, Coudé G, Grigaityte K, Iacoboni M, Ferrari PF (October 2014). "Empathy: gender effects in brain and behavior". Neuroscience and Biobehavioral Reviews. 46 Pt 4: 604–27. doi:10.1016/j.neubiorev.2014.09.001. PMC 5110041. PMID 25236781. Archived from the original on August 14, 2017.
- Tisot CM (2003). Environmental contributions to empathy development in young children (PhD thesis). Temple University. OCLC 56772472.
- Leigh R, Oishi K, Hsu J, Lindquist M, Gottesman RF, Jarso S, et al. (August 2013). "Acute lesions that impair affective empathy". Brain. 136 (Pt 8): 2539–49. doi:10.1093/brain/awt177. PMC 3722353. PMID 23824490.
- de Sousa A, McDonald S, Rushby J (July 1, 2012). "Changes in emotional empathy, affective responsivity, and behavior following severe traumatic brain injury". Journal of Clinical and Experimental Neuropsychology. 34 (6): 606–23. doi:10.1080/13803395.2012.667067. PMID 22435955. S2CID 44373955.
- de Sousa A, McDonald S, Rushby J, Li S, Dimoska A, James C (October 2010). "Why don't you feel how I feel? Insight into the absence of empathy after severe traumatic brain injury". Neuropsychologia. 48 (12): 3585–95. doi:10.1016/j.neuropsychologia.2010.08.008. PMID 20713073. S2CID 25275909.
- Hoffman M (2000). Empathy and Moral Development: Implications for Caring and Justice. Cambridge University Press. p. 101. ISBN 9780511805851.
- Vitaglione GD, Barnett MA (December 2003). "Assessing a new dimension of empathy: Empathic anger as a predictor of helping and punishing desires". Motivation and Emotion. 27 (4): 301–25. doi:10.1023/A:1026231622102. S2CID 143276552. Archived from the original on May 14, 2011.
- Mohr P, Howells K, Gerace A, Day A, Wharton M (2007). "The role of perspective taking in anger arousal". Personality and Individual Differences. 43 (3): 507–517. doi:10.1016/j.paid.2006.12.019. hdl:2328/36189.
- Day A, Mohr P, Howells K, Gerace A, Lim L (June 2012). "The role of empathy in anger arousal in violent offenders and university students" (PDF). International Journal of Offender Therapy and Comparative Criminology. 56 (4): 599–613. doi:10.1177/0306624X11431061. hdl:2328/35889. PMID 22158909. S2CID 46542250.
- Bloom P (January 2017). "Empathy and Its Discontents". Trends in Cognitive Sciences. 21 (1): 24–31. doi:10.1016/j.tics.2016.11.004. PMID 27916513. S2CID 3863278.
- Bartlett MY, DeSteno D (April 2006). "Gratitude and prosocial behavior: helping when it costs you". Psychological Science. 17 (4): 319–25. doi:10.1111/j.1467-9280.2006.01705.x. PMID 16623689. S2CID 6491264.
- Bohns, Vanessa K.; Flynn, Francis J. (January 1, 2021). "Empathy and expectations of others' willingness to help". Personality and Individual Differences. 168: 110368. doi:10.1016/j.paid.2020.110368. ISSN 0191-8869.
- Marjanovic Z, Struthers G (August 8, 2011). "Who Helps Natural-Disaster Victims? Assessment of Trait and Situational Predictors" (PDF). Analyses of Social Issues and Public Policy. 12 (1): 245–267. doi:10.1111/j.1530-2415.2011.01262.x.
- Einolf C (March 13, 2012). "Is Cognitive Empathy More Important than Affective Empathy? A Response to "Who Helps Natural-Disaster Victims?"" (PDF). Analyses of Social Issues and Public Policy. 12 (1): 268–271. doi:10.1111/j.1530-2415.2012.01281.x. Retrieved May 30, 2014.
- Dovidio JF, Allen JL, Schroeder DA (1990). "Specificity of empathy-induced helping: Evidence for altruistic motivation". Journal of Personality and Social Psychology. 59 (2): 249–260. doi:10.1037/0022-3522.214.171.124.
- Davis MH, Luce C, Kraus SJ (September 1994). "The heritability of characteristics associated with dispositional empathy". Journal of Personality. 62 (3): 369–91. doi:10.1111/j.1467-6494.1994.tb00302.x. PMID 7965564.
- Todd RM, Anderson AK (November 2009). "The neurogenetics of remembering emotions past". Proceedings of the National Academy of Sciences of the United States of America. 106 (45): 18881–2. Bibcode:2009PNAS..10618881T. doi:10.1073/pnas.0910755106. PMC 2776429. PMID 19889977.
- Todd RM, Ehlers MR, Müller DJ, Robertson A, Palombo DJ, Freeman N, et al. (April 2015). "Neurogenetic variations in norepinephrine availability enhance perceptual vividness". The Journal of Neuroscience. 35 (16): 6506–16. doi:10.1523/JNEUROSCI.4489-14.2015. PMC 6605217. PMID 25904801.
- Naudts KH, Azevedo RT, David AS, van Heeringen K, Gibbs AA (September 2012). "Epistasis between 5-HTTLPR and ADRA2B polymorphisms influences attentional bias for emotional information in healthy volunteers". The International Journal of Neuropsychopharmacology. 15 (8): 1027–36. doi:10.1017/S1461145711001295. PMID 21854681. Pdf. Archived October 23, 2017, at the Wayback Machine
- Saphire-Bernstein S, Way BM, Kim HS, Sherman DK, Taylor SE (September 2011). "Oxytocin receptor gene (OXTR) is related to psychological resources". Proceedings of the National Academy of Sciences of the United States of America. 108 (37): 15118–22. Bibcode:2011PNAS..10815118S. doi:10.1073/pnas.1113137108. PMC 3174632. PMID 21896752.
- Warrier V, Grasby KL, Uzefovsky F, Toro R, Smith P, Chakrabarti B, et al. (June 2018). "Genome-wide meta-analysis of cognitive empathy: heritability, and correlates with sex, neuropsychiatric conditions and cognition". Molecular Psychiatry. 23 (6): 1402–1409. doi:10.1038/mp.2017.122. PMC 5656177. PMID 28584286.
- Keen S (2006). "A Theory of Narrative Empathy". Narrative. 14 (3): 207–36. doi:10.1353/nar.2006.0015. S2CID 52228354.
- Gazzola V, Aziz-Zadeh L, Keysers C (September 2006). "Empathy and the somatotopic auditory mirror system in humans". Current Biology. 16 (18): 1824–9. doi:10.1016/j.cub.2006.07.072. PMID 16979560. S2CID 5223812.
- Fan Y, Duncan NW, de Greck M, Northoff G (January 2011). "Is there a core neural network in empathy? An fMRI based quantitative meta-analysis". Neuroscience and Biobehavioral Reviews. 35 (3): 903–11. doi:10.1016/j.neubiorev.2010.10.009. PMID 20974173. S2CID 20965340.
- Keysers C, Gazzola V (December 2009). "Expanding the mirror: vicarious activity for actions, emotions, and sensations". Current Opinion in Neurobiology. 19 (6): 666–71. doi:10.1016/j.conb.2009.10.006. PMID 19880311. S2CID 2692907.
- Decety J, Moriguchi Y (November 2007). "The empathic brain and its dysfunction in psychiatric populations: implications for intervention across different clinical conditions". BioPsychoSocial Medicine. 1 (1): 22. doi:10.1186/1751-0759-1-22. PMC 2206036. PMID 18021398.
- Wicker B, Keysers C, Plailly J, Royet JP, Gallese V, Rizzolatti G (October 2003). "Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust". Neuron. 40 (3): 655–64. doi:10.1016/S0896-6273(03)00679-2. PMID 14642287.
- Keysers C, Wicker B, Gazzola V, Anton JL, Fogassi L, Gallese V (April 2004). "A touching sight: SII/PV activation during the observation and experience of touch". Neuron. 42 (2): 335–46. doi:10.1016/S0896-6273(04)00156-4. PMID 15091347.
- Blakemore SJ, Bristow D, Bird G, Frith C, Ward J (July 2005). "Somatosensory activations during the observation of touch and a case of vision-touch synaesthesia". Brain. 128 (Pt 7): 1571–83. doi:10.1093/brain/awh500. PMID 15817510.
- Morrison I, Lloyd D, di Pellegrino G, Roberts N (June 2004). "Vicarious responses to pain in anterior cingulate cortex: is empathy a multisensory issue?". Cognitive, Affective & Behavioral Neuroscience. 4 (2): 270–8. doi:10.3758/cabn.4.2.270. PMID 15460933.
- Jackson PL, Meltzoff AN, Decety J (February 2005). "How do we perceive the pain of others? A window into the neural processes involved in empathy". NeuroImage. 24 (3): 771–9. CiteSeerX 10.1.1.391.8127. doi:10.1016/j.neuroimage.2004.09.006. PMID 15652312. S2CID 10691796.
- Singer T, Seymour B, O'Doherty J, Kaube H, Dolan RJ, Frith CD (February 2004). "Empathy for pain involves the affective but not sensory components of pain". Science. 303 (5661): 1157–62. Bibcode:2004Sci...303.1157S. doi:10.1126/science.1093535. hdl:21.11116/0000-0001-A020-5. PMID 14976305. S2CID 14727944.
- Preston SD, de Waal FB (February 2002). "Empathy: Its ultimate and proximate bases". The Behavioral and Brain Sciences. 25 (1): 1–20, discussion 20–71. doi:10.1017/s0140525x02000018. PMID 12625087.
- Gutsell JN, Inzlicht M (2010). "Empathy constrained: Prejudice predicts reduced mental simulation of actions during observation of outgroups". Journal of Experimental Social Psychology. 46 (5): 841–845. doi:10.1016/j.jesp.2010.03.011.
- Jack AI, Dawson AJ, Begany KL, Leckie RL, Barry KP, Ciccia AH, Snyder AZ (February 2013). "fMRI reveals reciprocal inhibition between social and physical cognitive domains". NeuroImage. 66: 385–401. doi:10.1016/j.neuroimage.2012.10.061. PMC 3602121. PMID 23110882.
- Case Western Reserve University (October 30, 2012). "Empathy represses analytic thought, and vice versa: Brain physiology limits simultaneous use of both networks". Science Daily. Archived from the original on October 24, 2017.
- Eres R, Decety J, Louis WR, Molenberghs P (August 2015). "Individual differences in local gray matter density are associated with differences in affective and cognitive empathy". NeuroImage. 117: 305–10. doi:10.1016/j.neuroimage.2015.05.038. PMID 26008886. S2CID 15373798. Archived from the original on September 8, 2017.
- Thomas B (November 6, 2012). "What's so special about mirror neurons? (guest blog)". Scientific American. New York. Archived from the original on May 21, 2015.
- Marsh J (March 29, 2012). "Do mirror neurons give us empathy?". Greater Good Magazine. Greater Good Science Center. Archived from the original on October 24, 2017.
- See also:
- Ramachandran VS (2011). The tell-tale brain: a neuroscientist's quest for what makes us human. New York: W.W. Norton. ISBN 9780393077827.
- See also:
- Phoebe Caldwell, "Letters", London Times, Dec 30 2005
- Baron-Cohen S (2011). Zero Degrees of Empathy: A New Theory of Human Cruelty. Penguin UK. ISBN 9780713997910. Retrieved August 8, 2013.
- Bora E, Gökçen S, Veznedaroglu B (July 2008). "Empathic abilities in people with schizophrenia". Psychiatry Research. 160 (1): 23–9. doi:10.1016/j.psychres.2007.05.017. PMID 18514324. S2CID 20896840.
- Decety J, Michalska KJ, Akitsuki Y, Lahey BB (February 2009). "Atypical empathic responses in adolescents with aggressive conduct disorder: a functional MRI investigation". Biological Psychology. 80 (2): 203–11. doi:10.1016/j.biopsycho.2008.09.004. PMC 2819310. PMID 18940230.
- Grossman D (1996). On Killing: The Psychological Cost of Learning to Kill in War and Society. Back Bay Books. ISBN 978-0-316-33000-8.
- Simons D, Wurtele SK, Heil P (December 1, 2002). "Childhood Victimization and Lack of Empathy as Predictors of Sexual Offending Against Women and Children". Journal of Interpersonal Violence. 17 (12): 1291–1307. doi:10.1177/088626002237857. ISSN 0886-2605. S2CID 145525384.
- Hill E, Berthoz S, Frith U (April 2004). "Brief report: cognitive processing of own emotions in individuals with autistic spectrum disorder and in their relatives" (PDF). Journal of Autism and Developmental Disorders. 34 (2): 229–35. doi:10.1023/B:JADD.0000022613.41399.14. PMID 15162941. S2CID 776386. Archived from the original (PDF) on June 19, 2013.
- Taylor, G.J. and Bagby, R.M & Parker, J.D.A. Disorders of Affect Regulation: Alexithymia in Medical and Psychiatric Illness. (1997) Cambridge Uni. Press.
- Sifneos PE (1973). "The prevalence of 'alexithymic' characteristics in psychosomatic patients". Psychotherapy and Psychosomatics. 22 (2): 255–62. doi:10.1159/000286529. PMID 4770536.
- Moriguchi Y, Decety J, Ohnishi T, Maeda M, Mori T, Nemoto K, Matsuda H, Komaki G (September 2007). "Empathy and judging other's pain: an fMRI study of alexithymia". Cerebral Cortex. New York, N.Y. 17 (9): 2223–34. doi:10.1093/cercor/bhl130. PMID 17150987.
- Brackett MA, Warner RM, Bosco JS (2005). "Emotional Intelligence and Relationship Quality Among Couples" (PDF). Personal Relationships. 12 (2): 197–212. CiteSeerX 10.1.1.385.3719. doi:10.1111/j.1350-4126.2005.00111.x. Archived from the original (PDF) on September 27, 2007.
- Yelsma P, Marrow S (January 2003). "An examination of couples' difficulties with emotional expressiveness and their marital satisfaction". The Journal of Family Communication. 3 (1): 41–62. doi:10.1207/S15327698JFC0301_03. S2CID 144200365.
- Allen R, Heaton P (April 2010). "Autism, music, and the therapeutic potential of music in alexithymia" (PDF). Music Perception. 27 (4): 251–61. doi:10.1525/mp.2010.27.4.251.
- Bird G, Silani G, Brindley R, White S, Frith U, Singer T (May 2010). "Empathic brain responses in insula are modulated by levels of alexithymia but not autism". Brain. 133 (Pt 5): 1515–25. doi:10.1093/brain/awq060. PMC 2859151. PMID 20371509.
- Dapretto M, Davies MS, Pfeifer JH, Scott AA, Sigman M, Bookheimer SY, Iacoboni M (January 2006). "Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders". Nature Neuroscience. 9 (1): 28–30. doi:10.1038/nn1611. PMC 3713227. PMID 16327784.
- Oberman LM, Hubbard EM, McCleery JP, Altschuler EL, Ramachandran VS, Pineda JA (July 2005). "EEG evidence for mirror neuron dysfunction in autism spectrum disorders". Brain Research. Cognitive Brain Research. 24 (2): 190–8. doi:10.1016/j.cogbrainres.2005.01.014. PMID 15993757.
- Gillberg CL (July 1992). "The Emanuel Miller Memorial Lecture 1991. Autism and autistic-like conditions: subclasses among disorders of empathy". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 33 (5): 813–42. doi:10.1111/j.1469-7610.1992.tb01959.x. PMID 1634591.
- Roeyers H, Buysse A, Ponnet K, Pichal B (February 2001). "Advancing advanced mind-reading tests: empathic accuracy in adults with a pervasive developmental disorder". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 42 (2): 271–8. doi:10.1017/s0021963001006680. PMID 11280423.
- Hamilton AF (August 2009). "Goals, intentions and mental states: challenges for theories of autism". Journal of Child Psychology and Psychiatry, and Allied Disciplines. 50 (8): 881–92. CiteSeerX 10.1.1.621.6275. doi:10.1111/j.1469-7610.2009.02098.x. PMID 19508497.
- McDonald, Nicole M., and Daniel S. Messinger. "The development of empathy: How, when, and why." Moral Behavior and Free Will: A Neurobiological and Philosophical Approach (2011): 341–368.
- Baron-Cohen S (March 2009). "Autism: the empathizing-systemizing (E-S) theory". Annals of the New York Academy of Sciences. 1156 (The Year in Cognitive Neuroscience 2009): 68–80. Bibcode:2009NYASA1156...68B. doi:10.1111/j.1749-6632.2009.04467.x. PMID 19338503. S2CID 1440395.
- Baron-Cohen S, Knickmeyer RC, Belmonte MK (November 2005). "Sex differences in the brain: implications for explaining autism" (PDF). Science. 310 (5749): 819–23. Bibcode:2005Sci...310..819B. doi:10.1126/science.1115455. PMID 16272115. S2CID 44330420. Archived (PDF) from the original on July 19, 2018. Retrieved November 21, 2018. Pdf. Archived May 17, 2017, at the Wayback Machine
- Extracted in:
- Kessel C (November 15, 2011). "Half of Women Do Not Have "Female Brains" (blog)". mathedck.wordpress.com. Mathematics and Education via WordPress. Archived from the original on August 16, 2017.
- Extracted in:
- Auyeung B, Baron-Cohen S, Ashwin E, Knickmeyer R, Taylor K, Hackett G (February 2009). "Fetal testosterone and autistic traits" (PDF). British Journal of Psychology. 100 (Pt 1): 1–22. doi:10.1348/000712608X311731. hdl:20.500.11820/3012e64e-48e9-46fb-b47e-8a8a7853b4de. PMID 18547459. Archived (PDF) from the original on November 22, 2018. Retrieved November 21, 2018. Pdf. Archived August 9, 2017, at the Wayback Machine
- "Testosterone may reduce empathy by reducing brain connectivity". PsyPost. March 31, 2016. Archived from the original on April 2, 2016. Retrieved April 3, 2016.
- "Autism 'affects male and female brains differently'". BBC News. August 9, 2013. Archived from the original on August 9, 2013. Retrieved August 9, 2013.
- Pondé MP, Rousseau C (May 2013). "Immigrant Children with Autism Spectrum Disorder: The Relationship between the Perspective of the Professionals and the Parents' Point of View". Journal of the Canadian Academy of Child and Adolescent Psychiatry. 22 (2): 131–8. PMC 3647629. PMID 23667359.
- Dealberto MJ (May 2011). "Prevalence of autism according to maternal immigrant status and ethnic origin". Acta Psychiatrica Scandinavica. 123 (5): 339–48. doi:10.1111/j.1600-0447.2010.01662.x. PMID 21219265. S2CID 22927622.
- Cleckly HC (1941). The Mask of Sanity: An Attempt to Reinterpret the So-Called Psychopathic Personality. St. Louis, MO: Mosby.
- Hare RD (1991). The Hare Psychopathy Checklist-Revised. Toronto: Multi Health Systems.
- Skeem JL, Polaschek DL, Patrick CJ, Lilienfeld SO (December 2011). "Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy". Psychological Science in the Public Interest. 12 (3): 95–162. doi:10.1177/1529100611426706. PMID 26167886. S2CID 8521465. Archived from the original on February 22, 2016.
- Patrick C (2005). Handbook of Psychopathy. Guilford Press. ISBN 978-1-60623-804-2.[page needed]
- Andrade J (March 23, 2009). Handbook of Violence Risk Assessment and Treatment: New Approaches for Mental Health Professionals. New York, NY: Springer Publishing Company. ISBN 978-0-8261-9904-1. Retrieved January 5, 2014.
- WHO (2010) ICD-10: Clinical descriptions and diagnostic guidelines: Disorders of adult personality and behavior Archived March 23, 2014, at the Wayback Machine
- Decety J, Skelly L (2013). "The neural underpinnings of the experience of empathy: Lessons for psychopathy.". In Ochsner KN, Kosslyn SM (eds.). The Oxford Handbook of Cognitive Neuroscience. 2. New York: Oxford University Press. pp. 228–243.
- Kiehl KA (June 2006). "A cognitive neuroscience perspective on psychopathy: evidence for paralimbic system dysfunction". Psychiatry Research. 142 (2–3): 107–28. doi:10.1016/j.psychres.2005.09.013. PMC 2765815. PMID 16712954.
- Blair RJ (October 1995). "A cognitive developmental approach to morality: investigating the psychopath" (PDF). Cognition. 57 (1): 1–29. doi:10.1016/0010-0277(95)00676-p. PMID 7587017. S2CID 16366546. Archived from the original (PDF) on July 21, 2013.
- Blair RJ (January 2003). "Neurobiological basis of psychopathy". The British Journal of Psychiatry. 182: 5–7. doi:10.1192/bjp.182.1.5. PMID 12509310.
- "Psychopathy" by Quinton 2006
- Blair RJ, Colledge E, Mitchell DG (December 2001). "Somatic markers and response reversal: is there orbitofrontal cortex dysfunction in boys with psychopathic tendencies?". Journal of Abnormal Child Psychology. 29 (6): 499–511. doi:10.1023/A:1012277125119. PMID 11761284. S2CID 1951812.
- Blair RJ, Mitchell DG, Richell RA, Kelly S, Leonard A, Newman C, Scott SK (November 2002). "Turning a deaf ear to fear: impaired recognition of vocal affect in psychopathic individuals". Journal of Abnormal Psychology. 111 (4): 682–6. doi:10.1037/0021-843x.111.4.682. PMID 12428783.
- Stevens D, Charman T, Blair RJ (June 2001). "Recognition of emotion in facial expressions and vocal tones in children with psychopathic tendencies". The Journal of Genetic Psychology. 162 (2): 201–11. doi:10.1080/00221320109597961. PMID 11432605. S2CID 42581610.
- Decety J, Skelly L, Yoder KJ, Kiehl KA (February 2014). "Neural processing of dynamic emotional facial expressions in psychopaths". Social Neuroscience. 9 (1): 36–49. doi:10.1080/17470919.2013.866905. PMC 3970241. PMID 24359488.
- Dawel A, O'Kearney R, McKone E, Palermo R (November 2012). "Not just fear and sadness: meta-analytic evidence of pervasive emotion recognition deficits for facial and vocal expressions in psychopathy". Neuroscience and Biobehavioral Reviews. 36 (10): 2288–304. doi:10.1016/j.neubiorev.2012.08.006. hdl:1885/19765. PMID 22944264. S2CID 2596760.
- Hogenboom M (July 25, 2013). "Psychopathic criminals have empathy switch". BBC News. Archived from the original on July 27, 2013. Retrieved July 28, 2013.
- Lewis T (July 24, 2013). "Cold-hearted Psychopaths Feel Empathy Too". Live Science.
- Decety J, Skelly LR, Kiehl KA (June 2013). "Brain response to empathy-eliciting scenarios involving pain in incarcerated individuals with psychopathy". JAMA Psychiatry. 70 (6): 638–45. doi:10.1001/jamapsychiatry.2013.27. PMC 3914759. PMID 23615636.
- Decety J, Chen C, Harenski C, Kiehl KA (2013). "An fMRI study of affective perspective taking in individuals with psychopathy: imagining another in pain does not evoke empathy". Frontiers in Human Neuroscience. 7: 489. doi:10.3389/fnhum.2013.00489. PMC 3782696. PMID 24093010.
- Mullins-Nelson JL, Salekin RT, Anne-Marie RT, Leistico RL (2006). "Psychopathy, Empathy, and Perspective -Taking Ability in a Community Sample: Implications for the Successful Psychopathy Concept". International Journal of Forensic Mental Health. 5 (2): 133–149. doi:10.1080/14999013.2006.10471238. S2CID 143760402.
- Winter K, Spengler S, Bermpohl F, Singer T, Kanske P (April 2017). "Social cognition in aggressive offenders: Impaired empathy, but intact theory of mind". Scientific Reports. 7 (1): 670. Bibcode:2017NatSR...7..670W. doi:10.1038/s41598-017-00745-0. PMC 5429629. PMID 28386118.
- Barrett LF (2017). How Emotions are Made: The Secret Life of the Brain.
- Atkins D (2014). The Role of Culture in Empathy: The Consequences and Explanations of Cultural Differences in Empathy at the Affective and Cognitive Levels.
- Minzenberg MJ, Fisher-Irving M, Poole JH, Vinogradov S (February 2006). "Reduced Self-Referential Source Memory Performance is Associated with Interpersonal Dysfunction in Borderline Personality Disorder" (PDF). Journal of Personality Disorders. 20 (1): 42–54. doi:10.1521/pedi.2006.20.1.42. PMID 16563078. Archived (PDF) from the original on May 16, 2013.
- Harari H, Shamay-Tsoory SG, Ravid M, Levkovitz Y (February 2010). "Double dissociation between cognitive and affective empathy in borderline personality disorder". Psychiatry Research. 175 (3): 277–9. doi:10.1016/j.psychres.2009.03.002. PMID 20045198. S2CID 27303466.
- Wagner AW, Linehan MM (1999). "Facial expression recognition ability among women with borderline personality disorder: implications for emotion regulation?". Journal of Personality Disorders. 13 (4): 329–44. doi:10.1521/pedi.19126.96.36.1999. PMID 10633314.
- Lynch TR, Rosenthal MZ, Kosson DS, Cheavens JS, Lejuez CW, Blair RJ (November 2006). "Heightened sensitivity to facial expressions of emotion in borderline personality disorder". Emotion. 6 (4): 647–655. doi:10.1037/1528-35188.8.131.527. PMID 17144755.
- Narcissistic personality disorder Archived January 18, 2013, at archive.today – Diagnostic and Statistical Manual of Mental Disorders Fourth edition Text Revision (DSM-IV-TR) American Psychiatric Association (2000)
- "Schizoid personality disorder". Diagnostic and Statistical Manual of Mental Disorders (Fourth (DSM-IV-TR) ed.). American Psychiatric Association. 2000. Archived from the original on January 18, 2013.
- Guntrip H (1969). Schizoid Phenomena, Object-Relations, and The Self. New York: International Universities Press.
- Ralph Klein- pp. 13–23 in Disorders of the Self: New Therapeutic Horizons, Brunner/Mazel (1995).
- Shamay-Tsoory S, Harari H, Szepsenwol O, Levkovitz Y (2009). "Neuropsychological evidence of impaired cognitive empathy in euthymic bipolar disorder". The Journal of Neuropsychiatry and Clinical Neurosciences. 21 (1): 59–67. doi:10.1176/jnp.2009.21.1.59. PMID 19359453.
- McAlinden M (2014). "Can teachers know learners' minds? Teacher empathy and learner body language in English language teaching". In Dunworth K, Zhang G (eds.). Critical perspectives on language education: Australia and the Asia Pacific. Cham, Switzerland: Springer. pp. 71–100. ISBN 9783319061856.
- Tettegah S, Anderson CJ (2007). "Pre-service teachers' empathy and cognitions: Statistical analysis of text data by graphical models". Contemporary Educational Psychology. 32 (1): 48–82. doi:10.1016/j.cedpsych.2006.10.010.
- Cornelius-White JH, Harbaugh AP (2010). Learner-Centered Instruction. Thousand Oaks, CA, London, New Delhi, Singapore: SAGE Publications.
- Rogers CR, Lyon Jr HC, Tausch R (2013). On Becoming an Effective Teacher - Person-centered teaching, psychology, philosophy, and dialogues. London: Routledge. ISBN 978-0-415-81698-4.
- Keillor RM (1999). Empathy and intergroup relations: a study in cross-cultural relationship building (PhD thesis). Arizona State University. OCLC 44999879.
- William Weeks, Paul Pedersen, & Richard Brislin (1979). A Manual of Structured Experiences for Cultural Learning. La Grange Park, IL: Intercultural Network.
- Divine Word College (2016), Bachelor of Arts in Intercultural Studies program, Epworth, IA.
- Sue Brown and Joyce Osland (2016), Developing Cultural Diversity Competency. University of Portland.
- Batson CD, Moran T (1999). "Empathy-induced altruism in a prisoner's dilemma". Eur. J. Soc. Psychol. 29 (7): 909–924. doi:10.1002/(sici)1099-0992(199911)29:7<909::aid-ejsp965>3.0.co;2-l.
- Snyder CR, Lopez SJ, eds. (2009). Oxford Handbook of Positive Psychology (Second ed.). Oxford: Oxford University Press. pp. 243–44.
- Lopez SJ, Pedrotti JT, Snyder CR (2011). Positive Psychology: The Scientific and Practical Explorations of Human Strengths (Second ed.). Los Angeles: SAGE. pp. 267–75.
- "Empathy". plato.stanford.edu. March 31, 2008. Retrieved August 29, 2012.
- "Does Empathy Have a Dark Side?".
- Eisenberg N, Miller PA (January 1987). "The relation of empathy to prosocial and related behaviors". Psychological Bulletin. 101 (1): 91–119. doi:10.1037/0033-2909.101.1.91. PMID 3562705.
- Bjorkqvist K, Osterman K, Kaukiainen A (2000). "Social intelligence - empathy = aggression?". Aggression and Violent Behavior. 5 (2): 191–200. doi:10.1016/s1359-1789(98)00029-9.
- Geer JH, Estupinan LA, Manguno-Mire GM (2000). "Empathy, social skills, and other relevant cognitive processes in rapists and child molesters". Aggression and Violent Behavior. 5 (1): 99–126. doi:10.1016/s1359-1789(98)00011-1.
- Segal SA, Gerdes KE, Lietz CA (2017). Assessing Empathy. Columbia University Press. pp. 79–81. ISBN 978-0-231-54388-0.
- Levenson RW, Ruef AM (1997). "Physiological aspects of emotional knowledge and rapport.". In Ickes WJ (ed.). Empathic Accuracy. New York, NY: The Guilford Press. pp. 44–72. ISBN 978-1-57230-161-0.
- Hoffman (2000), p. 62
- Hein G, Silani G, Preuschoff K, Batson CD, Singer T (October 2010). "Neural responses to ingroup and outgroup members' suffering predict individual differences in costly helping". Neuron. 68 (1): 149–60. doi:10.1016/j.neuron.2010.09.003. PMID 20920798.
- Goleman D (2005). Emotional intelligence (in Danish). New York: Bantam Books. ISBN 978-0-553-38371-3. OCLC 61770783.
- Weiner IB, Craighead WE (2010). The Corsini Encyclopedia of Psychology. John Wiley & Sons. p. 810. ISBN 978-0-470-17026-7.
- Gerace A, Day A, Casey S, Mohr P (2015). "Perspective taking and empathy: Does having similar past experience to another person make it easier to take their perspective?" (PDF). Journal of Relationships Research. 6: e10, 1–14. doi:10.1017/jrr.2015.6. hdl:2328/35813. S2CID 146270695.
- Hodges SD, Kiel KJ, Kramer AD, Veach D, Villanueva BR (March 2010). "Giving birth to empathy: the effects of similar experience on empathic accuracy, empathic concern, and perceived empathy". Personality & Social Psychology Bulletin. 36 (3): 398–409. doi:10.1177/0146167209350326. PMID 19875825. S2CID 23104368.
- Wyschogrod E (February 1981). "Empathy and sympathy as tactile encounter". The Journal of Medicine and Philosophy. 6 (1): 25–43. doi:10.1093/jmp/6.1.25. PMID 7229562.
- Solon O (July 12, 2012). "Compassion over empathy could help prevent emotional burnout". Wired UK. Archived from the original on May 15, 2016.
- Klimecki O, Singer T (2012). "Empathic distress fatigue rather than compassion fatigue? Integrating findings from empathy research in psychology and social neuroscience." (PDF). In Oakley B, Knafo A, Madhavan G, Wilson DS (eds.). Pathological Altruism. USA: Oxford University Press. pp. 368–383. ISBN 978-0-19-973857-1.
- Tone EB, Tully EC (November 2014). "Empathy as a "risky strength": a multilevel examination of empathy and risk for internalizing disorders". Development and Psychopathology. 26 (4 Pt 2): 1547–65. doi:10.1017/S0954579414001199. PMC 4340688. PMID 25422978.
- King I (2008). How to Make Good Decisions and Be Right All the Time: Solving the Riddle of Right and Wrong. ISBN 978-1-84706-347-2.
- King I (October 16, 2008). How to Make Good Decisions and Be Right All the Time. Continuum. p. 74. ISBN 978-1-84706-347-2. Retrieved August 28, 2013.
Empathy is special, because it always and automatically has the characteristics of right and wrong ... Something rooted in empathy must have more of the essence of good about it than something which is not.
- King I (2008). How to Make Good Decisions and Be Right All the Time. London: Continuum. p. 227. ISBN 978-1-84706-347-2.
- A Buddhist account of Iain King's ideas is set out in "Iain King – Ethics". Global Oneness. Archived from the original on October 20, 2012.
- Publishers Weekly state that "King is even able to formulate a credible rule that tells us when to lie" here. Archived November 17, 2015, at the Wayback Machine
- The Ethics of Care and Empathy, Michael Slote, Oxford University Press, 2007
- Empathy in the Context of Philosophy, Lou Agosta, Palgrave/Macmillan, 2010
- Jenkins, K. (1991) Re-thinking History London: Routledge
- Pozzi, G. (1976) Prefazione 6. L'elemento storico e politico -sociale, in G.B. Marino, L'Adone Milano
- Aronson E, Wilson TD, Akert R (2007). Social Psychology (6th ed.). Prentice Hall. ISBN 978-0-13-238245-8.
- "Wired To Care". wiredtocare.com. Archived from the original on December 10, 2008.
- Miyashiro MR (2011). The Empathy Factor: Your Competitive Advantage for Personal, Team, and Business Success. Puddledancer Press. p. 256. ISBN 978-1-892005-25-0.
- Dowden C (June 21, 2013). "Forget ethics training: Focus on empathy". The National Post. Archived from the original on July 24, 2013.
- "The Importance of Empathy in the Workplace". Center for Creative Leadership. Retrieved November 10, 2020.
- Truax, C. B. (1967). Rating of Accurate Empathy. The Therapeutic Relationship and its Impact. A Study of Psychotherapy with Schizophrenics. Eds. C. R. Rogers, E. T. Gendlin, D. J. Kiesler and C. B. Truax. Madison, Wisconsin, The University of Wisconsin Press pp. 555–568.
- Mehrabian A, Epstein N (December 1972). "A measure of emotional empathy". Journal of Personality. 40 (4): 525–43. doi:10.1111/j.1467-6494.1972.tb00078.x. PMID 4642390.
- e.g. Levenson RW, Ruef AM (August 1992). "Empathy: a physiological substrate" (PDF). Journal of Personality and Social Psychology. 63 (2): 234–46. doi:10.1037/0022-35184.108.40.206. PMID 1403614. S2CID 12650202. Archived from the original (PDF) on July 30, 2020.; Leslie KR, Johnson-Frey SH, Grafton ST (February 2004). "Functional imaging of face and hand imitation: towards a motor theory of empathy". NeuroImage. 21 (2): 601–7. doi:10.1016/j.neuroimage.2003.09.038. PMID 14980562. S2CID 1723495.
- Denham SA, McKinley M, Couchoud EA, Holt R (August 1990). "Emotional and behavioral predictors of preschool peer ratings". Child Development. JSTOR. 61 (4): 1145–52. doi:10.2307/1130882. JSTOR 1130882. PMID 2209184.
- Barnett MA (1984). "Similarity of experience and empathy in preschoolers". Journal of Genetic Psychology. 145 (2): 241–250. doi:10.1080/00221325.1984.10532271.
- e.g. Geher, Warner & Brown (2001)
- e.g. Mehrabian & Epstein (1972)
- e.g. Davis MH (1980). "A multidimensional approach to individual differences in empathy". JSAS Catalogue of Selected Documents in Psychology. 10 (4): 1–17.
- Chen D, Lew R, Hershman W, Orlander J (October 2007). "A cross-sectional measurement of medical student empathy". Journal of General Internal Medicine. 22 (10): 1434–8. doi:10.1007/s11606-007-0298-x. PMC 2305857. PMID 17653807.
- Baron-Cohen S, Wheelwright S (April 2004). "The empathy quotient: an investigation of adults with Asperger syndrome or high functioning autism, and normal sex differences" (PDF). Journal of Autism and Developmental Disorders. 34 (2): 163–75. doi:10.1023/B:JADD.0000022607.19833.00. PMID 15162935. S2CID 2663853. Archived from the original (PDF) on March 4, 2015.
- Reniers RL, Corcoran R, Drake R, Shryane NM, Völlm BA (January 2011). "The QCAE: a Questionnaire of Cognitive and Affective Empathy". Journal of Personality Assessment. 93 (1): 84–95. doi:10.1080/00223891.2010.528484. PMID 21184334. S2CID 3035172.
- Innamorati M, Ebisch SJ, Gallese V, Saggino A (April 29, 2019). "A bidimensional measure of empathy: Empathic Experience Scale". PLOS ONE. 14 (4): e0216164. Bibcode:2019PLoSO..1416164I. doi:10.1371/journal.pone.0216164. PMC 6488069. PMID 31034510.
- Chopik WJ, O'Brien E, Konrath SH (2017). "Differences in Empathic Concern and Perspective Taking Across 63 Countries". Journal of Cross-Cultural Psychology. 48 (1). Supplementary Table 1. doi:10.1177/0022022116673910. hdl:1805/14139. ISSN 0022-0221. S2CID 149314942.
- Clay Z, de Waal FB (November 2013). "Development of socio-emotional competence in bonobos". Proceedings of the National Academy of Sciences of the United States of America. 110 (45): 18121–6. Bibcode:2013PNAS..11018121C. doi:10.1073/pnas.1316449110. PMC 3831480. PMID 24127600.
- Romero T, Castellanos MA, de Waal FB (July 2010). "Consolation as possible expression of sympathetic concern among chimpanzees". Proceedings of the National Academy of Sciences of the United States of America. 107 (27): 12110–5. Bibcode:2010PNAS..10712110R. doi:10.1073/pnas.1006991107. PMC 2901437. PMID 20547864.
- Hollis K (March 2013). "A comparative analysis of precision rescue behaviour in sand-dwelling ants". Animal Behaviour. 85 (3): 537–544. doi:10.1016/j.anbehav.2012.12.005. S2CID 53179078.
- Custance D, Mayer J (September 2012). "Empathic-like responding by domestic dogs (Canis familiaris) to distress in humans: an exploratory study" (PDF). Animal Cognition. 15 (5): 851–9. doi:10.1007/s10071-012-0510-1. PMID 22644113. S2CID 15153091.
- Edgar JL, Paul ES, Nicol CJ (August 2013). "Protective Mother Hens: Cognitive influences on the avian maternal response". Animal Behaviour. 86 (2): 223–229. doi:10.1016/j.anbehav.2013.05.004. S2CID 53179718.
- Miralles A, Raymond M, Lecointre G (December 2019). "Empathy and compassion toward other species decrease with evolutionary divergence time". Scientific Reports. 9 (1): 19555. Bibcode:2019NatSR...919555M. doi:10.1038/s41598-019-56006-9. PMC 6925286. PMID 31862944.
- The dictionary definition of empathy at Wiktionary
- Quotations related to Empathy at Wikiquote
- Media related to Empathy at Wikimedia Commons
- "Empathy and Sympathy in Ethics". Internet Encyclopedia of Philosophy.
- Zalta, Edward N. (ed.). "Empathy". Stanford Encyclopedia of Philosophy.
- Toward a consensus on the nature of empathy: A review of reviews | https://en.m.wikipedia.org/wiki/Empathy | 21 |
135 | What Is Real Gross Domestic Product (Real GDP)?
Real gross domestic product (Real GDP) is an inflation-adjusted measure that reflects the value of all goods and services produced by an economy in a given year (expressed in base-year prices) and is often referred to as constant-price GDP, inflation-corrected GDP, or constant dollar GDP.
- Real gross domestic product (Real GDP) is an inflation-adjusted measure that reflects the value of all goods and services produced by an economy in a given year (expressed in base-year prices) and is often referred to as "constant-price," "inflation-corrected", or "constant dollar" GDP.
- Real GDP makes comparing GDP from year to year and from different years more meaningful because it shows comparisons for both the quantity and value of goods and services.
- Real GDP is calculated by dividing nominal GDP by a GDP deflator.
Understanding Real GDP
Real GDP is a macroeconomic statistic that measures the value of the goods and services produced by an economy in a specific period, adjusted for inflation. Essentially, it measures a country's total economic output, adjusted for price changes. Governments use both nominal and real GDP as metrics for analyzing economic growth and purchasing power over time. The inflation adjustment is made using the GDP price deflator (also called the implicit price deflator), which measures the changes in prices for all of the goods and services produced in an economy. The GDP price deflator is considered to be a more appropriate inflation measure for gauging economic growth than the consumer price index (CPI) because it isn't based on a fixed basket of goods.
The Bureau of Economic Analysis (BEA) provides a quarterly report on GDP with headline data statistics representing real GDP levels and real GDP growth. Nominal GDP is also included in the BEA’s quarterly report under the name current dollar. Unlike nominal GDP, real GDP accounts for changes in price levels and provides a more accurate figure of economic growth.
Nominal GDP vs. Real GDP
Because GDP is one of the most important metrics for evaluating the economic activity, stability, and growth of goods and services in an economy, it is usually reviewed from two angles: nominal and real. Nominal GDP is a macroeconomic assessment of the value of goods and services using current prices in its measure; it is also referred to as current dollar GDP. Real GDP takes into consideration adjustments for inflation. This means that if inflation is positive, real GDP will be lower than nominal GDP, and vice versa. Without an adjustment for inflation, positive inflation greatly inflates GDP in nominal terms.
Economists use the BEA’s real GDP headline data for macroeconomic analysis and central bank planning. The main difference between nominal GDP and real GDP is the adjustment for inflation. Since nominal GDP is calculated using current prices, it does not require any adjustments for inflation. This makes comparisons from quarter to quarter and year to year much simpler, though less relevant, to calculate and analyze.
As such, real GDP provides a better basis for judging long-term national economic performance than nominal GDP. Using a GDP price deflator, real GDP reflects GDP on a per quantity basis. Without real GDP, it would be difficult to identify just from examining nominal GDP whether production is actually expanding—or it's just a factor of rising per-unit prices in the economy.
A positive difference in nominal minus real GDP signifies inflation and a negative difference signifies deflation. In other words, when nominal is higher than real, inflation is occurring and when real is higher than nominal, deflation is occurring.
Real GDP Calculation
Calculating real GDP is a complex process typically best provided by the BEA. In general, calculating real GDP is done by dividing nominal GDP by the GDP deflator (R).
Real GDP = Nominal GDP / R, where R = GDP deflator
The BEA provides the deflator on a quarterly basis. The GDP deflator is a measurement of inflation since a base year (currently 2012 for the BEA). Dividing the nominal GDP by the deflator removes the effects of inflation.
For example, if an economy's prices have increased by 1% since the base year, the deflating number is 1.01. If nominal GDP was $1 million, then real GDP is calculated as $1,000,000 / 1.01, or $990,099.
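To make the deflator arithmetic concrete, the following is a minimal Python sketch; the function name and figures are illustrative only (they simply reuse the example above) and are not taken from the BEA.

```python
def real_gdp(nominal_gdp: float, deflator: float) -> float:
    """Deflate nominal GDP into real (constant-price) GDP.

    The deflator is expressed as a ratio to the base year, e.g. 1.01
    means the price level is 1% higher than in the base year.
    """
    return nominal_gdp / deflator

# Example from the text: $1,000,000 nominal GDP, prices up 1% since the base year.
print(round(real_gdp(1_000_000, 1.01)))  # 990099
```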
Frequently Asked Questions
What does 'real' mean in real GDP?
Real GDP tracks the total value of goods and services by valuing the quantities produced at constant prices that are adjusted for inflation. This is opposed to nominal GDP, which does not account for inflation. Adjusting to constant prices makes real GDP a measure of "real" economic output, allowing apples-to-apples comparison over time and between countries.
What does real GDP measure?
Real GDP is an inflation-adjusted measurement of a country’s economic output over the course of a year. The U.S. GDP is primarily measured based on the expenditure approach and calculated using the following formula: GDP = C + G + I + NX (where C=consumption; G=government spending; I=Investment; and NX=net exports).
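As a rough illustration of the expenditure approach, the sketch below simply sums hypothetical components (the dollar figures are made up for illustration, not actual BEA data):

```python
def gdp_expenditure(c: float, g: float, i: float, nx: float) -> float:
    """Expenditure-approach GDP: consumption + government spending + investment + net exports."""
    return c + g + i + nx

# Hypothetical component values in billions of dollars.
print(gdp_expenditure(c=14_000, g=3_800, i=3_700, nx=-600))  # 20900
```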
How will real and nominal GDP differ from one another?
In inflationary periods, real GDP will be lower than nominal GDP. In deflationary times, real GDP will be higher.
Take, for example, a hypothetical country that had a nominal GDP of $100 billion in 2000, which grew by 50% to $150 billion by 2020. Over the same period of time, inflation reduced the relative purchasing power of the dollar by 50%. Looking at just the nominal GDP, the economy appears to be performing very well, whereas the real GDP expressed in 2000 dollars would actually indicate a reading of $75 billion, revealing that a net overall decline in real output had in fact occurred. It is due to this greater accuracy that real GDP is favored by economists as a method of measuring economic performance.
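Worked out in a short hedged sketch (variable names are illustrative; the deflator of 2.0 simply encodes the stated 50% loss of purchasing power between 2000 and 2020):

```python
nominal_gdp_2000 = 100.0   # billions of current dollars
nominal_gdp_2020 = 150.0   # billions of current dollars
deflator_2020 = 2.0        # price level in 2020 relative to 2000 (prices doubled)

real_gdp_2020_in_2000_dollars = nominal_gdp_2020 / deflator_2020
print(real_gdp_2020_in_2000_dollars)  # 75.0, a real decline despite 50% nominal growth
```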
Why is measuring real GDP important?
Countries with larger GDPs will have a greater amount of goods and services generated within them, and will generally have a higher standard of living. For this reason, many citizens and political leaders see GDP growth as an important measure of national success, often referring to “GDP growth” and “economic growth” interchangeably. GDP enables policymakers and central banks to judge whether the economy is contracting or expanding, whether it needs a boost or restraint, and if a threat such as a recession or inflation looms on the horizon. By accounting for inflation, real GDP is a better gauge of the change in production levels from one period to another.
What are some critiques of using GDP?
Many economists have argued that GDP should not be used as a proxy for overall economic success, much less the success of a society more generally. Like any measure, GDP has its imperfections. For instance, it does not account for the informal economy, does not count care work or domestic labor done in the home, ignores business-to-business activity, and counts costs and wastes as economic activity, among other shortcomings. In recent decades, governments have created various nuanced modifications in attempts to increase GDP accuracy and specificity. Means of calculating GDP have also evolved continually since its conception so as to keep up with evolving measurements of industry activity and the generation and consumption of new, emerging forms of digital and other intangible assets. | https://www.investopedia.com/terms/r/realgdp.asp | 21 |
84 | A+= Money loses its value
What is inflation: Inflation is a rise in the general price level and is reported in rates of change. Essentially, this means that the value of your money is going down and it takes more money to buy things. A 4% inflation rate therefore means that the price level for that given year has risen 4% from a certain measuring year (currently 1982 is used). The inflation rate is determined by finding the difference between the price levels for the current year and the previous given year; the difference is then divided by the earlier price level and multiplied by 100. To measure the price level, economists select a variety of goods and construct a price index such as the consumer price index (CPI). By using the CPI, which measures price changes, the inflation rate can be calculated: the change in the CPI is divided by the beginning price level and the result is multiplied by 100.

Causes of inflation: There are several reasons why an economy can experience inflation. One explanation is the demand-pull theory, which states that all sectors in the economy try to buy more than the economy can produce. Shortages are then created and merchants lose business. To compensate, some merchants raise their prices; others stop offering discounts or sales. In the end, the price level rises. A second explanation involves the deficit of the federal government. If the Federal Reserve System expands the money supply to keep the interest rate down, the federal deficit can contribute to inflation. If the debt is not monetized, some borrowers will be crowded out as interest rates rise, so the federal deficit has more of an impact on output and employment than on the price level. A third reason involves the cost-push theory, which states that labor groups cause inflation: if a strong union wins a large wage contract, it forces producers to raise their prices in order to compensate for the increase in salaries they have to pay. A fourth explanation is the wage-price spiral, which states that no single group is to blame for inflation. Higher prices force workers to ask for higher wages; if they get their way, producers try to recover by raising prices again, and if either side tries to improve its position with a larger price hike, the rate of inflation continues to rise. Finally, another cause of inflation is excessive monetary growth. When extra money is created, it increases some group's buying power, and when this money is spent it causes a demand-pull effect that drives up prices. For inflation to continue, the money supply must grow faster than real GDP.

Effects of inflation: The most immediate effects of inflation are the decreased purchasing power of the dollar and its depreciation. Depreciation is especially hard on retired people with fixed incomes, because their money buys a little less each month; those not on fixed incomes are better able to cope because they can simply raise their fees. A second destabilizing effect is that inflation can cause consumers and investors to change their spending habits. When inflation occurs, people tend to spend less, meaning that factories have to lay off workers because of a decline in orders. A third destabilizing effect is that some people choose to speculate heavily in an attempt to take advantage of the higher price level. Because some of these purchases are high-risk investments, spending is diverted from its normal channels and some structural unemployment may take place.
Finally, inflation alters the distribution of income. Lenders are generally hurt more than borrowers during long inflationary periods, which means that loans made earlier are repaid later in inflated dollars.
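A minimal Python sketch of the CPI-based inflation calculation described in the answer above (the CPI readings are invented for illustration):

```python
def inflation_rate(cpi_start: float, cpi_end: float) -> float:
    """Percentage change in the price level between two CPI readings."""
    return (cpi_end - cpi_start) / cpi_start * 100.0

# Invented CPI values: 250 in the earlier year, 260 in the current year.
print(round(inflation_rate(250.0, 260.0), 1))  # 4.0, i.e. a 4% inflation rate
```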
One problem with inflation is redistribution. Inflation makes some people better off while it makes others worse off. The three things that cause redistribution are price effects, wealth effects, and income effects.
I don't think there is any difference in how inflation affects the Indian economy compared with any other economy in the world; the same thing happens to everyone. The government prints too much money, which causes prices to rise, and after a certain period of time the money becomes next to worthless.
On the basis of the rate of inflation, there are different types of inflation: creeping inflation, walking or trotting inflation, running inflation, hyper or galloping inflation, open inflation, and suppressed inflation.
When changes in the CPI in the base month have a considerable effect on twelve-month measured inflation, this is commonly referred to as a base effect. Base effects are therefore the contribution to changes in the annual rate of measured inflation from abnormal changes in the CPI in the base period.
It leads to an arbitrary redistribution of income and difficulties in the balance of payments.
C. Eugene Steuerle has written: 'Taxes, loans and inflation' -- subject(s): Income tax, Capital levy, Loans, United States, Effect of inflation on, Effects of inflation on, United States 'Economic effects of health reform' -- subject(s): Health Insurance, Health care reform, Insurance, Health, Medical care, Medical policy, Public opinion
Inflation in India can be controlled by raising the prices of products, which automatically decreases consumption, or alternatively by extensive campaigns promoting minimal use of resources. Making people aware of the effects of inflation and of how scarcity of resources affects their future and their children (most people work their whole lives to secure their children's future) may help control inflation in areas such as crude oil.
The effect of inflation in India is an unbalanced relationship between the amount of money earned and the cost of regular goods. This relationship can be controlled by bank authorities by limiting inflation.
A nominal quantity is one that is expressed in current dollars, that is, without adjusting for inflation. A quantity that accounts for inflation effects is called a "real" quantity. | https://www.answers.com/Q/What_are_the_effects_of_inflation | 21
17 | Swedish emigration to the United States
During the Swedish emigration to the United States in the 19th and early 20th centuries, about 1.3 million Swedes left Sweden for the United States of America. While the land of the U.S. frontier was a magnet for the rural poor all over Europe, some factors encouraged Swedish emigration in particular. The religious repression practiced by the Swedish Lutheran State Church was widely resented, as was the social conservatism and class snobbery of the Swedish monarchy. Population growth and crop failures made conditions in the Swedish countryside increasingly bleak. By contrast, reports from early Swedish emigrants painted the American Midwest as an earthly paradise, and praised American religious and political freedom and undreamed-of opportunities to better one's condition.
Swedish migration to the United States peaked in the decades after the American Civil War (1861–65). By 1890 the U.S. census reported a Swedish-American population of nearly 800,000. Most immigrants became pioneers, clearing and cultivating the prairie, but some forces pushed the new immigrants towards the cities, particularly Chicago. Single young women usually went straight from agricultural work in the Swedish countryside to jobs as housemaids in American towns. Many established Swedish Americans visited the old country in the later 19th century, their narratives illustrating the difference in customs and manners. Some made the journey with the intention of spending their declining years in Sweden, but changed their minds when faced with what they thought an arrogant aristocracy, a coarse and degraded laboring class, and a lack of respect for women.
After a dip in the 1890s, emigration rose again, causing national alarm in Sweden. A broad-based parliamentary emigration commission was instituted in 1907. It recommended social and economic reform in order to reduce emigration by "bringing the best sides of America to Sweden." The commission's major proposals were rapidly implemented: universal male suffrage, better housing, general economic development, and broader popular education. The impact of these measures is hard to assess, as World War I (1914–18) broke out the year after the commission published its last volume, reducing emigration to a mere trickle. From the mid-1920s, there was no longer a Swedish mass emigration.
Early history: the Swedish-American dream
The Swedish West India Company established a colony on the Delaware River in 1638, naming it New Sweden. A small, short-lived colonial settlement, New Sweden contained at its height only some 600 Swedish and Finnish settlers (Finland being part of Sweden). It was lost to the Dutch in New Netherland in 1655. Nevertheless, the descendants of the original colonists maintained spoken Swedish until the late 18th century. Modern day reminders of the history of New Sweden are reflected in the presence of the American Swedish Historical Museum in Philadelphia, Fort Christina State Park in Wilmington, Delaware, and The Printzhof in Essington, Pennsylvania.
The historian H. A. Barton has suggested that the greatest significance of New Sweden was the strong and long-lasting interest in America that the colony generated in Sweden. America was seen as the standard-bearer of liberalism and personal freedom, and became an ideal for liberal Swedes. Their admiration for America was combined with the notion of a past Swedish Golden Age with ancient Nordic ideals. Supposedly corrupted by foreign influences, the timeless "Swedish values" would be recovered by Swedes in the New World. This remained a fundamental theme of Swedish, and later Swedish-American, discussion of America, though the recommended "timeless" values changed over time. In the 17th and 18th centuries, Swedes who called for greater religious freedom would often refer to America as the supreme symbol of it. The emphasis shifted from religion to politics in the 19th century, when liberal citizens of the hierarchic Swedish class society looked with admiration to the American Republicanism and civil rights. In the early 20th century, the Swedish-American dream even embraced the idea of a welfare state responsible for the well-being of all its citizens. Underneath these shifting ideas ran from the start the current which carried all before it in the later 20th century: America as the symbol and dream of unfettered individualism.
Swedish debate about America remained mostly theoretical before the 19th century, since very few Swedes had any personal experience of the nation. Emigration was illegal and population was seen as the wealth of nations. However, the Swedish population doubled between 1750 and 1850, and as population growth outstripped economic development, it gave rise to fears of overpopulation based on the influential population theory of Thomas Malthus. In the 1830s, the laws against emigration were repealed.
Akenson argues that hard times in Sweden before 1867 produced a strong push effect, but that for cultural reasons most Swedes refused to emigrate and clung on at home. Akenson says the state wanted to keep its population high and:
- The upper classes' need for a cheap and plentiful labor force, the instinctive willingness of the clergy of the state church to discourage emigration on both moral and social grounds, and the deference of the lower orders to the arcade of powers that hovered above them—all these things formed an architecture of cultural hesitancy concerning emigration.
A few "countercultural" deviants from the mainstream did leave and showed the way. The severe economic hardship of the " Great Deprivation" of 1867 to 1869, finally overcame the reluctance and the floodgates opened to produce an "emigration culture""
European mass emigration: push and pull
Large-scale European emigration to the United States started in the 1840s in Britain, Ireland and Germany. That was followed by a rising wave after 1850 from most Northern European countries, and in turn by Central and Southern Europe. Research into the forces behind this European mass emigration has relied on sophisticated statistical methods. One theory which has gained wide acceptance is Jerome's analysis in 1926 of the "push and pull" factors—the impulses to emigration generated by conditions in Europe and the U.S. respectively. Jerome found that fluctuations in emigration co-varied more with economic developments in the U.S. than in Europe, and deduced that the pull was stronger than the push. Jerome's conclusions have been challenged, but still form the basis of much work on the subject.
Emigration patterns in the Nordic countries—Finland, Sweden, Norway, Denmark, and Iceland—show striking variation. Nordic mass emigration started in Norway, which also retained the highest rate throughout the century. Swedish emigration got underway in the early 1840s, and had the third-highest rate in all of Europe, after Ireland and Norway. Denmark had a consistently low rate of emigration, while Iceland had a late start but soon reached levels comparable to Norway. Finland, whose mass emigration did not start until the late 1880s, and at the time part of the Russian Empire, is usually classified as part of the Eastern European wave.
Crossing the Atlantic
The first European emigrants travelled in the holds of sailing cargo ships. With the advent of the age of steam, an efficient transatlantic passenger transport mechanism was established at the end of the 1860s. It was based on huge ocean liners run by international shipping lines, most prominently Cunard, White Star, and Inman. The speed and capacity of the large steamships meant that tickets became cheaper. From the Swedish port towns of Stockholm, Malmö and Gothenburg, transport companies operated various routes, some of them with complex early stages and consequently a long and trying journey on the road and at sea. Thus North German transport agencies relied on the regular Stockholm—Lübeck steamship service to bring Swedish emigrants to Lübeck, and from there on German train services to take them to Hamburg or Bremen. There they would board ships to the British ports of Southampton and Liverpool and change to one of the great transatlantic liners bound for New York. The majority of Swedish emigrants, however, travelled from Gothenburg to Hull, UK, on dedicated boats run by the Wilson Line, then by train across Britain to Liverpool and the big ships.
During the later 19th century, the major shipping lines financed Swedish emigrant agents and paid for the production of large quantities of emigration propaganda. Much of this promotional material, such as leaflets, was produced by immigration promoters in the U.S. Propaganda and advertising by shipping line agents was often blamed for emigration by the conservative Swedish ruling class, which grew increasingly alarmed at seeing the agricultural labor force leave the country. It was a Swedish 19th-century cliché to blame the falling ticket prices and the pro-emigration propaganda of the transport system for the craze of emigration, but modern historians have varying views about the real importance of such factors. Brattne and Åkerman have examined the advertising campaigns and the ticket prices as a possible third force between push and pull. They conclude that neither advertisements nor pricing had any decisive influence on Swedish emigration. While the companies remain unwilling, as of 2007[update], to open their archives to researchers, the limited sources available suggest that ticket prices did drop in the 1880s, but remained on average artificially high because of cartels and price-fixing. On the other hand, H. A. Barton states that the cost of crossing the Atlantic dropped drastically between 1865 and 1890, encouraging poorer Swedes to emigrate. The research of Brattne and Åkerman has shown that the leaflets sent out by the shipping line agents to prospective emigrants would not so much celebrate conditions in the New World, as simply emphasize the comforts and advantages of the particular company. Descriptions of life in America were unvarnished, and the general advice to emigrants brief and factual. Newspaper advertising, while very common, tended to be repetitive and stereotyped in content.
Swedish mass migration took off in the spring of 1841 with the departure of Uppsala University graduate Gustaf Unonius (1810–1902) together with his wife, a maid, and two students. This small group founded a settlement they named New Upsala in Waukesha County, Wisconsin, and began to clear the wilderness, full of enthusiasm for frontier life in "one of the most beautiful valleys the world can offer". After moving to Chicago, Unonius soon became disillusioned with life in the U.S., but his reports in praise of the simple and virtuous pioneer life, published in the liberal newspaper Aftonbladet, had already begun to draw Swedes westward.
The rising Swedish exodus was caused by economic, political, and religious conditions affecting particularly the rural population. Europe was in the grip of an economic depression. In Sweden, population growth and repeated crop failures were making it increasingly difficult to make a living from the tiny land plots on which at least three quarters of the inhabitants depended. Rural conditions were especially bleak in the stony and unforgiving Småland province, which became the heartland of emigration. The American Midwest was an agricultural antipode to Småland, for it, Unonius reported in 1842, "more closely than any other country in the world approaches the ideal which nature seems to have intended for the happiness and comfort of humanity." Prairie land in the Midwest was ample, loamy, and government-owned. From 1841 it was sold to squatters for $1.25 per acre, ($31 per acre ($77/ha) as of 2020), following the Preemption Act of 1841 (later replaced by the Homestead Act). The inexpensive and fertile land of Illinois, Iowa, Minnesota and Wisconsin was irresistible to landless and impoverished European peasants. It also attracted more well-established farmers.
The political freedom of the American republic exerted a similar pull. Swedish peasants were some of the most literate in Europe, and consequently had access to the European egalitarian and radical ideas that culminated in the Revolutions of 1848. The clash between Swedish liberalism and a repressive monarchist regime raised political awareness among the disadvantaged, many of whom looked to the U.S. to realize their republican ideals.
Dissenting religious practitioners also widely resented the treatment they received from the Lutheran State Church through the Conventicle Act. Conflicts between local worshipers and the new churches were most explosive in the countryside, where dissenting pietist groups were more active, and were more directly under the eye of local law enforcement and the parish priest. Before non-Lutheran churches were granted toleration in 1809, clampdowns on illegal forms of worship and teaching often provoked whole groups of pietists to leave together, intent on forming their own spiritual communities in the new land. The largest contingent of such dissenters, 1,500 followers of Eric Jansson, left in the late 1840s and founded a community in Bishop Hill, Illinois.
The first Swedish emigrant guidebook was published as early as 1841, the year Unonius left, and nine handbooks were published between 1849 and 1855. Substantial groups of lumberjacks and iron miners were recruited directly by company agents in Sweden. Agents recruiting construction builders for American railroads also appeared, the first in 1854, scouting for the Illinois Central Railroad.
The Swedish establishment disapproved intensely of emigration. Seen as depleting the labor force and as a defiant act among the lower orders, emigration alarmed both the spiritual and the secular authorities. Many emigrant diaries and memoirs feature an emblematic early scene in which the local clergy warns travellers against risking their souls among foreign heretics. The conservative press described emigrants as lacking in patriotism and moral fibre: "No workers are more lazy, immoral and indifferent than those who immigrate to other places." Emigration was denounced as an unreasoning "mania" or "craze", implanted in an ignorant populace by "outside agents". The liberal press retorted that the "lackeys of monarchism" failed to take into account the miserable conditions in the Swedish countryside and the backwardness of Swedish economic and political institutions. "Yes, emigration is indeed a 'mania'", wrote the liberal Göteborgs Handels- och Sjöfartstidning sarcastically, "The mania of wanting to eat one's fill after one has worked oneself hungry! The craze of wanting to support oneself and one's family in an honest manner!"
The great Swedish famine of 1867-1869, and the distrust and discontent concerning the way the establishment distributed relief, are estimated to have contributed greatly to the rise in Swedish emigration to the United States. Another contributing factor was the poverty of 19th-century Sweden, made worse by the abusive practices used to supplement the strict Poor Care Regulation of 1871, such as the rotegång, the pauper auction and child auctions.
Late 19th century
Swedish emigration to the United States reached its height in the 1870–1900 era. The size of the Swedish-American community in 1865 is estimated at 25,000 people, a figure soon to be surpassed by the yearly Swedish immigration. By 1890, the U.S. census reported a Swedish-American population of nearly 800,000, with immigration peaking in 1869 and again in 1887. Most of this influx settled in the North. The great majority of them had been peasants in the old country, pushed away from Sweden by disastrous crop failures and pulled towards America by the cheap land resulting from the 1862 Homestead Act. Most immigrants became pioneers, clearing and cultivating the virgin land of the Midwest and extending the pre-Civil War settlements further west, into Kansas and Nebraska. Once sizable Swedish farming communities had formed on the prairie, the greatest impetus for further peasant migration came through personal contacts. The iconic "America-letter" to relatives and friends at home spoke directly from a position of trust and shared background, carrying immediate conviction. At the height of migration, familial America-letters could lead to chain reactions which would all but depopulate some Swedish parishes, dissolving tightly knit communities which then re-assembled in the Midwest.
Other forces worked to push the new immigrants towards the cities, particularly Chicago. According to historian H. Arnold Barton, the cost of crossing the Atlantic dropped by more than half between 1865 and 1890, which led to progressively poorer Swedes contributing a growing share of immigration (but compare Brattne and Åkerman, see "Crossing the Atlantic" above). The new immigrants were increasingly younger and unmarried. With the shift from family to individual immigration came a faster and fuller Americanization, as young, single individuals with little money took whatever jobs they could get, often in cities. Large numbers even of those who had been farmers in the old country made straight for American cities and towns, living and working there at least until they had saved enough capital to marry and buy farms of their own. A growing proportion stayed in urban centers, combining emigration with the flight from the countryside which was happening in the homeland and all across Europe.
Single young women, most commonly moved straight from field work in rural Sweden to jobs as live-in housemaids in urban America. "Literature and tradition have preserved the often tragic image of the pioneer immigrant wife and mother", writes Barton, "bearing her burden of hardship, deprivation and longing on the untamed frontier ... More characteristic among the newer arrivals, however, was the young, unmarried woman ... As domestic servants in America, they ... were treated as members of the families they worked for and like 'ladies' by American men, who showed them a courtesy and consideration to which they were quite unaccustomed at home." They found employment easily, as Scandinavian maids were in high demand, and learned the language and customs quickly. Working conditions were far better than in Sweden, in terms of wages, hours of work, benefits, and ability to change positions. In contrast, newly arrived Swedish men were often employed in all-Swedish work gangs. The young women usually married Swedish men and brought with them in marriage an enthusiasm for ladylike, American manners and middle-class refinements. Many admiring remarks are recorded from the late 19th century about the sophistication and elegance that simple Swedish farm girls would gain in a few years, and about their unmistakably American demeanor.
As ready workers, the Swedes were generally welcomed by the Americans, who often singled them out as the "best" immigrants. There was no significant anti-Swedish nativism of the sort that attacked Irish, German and, especially, Chinese newcomers. The Swedish style was more familiar: "They are not peddlers, nor organ grinders, nor beggars; they do not sell ready-made clothing nor keep pawn shops", wrote the Congregational missionary M. W. Montgomery in 1885; "they do not seek the shelter of the American flag merely to introduce and foster among us ... socialism, nihilism, communism ... they are more like Americans than are any other foreign peoples."
A number of well-established and longtime Swedish Americans visited Sweden in the 1870s, making comments that give historians a window on the cultural contrasts involved. A group from Chicago made the journey in an effort to remigrate and spend their later years in the country of their birth, but changed their minds when faced with the realities of 19th-century Swedish society. Uncomfortable with what they described as the social snobbery, pervasive drunkenness, and superficial religious life of the old country, they returned promptly to America. The most notable visitor was Hans Mattson (1832–1893), an early Minnesota settler who had served as a colonel in the Union Army and had been Minnesota's secretary of state. He visited Sweden in 1868–69 to recruit settlers on behalf of the Minnesota Immigration Board, and again in the 1870s to recruit for the Northern Pacific Railroad. Viewing Swedish class snobbery with indignation, Mattson wrote in his Reminiscences that this contrast was the key to the greatness of America, where "labor is respected, while in most other countries it is looked down upon with slight". He was sardonically amused by the ancient pageantry of monarchy at the ceremonial opening of the Riksdag: "With all respects for old Swedish customs and manners, I cannot but compare this pageant to a great American circus—minus the menagerie, of course."
Mattson's first recruiting visit came immediately after consecutive seasons of crop failure in 1867 and 1868, and he found himself "besieged by people who wished to accompany me back to America." He noted that:
…the laboring and middle classes already at that time had a pretty correct idea of America, and the fate that awaited emigrants there; but the ignorance, prejudice and hatred toward America and everything pertaining to it among the aristocracy, and especially the office holders, was as unpardonable as it was ridiculous. It was claimed by them that all was humbug in America, that it was the paradise of scoundrels, cheats, and rascals, and that nothing good could possibly come out of it.
A more recent American immigrant, Ernst Skarstedt, who visited Sweden in 1885, received the same galling impression of upper-class arrogance and anti-Americanism. The laboring classes, in their turn, appeared to him coarse and degraded, drinking heavily in public, speaking in a stream of curses, making obscene jokes in front of women and children. Skarstedt felt surrounded by "arrogance on one side and obsequiousness on the other, a manifest scorn for menial labor, a desire to appear to be more than one was". This traveller too was incessantly hearing American civilization and culture denigrated from the depths of upper-class Swedish prejudice: "If I, in all modesty, told something about America, it could happen that in reply I was informed that this could not possibly be so or that the matter was better understood in Sweden."
Swedish emigration dropped dramatically after 1890; return migration rose as conditions in Sweden improved. Sweden underwent a rapid industrialization within a few years in the 1890s, and wages rose, principally in the fields of mining, forestry, and agriculture. The pull from the U.S. declined even more sharply than the Swedish "push", as the best farmland was taken. No longer growing but instead settling and consolidating, the Swedish-American community seemed set to become ever more American and less Swedish. The new century, however, saw a new influx.
In the 1800s–1900s, the Lutheran State Church supported the Swedish government by opposing both emigration and the clergy's efforts to promote sobriety. This escalated to the point where priests were even persecuted by the church for preaching sobriety, and the reactions of many congregation members to this contributed to their inspiration to leave the country (which, however, was against the law until 1840).
Parliamentary Emigration Commission 1907–1913
Emigration rose again at the turn of the 20th century, reaching a new peak of about 35,000 Swedes in 1903. Figures remained high until World War I, alarming both conservative Swedes, who saw emigration as a challenge to national solidarity, and liberals, who feared the disappearance of the labor force necessary for economic development. One-fourth of all Swedes had made the United States their home, and a broad national consensus mandated that a Parliamentary Emigration Commission study the problem in 1907. Approaching the task with what Barton calls "characteristic Swedish thoroughness", the Commission published its findings and proposals in 21 large volumes. The Commission rejected conservative proposals for legal restrictions on emigration and in the end supported the liberal line of "bringing the best sides of America to Sweden" through social and economic reform. Topping the list of urgent reforms were universal male suffrage, better housing, and general economic development. The Commission especially hoped that broader popular education would counteract "class and caste differences"
Class inequality in Swedish society was a strong and recurring theme in the Commission's findings. It appeared as a major motivator in the 289 personal narratives included in the report. These documents, of great research value and human interest today, were submitted by Swedes in Canada and the U.S. in response to requests in Swedish-American newspapers. The great majority of replies expressed enthusiasm for their new homeland and criticized conditions in Sweden. Bitter experiences of Swedish class snobbery still rankled after sometimes 40–50 years in America. Writers recalled the hard work, pitiful wages, and grim poverty of life in the Swedish countryside. One woman wrote from North Dakota of how in her Värmland home parish, she had had to earn her living in peasant households from the age of eight, starting work at four in the morning and living on "rotten herring and potatoes, served out in small amounts so that I would not eat myself sick". She could see "no hope of saving anything in case of illness", but rather could see "the poorhouse waiting for me in the distance". When she was seventeen, her emigrated brothers sent her a prepaid ticket to America, and "the hour of freedom struck"
A year after the Commission published its last volume, World War I began and reduced emigration to a mere trickle. From the 1920s, there was no longer a Swedish mass emigration. The influence of the ambitious Emigration Commission in solving the problem is still a matter of debate. Franklin D. Scott has argued in an influential essay that the American Immigration Act of 1924 was the effective cause. Barton, by contrast, points to the rapid implementation of essentially all the Commission's recommendations, from industrialization to an array of social reforms. He maintains that its findings "must have had a powerful cumulative effect upon Sweden's leadership and broader public opinion".
The Midwest remained the heartland of the Swedish-American community, but its position weakened in the 20th century: in 1910, 54% of the Swedish immigrants and their children lived in the Midwest, 15% in industrial areas in the East, and 10% on the West Coast. Chicago was effectively the Swedish-American capital, accommodating about 10% of all Swedish Americans—more than 100,000 people—making it the second-largest Swedish city in the world (only Stockholm had more Swedish inhabitants).
Defining themselves as both Swedish and American, the Swedish-American community retained a fascination for the old country and their relationship to it. The nostalgic visits to Sweden which had begun in the 1870s continued well into the 20th century, and narratives from these trips formed a staple of the lively Swedish-American publishing companies. The accounts testify to complex feelings, but each contingent of American travellers were freshly indignant at Swedish class pride and Swedish disrespect for women. It was with renewed pride in American culture that they returned to the Midwest.
In the 2000 U.S. Census, about four million Americans claimed to have Swedish roots. Minnesota remains by a wide margin the state with the most inhabitants of Swedish descent—9.6% of the population as of 2005.
The best-known artistic representation of the Swedish mass migration is the epic four-novel suite The Emigrants (1949–1959) by Vilhelm Moberg (1898–1973). Portraying the lives of an emigrant family through several generations, the novels have sold nearly two million copies in Sweden and have been translated into more than twenty languages. The tetralogy has been filmed by Jan Troell as The Emigrants (1971) and The New Land (1972), and forms the basis of Kristina from Duvemåla, a 1995 musical by former ABBA members Benny Andersson and Björn Ulvaeus.
In Sweden, the Småland city of Växjö is home to the Swedish Emigrant Institute (Svenska Emigrantinstitutet), founded in 1965 "to preserve records, interviews, and memorabilia relating to the period of major Swedish emigration between 1846 and 1930". The House of the Emigrants (Emigranternas Hus) was founded in Gothenburg, the main port for Swedish emigrants, in 2004. The centre shows exhibitions on migration and has a research hall for genealogy. In the U.S., there are hundreds of active Swedish-American organizations as of 2007, for which the Swedish Council of America functions as an umbrella group. There are Swedish-American museums in Philadelphia, Chicago, Minneapolis, and Seattle. Rural cemeteries such as the Moline Swedish Lutheran Cemetery in central Texas also serve as a valuable record of the first Swedish people to come to America.
- Nordstjernan (newspaper)
- American Swedish Historical Museum
- American Swedish Institute
- Swedish colonization of the Americas
- Swedish language in the United States
- Swedish-American relations
- Barton, A Folk Divided, 5–7.
- Kälvemark, 94–96.
- See Beijbom, "Review Archived 2 June 2006 at the Wayback Machine".
- Barton, A Folk Divided, 11.
- Donald Harman Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 (McGill-Queen's University Press; 2011) p 70
- The pictures originally illustrated a cautionary tale published in 1869 in the Swedish periodical Läsning för folket, the organ of the Society for the Propagation of Useful Knowledge (Sällskapet för nyttiga kunskapers spridande). See Barton, A Folk Divided, 71.
- Akenson, Ireland, Sweden, and the Great European Migration, 1815-1914 pp 37-39
- Åkerman, passim.
- Norman, 150–153.
- Runblom and Norman, 315.
- Norman, passim.
- Brattne and Åkerman, 179–181.
- Brattne and Åkerman, 179–181, 186–189, 199–200.
- Barton, 38.
- Brattne and Åkerman, 187–192.
- Unonius, quoted in Barton, A Folk Divided, 13.
- Quoted in Barton, A Folk Divided, 14.
- Cipollo, 115, estimates adult literacy in Sweden at 90% in 1850, which places it highest among the European countries he has surveyed.
- Gritsch, Eric W. A History of Lutheranism. Minneapolis: Fortress Press, 2002. p. 180.
- Barton, A Folk Divided, 15–16.
- Barton, A Folk Divided, 17.
- Barton, A Folk Divided, 18.
- Proclaimed in an article in the newspaper Nya Wermlandstidningen in April 1855; quoted by Barton, A Folk Divided, 20–22.
- Göteborgs Handels- och Sjöfartstidning, 1849, quoted in Barton, A Folk Divided, 24.
- 1851, quoted and translated by Barton, A Folk Divided, 24.
- Häger, Olle; Torell, Carl; Villius, Hans (1978). Ett satans år: Norrland 1867. Stockholm: Sveriges Radio. Libris 8358120. ISBN 91-522-1529-6 (inb.)
- Sven Ulric Palme: Hundra år under kommunalförfattningarna 1862-1962: en minnesskrift utgiven av Svenska landskommunernas förbund, Svenska landstingsförbundet [och] Svenska stadsförbundet, Trykt hos Godvil, 1962
- The exact figure is 776,093 people (Barton, A Folk Divided, 37).
- 1867 and 1868 were the worst years for crop failure, which ruined many smallholders; see Barton, A Folk Divided, 37.
- Swenson Center Archived 18 April 2015 at the Wayback Machine .
- Beijbom, "Chicago Archived 6 October 2013 at the Wayback Machine "
- Barton, A Folk Divided, 38–41.
- Barton, A Folk Divided, 41.
- Joy K. Lintelman (2009). I Go to America: Swedish American Women and the Life of Mina Anderson. Minnesota Historical Society. pp. 57–58. ISBN 9780873516365.
- Dirk Hoerder; Elise van Nederveen Meerkerk; Silke Neunsinger, eds. (2015). Towards a Global History of Domestic and Caregiving Workers. BRILL. p. 78. ISBN 9789004280144.
- Quoted by Barton, A Folk Divided, 40.
- Private letters by Anders Larsson in the 1870s, summarized by Barton, A Folk Divided, 59.
- Quoted by Barton, A Folk Divided, 60–61.
- Barton, A Folk Divided, 61–62.
- Svensk-amerikanska folket i helg och söcken (Ernst Teofil Skarstedt. Stockholm: Björck & Börjesson. 1917)
- Barton, A Folk Divided, 80.
- Vår svenska stam på utländsk mark; Svenska öden och insatser i främmande land; I västerled, Amerikas förenta stater och Kanada, Ed. Axel Boëthius, Stockholm 1952, Volume I, pp. 92, 137, 273 & 276; for the whole section
- 1.4 million first- and second-generation Swedish immigrants lived in the U.S. in 1910, while Sweden's population at the time was 5.5 million; see Beijbom, "Review Archived 2 June 2006 at the Wayback Machine".
- Barton, A Folk Divided, 149.
- The phrase is from Ernst Beckman's original liberal parliamentary motion for instituting the Commission; quoted by Barton, A Folk Divided, 149.
- Quoted from Volume VII of the Survey by Barton, A Folk Divided, 152.
- Barton, A Folk Divided,165.
- For Swedish American publishing, see Barton, A Folk Divided, 212–213, 254.
- Barton, A Folk Divided, 103 ff.
- American FactFinder, Fact Sheet "Swedish" Archived 13 September 2009 at the Wayback Machine .
- American FactFinder: Minnesota, Selected Social Characteristics in the United States, 2005 Archived 11 February 2020 at archive.today.
- Moberg biography by JoAnn Hanson-Stone at the Swedish Emigrant Institute Archived 6 October 2013 at the Wayback Machine .
- "The Swedish Emigrant Institute". UtvandrarnasHus.se. Svenska Emigrantinstitutet. Archived from the original on 6 October 2013.
- House of the Emigrants Archived 13 January 2016 at the Wayback Machine .
- Scott, Larry E. "Swedish Texans". University of Texas Institute of Texan Cultures at San Antonio, 2006.
- Akenson, Donald Harman. (2011) Ireland, Sweden and the Great European Migration, 1815-1914 (McGill-Queens University Press)
- Åkerman, Sune (1976). Theories and Methods of Migration Research in Runblom and Norman, From Sweden to America, 19–75.
- American FactFinder, United States Census, 2000. Consulted 30 June 2007.
- Andersson, Benny, and Ulvaeus, Björn. Kristina from Duvemåla (musical), consulted 7 May 2007.
- Barton, H. Arnold (1994). A Folk Divided: Homeland Swedes and Swedish Americans, 1840–1940. Uppsala: Acta Universitatis Upsaliensis.
- Barton, H. Arnold Swedish America in Fifty Years—2050, a paper read to the Swedish American Historical Society on the occasion of the 1996 celebration of the Swedish Immigration Jubilee. Consulted 7 May 2007.
- Beijbom, Ulf. Chicago, the Essence of the Promised Land at the Swedish Emigrant Institute. Click on "History", then "Chicago." Consulted 6 May 2007.
- Beijbom, Ulf (1996). A Review of Swedish Emigration to America at AmericanWest.com, consulted 2 February 2007.
- Brattne, Berit, and Sune Åkerman (1976). The Importance of the Transport Sector for Mass Emigration in Runblom and Norman, From Sweden to America, 176–200.
- Cipolla, Carlo (1966). Literacy and Development in the West. Harmondsworth.
- Elovson, Harald (1930). Amerika i svensk litteratur 1750–1820. Lund.
- Glynn, Irial: Emigration Across the Atlantic: Irish, Italians and Swedes compared, 1800-1950 , European History Online, Mainz: Institute of European History, 2011, retrieved: 16 June 2011.
- Lintelman, Joy K. (2009). I Go to America: Swedish American Women and the Life of Mina Anderson. Minnesota Historical Society. ISBN 9780873516365.
- Kälvemark, Ann-Sofie (1976). Swedish Emigration Policy in an International Perspective, 1840–1925, in Runblom and Norman, From Sweden to America, 94–113.
- Norman, Hans (1976). The Causes of Emigration in Runblom and Norman, From Sweden to America, 149–164.
- Runblom, Harald, and Hans Norman (eds.) (1976). From Sweden to America: A History of the Migration. Minneapolis: University of Minnesota Press.
- Scott, Franklin D. (1965). Sweden's Constructive Opposition to Emigration, Journal of Modern History, Vol. 37, No. 3. (Sep. 1965), 307–335. in JSTOR
- The Swedish Emigrant Institute. Consulted 30 June 2007.
- Swenson Center, a research institute at Augustana College, Illinois. Consulted 7 May 2007. | https://worddisk.com/wiki/Swedish_emigration_to_the_United_States/ | 21 |
32 | Principles of Economics
Basic Concepts and Definitions
- Scarcity refers to the limited nature of society's resources.
- Economics is the study of how society manages its scarce resources, for example:
- what to buy, how to save, and spend
- how companies and firms decide how much to produce, how many workers to recruit
- how society plans to use their resources on consumer goods, education, healthcare, military, etc.
Ten Principles of Economics
Principle # 1: People face trade-offs
- How society gets the most from its resources (efficiency)
- Whether or not prosperity is distributed uniformly among society's members (equality)
- Redistributing wealth from rich to poor without jeopardizing incentives to be productive (trade-off)
- Example: taxes paid by rich people and then distributed to the poor may improve equality for some but
lower the incentive for hard work and therefore reduce the level of production.
Principle # 2: The cost of something is what you give up to get it
- Decisions are made based on comparing the costs and benefits of alternative choices.
- The opportunity cost of any item is defined as "whatever must be given up to obtain it".
Example: The cost of traveling to Disneyland is not just the price of the trip and the ticket, but also the value of the time spent at the theme park.
Principle # 3: Rational People Think at the Margin
- Rational people are known for doing whatever it takes to achieve their goals.
- Example: When an employee considers whether to pay for additional professional development, s/he weighs the fees against the extra income s/he could earn after completing the training.
Principle # 4: People Respond to Incentives
- An incentive is something, such as a reward, that motivates a person to act.
- Rational people react to incentives.
- Example: When electricity bills rise, homeowners (consumers) turn to renewable energy such as solar panels to generate electricity.
Exercise on Applying the Principles of Economics: Costs and Benefits
Check your answers here:
Solution to the Exercise on Applying the Principles of Economics: Costs and Benefits
Principle # 5: Trade Can Make Everyone Better Off
- A country or society is not expected to become fully self-sufficient; rather, it produces particular goods/services and exchanges them for other goods.
- How could countries take benefit from trade?
- Increase income by selling their goods to foreign countries
- Alternatively, it's cheaper to buy goods from abroad than to produce them locally
- Countries benefit from trading with one another
Principle # 6: Markets Are Usually A Good Way to Organize Economic Activity
- Market is defined as a group of buyers and sellers
- Economic activity is organized around which goods are produced, how they are produced, and who gets them.
- Market prices reflect both the value of a product to consumers and the cost of the resources used to produce it
- Definition of market economy: an economy that allocates resources through the decentralized decisions of many firms
and households as they interact in markets for goods and services.
- Centrally planned economies have failed because they did not allow the market to work properly
- Adam Smith and the Invisible Hand:
Adam Smith's 1776 work suggested that although individuals are motivated by self-interest,
an invisible hand guides this self-interest into promoting society's economic well-being.
- So, the interaction of buyers and sellers governs prices.
Principle # 7: Governments Can Sometimes Improve Market Outcomes
- There are two broad reasons for the government to interfere with the economy:
the promotion of efficiency and equality.
- Market failure is defined as a situation in which a market left on its own fails to allocate resources efficiently.
- An externality is the impact of one person’s actions on the well-being of a bystander, for example pollution.
- Market power is defined as the ability of a single economic actor (or small group of actors) to have
a substantial influence on market prices.
- Hence, government policy can be most useful where there is market failure, since intervention can promote efficiency.
Exercise on Applying the Principles of Economics: Government Role
Check your answers here:
Solution to the Exercise on Applying the Principles of Economics: Government Role
Principle # 8: A country's standard of living depends on its ability to produce goods & services
- Differences in the standard of living from one country to another are quite large.
- Differences in living standards over time are also quite large, and they are explained by differences in productivity.
- A key determinant of living standards is productivity, which represents
the quantity of goods and services produced per unit of labor.
- High productivity implies a high standard of living.
- Productivity depends on the tools, skills, well-educated workers, and access to the best available technology.
- Hence, economic policies must be drafted so that they have a positive impact on the capability to produce goods and services.
Principle # 9: Prices rise when the government prints too much money
- Inflation is defined as a sustained increase in the overall level of prices in the economy.
- Usually, the fall in the value of money is triggered when the government creates a large amount of money, and the faster this is done,
the greater the inflation rate is.
- In the long run, inflation is almost always caused by excessive growth in the quantity of money,
which causes the value of money to fall.
- Examples: Germany after World War I (in the 1920s), and the United States in the 1970s.
Principle # 10: Society faces a short-run trade-off between inflation and unemployment
- Most economists believe that, over a short-run horizon (1 to 2 years), the effect of a monetary injection
into the economy is lower unemployment and higher prices.
- An increase in the amount of money in the economy stimulates spending and
increases the demand of goods and services in the economy, which in turn
increases production and ultimately hiring more workers, i.e., lower unemployment.
Some economists are not sure if this relationship still exists.
Date of last modification: 2019 | https://www.initiatewebdevelopment.com/Economics/principles-economics.html | 21 |
17 | By the end of this section, you will be able to:
- Discuss the problems and benefits of divided government
- Define party polarization
- List the main explanations for partisan polarization
- Explain the implications of partisan polarization
In 1950, the American Political Science Association’s Committee on Political Parties (APSA) published an article offering a criticism of the current party system. The parties, it argued, were too similar. Distinct, cohesive political parties were critical for any well-functioning democracy. First, distinct parties offer voters clear policy choices at election time. Second, cohesive parties could deliver on their agenda, even under conditions of lower bipartisanship. The party that lost the election was also important to democracy because it served as the “loyal opposition” that could keep a check on the excesses of the party in power. Finally, the paper suggested that voters could signal whether they preferred the vision of the current leadership or of the opposition. This signaling would keep both parties accountable to the people and lead to a more effective government, better capable of meeting the country’s needs.
But, the APSA article continued, U.S. political parties of the day were lacking in this regard. Rarely did they offer clear and distinct visions of the country’s future, and, on the rare occasions they did, they were typically unable to enact major reforms once elected. Indeed, there was so much overlap between the parties when in office that it was difficult for voters to know whom they should hold accountable for bad results. The article concluded by advocating a set of reforms that, if implemented, would lead to more distinct parties and better government. While this description of the major parties as being too similar may have been accurate in the 1950s, that is no longer the case.51
THE PROBLEM OF DIVIDED GOVERNMENT
The problem of majority versus minority politics is particularly acute under conditions of divided government. Divided government occurs when one or more houses of the legislature are controlled by the party in opposition to the executive. Unified government occurs when the same party controls the executive and the legislature entirely. Divided government can pose considerable difficulties for both the operations of the party and the government as a whole. It makes fulfilling campaign promises extremely difficult, for instance, since the cooperation (or at least the agreement) of both Congress and the president is typically needed to pass legislation. Furthermore, one party can hardly claim credit for success when the other side has been a credible partner, or when nothing can be accomplished. Party loyalty may be challenged too, because individual politicians might be forced to oppose their own party agenda if it will help their personal reelection bids.
Divided government can also be a threat to government operations, although its full impact remains unclear.52 For example, when the divide between the parties is too great, government may shut down. A 1976 dispute between Republican president Gerald Ford and a Democrat-controlled Congress over the issue of funding for certain cabinet departments led to a ten-day shutdown of the government (although the federal government did not cease to function entirely). But beginning in the 1980s, the interpretation that Republican president Ronald Reagan’s attorney general gave to a nineteenth-century law required a complete shutdown of federal government operations until a funding issue was resolved (Figure 9.13).53
Clearly, the parties’ willingness to work together and compromise can be a very good thing. However, the past several decades have brought an increased prevalence of divided government. Since 1969, the U.S. electorate has sent the president a Congress of his own party in only seven of twenty-three congressional elections, and during George W. Bush’s first administration, the Republican majority was so narrow that a combination of resignations and defections gave the Democrats control before the next election could be held.
Over the short term, however, divided government can make for very contentious politics. A well-functioning government usually requires a certain level of responsiveness on the part of both the executive and the legislative branches. This responsiveness is hard enough if government is unified under one party. During the presidency of Democrat Jimmy Carter (1977–1981), despite the fact that both houses of Congress were controlled by Democratic majorities, the government was shut down on five occasions because of conflict between the executive and legislative branches.54 Shutdowns are even more likely when the president and at least one house of Congress are of opposite parties. During the presidency of Ronald Reagan, for example, the federal government shut down eight times; on seven of those occasions, the shutdown was caused by disagreements between Reagan and the Republican-controlled Senate on the one hand and the Democrats in the House on the other, over such issues as spending cuts, abortion rights, and civil rights.55 More such disputes and government shutdowns took place during the administrations of George H. W. Bush, Bill Clinton, and Barack Obama, when different parties controlled Congress and the presidency. The most recent government shutdown, the longest in U.S. history, began in December 2018 under the 115th Congress, when the presidency and both houses were controlled by Republican majorities, but continued into the 116th, which features a Democratically controlled House and a Republican Senate.
For the first few decades of the current pattern of divided government, the threat it posed to the government appears to have been muted by a high degree of bipartisanship, or cooperation through compromise. Many pieces of legislation were passed in the 1960s and 1970s with reasonably high levels of support from both parties. Most members of Congress had relatively moderate voting records, with regional differences within parties that made bipartisanship on many issues more likely.
For example, until the 1980s, northern and midwestern Republicans were often fairly progressive, supporting racial equality, workers’ rights, and farm subsidies. Southern Democrats were frequently quite socially and racially conservative and were strong supporters of states’ rights. Cross-party cooperation on these issues was fairly frequent. But in the past few decades, the number of moderates in both houses of Congress has declined. This has made it more difficult for party leadership to work together on a range of important issues, and for members of the minority party in Congress to find policy agreement with an opposing party president.
THE IMPLICATIONS OF POLARIZATION
The past thirty years have brought a dramatic change in the relationship between the two parties as fewer conservative Democrats and liberal Republicans have been elected to office. As political moderates, or individuals with ideologies in the middle of the ideological spectrum, leave the political parties at all levels, the parties have grown farther apart ideologically, a result called party polarization. In other words, at least organizationally and in government, Republicans and Democrats have become increasingly dissimilar from one another (Figure 9.14). In the party-in-government, this means fewer members of Congress have mixed voting records; instead they vote far more consistently on issues and are far more likely to side with their party leadership.56 It also means a growing number of moderate voters aren’t participating in party politics. Either they are becoming independents, or they are participating only in the general election and are therefore not helping select party candidates in primaries.
What is most interesting about this shift to increasingly polarized parties is that it does not appear to have happened as a result of the structural reforms recommended by APSA. Rather, it has happened because moderate politicians have simply found it harder and harder to win elections. There are many conflicting theories about the causes of polarization, some of which we discuss below. But whatever its origin, party polarization in the United States does not appear to have had the net positive effects that the APSA committee was hoping for. With the exception of providing voters with more distinct choices, positives of polarization are hard to find. The negative impacts are many. For one thing, rather than reducing interparty conflict, polarization appears to have only amplified it. For example, the Republican Party (or the GOP, standing for Grand Old Party) has historically been a coalition of two key and overlapping factions: pro-business rightists and social conservatives. The GOP has held the coalition of these two groups together by opposing programs designed to redistribute wealth (and advocating small government) while at the same time arguing for laws preferred by conservative Christians. But it was also willing to compromise with pro-business Democrats, often at the expense of social issues, if it meant protecting long-term business interests.
Recently, however, a new voice has emerged that has allied itself with the Republican Party. Born in part from an older third-party movement known as the Libertarian Party, the Tea Party is more hostile to government and views government intervention in all forms, and especially taxation and the regulation of business, as a threat to capitalism and democracy. It is less willing to tolerate interventions in the market place, even when they are designed to protect the markets themselves. Although an anti-tax faction within the Republican Party has existed for some time, some factions of the Tea Party movement are also active at the intersection of religious liberty and social issues, especially in opposing such initiatives as same-sex marriage and abortion rights.57 The Tea Party has argued that government, both directly and by neglect, is threatening the ability of evangelicals to observe their moral obligations, including practices some perceive as endorsing social exclusion.
Although the Tea Party is a movement and not a political party, 86 percent of Tea Party members who voted in 2012 cast their votes for Republicans.58 Some members of the Republican Party are closely affiliated with the movement, and before the 2012 elections, Tea Party activist Grover Norquist exacted promises from many Republicans in Congress that they would oppose any bill that sought to raise taxes.59 The inflexibility of Tea Party members has led to tense floor debates and was ultimately responsible for the 2014 primary defeat of Republican majority leader Eric Cantor and the 2015 resignation of the sitting Speaker of the House John Boehner. In 2015, Chris Christie, John Kasich, Ben Carson, Marco Rubio, and Ted Cruz, all of whom were Republican presidential candidates, signed Norquist’s pledge as well (Figure 9.15).
Movements on the left have also arisen. The Occupy Wall Street movement was born of the government’s response to the Great Recession of 2008 and its assistance to endangered financial institutions, provided through the Troubled Asset Relief Program, TARP (Figure 9.16). The Occupy Movement believed the recession was caused by a failure of the government to properly regulate the banking industry. The Occupiers further maintained that the government moved swiftly to protect the banking industry from the worst of the recession but largely failed to protect the average person, thereby worsening the growing economic inequality in the United States.
While the Occupy Movement itself has largely fizzled, the anti-business sentiment to which it gave voice continues within the Democratic Party, and many Democrats have proclaimed their support for the movement and its ideals, if not for its tactics.60 Champions of the left wing of the Democratic Party, however, such as former presidential candidate Senator Bernie Sanders and Massachusetts senator Elizabeth Warren, have ensured that the Occupy Movement’s calls for more social spending and higher taxes on the wealthy remain a prominent part of the national debate. Their popularity, and the growing visibility of race issues in the United States, have helped sustain the left wing of the Democratic Party. Bernie Sanders’ presidential run made these topics and causes even more salient, especially among younger voters. This reality led Hillary Clinton to move left during the primaries and attempt to win people over. However, the left never warmed up to Clinton after Sanders exited the race. After Clinton lost to Trump, many on the left blamed Clinton for not going far enough left, and they further claimed that Sanders would have had a better chance at beating Trump.61
Unfortunately, party factions haven’t been the only result of party polarization. By most measures, the U.S. government in general and Congress in particular have become less effective in recent years. Congress has passed fewer pieces of legislation, confirmed fewer appointees, and been less effective at handling the national purse than in recent memory. If we define effectiveness as legislative productivity, the 106th Congress (1999–2000) passed 463 pieces of substantive legislation (not including commemorative legislation, such as bills proclaiming an official doughnut of the United States). The 107th Congress (2001–2002) passed 294 such pieces of legislation. By 2013–2014, the total had fallen to 212.62
Perhaps the clearest sign of Congress’ ineffectiveness is that the threat of government shutdown has become a constant. Shutdowns occur when Congress and the president are unable to authorize and appropriate funds before the current budget runs out. This is now an annual problem. Relations between the two parties became so bad that financial markets were sent into turmoil in 2014 when Congress failed to increase the government’s line of credit before a key deadline, thus threatening a U.S. government default on its loans. While any particular trend can be the result of multiple factors, the decline of key measures of institutional confidence and trust suggest the negative impact of polarization. Public approval ratings for Congress have been near single digits for several years, and a poll taken in February 2016 revealed that only 11 percent of respondents thought Congress was doing a “good or excellent job.”63 In the wake of the Great Recession, President Obama’s average approval rating remained low for several years, despite an overall trend in economic growth since the end of 2008, before he enjoyed an uptick in support during his final year in office.64 Typically, economic conditions are a significant driver of presidential approval, suggesting the negative effect of partisanship on presidential approval.
THE CAUSES OF POLARIZATION
Scholars agree that some degree of polarization is occurring in the United States, even if some contend it is only at the elite level. But they are less certain about exactly why, or how, polarization has become such a mainstay of American politics. Several conflicting theories have been offered. The first and perhaps best argument is that polarization is a party-in-government phenomenon driven by a decades-long sorting of the voting public, or a change in party allegiance in response to shifts in party position.65 According to the sorting thesis, before the 1950s, voters were mostly concerned with state-level party positions rather than national party concerns. Since parties are bottom-up institutions, this meant local issues dominated elections; it also meant national-level politicians typically paid more attention to local problems than to national party politics.
But over the past several decades, voters have started identifying more with national-level party politics, and they began to demand their elected representatives become more attentive to national party positions. As a result, they have become more likely to pick parties that consistently represent national ideals, are more consistent in their candidate selection, and are more willing to elect office-holders likely to follow their party’s national agenda. One example of the way social change led to party sorting revolves around race.
The Democratic Party returned to national power in the 1930s largely as the result of a coalition among low socio-economic status voters in northern and midwestern cities. These new Democratic voters were religiously and ethnically more diverse than the mostly White, mostly Protestant voters who supported Republicans. But the southern United States (often called the “Solid South”) had been largely dominated by Democratic politicians since the Civil War. These politicians agreed with other Democrats on most issues, but they were more evangelical in their religious beliefs and less tolerant on racial matters. The federal nature of the United States meant that Democrats in other parts of the country were free to seek alliances with minorities in their states. But in the South, African Americans were still largely disenfranchised well after Franklin Roosevelt had brought other groups into the Democratic tent.
The Democratic alliance worked relatively well through the 1930s and 1940s when post-Depression politics revolved around supporting farmers and helping the unemployed. But in the late 1950s and early 1960s, social issues became increasingly prominent in national politics. Southern Democrats, who had supported giving the federal government authority for economic redistribution, began to resist calls for those powers to be used to restructure society. Many of these Democrats broke away from the party only to find a home among Republicans, who were willing to help promote smaller national government and greater states’ rights. 66 This shift was largely completed with the rise of the evangelical movement in politics, when it shepherded its supporters away from Jimmy Carter, an evangelical Christian, to Ronald Reagan in the 1980 presidential election.
At the same time social issues were turning the Solid South towards the Republican Party, they were having the opposite effect in the North and West. Moderate Republicans, who had been champions of racial equality since the time of Lincoln, worked with Democrats to achieve social reform. These Republicans found it increasingly difficult to remain in their party as it began to adjust to the growing power of the small government–states’ rights movement. A good example was Senator Arlen Specter, a moderate Republican who represented Pennsylvania and ultimately switched to become a Democrat before the end of his political career.
A second possible culprit in increased polarization is the impact of technology on the public square. Before the 1950s, most people got their news from regional newspapers and local radio stations. While some national programming did exist, most editorial control was in the hands of local publishers and editorial boards. These groups served as a filter of sorts as they tried to meet the demands of local markets.
As described in detail in the media chapter, the advent of television changed that. Television was a powerful tool, with national news and editorial content that provided the same message across the country. All viewers saw the same images of the women’s rights movement and the war in Vietnam. The expansion of news coverage to cable, and the consolidation of local news providers into big corporate conglomerates, amplified this nationalization. Average citizens were just as likely to learn what it meant to be a Republican from a politician in another state as from one in their own, and national news coverage made it much more difficult for politicians to run away from their votes. The information explosion that followed the heyday of network TV by way of cable, the Internet, and blogs has furthered this nationalization trend.
A final possible cause for polarization is the increasing sophistication of gerrymandering, or the manipulation of legislative districts in an attempt to favor a particular candidate (Figure 9.17). According to the gerrymandering thesis, the more moderate or heterogeneous a voting district, the more moderate the politician’s behavior once in office. Taking extreme or one-sided positions on a large number of issues would be hazardous for a member who needs to build a diverse electoral coalition. But if the district has been drawn to favor a particular group, it now is necessary for the elected official to serve only the portion of the constituency that dominates.
Gerrymandering is a centuries-old practice. There has always been an incentive for legislative bodies to draw districts in such a way that sitting legislators have the best chance of keeping their jobs. But changes in law and technology have transformed gerrymandering from a crude art into a science. The first advance came with the introduction of the “one-person-one-vote” principle by the U.S. Supreme Court in 1962. Before then, it was common for many states to practice redistricting, or redrawing of their electoral maps, only if they gained or lost seats in the U.S. House of Representatives. This can happen once every ten years as a result of a constitutionally mandated reapportionment process, in which the number of House seats given to each state is adjusted to account for population changes.
But if there was no change in the number of seats, there was little incentive to shift district boundaries. After all, if a legislator had won election based on the current map, any change to the map could make losing seats more likely. Even when reapportionment led to new maps, most legislators were more concerned with protecting their own seats than with increasing the number of seats held by their party. As a result, some districts had gone decades without significant adjustment, even as the U.S. population changed from largely rural to largely urban. By the early 1960s, some electoral districts had populations several times greater than those of their more rural neighbors.
However, in its one-person-one-vote decision in Reynolds v. Sims (1964), the Supreme Court argued that everyone’s vote should count roughly the same regardless of where they lived.67 Districts had to be adjusted so they would have roughly equal populations. Several states therefore had to make dramatic changes to their electoral maps during the next two redistricting cycles (1970–1972 and 1980–1982). Map designers, no longer certain how to protect individual party members, changed tactics to try to create safe seats so members of their party could be assured of winning by a comfortable margin. The basic rule of thumb was that designers sought to draw districts in which their preferred party had a 55 percent or better chance of winning a given district, regardless of which candidate the party nominated.
Of course, many early efforts at post-Reynolds gerrymandering were crude since map designers had no good way of knowing exactly where partisans lived. At best, designers might have a rough idea of voting patterns between precincts, but they lacked the ability to know voting patterns in individual blocks or neighborhoods. They also had to contend with the inherent mobility of the U.S. population, which meant the most carefully drawn maps could be obsolete just a few years later. Designers were often forced to use crude proxies for party, such as race or the socio-economic status of a neighborhood (Figure 9.18). Some maps were so crude they were ruled unconstitutionally discriminatory by the courts.
Proponents of the gerrymandering thesis point out that the decline in the number of moderate voters began during this period of increased redistricting. But it wasn’t until later, they argue, that the real effects could be seen. A second advance in redistricting, via computer-aided map making, truly transformed gerrymandering into a science. Refined computing technology, the ability to collect data about potential voters, and the use of advanced algorithms have given map makers a good deal of certainty about where to place district boundaries to best predetermine the outcomes. These factors also provided better predictions about future population shifts, making the effects of gerrymandering more stable over time. Proponents argue that this increased efficiency in map drawing has led to the disappearance of moderates in Congress.
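To make that logic concrete, the sketch below is a purely illustrative Python toy, not any real redistricting tool: the precinct names and partisan shares are invented, and the "pack one district, crack the rest" routine is only a simplified stand-in for what modern map-drawing software does with far richer voter data and geographic constraints. It shows how a party with barely 52 percent support statewide can concede one district and still draw every remaining district above the 55 percent "safe seat" threshold described above.

```python
# Illustrative "packing and cracking" toy -- hypothetical precincts, not real data.
# Each precinct: (name, share of voters expected to back the map-drawing party)
precincts = [
    ("P01", 0.80), ("P02", 0.75), ("P03", 0.70), ("P04", 0.62),
    ("P05", 0.58), ("P06", 0.55), ("P07", 0.48), ("P08", 0.45),
    ("P09", 0.40), ("P10", 0.35), ("P11", 0.30), ("P12", 0.25),
]
NUM_DISTRICTS = 3
SIZE = len(precincts) // NUM_DISTRICTS
TARGET = 0.55  # the 55 percent "safe seat" rule of thumb

# Statewide, the favored party averages only about 52 percent support.
print(f"Statewide support: {sum(s for _, s in precincts) / len(precincts):.0%}")

# 1. "Pack" the least favorable precincts into one conceded district.
by_support = sorted(precincts, key=lambda p: p[1])
packed, remainder = by_support[:SIZE], by_support[SIZE:]

# 2. "Crack" the remaining precincts evenly across the other districts
#    (snake order keeps their averages balanced) so each clears the target.
remainder.sort(key=lambda p: p[1], reverse=True)
others = [[] for _ in range(NUM_DISTRICTS - 1)]
for i, precinct in enumerate(remainder):
    rnd, pos = divmod(i, len(others))
    others[pos if rnd % 2 == 0 else len(others) - 1 - pos].append(precinct)

for label, members in [("Packed", packed)] + [(f"Safe {i + 1}", d) for i, d in enumerate(others)]:
    share = sum(s for _, s in members) / len(members)
    print(f"{label}: average support {share:.0%} -> {'safe seat' if share >= TARGET else 'conceded'}")
```

Running the sketch, the packed district comes out around 33 percent support (conceded), while the other two land near 61–62 percent, both safely above the threshold, even though the statewide split is nearly even. Real map makers add contiguity and legal requirements, but the underlying arithmetic is essentially this.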
According to political scientist Nolan McCarty, there is little evidence to support the redistricting hypothesis alone. First, he argues, the Senate has become polarized just as the House of Representatives has, but people vote for Senators on a statewide basis. There are no gerrymandered voting districts in elections for senators. Research showing that more partisan candidates first win election to the House before then running successfully for the Senate, however, helps us understand how the Senate can also become partisan.68 Furthermore, states like Wyoming and Vermont, which have only one Representative and thus elect House members on a statewide basis as well, have consistently elected people at the far ends of the ideological spectrum.69 Redistricting did contribute to polarization in the House of Representatives, but it took place largely in districts that had undergone significant change.70
Furthermore, polarization has been occurring throughout the country, but the use of increasingly polarized district design has not. While some states have seen an increase in these practices, many states were already largely dominated by a single party (such as in the Solid South) but still elected moderate representatives. Some parts of the country have remained closely divided between the two parties, making overt attempts at gerrymandering difficult. But when coupled with the sorting phenomenon discussed above, redistricting probably is contributing to polarization, if only at the margins.
The Politics of Redistricting
Voters in a number of states have become so worried about the problem of gerrymandering that they have tried to deny their legislatures the ability to draw district boundaries. The hope is that by taking this power away from whichever party controls the state legislature, voters can ensure more competitive districts and fairer electoral outcomes.
In 2000, voters in Arizona approved a referendum that created an independent state commission responsible for drafting legislative districts. But the Arizona legislature fought back against the creation of the commission, filing a lawsuit that claimed only the legislature had the constitutional right to draw districts. Legislators asked the courts to overturn the popular referendum and end the operation of the redistricting commission. However, the U.S. Supreme Court upheld the authority of the independent commission in a 5–4 decision titled Arizona State Legislature v. Arizona Independent Redistricting Commission (2015).71
Currently, only five states use fully independent commissions—ones that do not include legislators or other elected officials—to draw the lines for both state legislative and congressional districts. These states are Arizona, California, Idaho, Montana, and Washington. In Florida, the League of Women Voters and Common Cause challenged a new voting districts map supported by state Republicans, because they did not believe it fulfilled the requirements of amendments made to the state constitution in 2010 requiring that voting districts not favor any political party or incumbent.72
Do you think redistricting is a partisan issue? Should commissions draw districts instead of legislators? If commissions are given this task, who should serve on them?
Comment: A 20,000-tonne oil spill is contaminating the Arctic – it could take decades to clean up
The spill perhaps didn’t get the international attention it warranted as it happened in the midst of a global pandemic and just a few days after the death of African-American George Floyd, which sparked a wave of Black Lives Matter protests. But the spill was a major disaster with serious implications.
As experts in Arctic ecosystems, we are worried about the long-term impacts of this diesel spill in such pristine environments where cold, harsh conditions mean that life is limited. While bacteria are known to “clean up” oil spills elsewhere in the world, in the Arctic, their low numbers and slow rates of activity could mean diesel products linger for years, if not decades.
A diesel spill differs from other oil spills
Major oil spills such as that of the Exxon Valdez in 1989 or Deepwater Horizon in 2010 typically involve thick, gloopy crude oil that sits on the surface of seawater. For these sorts of spills, clean-up best practice is well known. However, the recent Norilsk spill involved thinner, less gloopy diesel oil in freshwater, making clean-up more difficult.
Diesel oil contains between 2,000 and 4,000 types of hydrocarbon (the naturally occurring building blocks of fossil fuels), which break down differently in the environment. Typically, 50% or more can evaporate within hours and days, harming the environment and causing respiratory problems for people nearby.
Other, more resistant chemicals can bind with algae and microorganisms in the water and sink, creating a toxic sludge on the bed of the river or lake. This gives the impression that the contamination has been removed and is no longer a threat. However, this sludge can persist for months or years.
How different parts of the ecosystem respond
At the bottom of the food chain in rivers and lakes are microscopic plants and algae that need sunlight to create energy through photosynthesis. When oil first enters the water it sits on the surface and forms a sort of oily sun block, and so these organisms rapidly decrease in number. Zooplankton (tiny animals) that feed on them also eventually die off.
Over time, wind and currents help disperse this oily layer, but some oil will sink to the bottom and, with their predators diminished, algae will return in even greater numbers.
Soils in the Russian Arctic harbour fewer organisms than elsewhere in the world, thanks to cold, harsh conditions, where the ground is often frozen, liquid water is scarce and there are few nutrients available. But nonetheless, these soils are still teeming with life and badly affected by oil spills.
Initially, oil coats soil particles, reducing their ability to absorb water and nutrients, negatively affecting soil organisms as they are unable to access food and water essential for survival. This oily coat can last for years as it is very hard to wash off, so often the soil has to be physically removed.
As of July 6, Nornickel, the mining company that owned the storage tank, says it has removed 185,000 tonnes of contaminated soil (about 14 times the weight of the Brooklyn Bridge). The soil is being stored on site to be “cleaned” by certified contaminant experts by early September.
The “cleaned” soil will then likely be returned to its original site. Also, 13 Olympic swimming pools’ worth of fuel-contaminated water has been pumped from the river to a nearby industrial site where harmful chemicals will be separated and the “clean” water will likely be returned to the river.
This is better than nothing, although toxins will likely remain in both the water and soil. Over months and years, these toxins will build up within the food chain, starting with the microscopic organisms and eventually causing health problems in larger organisms such as fish and birds.
Some of these small, largely invisible organisms in both the soil and freshwater can in theory be part of the solution. Diesel contains carbon (which is essential for all life) and some microorganisms actually thrive on fuel spills, helping to break down contaminants by using the carbon as a food source.
Normally, cold Arctic conditions hinder microbial activity and biodegradation. The current Arctic heatwave may speed up this process initially, enabling oil-degrading microorganisms to grow, reproduce and consume these contaminants more rapidly than normal. But due to the region’s lack of water and the nitrogen and phosphorous needed for growth, even a heatwave can only help these microorganisms so much.
This will probably happen again
Russian authorities have blamed the collapse on the poor state of the fuel tank and have requested Nornickel pay “voluntary compensation” for environmental damage. Nornickel denies negligence and says the fuel tank failed due to rapidly thawing permafrost.
This spring saw Siberia experience temperatures 10°C warmer than average and, with permafrost underlying most of Russia, the region is highly vulnerable to climate warming. Indeed, 45% of oil and gas extraction fields in the Russian Arctic are at risk of infrastructure instability due to thawing permafrost.
Without more stringent regulations to improve existing infrastructure, more spills are likely to occur, especially given how rapidly permafrost is thawing in these areas, leaving the ground unstable.
While nature and her oil-degrading microbial communities can help clean up our mess, we should avoid relying on a largely invisible force that we don’t fully understand to fix a much larger human-generated problem. And how can an environment already on the edge of devastation ever fully recover?
Habitat destruction (also termed habitat loss and habitat reduction) is the process by which a natural habitat becomes incapable of supporting its native species. The organisms that previously inhabited the site are displaced or dead, thereby reducing biodiversity and species abundance. Habitat destruction is the leading cause of biodiversity loss.
Activities such as harvesting natural resources, industrial production and urbanization are human contributions to habitat destruction. Pressure from agriculture is the principal human cause. Some others include mining, logging, trawling, and urban sprawl. Habitat destruction is currently considered the primary cause of species extinction worldwide. Environmental factors can contribute to habitat destruction more indirectly. Geological processes, climate change, introduction of invasive species, ecosystem nutrient depletion, water and noise pollution are some examples. Loss of habitat can be preceded by an initial habitat fragmentation.
Attempts to address habitat destruction are embodied in international policy commitments such as Sustainable Development Goal 15 "Life on Land" and Sustainable Development Goal 14 "Life Below Water". However, the United Nations Environment Programme report "Making Peace with Nature", released in 2021, found that most of these efforts had failed to meet their internationally agreed-upon goals.
Habitat loss is perhaps the greatest threat to organisms and biodiversity. Temple (1986) found that 82% of endangered bird species were significantly threatened by habitat loss. Most amphibian species are also threatened by native habitat loss, and some species are now only breeding in modified habitat. Endemic organisms with limited ranges are most affected by habitat destruction, mainly because these organisms are not found anywhere else within the world, and thus have less chance of recovering. Many endemic organisms have very specific requirements for their survival that can only be found within a certain ecosystem, resulting in their extinction. Extinction may also take place very long after the destruction of habitat, a phenomenon known as extinction debt. Habitat destruction can also decrease the range of certain organism populations. This can result in the reduction of genetic diversity and perhaps the production of infertile offspring, as these organisms would have a higher possibility of mating with related organisms within their population, or different species. One of the most famous examples is the impact upon China's giant panda, once found in many areas of Sichuan. Now it is only found in fragmented and isolated regions in the southwest of the country, as a result of widespread deforestation in the 20th century.
As habitat destruction of an area occurs, species diversity shifts from a combination of habitat generalists and specialists to a population consisting primarily of generalist species. Invasive species are frequently generalists that are able to survive in a much wider range of habitats. Habitat destruction that contributes to climate change also upsets the balance that keeps species above the extinction threshold, leading to a higher likelihood of extinction.
Habitat fragmentation underlies many of the conservation problems that exist today. Many experiments have been run to examine whether habitat fragmentation is correlated with the loss of habitat for different species; one survey identified around 20 such experiments worldwide, whose main purpose was to explain general issues in ecology. These fragmentation experiments have been maintained over long periods because of the effects fragmentation has had in the areas where it occurs: smaller habitat patches are expected to hold on to fewer species than larger areas.
Habitat loss is the most obvious consequence of fragmentation, but fragmentation also erodes the biodiversity of the area that remains. Fragmentation prevents species from living as they are naturally accustomed to: it isolates them, reduces the area in which they can live, and creates ecological boundaries they cannot cross. Studies are beginning to show that many species are losing richness as a result, and that changes in abiotic and biotic parameters can have a greater impact on the ecology of an area than the loss of habitat itself. These studies also suggest that crowding species into small spaces will eventually lead to their extinction. Fragmentation can therefore be seen as a major cause of these effects on species.
In recent times, the destruction of habitat has been the main cause of the loss of many different species. Even when the destroyed area is small, the damage accumulates over time and gradually increases the risk of extinction. It is not the only cause of extinction, but most other causes trace back to the loss of habitat. Consider a simple three-species system with species x, y, and z: if species z, the predator, were to go extinct, its prey would increase and could become overpopulated. With a higher number of any species, resources can be overused or exploited, and because many species' habitats depend on those natural resources, their depletion eventually destroys most of the habitat. The extinction of a single species can therefore change the whole system drastically.
Habitat destruction and fragmentation are the two most important factors in species extinction. The negative effects of decreasing patch size and increasing isolation are often attributed to fragmentation alone, but in reality they have much larger effects on populations. Fragmentation itself generally has either no effect or a negative effect on population survival, and because habitat loss and fragmentation typically occur together, it is still not clear which process has the larger effect on extinction. To ensure that there is no further habitat loss, fragmentation must also be mitigated or reduced; decreasing patch size, increasing isolation, and habitat loss through fragmentation are all connected in ways that have negatively affected the environment.
Biodiversity hotspots are chiefly tropical regions that feature high concentrations of endemic species and, when all hotspots are combined, may contain over half of the world's terrestrial species. These hotspots are suffering from habitat loss and destruction. Most of the natural habitat on islands and in areas of high human population density has already been destroyed (WRI, 2003). Islands suffering extreme habitat destruction include New Zealand, Madagascar, the Philippines, and Japan. South and East Asia -- especially China, India, Malaysia, Indonesia, and Japan -- and many areas in West Africa have extremely dense human populations that allow little room for natural habitat. Marine areas close to highly populated coastal cities also face degradation of their coral reefs or other marine habitat. These areas include the eastern coasts of Asia and Africa, northern coasts of South America, and the Caribbean Sea and its associated islands.
Regions of unsustainable agriculture or unstable governments, which may go hand-in-hand, typically experience high rates of habitat destruction. Central America, Sub-Saharan Africa, and the Amazonian tropical rainforest areas of South America are the main regions with unsustainable agricultural practices and/or government mismanagement.
Areas of high agricultural output tend to have the highest extent of habitat destruction. In the U.S., less than 25% of native vegetation remains in many parts of the East and Midwest. Only 15% of land area remains unmodified by human activities in all of Europe.
Tropical rainforests have received most of the attention concerning the destruction of habitat. From the approximately 16 million square kilometers of tropical rainforest habitat that originally existed worldwide, less than 9 million square kilometers remain today. The current rate of deforestation is 160,000 square kilometers per year, which equates to a loss of approximately 1% of original forest habitat each year.
Other forest ecosystems have suffered as much or more destruction as tropical rainforests. Deforestation for farming and logging have severely disturbed at least 94% of temperate broadleaf forests; many old growth forest stands have lost more than 98% of their previous area because of human activities. Tropical deciduous dry forests are easier to clear and burn and are more suitable for agriculture and cattle ranching than tropical rainforests; consequently, less than 0.1% of dry forests in Central America's Pacific Coast and less than 8% in Madagascar remain from their original extents.
Plains and desert areas have been degraded to a lesser extent. Only 10-20% of the world's drylands, which include temperate grasslands, savannas, shrublands, scrub, and deciduous forests, have been somewhat degraded. But included in that 10-20% of land is the approximately 9 million square kilometers of seasonally dry lands that humans have converted to deserts through the process of desertification. The tallgrass prairies of North America, on the other hand, have less than 3% of natural habitat remaining that has not been converted to farmland.
Wetlands and marine areas have endured high levels of habitat destruction. More than 50% of wetlands in the U.S. have been destroyed in just the last 200 years. Between 60% and 70% of European wetlands have been completely destroyed. In the United Kingdom, there has been an increase in demand for coastal housing and tourism which has caused a decline in marine habitats over the last 60 years. The rising sea levels and temperatures have caused soil erosion, coastal flooding, and loss of quality in the UK marine ecosystem. About one-fifth (20%) of marine coastal areas have been highly modified by humans. One-fifth of coral reefs have also been destroyed, and another fifth has been severely degraded by overfishing, pollution, and invasive species; 90% of the Philippines' coral reefs alone have been destroyed. Finally, over 35% of the mangrove ecosystems worldwide have been destroyed.
Habitat destruction through natural processes such as volcanism, fire, and climate change is well documented in the fossil record. One study shows that habitat fragmentation of tropical rainforests in Euramerica 300 million years ago led to a great loss of amphibian diversity, but simultaneously the drier climate spurred on a burst of diversity among reptiles.
Habitat destruction caused by humans includes land conversion from forests, etc. to arable land, urban sprawl, infrastructure development, and other anthropogenic changes to the characteristics of land. Habitat degradation, fragmentation, and pollution are aspects of habitat destruction caused by humans that do not necessarily involve overt destruction of habitat, yet result in habitat collapse. Desertification, deforestation, and coral reef degradation are specific types of habitat destruction for those areas (deserts, forests, coral reefs).
Geist and Lambin (2002) assessed 152 case studies of net losses of tropical forest cover to determine any patterns in the proximate and underlying causes of tropical deforestation. Their results, yielded as percentages of the case studies in which each parameter was a significant factor, provide a quantitative prioritization of which proximate and underlying causes were the most significant. The proximate causes were clustered into broad categories of agricultural expansion (96%), infrastructure expansion (72%), and wood extraction (67%). Therefore, according to this study, forest conversion to agriculture is the main land use change responsible for tropical deforestation. The specific categories reveal further insight into the specific causes of tropical deforestation: transport extension (64%), commercial wood extraction (52%), permanent cultivation (48%), cattle ranching (46%), shifting (slash and burn) cultivation (41%), subsistence agriculture (40%), and fuel wood extraction for domestic use (28%). One result is that shifting cultivation is not the primary cause of deforestation in all world regions, while transport extension (including the construction of new roads) is the largest single proximate factor responsible for deforestation.
Rising global temperatures, caused by the greenhouse effect, contribute to habitat destruction, endangering various species, such as the polar bear. Melting ice caps promote rising sea levels and floods which threaten natural habitats and species globally.
While the above-mentioned activities are the proximal or direct causes of habitat destruction in that they actually destroy habitat, this still does not identify why humans destroy habitat. The forces that cause humans to destroy habitat are known as drivers of habitat destruction. Demographic, economic, sociopolitical, scientific and technological, and cultural drivers all contribute to habitat destruction.
Demographic drivers include the expanding human population; rate of population increase over time; spatial distribution of people in a given area (urban versus rural), ecosystem type, and country; and the combined effects of poverty, age, family planning, gender, and education status of people in certain areas. Most of the exponential human population growth worldwide is occurring in or close to biodiversity hotspots. This may explain why human population density accounts for 87.9% of the variation in numbers of threatened species across 114 countries, providing indisputable evidence that people play the largest role in decreasing biodiversity. The boom in human population and migration of people into such species-rich regions are making conservation efforts not only more urgent but also more likely to conflict with local human interests. The high local population density in such areas is directly correlated to the poverty status of the local people, most of whom lack an education and family planning.
According to the Geist and Lambin (2002) study, the underlying driving forces were prioritized as follows (with the percent of the 152 cases the factor played a significant role in): economic factors (81%), institutional or policy factors (78%), technological factors (70%), cultural or socio-political factors (66%), and demographic factors (61%). The main economic factors included commercialization and growth of timber markets (68%), which are driven by national and international demands; urban industrial growth (38%); low domestic costs for land, labor, fuel, and timber (32%); and increases in product prices mainly for cash crops (25%). Institutional and policy factors included formal pro-deforestation policies on land development (40%), economic growth including colonization and infrastructure improvement (34%), and subsidies for land-based activities (26%); property rights and land-tenure insecurity (44%); and policy failures such as corruption, lawlessness, or mismanagement (42%). The main technological factor was the poor application of technology in the wood industry (45%), which leads to wasteful logging practices. Within the broad category of cultural and sociopolitical factors are public attitudes and values (63%), individual/household behavior (53%), public unconcern toward forest environments (43%), missing basic values (36%), and unconcern by individuals (32%). Demographic factors were the in-migration of colonizing settlers into sparsely populated forest areas (38%) and growing population density--a result of the first factor--in those areas (25%).
There are also feedbacks and interactions among the proximate and underlying causes of deforestation that can amplify the process. Road construction has the largest feedback effect, because it interacts with--and leads to--the establishment of new settlements and more people, which causes a growth in wood (logging) and food markets. Growth in these markets, in turn, progresses the commercialization of agriculture and logging industries. When these industries become commercialized, they must become more efficient by utilizing larger or more modern machinery that often has a worse effect on the habitat than traditional farming and logging methods. Either way, more land is cleared more rapidly for commercial markets. This common feedback example manifests just how closely related the proximate and underlying causes are to each other.
Habitat destruction can vastly increase an area's vulnerability to natural disasters like flood and drought, crop failure, spread of disease, and water contamination. On the other hand, a healthy ecosystem with good management practices can reduce the chance of these events happening, or will at least mitigate adverse impacts. Eliminating swamps - the habitat of pests such as mosquitoes - has contributed to the prevention of diseases such as malaria; it was the drainage of swampland, for example, that eradicated malaria from the Fenlands in Britain and the Pontine marshes of Italy.
Agricultural land can actually suffer from the destruction of the surrounding landscape. Over the past 50 years, the destruction of habitat surrounding agricultural land has degraded approximately 40% of agricultural land worldwide via erosion, salinization, compaction, nutrient depletion, pollution, and urbanization. Humans also lose direct uses of natural habitat when habitat is destroyed. Aesthetic uses such as birdwatching, recreational uses like hunting and fishing, and ecotourism usually rely upon virtually undisturbed habitat. Many people value the complexity of the natural world and are disturbed by the loss of natural habitats and of animal- or plant-species worldwide.
Probably the most profound impact that habitat destruction has on people is the loss of many valuable ecosystem services. Habitat destruction has altered nitrogen, phosphorus, sulfur, and carbon cycles, which has increased the frequency and severity of acid rain, algal blooms, and fish kills in rivers and oceans and contributed tremendously to global climate change. One ecosystem service whose significance is becoming better understood is climate regulation. On a local scale, trees provide windbreaks and shade; on a regional scale, plant transpiration recycles rainwater and maintains constant annual rainfall; on a global scale, plants (especially trees from tropical rainforests) from around the world counter the accumulation of greenhouse gases in the atmosphere by sequestering carbon dioxide through photosynthesis. Other ecosystem services that are diminished or lost altogether as a result of habitat destruction include watershed management, nitrogen fixation, oxygen production, pollination (see pollinator decline), waste treatment (i.e., the breaking down and immobilization of toxic pollutants), and nutrient recycling of sewage or agricultural runoff.
The loss of trees from the tropical rainforests alone represents a substantial diminishing of Earth's ability to produce oxygen and to use up carbon dioxide. These services are becoming even more important as increasing carbon dioxide levels is one of the main contributors to global climate change. The loss of biodiversity may not directly affect humans, but the indirect effects of losing many species as well as the diversity of ecosystems in general are enormous. When biodiversity is lost, the environment loses many species that perform valuable and unique roles in the ecosystem. The environment and all its inhabitants rely on biodiversity to recover from extreme environmental conditions. When too much biodiversity is lost, a catastrophic event such as an earthquake, flood, or volcanic eruption could cause an ecosystem to crash, and humans would obviously suffer from that. Loss of biodiversity also means that humans are losing animals that could have served as biological-control agents and plants that could potentially provide higher-yielding crop varieties, pharmaceutical drugs to cure existing or future diseases (such as cancer), and new resistant crop-varieties for agricultural species susceptible to pesticide-resistant insects or virulent strains of fungi, viruses, and bacteria.
The negative effects of habitat destruction usually impact rural populations more directly than urban populations. Across the globe, poor people suffer the most when natural habitat is destroyed, because less natural habitat means fewer natural resources per capita, yet wealthier people and countries can simply pay more to continue to receive more than their per capita share of natural resources.
Another way to view the negative effects of habitat destruction is to look at the opportunity cost of destroying a given habitat. In other words, what do people lose out on with the removal of a given habitat? A country may increase its food supply by converting forest land to row-crop agriculture, but the value of the same land may be much larger when it can supply natural resources or services such as clean water, timber, ecotourism, or flood regulation and drought control.
The rapid expansion of the global human population is increasing the world's food requirement substantially. Simple logic dictates that more people will require more food. In fact, as the world's population increases dramatically, agricultural output will need to increase by at least 50%, over the next 30 years. In the past, continually moving to new land and soils provided a boost in food production to meet the global food demand. That easy fix will no longer be available, however, as more than 98% of all land suitable for agriculture is already in use or degraded beyond repair.
The impending global food crisis will be a major source of habitat destruction. Commercial farmers are going to become desperate to produce more food from the same amount of land, so they will use more fertilizers and show less concern for the environment to meet the market demand. Others will seek out new land or will convert other land-uses to agriculture. Agricultural intensification will become widespread at the cost of the environment and its inhabitants. Species will be pushed out of their habitat either directly by habitat destruction or indirectly by fragmentation, degradation, or pollution. Any efforts to protect the world's remaining natural habitat and biodiversity will compete directly with humans' growing demand for natural resources, especially new agricultural lands.
Tropical deforestation: In most cases of tropical deforestation, three to four underlying causes are driving two to three proximate causes. This means that a universal policy for controlling tropical deforestation would not be able to address the unique combination of proximate and underlying causes of deforestation in each country. Before any local, national, or international deforestation policies are written and enforced, governmental leaders must acquire a detailed understanding of the complex combination of proximate causes and underlying driving forces of deforestation in a given area or country. This concept, along with many other results of tropical deforestation from the Geist and Lambin study, can easily be applied to habitat destruction in general.
Shoreline erosion: Coastal erosion is a natural process as storms, waves, tides and other water level changes occur. Shoreline stabilization can be done by barriers between land and water such as seawalls and bulkheads. Living shorelines are gaining attention as a new stabilization method. These can reduce damage and erosion while simultaneously providing ecosystem services to society, such as food production, nutrient and sediment removal, and water quality improvement.
Preventing an area from losing its specialist species to generalist invasive species depends on the extent of the habitat destruction that has already taken place. In areas where habitat is relatively undisturbed, halting further habitat destruction may be enough. In areas where habitat destruction is more extreme (fragmentation or patch loss), restoration ecology may be needed.
Education of the general public is possibly the best way to prevent further human habitat destruction, by shifting the slow creep of environmental impacts from being viewed as acceptable to being seen as a reason to change to more sustainable practices. Education about the necessity of family planning to slow population growth is also important, as a greater population leads to greater human-caused habitat destruction.
The biggest potential for solving the issue of habitat destruction comes from solving the political, economic, and social problems that go along with it, such as individual and commercial material consumption, sustainable extraction of resources, conservation areas, restoration of degraded land, and addressing climate change.
Governmental leaders need to take action by addressing the underlying driving forces, rather than merely regulating the proximate causes. In a broader sense, governmental bodies at the local, national, and international scales need to emphasize these underlying drivers in their planning and policy.
Gene mutations can cause hearing loss in several ways.
Genetic factors make some people more susceptible to hearing loss than others. Their genes make them more predisposed to hearing loss due to ageing or induced by noise, drugs or infections. It is estimated that the causes of age-related hearing loss are 35-55% genetic.
Genes in ear cells affect our hearing
Genes are chemical units found inside all cells of the human body. Inside the cell, genes are organized into structures called chromosomes, which are made up of DNA and hold our hereditary characteristics. Every cell in the human body contains some 30,000 genes.
Some of the genes in ear cells affect our hearing and help determine how sounds are turned into signals that the brain understands.
At times, changes occur in the DNA of the genes, affecting their functioning. If these mutations occur in a gene with important information about our sense of hearing, the result may be hearing loss or, in extreme cases, deafness.
Examples of hereditary conditions causing hearing loss include Otosclerosis, Usher's syndrome and Pendred syndrome. You can find more specific information about different syndromes under "syndromes and hearing loss".
Inner ear sensory hair cells play a vital role in our hearing, and mutations in these cells can prevent them from functioning properly, resulting in hearing loss.
Finally, gene mutations may cause several non-hearing related, hereditary conditions combined with a deformation of the inner ear, resulting in deafness at birth or later in life.
Inherited from parents
Some children are born with a hearing loss, or are born with genes that will cause them to develop a hearing loss later in life. This is called a congenital hearing loss. In most cases it is genetics that causes a newborn's hearing loss.
All human genes exist in two copies passed on from the mother and the father, respectively. The risk of hearing loss may depend on whether a possible mutation is dominant or recessive. A dominant mutation causes hearing loss if just one of the inherited copies from the parents is damaged. Recessive mutations manifest themselves as hearing loss only if both copies are damaged, i.e. if both parents are carriers of the gene mutation.
Different types of hearing loss
Pinpointing the genetic causes for a specific hearing loss is complicated. Many different genes can cause the same type of hearing loss and the same genes can also be involved in different types of hearing loss. Two people with the same gene mutation may still have very different levels of hearing ability. | https://www.hear-it.org/Genetic-hearing-loss | 21 |
16 | The pilgrims were not the only people who did not like, or accept, British rule. When the French ceded some of their territory to the British, the Indian tribes in these areas were not happy about the new regime. The French had more or less left them alone to do as they chose, and so they tended to live in relative peace, but the British were a different kind of ruler, and the Indians felt the British were far less conciliatory than their predecessors. It wasn't that the French and the Indians had always gotten along well; after all, the French and Indian War had just ended in the early 1760s. It was simply that the British were more demanding, and less accommodating of Indian rights, than the French had been.
As the matter became more and more heated, an Ottawa chief named Pontiac decided that it was time for the Indian tribes to rebel. So, he called together a confederacy of Native warriors to attack the British fort at Detroit. In 1762, Pontiac enlisted support from practically every tribe from Lake Superior to the lower Mississippi for a joint campaign to expel the British from the formerly French-occupied lands. According to Pontiac's plan, each tribe would seize the nearest fort and then join forces to wipe out the undefended settlements. In April 1763, Pontiac convened a war council on the banks of the Ecorse River near Detroit. It was decided that Pontiac and his warriors would gain access to the British fort at Detroit under the pretense of negotiating a peace treaty, giving them an opportunity to forcibly seize the arsenal there. However, British Major Henry Gladwin learned of the plot, and the British were ready when Pontiac arrived in early May 1763, so Pontiac was forced to begin a siege. His Indian allies in Pennsylvania began a siege of Fort Pitt, while other sympathetic tribes, such as the Delaware, the Shawnees, and the Seneca, prepared to move against various British forts and outposts in Michigan, New York, Pennsylvania, Maryland and Virginia at the same time. After failing to take the fort in their initial assault, Pontiac's forces, made up of Ottawas and reinforced by Wyandots, Ojibwas and Potawatomis, initiated a siege that would stretch into months.
A British relief expedition attacked Pontiac's camp on July 31, 1763. They suffered heavy losses and were repelled in the Battle of Bloody Run. However, they did succeed in providing the fort at Detroit with reinforcements and supplies, which allowed the fort to hold out against the Indians into the fall. Also holding on were the major forts at Pitt and Niagara, but the united tribes captured eight other fortified posts. At these forts, the garrisons were wiped out, relief expeditions were repulsed, and nearby frontier settlements were destroyed.
Two British armies were sent out in the spring of 1764. One was sent into Pennsylvania and Ohio under Colonel Bouquet, and the other to the Great Lakes under Colonel John Bradstreet. Bouquet’s campaign met with success, and the Delawares and the Shawnees were forced to sue for peace, breaking Pontiac’s alliance. Failing to persuade tribes in the West to join his rebellion, and lacking the hoped-for support from the French, Pontiac finally signed a treaty with the British in 1766. In 1769, he was murdered by a Peoria tribesman while visiting Illinois. His death led to bitter warfare among the tribes, and the Peorias were nearly wiped out.
Apartment living is something many people do, and while they might dream of a house, or even have one, there can be reasons for having an apartment too. The oilfield would be one example of the need for a second place to live. Often, oil field workers must travel to the worksite. Once there, they have to stay there for a time, because traveling to and from home twice a day is just not feasible. Many oilfield companies provide living quarters for their employees. Sometimes it is a local motel, sometimes apartments, and sometimes, as with off shore drilling operations, companies must get innovative.
Some living quarters for oil field workers are quite a bit different from others. The Edda oil rig in the Ekofisk field, 235 miles east of Dundee, Scotland, had just such an unusual housing arrangement for its employees. The Alexander Kielland platform was a floating apartment unit in the North Sea that housed 208 people. The majority of the Phillips Petroleum workers were from Norway, but a few were American and British. The platform was held up by two large pontoons. It had bedrooms, kitchens, and lounges, and provided a place for workers to spend their time when not working. It was truly a comfortable home away from home…for the most part.
On March 27, 1980, at about 6:30pm, most of the residents were in the platform's small theater watching a movie. There was a storm brewing, but although there were gale conditions in the North Sea that evening, no one was expecting that a large wave would cause the platform to collapse and capsize. Everything happened very fast. The wave hit, and things began to collapse. Within 15 minutes, the floating apartment complex had capsized. It was so fast that many of the workers were unable to make it to the lifeboats. The Royal Air Force of Great Britain and the Norwegian military both immediately sent rescue helicopters, but the poor weather made it impossible for them to help. Of the 208 people onboard, 123 drowned. The nightmare scenario seemed impossible, but a subsequent investigation revealed a previously undetected crack in one of the main legs of the platform. That crack had caused the structure's disastrous collapse. The Alexander Kielland sat in the water for three years before it was salvaged.
The port city of my birth, Superior, Wisconsin, was founded on November 6, 1854, and incorporated March 25, 1889. The city's slogan soon became "Where Sail Meets Rail," because it was the port connection between the shipping industry and the railroad. Much of Superior's history parallels that of its sister city, Duluth, but Superior has been around longer than Duluth, which is also known as the Zenith City. Of course, the area had people there before that…there were Ojibwe Indians, and French traders who are known to have been in the area in the early 1600s.
After the Ojibwe settled in the area and set up an encampment on present-day Madeline Island, the French started arriving. In 1618 the voyageur Etienne Brulé paddled along Lake Superior's south shore, where he encountered the Ojibwe tribe and also found copper specimens. Brulé went back to Quebec with the copper samples and a glowing report of the region. French traders and missionaries began settling the area a short time later, and a Lake Superior tributary was named for Brulé. Father Claude Jean Allouez was one of those missionaries. He is often credited with the development of an early map of the region, and Superior's Allouez neighborhood takes its name from the Catholic missionary. The area developed quickly after that, and by 1700 it was crawling with French traders, who developed a good working relationship with the Ojibwe people.
The Ojibwe continued to get along well with the French, but not so much with the British, who ruled the area after the French until the American Revolution and the Treaty of Peace in 1783. The British weren't as good to the Ojibwe as the French had been. Treaties with the Ojibwe gave more and more territory to settlers of European descent, and by 1847 the United States had taken control of all lands along Lake Superior's south shore.
In 1854 the first copper claims were staked at the mouth of the Nemadji River…some say it was actually 1853. The Village of Superior became the county seat of the newly formed Douglas County that same year. The village grew quickly, and within two years about 2,500 people called Superior home. Unfortunately, with the financial panic of 1857, the town's population stagnated through the end of the Civil War. The building of the Duluth Ship Canal in 1871, which was followed by the Panic of 1873, pretty much crushed Superior's economic future. Things began to look up when, in 1885, Robert Belknap and General John Henry Hammond's Land and River Improvement Company established West Superior. Immediately they began building elevators, docks, and industrial railroads. In 1890, Superior City and West Superior merged. The city's population fluctuated, as a boom town's will, between 1887 and 1893, and then another financial panic halted progress. Over the years since then, Superior's population has had its ups and downs, as has its sister city, Duluth, but it has remained about one fourth the size of its twin across the bay.
My great grandparents, Carl and Albertine Schumacher lived in the Goodhue, Minnesota area, when my grandmother Anna was born, but my grandparents Allen and Anna Spencer lived in Superior. That is where my dad, Allen Spencer was born, as were my sister, Cheryl and I. We didn’t live in Superior for all of our lives, just 3 and 5 years, but the area remains in our blood, and in our hearts. It could be partly because of all the trips our family made back to Superior, but I don’t think that’s totally it, because there is just something about knowing that you came from a place, that will always make it special. Superior, Wisconsin is a very special place, that will always be a part of me and my sister, Cheryl too.
As we all know, a prisoner of war camp is not a safe place to be. Growing up watching shows like "Hogan's Heroes" gave the impression that the enemy was always nice to their prisoners, and that being in a POW camp was ok, but that wasn't the reality of those camps. Of course, in the days of "Hogan's Heroes" violence was just not shown on television. The world was a different place…at least for the people back home watching it on television. The reality of the POW camps was much different, as many of us have seen in newer shows about prisoners during wars. We have witnessed some of the atrocities that have been done…and we still aren't seeing the true story, I don't suppose.
One example of the true story is the Sandakan Death March. For some reason, when the Japanese were close to getting caught in their brutality, they decided that the best thing to do was to take the prisoners on a march deeper into the jungle so that the evidence of their torture would not be found. In reality, they could have just left the prisoners in the camp…abandoned them, and they would have likely never been caught, but apparently it was more about retaliation over their loss in the war. Similar atrocities include the Bataan Death March in the Philippines and the deadly construction of the rail line linking Burma with Thailand.
Similar to those was the Sandakan Death March in Malaysia. It is not as well known, and in fact many Australians would like it to be removed from history…not from the history books, but they wish it had never happened at all. I could fully understand that, once I found out what the Sandakan Death March was all about. Still, I don't understand why more people don't know about it. There were 2,700 British and Australian prisoners of war interned at Sandakan by Japanese forces as the end of World War II approached. The Sandakan Death March has been called Australia's worst military tragedy.
Sandakan was a brutal place. About 900 British soldiers were among the prisoners of war brought to Sandakan. Most of them did not survive. Prisoners interned there died slowly. They were starved and beaten. Toward the end of the war, when the Japanese decided to flee Sandakan, most of the remaining prisoners were marched to their deaths. Those who were strong enough to make it to the end of the trail were executed. Only six…a man named Owen Campbell and five others…survived, and then only because they escaped. Campbell is the last living survivor, and a reminder that when war veterans pass away, a little piece of history dies with them. That is the saddest part of such horrific loss. When people forget about the atrocities of the past, the world is destined to repeat them. When life is viewed as so insignificant, killing becomes easy, and the consequences of these killings are somehow pushed aside and even justified. That should never be allowed to happen…not in the POW camps, or in the streets of our cities. Bruce Scott, Australia's minister for veterans affairs, referring to the prison guards at Sandakan, said, "The proud and honorable title of soldier cannot be applied to those men." The guards forced the prisoners to begin the death march from the camp at Sandakan as the Allies were approaching. The men were already very weak from being starved and beaten. Most of the men did not survive the march…succumbing to their conditions along the way. Those that survived the march were simply executed when they reached the end of the journey. That was even more brutal than the march itself.
Campbell returned to Borneo for a ceremony in March of 1999, back to the jungles where half a century ago his best mates were marched to their deaths. Wearing a row of ribbons and medals across his left breast pocket, Mr. Campbell, aged 82, stared straight ahead at a black slab of granite, a memorial to one of the most horrific…yet little-known…atrocities of World War II in the Pacific. “We come here to this place to help ensure that this story is not forgotten,” Bruce Ruxton, an Australian veteran, told the crowd assembled at the memorial site. “We acknowledge that great evil was done here, that inhumanity here reached such depths that shame us as human beings even to contemplate.” In a sign of the continuing sensitivities and anger surrounding the prison camp and death march, no Japanese were present at the ceremony.
I think most of us have heard of being "tarred and feathered" as a form of punishment, but we may not really know how much of a punishment it really was. When we think about it, the movie "Home Alone" might come to mind. Of course, that was just some kind of syrup and then the feathers, and it does not really even begin to describe the real act of tarring and feathering.
In 1766 in Norfolk, Virginia, Captain William Smith was tarred and feathered by a mob which apparently included the mayor of Norfolk!! Mobs are never a good thing. They are always out of control, and people who might normally be pretty decent are dragged into things they might never do otherwise. Captain Smith was suspected of sharing secrets with British officials about a local ship owner, John Gilcrest, smuggling goods. It was a terrible offence, but remember that he was "suspected" of this, not convicted. That is the problem with mobs. They often take matters into their own hands…vigilante justice…whether the person is really guilty or not.
Tarring and feathering was a medieval form of torture and humiliation. It involved stripping the victim to his waist, applying tar to his body, and covering him with feathers. That wasn't the end of it, though. The victim was then put on a cart and paraded around the place. Sometimes, the tar was simply poured on the victim's body and he was made to roll in feathers. This isn't like syrup or the asphalt tar of today. The tar they used was likely the pine tar the Colonies were accustomed to distilling for use in preserving the wood of ships from rot. Hot asphalt tar would critically burn the body, but pine creates charcoal and pine tar when heated up. This pine tar is naturally a sticky substance, making it a perfect material for applying to someone who is about to be covered in feathers. That explains why there were few, if any, casualties of this form of punishment. Nevertheless, the job of removing the dried pine tar and feathers from the skin was extremely painful.
After Captain Smith was humiliated by the application of his feathery outfit, he was thrown into the harbor. He almost drowned before being rescued by a passing ship, just as his strength was giving out. He survived, and was later quoted as saying that “…[they] dawbed my body and face all over with tar and afterwards threw feathers on me.” As with most other tar and feathers victims in the decade that followed, Smith was suspected of informing on smugglers to the British Customs service. The punishment was harsh, and it was swift. The colonies were trying to gain their freedom, and that meant that they would fight to the death, and they would never tolerate traitors. I don’t know if Captain Smith was a traitor or not, but no one was ever punished for what they did to him, so there is that.
I am sometimes amazed at the ability of humans to be heinously cruel to other human beings. From murders, to slave owners, to prisons or prisoner of war camps, man has the ability to act out evil in its purest form. Still, one would not have expected such evil in the American Revolutionary War era. Well, one would be wrong. We all know that war is a horrific event, but worse than losing life and limb in battle, seems to be the fate faced by those who are captured by the enemy forces, only to be tortured and even killed.
During the Revolutionary War, being captured by the British often meant being sent to a prison ship, the worst of which was the HMS Jersey. Over the years of the war, approximately 11,000 prisoners of war perished on the HMS Jersey. The number of American field casualties during that war was approximately 4,500. That is a stunning difference. The HMS Jersey often held thousands of prisoners at one time, in quarters so close that it could be likened to being packed like sardines in a tin. There was no light, no medical care, barely any oxygen, and very little in the way of food and clean water. The guards on the prison ships were not concerned with keeping their prisoners alive, and HMS Jersey was the worst of them all.
The little food the prisoners were given was moldy, putrefied, and worm infested. The prisoners had to choose daily to eat the horrible food, or starve. One prisoner Ebenezer Fox, who survived said, “The bread was mostly mouldy, and filled with worms. It required considerable rapping upon the deck, before these worms could be dislodged from their lurking places in a biscuit. As for the pork, we were cheated out of it more than half the time, and when it was obtained one would have judged from its motley hues, exhibiting the consistence and appearance of variegated soap, that it was the flesh of the porpoise or sea hog, and had been an inhabitant of the ocean, rather than a sty. The provisions were generally damaged, and from the imperfect manner in which they were cooked were about as indigestible as grape shot.” That pretty much says it all, I would say. The British soldiers were seemingly unaffected by the image of prisoners banging their biscuits against the deck to remove worms, because this treatment continued throughout the conflict.
Because the prisoners were kept at sea, the smell of a piece of dirt from the shoes of a soldier back from shore leave became one of the prisoners’ greatest delights. I guess that one can always find some good, even in the worst situations, if one looks for it. Captain Dring, a survivor who wrote prolifically about his experiences on the Jersey, recalled one particularly strange consolation. When someone died on the ship, their remains were usually thrown overboard, but occasionally they were allowed to be taken ashore and laid to rest. Dring was part of a group that was tasked with digging graves on land. The men chosen for this duty were ecstatic to be on land again. Dring even took off his boots simply to feel the earth underneath his feet. However, when the crew came across a piece of broken-up turf, they did something extraordinary: “We went by a small patch of turf, some pieces of which we tore up from the earth, and obtained permission to carry them on board for our comrades to smell them. Circumstances like these may appear trifling to the careless reader; but let him be assured that they were far from being trifles to men situated as we had been. Sadly did we approach and reenter our foul and disgusting place of confinement. The pieces of turf which we carried on board were sought for by our fellow prisoners, with the greatest avidity, every fragment being passed by them from hand to hand, and its smell inhaled as if it had been a fragrant rose.”
The known fate of the men on board the prison ships, and especially HMS Jersey, was a slow and painful death. Most knew better than to expect to survive their ordeal. They had seen too many of their comrades die right before their eyes to have much hope that they could make it out. To make matters worse, the majority of the prisoners aboard the Jersey were young, inexperienced farmhands, not hardened soldiers with survival experience. Only a few of Washington's army were soldiers with any experience. The rest were provincial people, and many had never traveled beyond the limits of the small county where they lived. Imagine the horror of war, and then the conditions on HMS Jersey, to these young, innocent men. The constant punishment, meager rations, lack of light, and lack of privacy could be tolerated, but the inactivity and helplessness most likely added depression and despair to their suffering. Times were different then, and there were things that were simply not available, but much of what the prisoners suffered could have been avoided, especially the overcrowding and unsanitary conditions; apparently the British just didn't care.
World War II brought with it necessary changes to war ships. Suddenly, the world had planes that could fly greater distances, and even had the ability to land on a ship, provided the ship was big enough to have a relatively short runway. I say relatively short, because the runways on ships seemed like they would be too short to safely land a plane, but they did. One such ship was the HMS Ark Royal.
The Ark Royal was an English ship designed in 1934 to fit the restrictions of the Washington Naval Treaty. The ship was built by Cammell Laird at Birkenhead, England, and was completed in November 1938. The design of this ship differed from previous aircraft carriers, in that Ark Royal was the first ship on which the hangars and flight deck were an integral part of the hull, instead of an add-on or part of the superstructure. This ship was designed to carry a large number of aircraft. There were two hangar deck levels. HMS Ark Royal served during a period of time during which we first saw the extensive use of naval air power. The Ark Royal played an integral part in developing and refining several carrier tactics.
HMS Ark Royal served in some of the most active naval theatres of World War II. The ship was involved in the first aerial and U-boat kills of the war, operations off Norway, the search for the German battleship Bismarck, and the Malta Convoys. After Ark Royal survived several near misses, she became known as a "lucky ship" and the reputation stuck…at least until November 13, 1941, when the German submarine U-81 torpedoed her and she sank the following day, in spite of efforts to tow her to the naval base at Gibraltar. Nevertheless, only one of her 1,488 crew members was killed. Her sinking was the subject of several inquiries, as investigators struggled to figure out how the carrier had been lost. In the end, they found that several design flaws contributed to the loss. These flaws were rectified in subsequent British carriers. There was, of course, no time to look for the ship then. The war was still going on.
The wreck was discovered in December 2002 by an American underwater survey company using sonar mounted on an autonomous underwater vehicle. The company was under contract from the BBC for the filming of a documentary about the HMS Ark Royal. The ship was at a depth of about 3,300 feet and approximately 30 nautical miles from Gibraltar. So close, and yet so far away.
Most people have heard of, and seen, the James Bond movies. Of course, Bond is a fictional British agent, known as 007, and his character has been played by a number of actors over the years, but in reality, he is fictional. Renato Levi, who was also known as CHEESE, MR. ROSE, LAMBERT, EMILE, or ROBERTO, was a Jewish-Italian adventurer and double-agent for the British in World War II. Levi was instrumental in setting up a wireless transmitter in Cairo. The transmitter fed false information to the Axis powers over the course of the war. It was a great tool for the Allies. Unfortunately, Levi was captured and imprisoned shortly after he accomplished his mission. Levi’s “CHEESE” network helped to outflank Rommel at the battle of El Alamein in Egypt, as well as placing other, strategic misinformation that aided the Allies, including at Normandy.
Levi almost always flew under the radar, especially in the British National Archives. Even in recent books about spies and counter-intelligence, the accomplishments of Renato Levi still receive barely a mention, and the specifics of his part in all this are often confused. In all reality, Levi's files have only recently been released, and even then Levi's aliases "Cheese," "Lambert," and "Mr. Rose" seem to be identified openly only once in his classified dossier. Indeed, in his national documents there is evidence of redaction everywhere; Levi's primary codename "CHEESE" has been carefully handwritten in tiny, blocky letters over white-out, in order to re-establish a place in history.
The CHEESE network, out of Cairo, took a significant hit to its credibility when Levi was arrested and convicted in late 1941 or early 1942. The British came up with an imaginary agent. “Paul Nicossof” was able to regain and retain the trust of the Germans, which is one of the most interesting features of this story. Thanks to the expert manipulations of the British Intelligence operatives controlling the wireless, the CHEESE network was considered credible again by June of 1942…just in time for “A” Force to start planting counter-intelligence prior to the commencement of Operation Bertram at El Alamein in Egypt during October of 1942.
Most interesting to note are the ways that the intelligence operatives used payment schedules…or, rather, the German’s lack of payment to “Paul Nicossof”…to establish credibility about the fictitious informant’s information. “Nicossof” was portrayed as moody and inconsistent, because efforts to pay him were always unsuccessful. His “handlers” credited Germany’s inability to pay “Nicossof” as the way they were able to extend his character beyond the “impasse” that would normally constitute a non-military informant. “Nicossof” could portray himself as the “man who brought Rommel to Egypt,” which would get him paid for his troubles at last, as well as the glory and medals that went with it…all to a fictitious agent!!
Perhaps because of British Intelligence's efforts to make "Nicossof" convincing, and because Levi was so good under duress in prison, the Germans never really lost faith in the CHEESE network. They were starved for information, and CHEESE held the only promise of any intelligence about the Middle East. The Germans blamed the Italians for the confinement of their only key agent in the Middle East, Renato Levi. For whatever reason, the Germans trusted Levi, but he never broke or compromised his duty to the Allied forces.
After looking at these newly declassified documents some people have tried to press Levi into the service of a “Hero Spy” figure, but in reality, Levi was a far more complicated figure and these whitewashed narratives don’t really tell the whole story of Levi’s complexity, nor the complexity of his work. Levi’s story also reveals much about the inner workings of the German Abwehr and the nature of the Italian Intelligence operations. Levi’s British handlers speculated that it was unlikely that the German and Italian Intelligence bureaus had a great deal of communication between them. The Germans were really overly satisfied with Levi’s original purpose of establishing a wireless transmitter network, to their detriment in the end.
It seems that Levi’s ultimate fate is unknown. It is true that the CHEESE network was in full swing throughout the war, and many have credited “CHEESE” with hoodwinking the Germans in a big way on many occasions. Perhaps Levi was again affiliated with CHEESE after his release, or maybe not. Regardless, Renato Levi, who had always loved travel, intrigue, and a really good lie, did a remarkable service to the Allied forces by instituting one of the best and most productive counter-intelligence operations of World War II, and he kept it all safe.
We have all tried our hand at skipping stones across the water, but who would have thought that such an idea could be applied to a bomb, or that it would ultimately become extremely successful in accomplishing its given task…destroying German dams and hydroelectric plants along the Ruhr valley.
During World War II, the Allies were desperate to cut off energy to the Nazi war machine, so the Allied engineers were given the task of finding a way to breach the defenses surrounding the dams and hydroelectric plants. In the end, it was the British engineer Barnes Wallis who came through with what he called "bouncing bombs." Watching one in action, you are reminded of skipping stones, as most of us have done in the past. In similar fashion, the bomb skips along the water, bouncing over the torpedo nets to hit its target.
When World War II began, Germany had the undisputed upper hand when it came to water-based warfare, with their deadly U-boats and defensive "torpedo nets" placed strategically in front of their energy-producing dams. This made it next to impossible to hit the dams with a traditional torpedo. The British Royal Air Force was determined to take out these German defenses as the Allies slowly wore the Axis powers down.
The problem was, how to somehow get past the torpedo nets, to destroy the dams and their hydroelectric plants. Wallis had to figure out how to bypass the torpedo nets, in order to make direct contact with the wall of the dams. It seemed like an insurmountable task. After dwelling on the problem for a while, Wallis seized on the potential of the Magnus effect, which would bounce a bomb across the water like a skipping stone.
The theory was to create backspin, which would counter gravity and send the bomb skimming over the water. Once it bounced over the torpedo net, it would hit the designated target. The plan seemed plausible, and the Royal Air Force commenced Operation Chastise on May 16, 1943. The results were spectacular!! As it turned out, Barnes Wallis really knew his stuff.
Today it would be worth about $4750. Would you pay that much for a bicycle? I don’t think I would, but then I don’t suppose I would be buying a bicycle called the Spacelander. Still, if I was, $4750 would be the asking price, or something close to that number. The Spacelander was created by Benjamin Bowden, who was born June 3, 1906. He was a British industrial designer, whose specialty was automobiles and bicycles. He received violin training at Guildhall, and completed a course in engineering at Regent. Bowden designed the coachwork of Healey’s Elliott, an influential British sports car.
In 1925 Bowden began working as an automobile designer for the Rootes Group. By the late 1930s, Bowden was the chief body engineer for the Humber car factory in Coventry. During World War II, his design of an armored car was used by Winston Churchill and George VI for their protection. In 1945, he left the Rootes Group, and with partner John Allen, formed his own design company in Leamington Spa. The studio was one of the first such design firms in Britain. Bowden designed the body of Healey’s Elliott in 1947. It was the first British car to break the 100 mile per hour barrier. Working with Achille Sampietro who created the chassis, Bowden drew the initial design for the auto directly onto the walls of his house. Unusual…yes, but it worked for him, I guess. Shortly before his departure to the United States Bowden penned a sketch design for a two-seater sports racing prototype, the Zethrin Rennsport, being developed by Val Zethrin. This used the same wheelbase as the short-chassis Squire Sports, and was dressed in a contemporary, streamlined body. This design theme was carried through to his future work on the early Chevrolet Corvette and Ford Thunderbird.
He went on to design the Spacelander in 1946. It was a space-age looking bicycle, that was ahead of its time, since space travel wouldn’t occur for two decades. It’s not that the Spacelander would ever be used in space, but rather the design that seemed space-like. Bowden called the bicycle the Classic. In the early or mid 1950s, Bowden moved to Michigan, in the United States. While in Muskegon, Michigan in 1959, he met with Joe Kaskie, of the George Morrell Corporation, a custom molding company. Kaskie suggested molding the bicycle in fiberglass instead of aluminum, but the fiberglass frame was relatively fragile, and its unusual nature made it difficult to market to established bicycle distributors. Although he retained the futuristic appearance of the Classic, Bowden abandoned the hub dynamo, and replaced the drive-train with a more common sprocket-chain assembly. The new name, Spacelander, was chosen to capitalize on interest in the Space Race. Financial troubles from the distributor forced Bowden to rush development of the Spacelander, which was released in 1960 in five colors: Charcoal Black, Cliffs of Dover White, Meadow Green, Outer Space Blue, and Stop Sign Red. The bicycle was priced at $89.50, which made it one of the more expensive bicycles on the market. Only 522 Spacelander bicycles were shipped before production was stopped, although more complete sets of parts were manufactured. In more recent years, the Spacelander has become a collector’s item…hence the price tag. | http://carynschulenberg.com/tag/british/ | 21 |
14 | The Camps
CAMPS (Concentration and Extermination). The English-language term concentration camp is commonly used to describe a wide number of places of internment created by Nazi Germany, which served a variety of functions and were called by different names: labor camps (Arbeitslager); transit camps (Durchgangslager); prisoner-of-war camps (Kriegsgefangenlager); concentration camps (Konzentrationslager KZ); and death camps or killing centers, often referred to in Nazi parlance as extermination camps (Vernichtungslager).
Concentration camps underwent a series of developments over time to respond to differing German policies and needs. From 1933 to 1936 they were used for incarcerating political adversaries and preventive protective custody. During this period of time Jews were not arrested as Jews but because of their political or cultural activities. Most of those interned were trade unionists, political dissidents, communists, and others. In 1936 operational responsibility for the camps was consolidated under the SS and the camp universe expanded incrementally. In 1941–42 the major killing centers came on line: *Chelmno, *Auschwitz-Birkenau, *Majdanek as well as the Aktion Reinhard camps of *Belzec, *Sobibor, and *Treblinka. A series of labor camps were created in direct response to the impact of the war and Germany's growing need for workers. German companies participated directly in the growth of the labor camps and were the chief employers and thus beneficiaries of these captive workers. The SS profited greatly by these arrangements. In 1944–45 in the face of advancing Allied armies, the concentration camps in occupied countries were dismantled and evacuated, bringing back to Germany, often on foot in what became known as death marches, the Jewish population that had previously been expelled from Germany. The evacuees were moved to concentration camps within Germany, which resulted in overcrowding and their functional collapse, or they were simply walked endlessly until they dropped and were shot or until they were overrun by advancing Allied armies, (See Map: Camps in Europe, WWII).
Protective Custody of Enemies of the State (1933–39)
During the night following the declaration of a state of emergency after the Reichstag fire (Feb. 27, 1933), there was a wave of mass arrests of the Communist opposition. After the Ermaechtigungsgesetz ("Enabling Act") of March 23, 1933, the non-Nazi political elite, composed of trade-union members, socialists, and civil party members, was arrested, together with writers, journalists, and lawyers, who were Jewish, but arrested because of their activities – alleged or actual. In July 1933, the number of protective-custody detainees reached 14,906 in Prussia and 26,789 in the whole Reich. The SA (Storm Troops), the *SS, and the police improvised about 50 mass-detention camps. *Dachau, Oranienburg, Esterwegen, and Sachsenburg were thus created. The worst camp of all was the Berlin Columbia Haus. The methods of arrest, kidnappings, tortures, bribery, and blackmail of associates created chaos and aroused protest in newly Nazified Germany. In response to pressure from the judiciary, and upon the advice of the then head of the Gestapo, Rudolf Diels, to Hermann *Goering, most of the SA and SS Wilde KZ ("Wild concentration camps") were broken up. Oranienburg, Lichtenburg, and Columbia Haus remained, containing no more than 1,000 prisoners each. Later on there was less judicial pressure and a confident and dominant Nazi regime became less responsive – but never unresponsive – to public opinion. Public opposition to the regime was less forthcoming because of fear, coercion, despair and indifference.
The reduction in concentration camps during the early years of the Nazi regime was no indication of any move to abolish them; among the new victims of the terror were those who listened to foreign radio stations, rumormongers, Jehovah's Witnesses (Bibelforscher, in 1935), and German male homosexuals. There was no incarceration of lesbians qua lesbians. Jehovah's Witnesses were the only "voluntary victims" of Nazism. They refused to register in the Wehrmacht or to swear allegiance to the state. The words "Heil Hitler" never passed their lips. Their allegiance was to Jehovah and not to the state. Jehovah's Witnesses could be freed from concentration camps if they signed a simple document renouncing their faith and swearing to cease their religious activities. Few succumbed to this temptation, even at the risk of endless internment and conditions that might lead to death. There was a basic tension in German policy and among German policymakers toward
Under the command of Himmler, who on April 20, 1934, took over direction of the Berlin Gestapo, the SS gained total control of the concentration camps, and the judiciary was prevented from intervening in the Gestapo's domain. Small concentration camps were broken up, and their prisoners transferred to larger camps, such as Dachau (which was enlarged), *Sachsenhausen (established in September 1936), and *Buchenwald (established in August 1937). When the number of concentration-camp detainees dropped to about 8,000 in late 1937, it was augmented by the dispatch of criminal offenders and persons defined as "asocial." In April 1938 ordinary prisoners under preventive detention were transferred from prisons to concentration camps, which, in addition to their original function, then became Staatliche Besserungs-und Arbeitslager ("State Improvement and Labor Camps"). At about the same time, Jews qua Jews (not as Communists, Socialists, etc.) were interned in concentration camps for the first time.
The German state gave legal sanction to arbitrary imprisonment by the Notverordnung des Reichspraesidenten zum Schutz von Volk und Staat (Feb. 28, 1933), which served as a base for "protective custody" by authorizing the unlimited detention of persons suspected of hostility to the regime. The regulation requiring a written protective-custody warrant (Schutzhaftbefehl) was introduced on April 12–16, 1934, in order to placate the judiciary, who still demanded that the legality of each arrest be examined. A clause postulated on Jan. 25, 1938, extended protective custody to persons whose conduct endangered the security of the nation and the state for detention solely in the concentration camps. In an order of Feb. 10, 1936, Heinrich *Himmler invested the *Gestapo authority to make arrests and investigate all activities hostile to the state within the Reich. He also decreed that the Gestapo's orders were not subject to investigation by courts of law and handed over the administration of the concentration camps to the Gestapo. The protective-custody warrant was presented to the detainees, if at all, only after their arrest. They were first sent to prison and tortured for long periods. The detainee was then forced to sign the warrant that was sent to the concentration camp as his dispatch note.
The number of political detainees (Marxists, anti-Nazis, and Jews) rose after the annexations of Austria – in March and April 1938 – and Sudetenland – in October and November 1938 (see *Czechoslovakia). Overcrowding in the camps grew worse, especially after the arrest throughout the Reich of about 30,000 Jewish men – aged 16–60 – after the November pogrom of 1938 known as *Kristallnacht. The total number of detainees rose that year from 24,000 to 60,000. In 1939 the internment of individual Jews for the slightest violation of the Schikanengesetzgebung – irksome special legislation – began. Jews convicted for Rassenschande (violation of race purity), those Jews who remained married to "Aryans", were often put into internment camps after having served their sentence. But prior to World War II, Jews could be released from the camps if they could prove that they had a chance to leave Germany, and in 1939 the release of Jews possessing emigration papers, who paid exorbitant ransoms, resulted in a marked drop in the number of Jewish internees. Many historians argue that Germany's goal at this point was the forced emigration of the Jews, not their murder, and this policy is viewed as evidence for their argument. With the outbreak of war, the total number of detainees rose to 25,000 (including those in the women's camp of *Ravensbrueck, set up in May 1939 in place of Lichtenburg).
World War II
World War II wrought changes in the concentration camp system. There was an increase in the number of prisoners, extension of the network of concentration camps in and outside Germany, and an alteration in the camps' function. The security function (i.e., protective custody) was subordinated to the economic exploitation of detainees and mass murder, especially as the war progressed and German planners understood that an immediate victory would not be forthcoming and they had to plan for an extended conflict. Under the renewed security pretext, ten times as many political prisoners were arrested in the Reich as had been arrested in the years 1935–36. In the occupied countries, thousands of "opponents" were detained in local concentration camps while special groups were "transferred" in vast numbers to concentration camps within the Reich. From the outbreak of war until March 1942, the number of detainees rose from 25,000 to 100,000 and in 1944 the number reached 1,000,000; only between 5 and 10% of them were German nationals.
Late in 1939 the concentration camp organization in Germany was authorized to set up about 100 concentration camps of all types, including Internierungslager (detention or internment camps) and Austauschlager (exchange camps). To these were added *Auschwitz (May 1940), Gusen (May 1940), and Gross-Rosen (Aug. 1940). That year, a series of Jewish and non-Jewish labor camps was established, together with transit camps (Durchgangslager), as part of Himmler's "transfer and resettlement" plan designed to get Jews out of Germany and Germany's sphere of influence and move them eastward to German-occupied territories. In May 1941 *Natzweiler was set up, followed by Niederhagen (May 1940), *Majdanek (November 1940), Stutthof (November 1940), and Arbeitsdorf (April 1942). In early 1942 there was further expansion, when the extermination camps were set up in Poland. The rate at which camps were established varied but did not decline. Even as late as 1944 Sonderlager ("special camps") were established for Hungarian Jews in Austria on the borders with Czechoslovakia and Hungary.
In October 1939, Hitler signed an order empowering his personal physician and the chief of the Fuehrer Chancellory to put to death those considered unsuited to live. He backdated it to September 1, 1939, the day World War II began, to give it the appearance of a wartime measure. In Hitler's directive:
Reich leader Philip Bouhler and Dr. Brandt are charged with responsibility for expanding the authority of physicians, to be designated by name, to the end that patients considered incurable according to the best available human judgment of their state of health, can be granted a mercy killing.
What followed was the so-called euthanasia program, in which German men, women, and children who were physically disabled, mentally retarded, or emotionally disturbed were systematically killed.
Within a few months, the T-4 program (named for Berlin Chancellory Tiergarten 4, which directed it) involved virtually the entire German psychiatric community. A new bureaucracy, headed by physicians, was established with a mandate to "take executive measures against those defined as 'unworthy of living.' "
Patients whom it was decided to kill were transported to six killing centers: Hartheim, Sonnenstein, Grafeneck, Bernburg, Hadamar, and Brandenburg. The members of the SS in charge of the transports donned white coats to keep up the charade of a medical procedure. These camps were fertile ground for the training of staff who later served the "Final Solution" – the mass murder of Jews in the "Aktion Reinhard" camps – both in leadership capacities and in secondary and tertiary positions. The program was also used to master killing by gas.
The first killings were by starvation. Then injections of lethal doses of sedatives were used. Children were easily "put to sleep." But gassing soon became the preferred method of killing. Fifteen to 20 people were killed in a chamber disguised as a shower. Chemists provided the lethal gas, and physicians supervised the process. Afterwards, black smoke billowed from the chimneys as the bodies were burned in adjacent crematoria. It was a technique that was later used to kill millions, not hundreds or thousands.
In 1938, the SS began to exploit prison labor in its DEST (Deutsche Erd-und Steinwerke Gmb-H) enterprise (see OSTI), in coordination with Albert Speer, the man responsible for the Nazi construction program for rebuilding Berlin and Nuremberg. This policy determined the sites for new concentration camps – Flossenbuerg, a punishment camp, and Mauthausen, established in mid-1938. The war effort reinforced the function of the camps as a source of manpower for forced labor. Under Oswald Pohl, the concentration camps became centers for the exploitation of the inmates. According to German calculations, the fee for 11 hours (by day or night) of prisoner labor was 6 RM (= $1). The fees from prisoner labor, totaling hundreds of millions of marks, were one of the SS's principal sources of income. The SS incurred inconsequential expenses for the prisoner's upkeep, amounting to no more than 0.70 RM daily for food and depreciation in clothing. Taking into account the average life span of a slave laborer (about 9 months) and the plunder of the corpse for further profit, the total income to the SS for each prisoner averaged 1,631 RM. This excluded industrial exploitation of corpses and property confiscated before internment.
Private suppliers of military equipment, such as I.G. Farben, Krupp, Thyssen, Flick, Siemens, and many others used the concentration camps because of the cheap labor and maximum exploitation afforded, so that prisoners constituted 40% of the industries' labor force. Working conditions in private enterprises, worse than those in the concentration camps themselves, were the direct cause of a high death rate. In the Bunawerke (artificial rubber factory) belonging to I.G. Farben at Monowitz near Auschwitz, the manpower turnover was 300% per year. The employers were not authorized to mete out punishment, but with the aid of the Kapos they instituted so brutal a system of punishments that the SS sometimes intervened on the prisoners' behalf. Approximately 250,000 concentration camp prisoners were employed in private industry, while about 170,000 were utilized by the Reich Ministry of Munitions and War Production. The death rate in the concentration camps (60% in 1942 and 80% thereafter) appeared excessive even to the Inspection Authority, who, for fear of a depletion of a manpower reserve, were ordered to absorb new prisoners and lower the death rate.
The desire to exploit the prisoners was in direct tension to the killing program (the "Final Solution"). This opposition resulted in a continual battle between the employers, the SS-Wirtschafts-und Verwaltungshauptamt ("Economic and Administrative Main Office", WVHA) and the Reichssicherheitshauptamt, RSHA, who were responsible for the extermination policy. The former wanted workers; the latter dead Jews. The scenes of these conflicts were those concentration camps in which mass extermination facilities had been installed, such as Auschwitz, where SS officers and SS doctors sorted out the transports, sending the weak (including children) to their deaths and the able-bodied to work. The latter became camp prisoners and were registered accordingly. They were kept alive for as long as they could work. Reality had created a sort of compromise; the conditions of employment of prisoners helped to kill them and served merely as an extension of life until they completely collapsed and were sent as refuse to the crematories. These concentration camps thus became large-scale extermination centers where in the end Jewish slave labor was regarded as a consumable raw material to be discarded in the process of manufacture and recycled into the war economy.
THE CAMPS AND THE "FINAL SOLUTION."
The killing of Jews began in June 1941 as the Einsatzkommando ("mobile killing units"), which accompanied the German army invading the Soviet Union, went into towns, villages, and cities and killed Jews, Soviet commissars, and gypsies, one by one, bullet by bullet. This system of sending mobile killers to stationary victims was slow, public, and horrifying, however, even for the SS. Thus by late 1941 the system was reversed. The victims were made mobile – they were sent by train from ghettos and cities to stationary killing centers, where mass murder could be effected in an assembly line process with economies of scale and personnel. Soviet prisoners of war – often Ukrainians
From December 1941, Jews had been gassed in trucks at the *Chelmno extermination camp at a pace that did not satisfy those responsible for carrying out the solution to the "Jewish Question." After the Wannsee *Conference (1942), which was convened to smooth the cooperation toward liquidation of the Jews, the establishment of new killing centers, mainly on German-occupied Polish soil, was hastened. The first to use gas chambers was Odilo *Globocnik, chief of the SS and Police Force in *Lublin, who set up a Jewish labor camp in 1940 in the Lublin district. He later transformed this camp into a killing center. At Chelmno, situated in German-occupied Poland, gassing by carbon monoxide fumes introduced from exhaust pipes into hermetically sealed trucks was employed. It was also used in Yugoslavia. The use of trucks was facilitated by local mechanics, who improvised by reconfiguring existing vehicles and even strengthened the rear axles to prevent their breakdown as the victims pushed to the rear.
Mobile gas vans, which could deal with a limited number of victims, 1,000–2,000 a day, had many disadvantages and were superseded in 1942 by the use of stationary gassing installations. A second method was that of gas chambers, disguised as shower room facilities, with shower room notices in various languages. At first the gas used was diesel exhaust fumes, and the victims often waited outside for hours in long queues because the motor had broken down. At Auschwitz Zyklon B, a disinfectant provided by I.G. Farben, first employed to destroy insects, was used. It seems that bureaucratic rivalries between camp commandants prevented its universal use.
Between 1942 and 1943 Jews were gassed in *Belzec, *Treblinka, and *Sobibor. Near *Vilna, *Riga, *Minsk, *Kovno, and *Lvov, there were smaller killing centers where Jews were executed by firing squads. The large concentration camps became death camps, e.g., Majdanek, and the largest of all, Auschwitz, which at the height of the extermination program accounted for more than 10,000 victims per day. Adolf Eichmann gave priority to the murder of Polish Jews and those expelled from the Reich, since in their case the problem of transport was nil and particularly because Hans *Frank, governor of the General-Gouvernement, was urging that his area be "cleansed" of Jews, whose number he overestimated at 3,500,000. Thus in early 1942 the evacuation of the Polish ghettos began in an operation deceptively termed Umsiedlung ("resettlement"), the evacuees being sent to killing centers. The liquidation of the Jewry of the General-Gouvernement, organized by Globocnik, was termed Aktion Reinhard in memory of *Heydrich, who had been assassinated in June 1942. When the operation ended (October 1943), many Jewish labor camps still remained, but all of them were turned into concentration camps in 1944.
The deportations from the rest of Europe to the extermination camps (including transports from concentration camps) began in March and April 1942 and continued until late 1944. The pace of the killing was related to the availability of transports and many deportations and subsequent gassing occurred after it was clear that Germany would lose the war. It did not want to lose the war against the Jews. At first, those able to work were brought because the construction of the extermination camps had yet to be completed. Belzec was operational between February and December 1942; killings had ceased before the new year began. Its mission was complete. The Jews of Galicia were dead. All that remained in 1943 was to exhume the dead, burn their bodies to destroy all evidence of the crime, and to plow the camp under. Following the rebellion at Treblinka (August 1943) and at Sobibor (October 1943) and the advance of the Soviet army, these two camps were abolished, and the killing moved westward to Auschwitz, which only in the summer of 1944 became the most lethal of the death camps, and Stutthof. The gassing of Jews continued until November 1944, when it was halted on Himmler's orders, perhaps to keep some Jews alive who could be used as barter for peace with the West.
From 1941 crematoria were built in several concentration camps to solve the problem of body disposal. In a few death camps, the crematorium was an all-purpose facility complete with its own gas chamber and undressing room. Prisoners were brought into the building, forced to undress, instructed to remember where they had left their clothes, as part of the effort to deceive them, and then forced into gas chambers disguised as showers. Men, women, and children were undressed together, killed together. Because of the large numbers of corpses, they were not all dissected before cremation, but nevertheless the Selektion provided the physicians in German universities with "specimens" for study and for collection. The Sonderkommando ("special squad") of prisoners who worked in the crematoria were routinely murdered and replaced by new squads, in order to prevent the leaking of information. After all, they were the most dangerous of victims. Much to the surprise of historians and also of the SS, several Sonderkommando survived to bear witness to what had happened. Camps of a special type were set up late in 1941 for the sole purpose of the extermination of "undesirable populations." These were from the first equipped with gas chambers and crematoria and differed from concentration and labor camps and from those camps with a combined program of concentration and murder.
Train transport to the camps was often in crowded cattle cars with only a bucket for sanitation. Conditions were primitive and cramped, and upon reaching their destination the new arrivals mistakenly believed they had survived the worst. At the entrance to each of the death camps – the reception area – the dead were removed from the trains and the living were divided according to their ability to walk. Those able to walk were sent on; those unable to walk were taken away. Those who could walk then faced the first Selektion. An SS officer pointed to the left or to the right. Elderly people, pregnant women, young children, and the infirm were immediately condemned to death. Segregated by sex, they surrendered their valuables and clothing.
At Auschwitz, those selected for work were registered; their hair was shaved and their arms tattooed with a number. Uniforms were issued. Their ordeal as inmates was just beginning. They would face additional "selections" in the future. The officer in charge of the "selection" was a physician. His "expert opinion" was required to determine who would live and who would die. The most infamous of them all, Dr. Josef *Mengele, who also oversaw some of the cruelest quasi-medical experiments conducted on inmates, was often to be found at the ramp in Birkenau. At other death camps, no selection was needed; arriving Jews were all sent to their death.
Those marked for death in the Selektion were then forced to run to the "showers" to the accompaniment of a band playing music. Some 700–800 men, women, elderly people, and children were crammed into a chamber measuring 25 square meters (about 270 sq. ft.). Certain tasks were restricted to the Germans: they alone emptied the Zyklon B into the chamber through slits in the roof; the gassing took about 20 minutes, depending on the number of persons in the chamber, after which the gas had to be evacuated from the chamber; and they alone pronounced the victims dead.
Terrible shrieks could be heard from the hermetically sealed chamber as those inside began to suffocate and their lungs burst. One member of the Auschwitz Sonderkommando recalled: "People called one another by name. Mothers called their children, children their mothers and fathers. Sometimes we could hear Sh'ma Yisrael" – "Hear, O Israel, the Lord is our God, the Lord is One," the traditional line recited by Jews at the moment of death. Rudolph Reder, one of only two survivors of Belzec and the only one to bear witness, said: "Only when I heard children calling: 'Mommy. Haven't I been good? It's dark.' My heart would break. Later we stopped having feelings."
Some of the victims understood what was about to happen. Others were deceived to the very end. When the doors were reopened, the Sonderkommando entered to take out the corpses. If anyone was left alive, he was beaten to death. The contorted and entangled bodies were separated, body cavities were inspected for possible valuables, and after rings and gold teeth were removed and hair was shorn, they were piled in tens for inspection and then taken and burned. Later, furnaces and cremating pits were constructed. As the rate of extermination increased, heaps of ashes accumulated by the pits, whose smoke was visible from far away. The distinct smell of burning flesh permeated the area. The economic exploitation of the corpses involved the extraction of tons of gold teeth and rings, which were sent to the Reichsbank and credited to the SS account; the hair and bones were employed in industry; the ashes were used as fertilizer; and the clothes were sent to other camps after fumigation. There is no credible evidence that body fat was used for soap.
The murder rate was so intense that at the beginning of 1942 eight out of ten of the Jews who were to die in the Holocaust were still alive; fourteen months later the figure was reversed, and 80% of them were already dead. The rate of extermination, which was subject to the rate of transports, strained the transport system just when the army needed it, and the destruction of this manpower undermined the German war effort.
Pseudo-medical experiments were carried out in a number of camps. Prior to World War II governments had routinely used vulnerable populations for experimentation, but German physicians operated without limits and with routine disregard for the humanity of those upon whom they experimented. Even before World War II interned Jews had been used for pseudo-biological "race research." Upon Himmler's initiative, unlimited supplies of live men and women were put at the disposal of the SS medical organization for the purpose of "medical" experiments in the camps and outside. Under the program of the biological destruction of the "inferior races," Viktor Brack, who had also been one of the heads of the Euthanasia Program, was charged in 1941 with developing a quick system of sterilizing between 2,000,000 and 3,000,000 Jews who were fit for work. The logic was simple: if Jews could be sterilized, the imposition of the "Final Solution" would take but a generation, as there would be no danger of their reproducing and perpetuating the Jewish people; in the interim, the German people could enjoy the benefits of their labor. The Brack system, employed in Auschwitz by Horst Schumann, consisted of the irradiation of the reproductive organs of men and women. Another system was tested in Auschwitz by Karl Clauberg, who, during the gynecological examination of women, injected them with a substance that burned out the womb. Gerhard Madaus and Ernst Koch worked on the development of an herbal means of sterilization, using Caladium seguinum; Gypsies were used as guinea pigs. August Hirt assembled a collection of skulls at the anatomical institute in Strasbourg for the purposes of "racial research"; the "specimens" were put to death at Natzweiler. Upon orders received from the air force, experiments subjecting human beings to conditions of high altitude (low pressure) and freezing were conducted at Dachau, to investigate the chances of survival of downed pilots. In the name of "medical research," humans were infected with contagious diseases and epidemics in order to try out new drugs and poisons. The SS doctors also amputated bones and cut muscles for transplantation purposes; they removed internal organs and introduced cancer into human bodies. Those victims who did not die immediately were left to perish from neglect and agony. Some survived, crippled or maimed for life.
In November 1943, Dr. Josef Mengele became the chief physician of Birkenau. Mengele wanted to "prove" the superiority of the Nordic race. His first experiments were performed on Gypsy (Roma) children supplied to him from the so-called kindergarten. Before long he expanded his interest to twins, dwarfs, and persons with physical abnormalities.
Mengele subjected his experimental group to all possible medical analyses that could be performed while the victims were alive. The tests he performed were painful, exhausting, and traumatic for the frightened and hungry children who made up the bulk of his subjects.
The twins and the crippled persons designated as subjects of experiments were photographed, their jaws and teeth were cast in plaster molds, and prints were taken of their hands and feet. On Mengele's instructions, an inmate painter made comparative drawings of the shapes of the heads, auricles, noses, mouths, hands, and feet of the twins.
When the research was completed some subjects were killed by phenol injections and their organs were autopsied and analyzed. Scientifically interesting anatomical specimens were preserved and shipped out to the Institute in Berlin-Dahlem for further research.
On the day he left Auschwitz, January 17, 1945, Mengele took with him the documentation of his experiments. He still imagined that they would bring him scientific honor.
STRUCTURE AND ADMINISTRATION
On July 7, 1934, Himmler appointed Theodor Eicke inspector of concentration camps and Fuehrer of the SS Wachverbaende ("guards"). A fanatical, brutal Nazi and an efficient organizer, Eicke determined the uniform pattern of the concentration camps, fixed their locations, and headed their inspection authority until his transfer to the front in November 1939. The economic administration, including the financing and equipping of the SS Death Head Units, whose members served as guards, was handled by Pohl. As a result of conflicts between the Gestapo and the SS, a division of tasks was made: the Gestapo made the arrests and the SS actually ran the camps. This, however, did not prevent the struggle between the various authorities and the resulting tangle of bureaucracy, which kept the prisoners from knowing which office decided their fate. The concentration camps were classified into three categories according to the severity of their detention conditions, though in practice the various camps resembled one another in their inhumanity. Dachau served as the model camp, where guards and commandants were trained. Eicke created a combination of concentration camp and labor camp, exploiting the prisoners both for profit and to finance the camps themselves.
The gate of the camp was a one-story construction in the center of which stood a tower with a clock and a searchlight. The gate usually bore a motto, such as "Arbeit macht frei" ("Labor makes free"). The parade ground (Appellplatz) stretched from the gate to the wooden huts where the prisoners were housed. The structure of the command was fixed in 1936 and included
(a) the Kommandantur, comprising the Kommandant, who held authority over the heads of divisions;
(b) the Political Department, an autonomous authority in the Gestapo, responsible for the file cards of the prisoners and, from 1943, in command of executions (it confirmed the lists of Jews chosen through Selektion ("selection") for death in the gas chambers);
(c) the Schutzhaftlager ("protective custody" camp), under command of the Schutzhaftlagerfuehrer, whose Blockfuehrer were responsible for order and discipline in the prisoners' quarters (there were also Arbeitsdienstfuehrer, responsible for the division of labor, and the Kommandofuehrer, who led the labor detachments);
(d) the administrative division, which dealt with internal affairs and the camp economy (concentration camps that absorbed transports of Jews had a special staff to classify their goods and send them on to the Hauptversorgungslager in Auschwitz);
(e) Lagerarzt, the SS physician.
Guard duties were carried out mostly by the SS Death Head Units. In 1944, 1,000,000 prisoners were kept by 45,000 guards, of whom 35,000 were SS men and 10,000 were army or navy men or non-German auxiliaries. The guards were permitted unrestricted use of weapons against escapees or rebels; a guard from whom a prisoner escaped was put on trial, while guards who killed escapees were rewarded.
The prisoners were classified as follows: political prisoners, including smugglers and deserters (after the outbreak of war these included all non-Germans); members of "inferior races," i.e., Jews and Gypsies; criminals; and asocials, such as tramps, drunkards, and those guilty of negligence at work. Homosexuals constituted a special group. Each group wore a distinctive badge, a number, and a triangle colored according to the different categories. The Jews wore an additional yellow triangle, inverted under the first, thus forming a Star of David. At a later stage, in some concentration camps the prisoner's number was tattooed on his arm.
The prisoners' administration, whose structure resembled that of the concentration camp command, cooperated with the SS, and this structure resulted in dual supervision of the prisoners. Sadists and disturbed persons in an administrative post could brutalize their fellows. The prisoners' administration was headed by a Lageraeltester ("camp elder"), appointed by the camp commandant. Each block of prisoners' dwellings had a Blockaeltester, assisted by Stubendienste ("room orderlies"), who were responsible for maintaining order and for the distribution of food. The work detachments were headed by Kapos, work supervisors responsible to the SS Kommandofuehrer and assisted by a Vorarbeiter ("foreman"). These posts were generally given to criminal offenders, who often exceeded the SS in their brutality, either from sadism or from fear of the SS. The Kapos spied on their fellow prisoners and ingratiated themselves with their masters, but their hopes of survival through oppression of their fellow men failed, as they too usually fell victim to the machinations of the SS. In hard labor detachments a prisoner could escape the punishments meted out by the Kapos and remain alive only by bribing them. The Kapos created a regime of corruption and blackmail, which gave them a life of comfort and ease as long as they held their posts.
The prisoners, who reached the camps in a state of hunger and exhaustion, were forced to hand over the remainder of their personal property and in return received a set of clothing, which included a navy- and white-striped shirt, a spoon, a bowl, and a cup. They were allotted space in the tiers of wooden bunks in huts containing three or four times the number of persons for which the structures were originally intended. The prisoners' daily life resembled the outside world only in the names given to everyday objects. Horrific realities were hidden behind such accepted words as "food," "work," and "medicine," and behind "neutral" words such as Sonderbehandlung ("special treatment," i.e., execution), Selektion (the selection of those to be sent to their death), or Desinfektion (i.e., gassing). The prisoners' diet bordered on starvation and deteriorated further during the war years. The terrible hunger did more than anything else to destroy the human image and even reduced some to cannibalism. The extremely poor conditions of health and hygiene and the lack of water also aided the spread of disease and epidemics, especially typhus and spotted fever. The camp doctor and his prisoner assistant often caused or hastened death through neglect, mistreatment, or lethal injections.
END OF THE CAMPS
As the Russians advanced from the east and the British and Americans from the west, Himmler ordered the emergency evacuation of prisoners from camps in the occupied territories. No means of transportation was available for the evacuation, and in early 1945 most of the prisoners were dragged by the thousands in long death marches lasting several days, in cold and rain, without equipment or food. The German prisoners were given weapons to help the SS. Exhaustion, starvation, thirst, and the killing of escapees and the weak accounted for hundreds of thousands of victims. The local populations, who had been incited against the prisoners, attacked them and refused sanctuary to those who escaped. At the reception camps, masses of the new arrivals died of starvation and overcrowding, which hastened the spread of epidemics such as typhus and spotted fever. The evacuation operation cost the lives of about 250,000 prisoners, many of them Jews.
The concentration and extermination camps constituted a terrifying example of the "new order" which the Nazis were preparing for the whole world, using terror and the impersonal murder of millions of anonymous victims to turn "ideology" into reality. The murder itself was the end process of the destruction of the victims' identity and their ethical personalities. The splitting of groups into individuals, and of individuals into atoms, reduced most of the prisoners to mere shadows of men; some became hungry animals fighting for their existence at the expense of their neighbors' lives; others became *"muselmaenner" – the walking dead who had lost the will to live. Nevertheless, there were prisoners, many of them Jews, who had the energy and the ability to organize revolts (as at Treblinka and Sobibor) and to try to escape, individually or in groups (e.g., from Auschwitz), but only a small percentage succeeded. When the Reich crumbled there was no one to give the order to exterminate. The SS fled, dragging the remnants of the prisoners with them westward for extermination, in the hope of destroying all traces of their crime. Only 500,000 concentration camp prisoners and those destined for extermination remained alive, most of them physically crippled and mentally broken. These surviving remnants, together with many documents which authorized the reign of terror, bore witness to the horrors of the phenomenon. Exact data are lacking, but there is a general consensus that at Auschwitz 1.1–1.3 million people were gassed, 9 out of 10 of them Jews; at Treblinka between 750,000 and 870,000 Jews were killed; at Belzec some 500,000 Jews were murdered; at Chelmno some 150,000 Jews were gassed; at Sobibor at least 206,000 Jews were murdered; and at Majdanek some 170,000 were killed. The total may exceed 2,750,000 in the killing centers alone.
Sources: Encyclopaedia Judaica. © 2008 The Gale Group. All Rights Reserved.