Datasets: de-francophones committed on
Commit • d202f39
Parent(s): d049ee3
50b99997e2c23e9bcc09e16d7e728e738c6b65e50a0c82eb0282ce20aecaa54e
- en/1944.html.txt +241 -0
- en/1945.html.txt +49 -0
- en/1946.html.txt +49 -0
- en/1947.html.txt +105 -0
- en/1948.html.txt +174 -0
- en/1949.html.txt +172 -0
- en/195.html.txt +186 -0
- en/1950.html.txt +193 -0
- en/1951.html.txt +69 -0
- en/1952.html.txt +0 -0
- en/1953.html.txt +134 -0
- en/1954.html.txt +111 -0
- en/1955.html.txt +111 -0
- en/1956.html.txt +119 -0
- en/1957.html.txt +119 -0
- en/1958.html.txt +172 -0
- en/1959.html.txt +224 -0
- en/196.html.txt +186 -0
- en/1960.html.txt +224 -0
- en/1961.html.txt +224 -0
- en/1962.html.txt +212 -0
- en/1963.html.txt +23 -0
- en/1964.html.txt +69 -0
- en/1965.html.txt +230 -0
- en/1966.html.txt +84 -0
- en/1967.html.txt +265 -0
- en/1968.html.txt +145 -0
- en/1969.html.txt +62 -0
- en/197.html.txt +290 -0
- en/1970.html.txt +62 -0
- en/1971.html.txt +217 -0
- en/1972.html.txt +69 -0
- en/1973.html.txt +37 -0
- en/1974.html.txt +37 -0
- en/1975.html.txt +67 -0
- en/1976.html.txt +7 -0
- en/1977.html.txt +7 -0
- en/1978.html.txt +63 -0
- en/1979.html.txt +173 -0
- en/198.html.txt +290 -0
- en/1980.html.txt +173 -0
- en/1981.html.txt +173 -0
- en/1982.html.txt +173 -0
- en/1983.html.txt +63 -0
- en/1984.html.txt +61 -0
- en/1985.html.txt +30 -0
- en/1986.html.txt +30 -0
- en/1987.html.txt +0 -0
- en/1988.html.txt +90 -0
- en/1989.html.txt +17 -0
en/1944.html.txt
ADDED
@@ -0,0 +1,241 @@
Fascism (/ˈfæʃɪzəm/) is a form of far-right, authoritarian ultranationalism[1][2] characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy,[3] which came to prominence in early 20th-century Europe.[4] The first fascist movements emerged in Italy during World War I, before spreading to other European countries.[4] Opposed to liberalism, Marxism, and anarchism, fascism is placed on the far right within the traditional left–right spectrum.[4][5][6]
Fascists saw World War I as a revolution that brought massive changes to the nature of war, society, the state, and technology. The advent of total war and the total mass mobilization of society had broken down the distinction between civilians and combatants. A "military citizenship" arose in which all citizens were involved with the military in some manner during the war.[7][8] The war had resulted in the rise of a powerful state capable of mobilizing millions of people to serve on the front lines and providing economic production and logistics to support them, as well as having unprecedented authority to intervene in the lives of citizens.[7][8]
Fascists believe that liberal democracy is obsolete and regard the complete mobilization of society under a totalitarian one-party state as necessary to prepare a nation for armed conflict and to respond effectively to economic difficulties.[9] Such a state is led by a strong leader—such as a dictator and a martial government composed of the members of the governing fascist party—to forge national unity and maintain a stable and orderly society.[9] Fascism rejects assertions that violence is automatically negative in nature and views political violence, war, and imperialism as means that can achieve national rejuvenation.[10][11] Fascists advocate a mixed economy, with the principal goal of achieving autarky (national economic self-sufficiency) through protectionist and interventionist economic policies.[12]
Since the end of World War II in 1945, few parties have openly described themselves as fascist, and the term is instead now usually used pejoratively by political opponents. The descriptions neo-fascist or post-fascist are sometimes applied more formally to describe parties of the far right with ideologies similar to, or rooted in, 20th-century fascist movements.[4][13]
The Italian term fascismo is derived from fascio meaning "a bundle of sticks", ultimately from the Latin word fasces.[14] This was the name given to political organizations in Italy known as fasci, groups similar to guilds or syndicates. According to Italian fascist dictator Benito Mussolini's own account, the Fasces of Revolutionary Action were founded in Italy in 1915.[15] In 1919, Mussolini founded the Italian Fasces of Combat in Milan, which became the National Fascist Party two years later. The Fascists came to associate the term with the ancient Roman fasces or fascio littorio[16]—a bundle of rods tied around an axe,[17] an ancient Roman symbol of the authority of the civic magistrate[18] carried by his lictors, which could be used for corporal and capital punishment at his command.[19][20]
The symbolism of the fasces suggested strength through unity: a single rod is easily broken, while the bundle is difficult to break.[21] Similar symbols were developed by different fascist movements: for example, the Falange symbol is five arrows joined together by a yoke.[22]
Historians, political scientists, and other scholars have long debated the exact nature of fascism.[23] Each group described as fascist has at least some unique elements, and many definitions of fascism have been criticized as either too wide or narrow.[24][25]
According to many scholars, fascism – especially once in power – has historically attacked communism, conservatism, and parliamentary liberalism, attracting support primarily from the far-right.[26]
One common definition of the term, frequently cited by reliable sources as a standard definition, is that of historian Stanley G. Payne. He focuses on three concepts: the "fascist negations" (anti-liberalism, anti-communism and anti-conservatism); "fascist goals" (the creation of a nationalist authoritarian state, a new regulated economic structure, the transformation of social relations and the expansion of the nation into an empire); and "fascist style" (a political aesthetic of romantic symbolism, mass mobilization, a positive view of violence and the promotion of masculinity, youth and charismatic authoritarian leadership).
Historian John Lukacs argues that there is no such thing as generic fascism. He claims that National Socialism and communism are essentially manifestations of populism and that states such as National Socialist Germany and Fascist Italy are more different than similar.[31]
Roger Griffin describes fascism as "a genus of political ideology whose mythic core in its various permutations is a palingenetic form of populist ultranationalism".[32] Griffin describes the ideology as having three core components: "(i) the rebirth myth, (ii) populist ultra-nationalism, and (iii) the myth of decadence".[33] In Griffin's view, fascism is "a genuinely revolutionary, trans-class form of anti-liberal, and in the last analysis, anti-conservative nationalism" built on a complex range of theoretical and cultural influences. He distinguishes an inter-war period in which it manifested itself in elite-led but populist "armed party" politics opposing socialism and liberalism and promising radical politics to rescue the nation from decadence.[34] In Against the Fascist Creep, Alexander Reid Ross writes regarding Griffin's view:
Following the Cold War and shifts in fascist organizing techniques, a number of scholars have moved toward the minimalist "new consensus" refined by Roger Griffin: "the mythic core" of fascism is "a populist form of palingenetic ultranationalism." That means that fascism is an ideology that draws on old, ancient, and even arcane myths of racial, cultural, ethnic, and national origins to develop a plan for the "new man."[35]
Indeed, Griffin himself explored this 'mythic' or 'ineliminable' core of fascism with his concept of post-fascism to explore the continuation of Nazism in the modern era.[36] Additionally, other historians have applied this minimalist core to explore proto-fascist movements.[37]
Cas Mudde and Cristóbal Rovira Kaltwasser argue that although fascism "flirted with populism ... in an attempt to generate mass support", it is better seen as an elitist ideology. They cite in particular its exaltation of the Leader, the race, and the state, rather than the people. They see populism as a "thin-centered ideology" with a "restricted morphology" which necessarily becomes attached to "thick-centered" ideologies such as fascism, liberalism, or socialism. Thus populism can be found as an aspect of many specific ideologies, without necessarily being a defining characteristic of those ideologies. They refer to the combination of populism, authoritarianism and ultranationalism as "a marriage of convenience."[38]
Robert Paxton says that fascism is "a form of political behavior marked by obsessive preoccupation with community decline, humiliation, or victimhood and by compensatory cults of unity, energy, and purity, in which a mass-based party of committed nationalist militants, working in uneasy but effective collaboration with traditional elites, abandons democratic liberties and pursues with redemptive violence and without ethical or legal restraints goals of internal cleansing and external expansion".[39]
Roger Eatwell defines fascism as "an ideology that strives to forge social rebirth based on a holistic-national radical Third Way",[40] while Walter Laqueur sees the core tenets of fascism as "self-evident: nationalism; Social Darwinism; racialism, the need for leadership, a new aristocracy, and obedience; and the negation of the ideals of the Enlightenment and the French Revolution."[41]
Racism was a key feature of German fascism, for which the Holocaust was a high priority. According to the historiography of genocide, "In dealing with the Holocaust, it is the consensus of historians that Nazi Germany targeted Jews as a race, not as a religious group."[42] Umberto Eco,[43] Kevin Passmore,[44] John Weiss,[45] Ian Adams,[46] and Moyra Grant[47] stress racism as a characteristic component of German fascism. Historian Robert Soucy stated that "Hitler envisioned the ideal German society as a Volksgemeinschaft, a racially unified and hierarchically organized body in which the interests of individuals would be strictly subordinate to those of the nation, or Volk."[48] Fascist philosophies vary by application, but remain distinct by one theoretical commonality: all traditionally fall into the far-right sector of any political spectrum, catalyzed by afflicted class identities over conventional social inequities.[4]
Most scholars place fascism on the far right of the political spectrum.[4][5] Such scholarship focuses on its social conservatism and its authoritarian means of opposing egalitarianism.[49][50] Roderick Stackelberg places fascism—including Nazism, which he says is "a radical variant of fascism"—on the political right by explaining: "The more a person deems absolute equality among all people to be a desirable condition, the further left he or she will be on the ideological spectrum. The more a person considers inequality to be unavoidable or even desirable, the further to the right he or she will be".[51]
Fascism's origins, however, are complex and include many seemingly contradictory viewpoints, ultimately centered around a mythos of national rebirth from decadence.[52] Fascism was founded during World War I by Italian national syndicalists who drew upon both left-wing organizational tactics and right-wing political views.[53]
Italian Fascism gravitated to the right in the early 1920s.[54][55] A major element of fascist ideology that has been deemed to be far-right is its stated goal to promote the right of a supposedly superior people to dominate, while purging society of supposedly inferior elements.[56]
In the 1920s, the Italian Fascists described their ideology as right-wing in the political program The Doctrine of Fascism, stating: "We are free to believe that this is the century of authority, a century tending to the 'right,' a fascist century".[57][58] Mussolini stated that fascism's position on the political spectrum was not a serious issue for fascists: "Fascism, sitting on the right, could also have sat on the mountain of the center ... These words in any case do not have a fixed and unchanged meaning: they do have a variable subject to location, time and spirit. We don't give a damn about these empty terminologies and we despise those who are terrorized by these words".[59]
Major Italian groups politically on the right, especially rich landowners and big business, feared an uprising by groups on the left such as sharecroppers and labour unions.[60] They welcomed Fascism and supported its violent suppression of opponents on the left.[61] The accommodation of the political right into the Italian Fascist movement in the early 1920s created internal factions within the movement. The "Fascist left" included Michele Bianchi, Giuseppe Bottai, Angelo Oliviero Olivetti, Sergio Panunzio, and Edmondo Rossoni, who were committed to advancing national syndicalism as a replacement for parliamentary liberalism in order to modernize the economy and advance the interests of workers and common people.[62] The "Fascist right" included members of the paramilitary Squadristi and former members of the Italian Nationalist Association (ANI).[62] The Squadristi wanted to establish Fascism as a complete dictatorship, while the former ANI members, including Alfredo Rocco, sought to institute an authoritarian corporatist state to replace the liberal state in Italy while retaining the existing elites.[62] Upon accommodating the political right, there arose a group of monarchist fascists who sought to use fascism to create an absolute monarchy under King Victor Emmanuel III of Italy.[62]
After the fall of the Fascist regime in Italy, when King Victor Emmanuel III forced Mussolini to resign as head of government and placed him under arrest in 1943, Mussolini was rescued by German forces. While continuing to rely on Germany for support, Mussolini and the remaining loyal Fascists founded the Italian Social Republic with Mussolini as head of state. Mussolini sought to re-radicalize Italian Fascism, declaring that the Fascist state had been overthrown because Italian Fascism had been subverted by Italian conservatives and the bourgeoisie.[63] Then the new Fascist government proposed the creation of workers' councils and profit-sharing in industry, although the German authorities, who effectively controlled northern Italy at this point, ignored these measures and did not seek to enforce them.[63]
A number of post-World War II fascist movements described themselves as a "third position" outside the traditional political spectrum.[64] Spanish Falangist leader José Antonio Primo de Rivera said: "[B]asically the Right stands for the maintenance of an economic structure, albeit an unjust one, while the Left stands for the attempt to subvert that economic structure, even though the subversion thereof would entail the destruction of much that was worthwhile".[65]
The term "fascist" has been used as a pejorative,[66] regarding varying movements across the far right of the political spectrum.[67] George Orwell wrote in 1944 that "the word 'Fascism' is almost entirely meaningless ... almost any English person would accept 'bully' as a synonym for 'Fascist'".[67]
Communist states have sometimes been referred to as "fascist", typically as an insult. For example, the label has been applied to Marxist-Leninist regimes in Cuba under Fidel Castro and Vietnam under Ho Chi Minh.[68] Chinese Marxists used the term to denounce the Soviet Union during the Sino-Soviet split, and likewise the Soviets used it to denounce Chinese Marxists[69] and social democracy (coining the term "social fascism").
In the United States, Herbert Matthews of The New York Times asked in 1946: "Should we now place Stalinist Russia in the same category as Hitlerite Germany? Should we say that she is Fascist?".[70] J. Edgar Hoover, longtime FBI director and ardent anti-communist, wrote extensively of "Red Fascism".[71] The Ku Klux Klan in the 1920s was sometimes called "fascist". Historian Peter Amann states that, "Undeniably, the Klan had some traits in common with European fascism—chauvinism, racism, a mystique of violence, an affirmation of a certain kind of archaic traditionalism—yet their differences were fundamental....[the KKK] never envisioned a change of political or economic system."[72]
Professor Richard Griffiths of the University of Wales[73] wrote in 2005 that "fascism" is the "most misused, and over-used word, of our times".[25] "Fascist" is sometimes applied to post-World War II organizations and ways of thinking that academics more commonly term "neo-fascist".[74]
Georges Valois, founder of the first non-Italian fascist party Faisceau,[75] claimed the roots of fascism stemmed from the late 18th century Jacobin movement, seeing in its totalitarian nature a foreshadowing of the fascist state. Historian George Mosse similarly analyzed fascism as an inheritor of the mass ideology and civil religion of the French Revolution, as well as a result of the brutalization of societies in 1914–1918.[76]
Historians such as Irene Collins and Howard C. Payne see Napoleon III, who ran a 'police state' and suppressed the media, as a forerunner of fascism.[77] According to David Thomson,[78] the Italian Risorgimento of 1871 led to the 'nemesis of fascism'. William L. Shirer[79] sees a continuity from the views of Fichte and Hegel, through Bismarck, to Hitler; Robert Gerwarth speaks of a 'direct line' from Bismarck to Hitler.[80] Julian Dierkes sees fascism as a 'particularly violent form of imperialism'.[81]
The historian Zeev Sternhell has traced the ideological roots of fascism back to the 1880s and in particular to the fin de siècle theme of that time.[82][83] The theme was based on a revolt against materialism, rationalism, positivism, bourgeois society and democracy.[84] The fin-de-siècle generation supported emotionalism, irrationalism, subjectivism and vitalism.[85] The fin-de-siècle mindset saw civilization as being in a crisis that required a massive and total solution.[84] The fin-de-siècle intellectual school considered the individual only one part of the larger collectivity, which should not be viewed as an atomized numerical sum of individuals.[84] They condemned the rationalistic individualism of liberal society and the dissolution of social links in bourgeois society.[84]
The fin-de-siècle outlook was influenced by various intellectual developments, including Darwinian biology; Wagnerian aesthetics; Arthur de Gobineau's racialism; Gustave Le Bon's psychology; and the philosophies of Friedrich Nietzsche, Fyodor Dostoyevsky and Henri Bergson.[86] Social Darwinism, which gained widespread acceptance, made no distinction between physical and social life, and viewed the human condition as being an unceasing struggle to achieve the survival of the fittest.[86] Social Darwinism challenged positivism's claim of deliberate and rational choice as the determining behaviour of humans, with social Darwinism focusing on heredity, race, and environment.[86] Social Darwinism's emphasis on biogroup identity and the role of organic relations within societies fostered legitimacy and appeal for nationalism.[87] New theories of social and political psychology also rejected the notion of human behaviour being governed by rational choice and instead claimed that emotion was more influential in political issues than reason.[86] Nietzsche's argument that "God is dead", his attack on the "herd mentality" of Christianity, democracy and modern collectivism, his concept of the übermensch and his advocacy of the will to power as a primordial instinct were major influences upon many of the fin-de-siècle generation.[88] Bergson's claim of the existence of an "élan vital" or vital instinct centred upon free choice and rejected the processes of materialism and determinism; this challenged Marxism.[89]
Gaetano Mosca in his work The Ruling Class (1896) developed the theory that in all societies an "organized minority" will dominate and rule over the "disorganized majority".[90][91] Mosca claims that there are only two classes in society, "the governing" (the organized minority) and "the governed" (the disorganized majority).[92] He claims that the organized nature of the organized minority makes it irresistible to any individual of the disorganized majority.[92]
French nationalist and reactionary monarchist Charles Maurras influenced fascism.[93] Maurras promoted what he called integral nationalism, which called for the organic unity of a nation and Maurras insisted that a powerful monarch was an ideal leader of a nation. Maurras distrusted what he considered the democratic mystification of the popular will that created an impersonal collective subject.[93] He claimed that a powerful monarch was a personified sovereign who could exercise authority to unite a nation's people.[93] Maurras' integral nationalism was idealized by fascists, but modified into a modernized revolutionary form that was devoid of Maurras' monarchism.[93]
French revolutionary syndicalist Georges Sorel promoted the legitimacy of political violence in his work Reflections on Violence (1908) and other works in which he advocated radical syndicalist action to achieve a revolution to overthrow capitalism and the bourgeoisie through a general strike.[94] In Reflections on Violence, Sorel emphasized the need for a revolutionary political religion.[95] Also in his work The Illusions of Progress, Sorel denounced democracy as reactionary, saying "nothing is more aristocratic than democracy".[96] By 1909, after the failure of a syndicalist general strike in France, Sorel and his supporters left the radical left for the radical right, where they sought to merge militant Catholicism and French patriotism with their views—advocating anti-republican Christian French patriots as ideal revolutionaries.[97] Initially Sorel had officially been a revisionist of Marxism, but by 1910 he announced his abandonment of socialist literature and claimed in 1914, using an aphorism of Benedetto Croce, that "socialism is dead" because of the "decomposition of Marxism".[98] Beginning in 1909, Sorel became a supporter of reactionary Maurrassian nationalism, which influenced his works.[98] Maurras was interested in merging his nationalist ideals with Sorelian syndicalism as a means to confront democracy.[99] Maurras stated "a socialism liberated from the democratic and cosmopolitan element fits nationalism well as a well made glove fits a beautiful hand".[100]
The fusion of Maurrassian nationalism and Sorelian syndicalism influenced radical Italian nationalist Enrico Corradini.[101] Corradini spoke of the need for a nationalist-syndicalist movement, led by elitist aristocrats and anti-democrats who shared a revolutionary syndicalist commitment to direct action and a willingness to fight.[101] Corradini spoke of Italy as being a "proletarian nation" that needed to pursue imperialism in order to challenge the "plutocratic" French and British.[102] Corradini's views were part of a wider set of perceptions within the right-wing Italian Nationalist Association (ANI), which claimed that Italy's economic backwardness was caused by corruption in its political class, liberalism, and division caused by "ignoble socialism".[102] The ANI held ties and influence among conservatives, Catholics and the business community.[102] Italian national syndicalists held a common set of principles: the rejection of bourgeois values, democracy, liberalism, Marxism, internationalism and pacifism; and the promotion of heroism, vitalism and violence.[103] The ANI claimed that liberal democracy was no longer compatible with the modern world, and advocated a strong state and imperialism, claiming that humans are naturally predatory and that nations were in a constant struggle, in which only the strongest could survive.[104]
Futurism was both an artistic-cultural movement and initially a political movement in Italy led by Filippo Tommaso Marinetti, who authored the Futurist Manifesto (1908), which championed the causes of modernism, action and political violence as necessary elements of politics, while denouncing liberalism and parliamentary politics. Marinetti rejected conventional democracy based on majority rule and egalitarianism in favor of a new form of democracy, promoting what he described in his work "The Futurist Conception of Democracy" as the following: "We are therefore able to give the directions to create and to dismantle to numbers, to quantity, to the mass, for with us number, quantity and mass will never be—as they are in Germany and Russia—the number, quantity and mass of mediocre men, incapable and indecisive".[105]
Futurism influenced fascism in its emphasis on recognizing the virile nature of violent action and war as being necessities of modern civilization.[106] Marinetti promoted the need for the physical training of young men, saying that in male education, gymnastics should take precedence over books. He advocated segregation of the genders in this matter, arguing that womanly sensibility must not enter the education of men, who he claimed must be "lively, bellicose, muscular and violently dynamic".[107]
At the outbreak of World War I in August 1914, the Italian political left became severely split over its position on the war. The Italian Socialist Party (PSI) opposed the war, but a number of Italian revolutionary syndicalists supported war against Germany and Austria-Hungary on the grounds that their reactionary regimes had to be defeated to ensure the success of socialism.[108] Angelo Oliviero Olivetti formed a pro-interventionist fascio called the Revolutionary Fasces of International Action in October 1914.[108] Benito Mussolini, upon being expelled from his position as chief editor of the PSI's newspaper Avanti! for his anti-German stance, joined the interventionist cause in a separate fascio.[109] The term "Fascism" was first used in 1915 by members of Mussolini's movement, the Fasces of Revolutionary Action.[110]
The first meeting of the Fasces of Revolutionary Action was held on 24 January 1915,[111] when Mussolini declared that it was necessary for Europe to resolve its national problems—including national borders—in Italy and elsewhere "for the ideals of justice and liberty for which oppressed peoples must acquire the right to belong to those national communities from which they descended".[111] Attempts to hold mass meetings were ineffective and the organization was regularly harassed by government authorities and socialists.[112]
Similar political ideas arose in Germany after the outbreak of the war. German sociologist Johann Plenge spoke of the rise of a "National Socialism" in Germany within what he termed the "ideas of 1914" that were a declaration of war against the "ideas of 1789" (the French Revolution).[113] According to Plenge, the "ideas of 1789" that included rights of man, democracy, individualism and liberalism were being rejected in favor of "the ideas of 1914" that included "German values" of duty, discipline, law and order.[113] Plenge believed that racial solidarity (Volksgemeinschaft) would replace class division and that "racial comrades" would unite to create a socialist society in the struggle of "proletarian" Germany against "capitalist" Britain.[113] He believed that the "Spirit of 1914" manifested itself in the concept of the "People's League of National Socialism".[114] This National Socialism was a form of state socialism that rejected the "idea of boundless freedom" and promoted an economy that would serve the whole of Germany under the leadership of the state.[114] This National Socialism was opposed to capitalism because of the components that were against "the national interest" of Germany, but insisted that National Socialism would strive for greater efficiency in the economy.[114][115] Plenge advocated an authoritarian rational ruling elite to develop National Socialism through a hierarchical technocratic state.[116]
Fascists viewed World War I as bringing revolutionary changes in the nature of war, society, the state and technology. The advent of total war and mass mobilization had broken down the distinction between civilian and combatant, as civilians had become a critical part of economic production for the war effort, and thus arose a "military citizenship" in which all citizens were involved with the military in some manner during the war.[7][8] World War I had resulted in the rise of a powerful state capable of mobilizing millions of people to serve on the front lines or provide economic production and logistics to support those on the front lines, as well as having unprecedented authority to intervene in the lives of citizens.[7][8] Fascists viewed technological developments of weaponry and the state's total mobilization of its population in the war as symbolizing the beginning of a new era fusing state power with mass politics, technology and particularly the mobilizing myth that they contended had triumphed over the myth of progress and the era of liberalism.[7]
The October Revolution of 1917—in which Bolshevik communists led by Vladimir Lenin seized power in Russia—greatly influenced the development of fascism.[117] In 1917, Mussolini, as leader of the Fasces of Revolutionary Action, praised the October Revolution, but later he became unimpressed with Lenin, regarding him as merely a new version of Tsar Nicholas.[118] After World War I, fascists commonly campaigned on anti-Marxist agendas.[117]
Liberal opponents of both fascism and the Bolsheviks argue that there are various similarities between the two, including belief in the necessity of a vanguard leadership, disdain for bourgeois values and, it is argued, totalitarian ambitions.[117] In practice, both have commonly emphasized revolutionary action, proletarian nation theories, one-party states and party-armies.[117] However, the two draw clear distinctions from each other in both aims and tactics, with the Bolsheviks emphasizing the need for an organized participatory democracy and an egalitarian, internationalist vision for society, while the fascists emphasized hyper-nationalism and open hostility towards democracy, envisioning a hierarchical social structure as essential to their aims.
With the antagonism between anti-interventionist Marxists and pro-interventionist Fascists complete by the end of the war, the two sides became irreconcilable. The Fascists presented themselves as anti-Marxists and as opposed to the Marxists.[119] Mussolini consolidated control over the Fascist movement, known as Sansepolcrismo, in 1919 with the founding of the Italian Fasces of Combat.
In 1919, Alceste De Ambris and Futurist movement leader Filippo Tommaso Marinetti created The Manifesto of the Italian Fasces of Combat (the Fascist Manifesto).[120] The Manifesto was presented on 6 June 1919 in the Fascist newspaper Il Popolo d'Italia. The Manifesto supported the creation of universal suffrage for both men and women (the latter being realized only partly in late 1925, with all opposition parties banned or disbanded);[121] proportional representation on a regional basis; government representation through a corporatist system of "National Councils" of experts, selected from professionals and tradespeople, elected to represent and hold legislative power over their respective areas, including labour, industry, transportation, public health, communications, etc.; and the abolition of the Italian Senate.[122] The Manifesto supported the creation of an eight-hour work day for all workers, a minimum wage, worker representation in industrial management, equal confidence in labour unions as in industrial executives and public servants, reorganization of the transportation sector, revision of the draft law on invalidity insurance, reduction of the retirement age from 65 to 55, a strong progressive tax on capital, confiscation of the property of religious institutions and abolishment of bishoprics, and revision of military contracts to allow the government to seize 85% of profits.[123] It also called for the fulfillment of expansionist aims in the Balkans and other parts of the Mediterranean,[124] the creation of a short-service national militia to serve defensive duties, nationalization of the armaments industry and a foreign policy designed to be peaceful but also competitive.[125]
The next events that influenced the Fascists in Italy were the raid on Fiume by Italian nationalist Gabriele d'Annunzio and the founding of the Charter of Carnaro in 1920.[126] D'Annunzio and De Ambris designed the Charter, which advocated national-syndicalist corporatist productionism alongside D'Annunzio's political views.[127] Many Fascists saw the Charter of Carnaro as an ideal constitution for a Fascist Italy.[128] This aggression towards Yugoslavia and the South Slavs was continued by the Italian Fascists in their persecution of South Slavs—especially Slovenes and Croats.
In 1920, militant strike activity by industrial workers reached its peak in Italy; 1919 and 1920 were known as the "Red Years".[129] Mussolini and the Fascists took advantage of the situation by allying with industrial businesses and attacking workers and peasants in the name of preserving order and internal peace in Italy.[130]
Fascists identified their primary opponents as the majority of socialists on the left who had opposed intervention in World War I.[128] The Fascists and the Italian political right held common ground: both held Marxism in contempt, discounted class consciousness and believed in the rule of elites.[131] The Fascists assisted the anti-socialist campaign by allying with the other parties and the conservative right in a mutual effort to destroy the Italian Socialist Party and labour organizations committed to class identity above national identity.[131]
Fascism sought to accommodate Italian conservatives by making major alterations to its political agenda—abandoning its previous populism, republicanism and anticlericalism, adopting policies in support of free enterprise and accepting the Catholic Church and the monarchy as institutions in Italy.[132] To appeal to Italian conservatives, Fascism adopted policies such as the promotion of family values, including policies designed to reduce the number of women in the workforce and limit women's role to that of a mother. The Fascists banned literature on birth control and increased penalties for abortion in 1926, declaring both crimes against the state.[133]
Though Fascism adopted a number of anti-modern positions designed to appeal to people upset with the new trends in sexuality and women's rights—especially those with a reactionary point of view—the Fascists sought to maintain Fascism's revolutionary character, with Angelo Oliviero Olivetti saying: "Fascism would like to be conservative, but it will [be] by being revolutionary".[134] The Fascists supported revolutionary action and committed to secure law and order to appeal to both conservatives and syndicalists.[135]
Prior to Fascism's accommodations to the political right, Fascism was a small, urban, northern Italian movement that had about a thousand members.[136] After Fascism's accommodation of the political right, the Fascist movement's membership soared to approximately 250,000 by 1921.[137]
Beginning in 1922, Fascist paramilitaries escalated their strategy from one of attacking socialist offices and homes of socialist leadership figures to one of violent occupation of cities. The Fascists met little serious resistance from authorities and proceeded to take over several northern Italian cities.[138] The Fascists attacked the headquarters of socialist and Catholic labour unions in Cremona and imposed forced Italianization upon the German-speaking population of Trent and Bolzano.[138] After seizing these cities, the Fascists made plans to take Rome.[138]
On 24 October 1922, the Fascist party held its annual congress in Naples, where Mussolini ordered Blackshirts to take control of public buildings and trains and to converge on three points around Rome.[138] The Fascists managed to seize control of several post offices and trains in northern Italy while the Italian government, led by a left-wing coalition, was internally divided and unable to respond to the Fascist advances.[139] King Victor Emmanuel III of Italy perceived the risk of bloodshed in Rome from attempting to disperse the Fascists to be too high.[140] Victor Emmanuel III decided to appoint Mussolini as Prime Minister of Italy and Mussolini arrived in Rome on 30 October to accept the appointment.[140] Fascist propaganda aggrandized this event, known as the "March on Rome", as a "seizure" of power achieved through the Fascists' heroic exploits.[138]
Historian Stanley G. Payne says Fascism in Italy was:
A primarily political dictatorship....The Fascist Party itself had become almost completely bureaucratized and subservient to, not dominant over, the state itself. Big business, industry, and finance retained extensive autonomy, particularly in the early years. The armed forces also enjoyed considerable autonomy....The Fascist militia was placed under military control....The judicial system was left largely intact and relatively autonomous as well. The police continued to be directed by state officials and were not taken over by party leaders...nor was a major new police elite created....There was never any question of bringing the Church under overall subservience.... Sizable sectors of Italian cultural life retained extensive autonomy, and no major state propaganda-and-culture ministry existed....The Mussolini regime was neither especially sanguinary nor particularly repressive.[141]
Upon being appointed Prime Minister of Italy, Mussolini had to form a coalition government because the Fascists did not have control over the Italian parliament.[142] Mussolini's coalition government initially pursued economically liberal policies under the direction of liberal finance minister Alberto De Stefani, a member of the Center Party, including balancing the budget through deep cuts to the civil service.[142] Initially, little drastic change in government policy had occurred and repressive police actions were limited.[142]
The Fascists began their attempt to entrench Fascism in Italy with the Acerbo Law, which guaranteed a plurality of the seats in parliament to any party or coalition list in an election that received 25% or more of the vote.[143] Through considerable Fascist violence and intimidation, the list won a majority of the vote, allowing many seats to go to the Fascists.[143] In the aftermath of the election, a crisis and political scandal erupted after Socialist Party deputy Giacomo Matteotti was kidnapped and murdered by a Fascist.[143] The liberals and the leftist minority in parliament walked out in protest in what became known as the Aventine Secession.[144] On 3 January 1925, Mussolini addressed the Fascist-dominated Italian parliament and declared that he was personally responsible for what happened, but insisted that he had done nothing wrong. Mussolini proclaimed himself dictator of Italy, assuming full responsibility over the government and announcing the dismissal of parliament.[144] From 1925 to 1929, Fascism steadily became entrenched in power: opposition deputies were denied access to parliament, censorship was introduced and a December 1925 decree made Mussolini solely responsible to the King.[145]
In 1929, the Fascist regime briefly gained what was in effect a blessing of the Catholic Church after the regime signed a concordat with the Church, known as the Lateran Treaty, which gave the papacy state sovereignty and financial compensation for the seizure of Church lands by the liberal state in the nineteenth century, but within two years the Church had renounced Fascism in the Encyclical Non Abbiamo Bisogno as a "pagan idolatry of the state" which teaches "hatred, violence and irreverence".[146] Not long after signing the agreement, by Mussolini's own confession, the Church had threatened to have him "excommunicated", in part because of his intractable nature and because he had "confiscated more issues of Catholic newspapers in the next three months than in the previous seven years".[147] By the late 1930s, Mussolini became more vocal in his anti-clerical rhetoric, repeatedly denouncing the Catholic Church and discussing ways to depose the pope. He took the position that the "papacy was a malignant tumor in the body of Italy and must 'be rooted out once and for all,' because there was no room in Rome for both the Pope and himself".[148] In her 1974 book, Mussolini's widow Rachele stated that her husband had been an atheist until near the end of his life, writing that her husband was "basically irreligious until the later years of his life".[149]
The National Socialists of Germany employed similar anti-clerical policies. The Gestapo confiscated hundreds of monasteries in Austria and Germany, evicted clergymen and laymen alike and often replaced crosses with swastikas.[150] Church leaders, who referred to the swastika as the "Devil's Cross", found their youth organizations banned, their meetings limited and various Catholic periodicals censored or banned. Government officials eventually found it necessary to place "Nazis into editorial positions in the Catholic press".[151] Up to 2,720 clerics, mostly Catholics, were arrested by the Gestapo and imprisoned in Germany's Dachau concentration camp, resulting in over 1,000 deaths.[152]
The Fascist regime created a corporatist economic system in 1925 with the creation of the Palazzo Vidoni Pact, in which the Italian employers' association Confindustria and Fascist trade unions agreed to recognize each other as the sole representatives of Italy's employers and employees, excluding non-Fascist trade unions.[153] The Fascist regime first created a Ministry of Corporations that organized the Italian economy into 22 sectoral corporations, banned workers' strikes and lock-outs, and in 1927 created the Charter of Labour, which established workers' rights and duties and created labour tribunals to arbitrate employer-employee disputes.[153] In practice, the sectoral corporations exercised little independence and were largely controlled by the regime, and employee organizations were rarely led by employees themselves, but instead by appointed Fascist party members.[153]
In the 1920s, Fascist Italy pursued an aggressive foreign policy that included an attack on the Greek island of Corfu, aims to expand Italian territory in the Balkans, plans to wage war against Turkey and Yugoslavia, attempts to bring Yugoslavia into civil war by supporting Croat and Macedonian separatists to legitimize Italian intervention and making Albania a de facto protectorate of Italy, which was achieved through diplomatic means by 1927.[154] In response to revolt in the Italian colony of Libya, Fascist Italy abandoned the previous liberal-era colonial policy of cooperation with local leaders. Instead, claiming that Italians were a superior race to African races and thereby had the right to colonize the "inferior" Africans, it sought to settle 10 to 15 million Italians in Libya.[155] This resulted in an aggressive military campaign known as the Pacification of Libya against natives in Libya, including mass killings, the use of concentration camps and the forced starvation of thousands of people.[155] Italian authorities committed ethnic cleansing by forcibly expelling 100,000 Bedouin Cyrenaicans, half the population of Cyrenaica in Libya, from their settlements, which were slated to be given to Italian settlers.[156][157]
The March on Rome brought Fascism international attention. One early admirer of the Italian Fascists was Adolf Hitler, who less than a month after the March had begun to model himself and the Nazi Party upon Mussolini and the Fascists.[158] The Nazis, led by Hitler and the German war hero Erich Ludendorff, attempted a "March on Berlin" modeled upon the March on Rome, which resulted in the failed Beer Hall Putsch in Munich in November 1923.[159]
The conditions of economic hardship caused by the Great Depression brought about an international surge of social unrest. According to historian Philip Morgan, "the onset of the Great Depression...was the greatest stimulus yet to the diffusion and expansion of fascism outside Italy".[160] Fascist propaganda blamed the problems of the long depression of the 1930s on minorities and scapegoats: "Judeo-Masonic-bolshevik" conspiracies, left-wing internationalism and the presence of immigrants.
In Germany, it contributed to the rise of the National Socialist German Workers' Party, which resulted in the demise of the Weimar Republic and the establishment of the fascist regime, Nazi Germany, under the leadership of Adolf Hitler. With the rise of Hitler and the Nazis to power in 1933, liberal democracy was dissolved in Germany and the Nazis mobilized the country for war, with expansionist territorial aims against several countries. In the 1930s, the Nazis implemented racial laws that deliberately discriminated against, disenfranchised and persecuted Jews and other racial and minority groups.
Fascist movements grew in strength elsewhere in Europe. Hungarian fascist Gyula Gömbös rose to power as Prime Minister of Hungary in 1932 and attempted to entrench his Party of National Unity throughout the country. He created an eight-hour work day and a forty-eight-hour work week in industry, sought to entrench a corporatist economy, and pursued irredentist claims on Hungary's neighbors.[161] The fascist Iron Guard movement in Romania soared in political support after 1933, gaining representation in the Romanian government, and an Iron Guard member assassinated Romanian prime minister Ion Duca.[162] During the 6 February 1934 crisis, France faced the greatest domestic political turmoil since the Dreyfus Affair when the fascist Francist Movement and multiple far-right movements rioted en masse in Paris against the French government, resulting in major political violence.[163] A variety of para-fascist governments that borrowed elements from fascism were formed during the Great Depression, including those of Greece, Lithuania, Poland and Yugoslavia.[164]
In the Americas, the Brazilian Integralists led by Plínio Salgado claimed as many as 200,000 members although following coup attempts it faced a crackdown from the Estado Novo of Getúlio Vargas in 1937.[165] In the 1930s, the National Socialist Movement of Chile gained seats in Chile's parliament and attempted a coup d'état that resulted in the Seguro Obrero massacre of 1938.[166]
During the Great Depression, Mussolini promoted active state intervention in the economy. He denounced the contemporary "supercapitalism" that he claimed began in 1914 as a failure because of its alleged decadence, its support for unlimited consumerism and its intention to create the "standardization of humankind".[167] Fascist Italy created the Institute for Industrial Reconstruction (IRI), a giant state-owned firm and holding company that provided state funding to failing private enterprises.[168] The IRI was made a permanent institution in Fascist Italy in 1937, pursued Fascist policies to create national autarky and had the power to take over private firms to maximize war production.[168] While Hitler's regime only nationalized 500 companies in key industries by the early 1940s,[169] Mussolini declared in 1934 that "[t]hree-fourths of Italian economy, industrial and agricultural, is in the hands of the state".[170] Due to the worldwide depression, Mussolini's government was able to take over most of Italy's largest failing banks, which held controlling interests in many Italian businesses. The Institute for Industrial Reconstruction, a state-operated holding company in charge of bankrupt banks and companies, reported in early 1934 that it held assets equal to "48.5 percent of the share capital of Italy", which later included the capital of the banks themselves.[171] Political historian Martin Blinkhorn estimated that Italy's scope of state intervention and ownership "greatly surpassed that in Nazi Germany, giving Italy a public sector second only to that of Stalin's Russia".[172] In the late 1930s, Italy enacted manufacturing cartels, tariff barriers, currency restrictions and massive regulation of the economy to attempt to balance payments.[173] Italy's policy of autarky failed to achieve effective economic autonomy.[173] Nazi Germany similarly pursued an economic agenda with the aims of autarky and rearmament and imposed protectionist policies, including forcing the German steel industry to use lower-quality German iron ore rather than superior-quality imported iron.[174]
In Fascist Italy and Nazi Germany, both Mussolini and Hitler pursued territorial expansionist and interventionist foreign policy agendas from the 1930s through the 1940s culminating in World War II. Mussolini called for irredentist Italian claims to be reclaimed, establishing Italian domination of the Mediterranean Sea and securing Italian access to the Atlantic Ocean and the creation of Italian spazio vitale ("vital space") in the Mediterranean and Red Sea regions.[175] Hitler called for irredentist German claims to be reclaimed along with the creation of German Lebensraum ("living space") in Eastern Europe, including territories held by the Soviet Union, that would be colonized by Germans.[176]
From 1935 to 1939, Germany and Italy escalated their demands for territorial claims and greater influence in world affairs. Italy invaded Ethiopia in 1935, resulting in its condemnation by the League of Nations and its widespread diplomatic isolation. In 1936, Germany remilitarized the industrial Rhineland, a region that had been ordered demilitarized by the Treaty of Versailles. In 1938, Germany annexed Austria, and Italy assisted Germany in resolving the diplomatic crisis between Germany and Britain and France over claims on Czechoslovakia by arranging the Munich Agreement, which gave Germany the Sudetenland and was perceived at the time to have averted a European war. These hopes faded when Hitler violated the Munich Agreement by ordering the invasion of Czechoslovakia and its partition between Germany and a client state of Slovakia in 1939. At the same time, from 1938 to 1939, Italy was demanding territorial and colonial concessions from France and Britain.[177] In 1939, Germany prepared for war with Poland, but attempted to gain territorial concessions from Poland through diplomatic means.[178] The Polish government did not trust Hitler's promises and refused to accept Germany's demands.[178]
The invasion of Poland by Germany was deemed unacceptable by Britain, France and their allies, who declared war on Germany as the aggressor in Poland, marking the outbreak of World War II. In 1940, Mussolini led Italy into World War II on the side of the Axis. Mussolini was aware that Italy did not have the military capacity to carry out a long war with France or the United Kingdom, and waited until France was on the verge of imminent collapse and surrender from the German invasion before declaring war on France and the United Kingdom on 10 June 1940, on the assumption that the war would be short-lived following France's collapse.[179] Mussolini believed that following a brief entry of Italy into war with France and the imminent French surrender, Italy could gain some territorial concessions from France and then concentrate its forces on a major offensive in Egypt, where British and Commonwealth forces were outnumbered by Italian forces.[180] Plans by Germany to invade the United Kingdom in 1940 failed after Germany lost the aerial warfare campaign in the Battle of Britain. In 1941, the Axis campaign spread to the Soviet Union after Hitler launched Operation Barbarossa. Axis forces at the height of their power controlled almost all of continental Europe. The war became prolonged—contrary to Mussolini's plans—resulting in Italy losing battles on multiple fronts and requiring German assistance.
During World War II, the Axis Powers in Europe led by Nazi Germany participated in the extermination of millions of Poles, Jews, Gypsies and others in the genocide known as the Holocaust.
After 1942, Axis forces began to falter. In 1943, after Italy faced multiple military failures, Italy's complete reliance on and subordination to Germany, the Allied invasion of Italy and the corresponding international humiliation, Mussolini was removed as head of government and arrested on the order of King Victor Emmanuel III, who proceeded to dismantle the Fascist state and switched Italy's allegiance to the Allied side. Mussolini was rescued from arrest by German forces and led the German client state, the Italian Social Republic, from 1943 to 1945. Nazi Germany faced multiple losses and steady Soviet and Western Allied offensives from 1943 to 1945.
On 28 April 1945, Mussolini was captured and executed by Italian communist partisans. On 30 April 1945, Hitler committed suicide. Shortly afterwards, Germany surrendered and the Nazi regime was systematically dismantled by the occupying Allied powers. An International Military Tribunal was subsequently convened in Nuremberg. Beginning in November 1945 and lasting through 1949, numerous Nazi political, military and economic leaders were tried and convicted of war crimes, with many of the worst offenders receiving the death penalty.
The victory of the Allies over the Axis powers in World War II led to the collapse of many fascist regimes in Europe. The Nuremberg Trials convicted several Nazi leaders of crimes against humanity involving the Holocaust. However, there remained several movements and governments that were ideologically related to fascism.
Francisco Franco's Falangist one-party state in Spain was officially neutral during World War II and it survived the collapse of the Axis Powers. Franco's rise to power had been directly assisted by the militaries of Fascist Italy and Nazi Germany during the Spanish Civil War, and Franco had sent volunteers to fight on the side of Nazi Germany against the Soviet Union during World War II. The first years of the regime were characterized by repression of anti-fascist ideologies, deep censorship and the suppression of democratic institutions (the elected Parliament, the Constitution of 1931 and the Regional Statutes of Autonomy). After World War II and a period of international isolation, Franco's regime normalized relations with the Western powers during the Cold War, until Franco's death in 1975 and the transformation of Spain into a liberal democracy.
Historian Robert Paxton observes that one of the main problems in defining fascism is that it was widely mimicked. Paxton says: "In fascism's heyday, in the 1930s, many regimes that were not functionally fascist borrowed elements of fascist decor in order to lend themselves an aura of force, vitality, and mass mobilization". He goes on to observe that Salazar "crushed Portuguese fascism after he had copied some of its techniques of popular mobilization".[181] Paxton says that: "Where Franco subjected Spain's fascist party to his personal control, Salazar abolished outright in July 1934 the nearest thing Portugal had to an authentic fascist movement, Rolão Preto's blue-shirted National Syndicalists ... Salazar preferred to control his population through such 'organic' institutions traditionally powerful in Portugal as the Church. Salazar's regime was not only non-fascist, but 'voluntarily non-totalitarian,' preferring to let those of its citizens who kept out of politics 'live by habit'".[182] Historians tend to view the Estado Novo as para-fascist in nature,[183] possessing minimal fascist tendencies.[184] Other historians, including Fernando Rosas and Manuel Villaverde Cabral, think that the Estado Novo should be considered fascist.[185] In Argentina, Peronism, associated with the regime of Juan Perón from 1946 to 1955 and 1973 to 1974, was influenced by fascism.[186] Between 1939 and 1941, prior to his rise to power, Perón had developed a deep admiration of Italian Fascism and modelled his economic policies on Italian Fascist policies.[186]
The term neo-fascism refers to fascist movements after World War II. In Italy, the Italian Social Movement led by Giorgio Almirante was a major neo-fascist movement that transformed itself into a self-described "post-fascist" movement called the National Alliance (AN), which was an ally of Silvio Berlusconi's Forza Italia for a decade. In 2008, AN joined Forza Italia in Berlusconi's new party The People of Freedom, but in 2012 a group of politicians split from The People of Freedom, refounding the party with the name Brothers of Italy. In Germany, various neo-Nazi movements have been formed and banned in accordance with Germany's constitutional law, which forbids Nazism. The National Democratic Party of Germany (NPD) is widely considered a neo-Nazi party, although the party does not publicly identify itself as such.
After the onset of the Great Recession and economic crisis in Greece, a movement known as Golden Dawn, widely considered a neo-Nazi party, soared in support out of obscurity and won seats in Greece's parliament, espousing a staunch hostility towards minorities, illegal immigrants and refugees. In 2013, after the murder of an anti-fascist musician by a person with links to Golden Dawn, the Greek government ordered the arrest of Golden Dawn's leader Nikolaos Michaloliakos and other Golden Dawn members on charges of association with a criminal organization.[187][188]
Robert O. Paxton finds that the transformations undertaken by fascists in power were "profound enough to be called 'revolutionary'". They "often set fascists into conflict with conservatives rooted in families, churches, social rank, and property." Paxton argues:
[F]ascism redrew the frontiers between private and public, sharply diminishing what had once been untouchably private. It changed the practice of citizenship from the enjoyment of constitutional rights and duties to participation in mass ceremonies of affirmation and conformity. It reconfigured relations between the individual and the collectivity, so that an individual had no rights outside community interest. It expanded the powers of the executive—party and state—in a bid for total control. Finally, it unleashed aggressive emotions hitherto known in Europe only during war or social revolution.[189]
Ultranationalism, combined with the myth of national rebirth, is a key foundation of fascism.[190]
The fascist view of a nation is of a single organic entity that binds people together by their ancestry and is a natural unifying force of people.[191] Fascism seeks to solve economic, political and social problems by achieving a millenarian national rebirth, exalting the nation or race above all else and promoting cults of unity, strength and purity.[39][192][193][194][195] European fascist movements typically espouse a racist conception of non-Europeans being inferior to Europeans.[196] Beyond this, fascists in Europe have not held a unified set of racial views.[196] Historically, most fascists promoted imperialism, although there have been several fascist movements that were uninterested in the pursuit of new imperial ambitions.[196] For example, Nazism and Italian Fascism were expansionist and irredentist. Falangism in Spain envisioned worldwide unification of Spanish-speaking peoples (Hispanidad). British Fascism was non-interventionist, though it did embrace the British Empire.
Fascism promotes the establishment of a totalitarian state.[197] It opposes liberal democracy, rejects multi-party systems and may support a one-party state so that the party may synthesize with the nation.[198] Mussolini's The Doctrine of Fascism (1932) – partly ghostwritten by philosopher Giovanni Gentile,[199] whom Mussolini described as "the philosopher of Fascism" – states: "The Fascist conception of the State is all-embracing; outside of it no human or spiritual values can exist, much less have value. Thus understood, Fascism is totalitarian, and the Fascist State—a synthesis and a unit inclusive of all values—interprets, develops, and potentiates the whole life of a people".[200] In The Legal Basis of the Total State, Nazi political theorist Carl Schmitt described the Nazi intention to form a "strong state which guarantees a totality of political unity transcending all diversity" in order to avoid a "disastrous pluralism tearing the German people apart".[201]
Fascist states pursued policies of social indoctrination through propaganda in education and the media and regulation of the production of educational and media materials.[202][203] Education was designed to glorify the fascist movement and inform students of its historical and political importance to the nation. It attempted to purge ideas that were not consistent with the beliefs of the fascist movement and to teach students to be obedient to the state.[204]
Fascism presented itself as an alternative to both international socialism and free-market capitalism.[205] While fascism opposed mainstream socialism, it sometimes regarded itself as a type of nationalist "socialism" to highlight its commitment to national solidarity and unity.[206][207] Fascists opposed international free-market capitalism, but supported a type of productive capitalism.[115][208] Economic self-sufficiency, known as autarky, was a major goal of most fascist governments.[209]
Fascist governments advocated resolution of domestic class conflict within a nation in order to secure national solidarity.[210] This would be done through the state mediating relations between the classes (contrary to the views of classical liberal-inspired capitalists).[211] While fascism was opposed to domestic class conflict, fascists held that bourgeois–proletarian conflict existed primarily as a national conflict between proletarian nations and bourgeois nations.[212] Fascism condemned what it viewed as widespread character traits that it associated with the typical bourgeois mentality it opposed, such as materialism, crassness, cowardice, inability to comprehend the heroic ideal of the fascist "warrior", and associations with liberalism, individualism and parliamentarianism.[213] In 1918, Mussolini defined what he viewed as the proletarian character, defining proletarian as being one and the same with producers, a productivist perspective that regarded all people deemed productive – including entrepreneurs, technicians, workers and soldiers – as proletarian.[214] He acknowledged the historical existence of both bourgeois and proletarian producers, but declared the need for bourgeois producers to merge with proletarian producers.[214]
While fascism denounced the mainstream internationalist and Marxist socialisms, it claimed to represent economically a type of nationalist productivist socialism that, while condemning parasitical capitalism, was willing to accommodate productivist capitalism within it.[208] This was derived from Henri de Saint Simon, whose ideas inspired the creation of utopian socialism and influenced other ideologies, stressing solidarity rather than class war; his conception of productive people in the economy included both productive workers and productive bosses, who would challenge the influence of the aristocracy and unproductive financial speculators.[215] Saint Simon's vision combined traditionalist right-wing criticisms of the French Revolution with a left-wing belief in the need for association or collaboration of productive people in society.[215] Whereas Marxism condemned capitalism as a system of exploitative property relations, fascism saw the nature of the control of credit and money in the contemporary capitalist system as abusive.[208] Unlike Marxism, fascism did not see class conflict between the Marxist-defined proletariat and the bourgeoisie as a given or as an engine of historical materialism.[208] Instead, it viewed workers and productive capitalists in common as productive people who were in conflict with parasitic elements in society, including corrupt political parties, corrupt financial capital and feeble people.[208] Fascist leaders such as Mussolini and Hitler spoke of the need to create a new managerial elite led by engineers and captains of industry—but free from the parasitic leadership of industries.[208] Hitler stated that the Nazi Party supported bodenständigen Kapitalismus ("productive capitalism") that was based upon profit earned from one's own labour, but condemned unproductive capitalism or loan capitalism, which derived profit from speculation.[216]
Fascist economics supported a state-controlled economy that accepted a mix of private and public ownership over the means of production.[217] Economic planning was applied to both the public and private sector and the prosperity of private enterprise depended on its acceptance of synchronizing itself with the economic goals of the state.[218] Fascist economic ideology supported the profit motive, but emphasized that industries must uphold the national interest as superior to private profit.[218]
While fascism accepted the importance of material wealth and power, it condemned materialism, which it identified as being present in both communism and capitalism, and criticized materialism for lacking acknowledgement of the role of the spirit.[219] In particular, fascists criticized capitalism not because of its competitive nature nor its support of private property, which fascists supported—but due to its materialism, individualism, alleged bourgeois decadence and alleged indifference to the nation.[220] Fascism denounced Marxism for its advocacy of materialist internationalist class identity, which fascists regarded as an attack upon the emotional and spiritual bonds of the nation and a threat to the achievement of genuine national solidarity.[221]
In discussing the spread of fascism beyond Italy, historian Philip Morgan states:
Since the Depression was a crisis of laissez-faire capitalism and its political counterpart, parliamentary democracy, fascism could pose as the 'third-way' alternative between capitalism and Bolshevism, the model of a new European 'civilization'. As Mussolini typically put it in early 1934, "from 1929 ... fascism has become a universal phenomenon ... The dominant forces of the 19th century, democracy, socialism, liberalism have been exhausted ... the new political and economic forms of the twentieth century are fascist" (Mussolini 1935: 32).[160]
Fascists criticized egalitarianism as preserving the weak, and they instead promoted social Darwinist views and policies.[222][223] They were in principle opposed to the idea of social welfare, arguing that it "encouraged the preservation of the degenerate and the feeble."[224] The Nazi Party condemned the welfare system of the Weimar Republic, as well as private charity and philanthropy, for supporting people whom they regarded as racially inferior and weak, and who should have been weeded out in the process of natural selection.[225] Nevertheless, faced with the mass unemployment and poverty of the Great Depression, the Nazis found it necessary to set up charitable institutions to help racially-pure Germans in order to maintain popular support, while arguing that this represented "racial self-help" and not indiscriminate charity or universal social welfare.[226] Thus, Nazi programs such as the Winter Relief of the German People and the broader National Socialist People's Welfare (NSV) were organized as quasi-private institutions, officially relying on private donations from Germans to help others of their race—although in practice those who refused to donate could face severe consequences.[227] Unlike the social welfare institutions of the Weimar Republic and the Christian charities, the NSV distributed assistance on explicitly racial grounds. It provided support only to those who were "racially sound, capable of and willing to work, politically reliable, and willing and able to reproduce." Non-Aryans were excluded, as well as the "work-shy", "asocials" and the "hereditarily ill."[228] Under these conditions, by 1939, over 17 million Germans had obtained assistance from the NSV, and the agency "projected a powerful image of caring and support" for "those who were judged to have got into difficulties through no fault of their own."[228] Yet the organization was "feared and disliked among society's poorest" because it resorted to intrusive questioning and monitoring to judge who was worthy of support.[229]
Fascism emphasizes direct action, including supporting the legitimacy of political violence, as a core part of its politics.[11][230] Fascism views violent action as a necessity in politics that fascism identifies as being an "endless struggle".[231] This emphasis on the use of political violence means that most fascist parties have also created their own private militias (e.g. the Nazi Party's Brownshirts and Fascist Italy's Blackshirts).
The basis of fascism's support of violent action in politics is connected to social Darwinism.[231] Fascist movements have commonly held social Darwinist views of nations, races and societies.[232] They say that nations and races must purge themselves of socially and biologically weak or degenerate people, while simultaneously promoting the creation of strong people, in order to survive in a world defined by perpetual national and racial conflict.[233]
Fascism emphasizes youth both in a physical sense of age and in a spiritual sense as related to virility and commitment to action.[234] The Italian Fascists' political anthem was called Giovinezza ("The Youth").[234] Fascism identifies the physical age period of youth as a critical time for the moral development of people who will affect society.[235]
Walter Laqueur argues that:
Italian Fascism pursued what it called "moral hygiene" of youth, particularly regarding sexuality.[237] Fascist Italy promoted what it considered normal sexual behaviour in youth while denouncing what it considered deviant sexual behaviour.[237] It condemned pornography, most forms of birth control and contraceptive devices (with the exception of the condom), homosexuality and prostitution as deviant sexual behaviour, although enforcement of laws opposed to such practices was erratic and authorities often turned a blind eye.[237] Fascist Italy regarded the promotion of male sexual excitation before puberty as the cause of criminality amongst male youth, declared homosexuality a social disease and pursued an aggressive campaign to reduce prostitution of young women.[237]
Mussolini perceived women's primary role as child bearers, and men's as warriors—once saying: "War is to man what maternity is to the woman".[238] In an effort to increase birthrates, the Italian Fascist government gave financial incentives to women who raised large families and initiated policies intended to reduce the number of women employed.[239] Italian Fascism called for women to be honoured as "reproducers of the nation" and the Italian Fascist government held ritual ceremonies to honour women's role within the Italian nation.[240] In 1934, Mussolini declared that employment of women was a "major aspect of the thorny problem of unemployment" and that for women, working was "incompatible with childbearing". Mussolini went on to say that the solution to unemployment for men was the "exodus of women from the work force".[241]
The German Nazi government strongly encouraged women to stay at home to bear children and keep house.[242] This policy was reinforced by bestowing the Cross of Honor of the German Mother on women bearing four or more children. The unemployment rate was cut substantially, mostly through arms production and sending women home so that men could take their jobs. Nazi propaganda sometimes promoted premarital and extramarital sexual relations, unwed motherhood and divorce, but at other times the Nazis opposed such behaviour.[243]
The Nazis decriminalized abortion in cases where fetuses had hereditary defects or were of a race the government disapproved of, while the abortion of healthy pure German, Aryan fetuses remained strictly forbidden.[244] For non-Aryans, abortion was often compulsory. Their eugenics program also stemmed from the "progressive biomedical model" of Weimar Germany.[245] In 1935, Nazi Germany expanded the legality of abortion by amending its eugenics law, to promote abortion for women with hereditary disorders.[244] The law allowed abortion if a woman gave her permission and the fetus was not yet viable[246][247] and for purposes of so-called racial hygiene.[248][249]
The Nazis said that homosexuality was degenerate, effeminate, perverted and undermined masculinity because it did not produce children.[250] They considered homosexuality curable through therapy, citing modern scientism and the study of sexology, which said that homosexuality could be felt by "normal" people and not just an abnormal minority.[251] Open homosexuals were interned in Nazi concentration camps.[252]
Fascism emphasizes both palingenesis (national rebirth or re-creation) and modernism.[253] In particular, fascism's nationalism has been identified as having a palingenetic character.[190] Fascism promotes the regeneration of the nation and the purging of its decadence.[253] Fascism accepts forms of modernism that it deems promote national regeneration, while rejecting forms of modernism that are regarded as antithetical to national regeneration.[254] Fascism aestheticized modern technology and its association with speed, power and violence.[255] Fascism admired advances in the economy in the early 20th century, particularly Fordism and scientific management.[256] Fascist modernism has been recognized as inspired or developed by various figures—such as Filippo Tommaso Marinetti, Ernst Jünger, Gottfried Benn, Louis-Ferdinand Céline, Knut Hamsun, Ezra Pound and Wyndham Lewis.[257]
In Italy, such modernist influence was exemplified by Marinetti, who advocated a palingenetic modernist society that condemned liberal-bourgeois values of tradition and psychology, while promoting a technological-martial religion of national renewal that emphasized militant nationalism.[258] In Germany, it was exemplified by Jünger, who was influenced by his observation of the technological warfare during World War I and claimed that a new social class had been created that he described as the "warrior-worker".[259] Jünger, like Marinetti, emphasized the revolutionary capacities of technology and an "organic construction" between human and machine as a liberating and regenerative force that challenged liberal democracy, conceptions of individual autonomy, bourgeois nihilism and decadence.[259] He conceived of a society based on a totalitarian concept of "total mobilization" of such disciplined warrior-workers.[259]
According to cultural critic Susan Sontag:
Fascist aesthetics ... flow from (and justify) a preoccupation with situations of control, submissive behavior, extravagant effort, and the endurance of pain; they endorse two seemingly opposite states, egomania and servitude. The relations of domination and enslavement take the form of a characteristic pageantry: the massing of groups of people; the turning of people into things; the multiplication or replication of things; and the grouping of people/things around an all-powerful, hypnotic leader-figure or force. The fascist dramaturgy centers on the orgiastic transactions between mighty forces and their puppets, uniformly garbed and shown in ever swelling numbers. Its choreography alternates between ceaseless motion and a congealed, static, "virile" posing. Fascist art glorifies surrender, it exalts mindlessness, it glamorizes death.[260]
Sontag also enumerates some commonalities between fascist art and the official art of communist countries, such as the obeisance of the masses to the hero, and a preference for the monumental and the "grandiose and rigid" choreography of mass bodies. But whereas official communist art "aims to expound and reinforce a utopian morality", the art of fascist countries such as Nazi Germany "displays a utopian aesthetics – that of physical perfection", in a way that is "both prurient and idealizing".[260]
"Fascist aesthetics", according to Sontag, "is based on the containment of vital forces; movements are confined, held tight, held in." And its appeal is not necessarily limited to those who share the fascist political ideology, because "fascism ... stands for an ideal or rather ideals that are persistent today under the other banners: the ideal of life as art, the cult of beauty, the fetishism of courage, the dissolution of alienation in ecstatic feelings of community; the repudiation of the intellect; the family of man (under the parenthood of leaders)."[260]
Fascism has been widely criticized and condemned in modern times since the defeat of the Axis Powers in World War II.
One of the most common and strongest criticisms of fascism is that it is a tyranny.[261] Fascism is deliberately and entirely non-democratic and anti-democratic.[262][263][264]
Some critics of Italian fascism have said that much of the ideology was merely a by-product of unprincipled opportunism by Mussolini and that he changed his political stances merely to bolster his personal ambitions while he disguised them as being purposeful to the public.[265] Richard Washburn Child, the American ambassador to Italy who worked with Mussolini and became his friend and admirer, defended Mussolini's opportunistic behaviour by writing: "Opportunist is a term of reproach used to brand men who fit themselves to conditions for the reasons of self-interest. Mussolini, as I have learned to know him, is an opportunist in the sense that he believed that mankind itself must be fitted to changing conditions rather than to fixed theories, no matter how many hopes and prayers have been expended on theories and programmes".[266] Child quoted Mussolini as saying: "The sanctity of an ism is not in the ism; it has no sanctity beyond its power to do, to work, to succeed in practice. It may have succeeded yesterday and fail to-morrow. Failed yesterday and succeed to-morrow. The machine first of all must run!".[266]
Some have criticized Mussolini's actions during the outbreak of World War I as opportunist for seeming to suddenly abandon Marxist egalitarian internationalism for non-egalitarian nationalism, and note to that effect that upon endorsing Italy's intervention in the war against Germany and Austria-Hungary, Mussolini and the new fascist movement received financial support from foreign sources, such as Ansaldo (an armaments firm) and other companies,[267] as well as the British Security Service MI5.[268] Some, including Mussolini's socialist opponents at the time, have noted that regardless of the financial support he accepted for his pro-interventionist stance, Mussolini was free to write whatever he wished in his newspaper Il Popolo d'Italia without prior sanctioning from his financial backers.[269] Furthermore, the major source of financial support that Mussolini and the fascist movement received in World War I was from France, widely believed to have been French socialists who supported the French government's war against Germany and who sent support to Italian socialists who wanted Italian intervention on France's side.[270]
Mussolini's transformation away from Marxism into what eventually became fascism began prior to World War I, as Mussolini had grown increasingly pessimistic about Marxism and egalitarianism while becoming increasingly supportive of figures who opposed egalitarianism, such as Friedrich Nietzsche.[271] By 1902, Mussolini was studying Georges Sorel, Nietzsche and Vilfredo Pareto.[272] Sorel's emphasis on the need for overthrowing decadent liberal democracy and capitalism by the use of violence, direct action, general strikes and neo-Machiavellian appeals to emotion impressed Mussolini deeply.[273] Mussolini's use of Nietzsche made him a highly unorthodox socialist, due to Nietzsche's promotion of elitism and anti-egalitarian views.[271] Prior to World War I, Mussolini's writings over time indicated that he had abandoned the Marxism and egalitarianism that he had previously supported in favour of Nietzsche's übermensch concept and anti-egalitarianism.[271] In 1908, Mussolini wrote a short essay called "Philosophy of Strength" based on his Nietzschean influence, in which Mussolini openly spoke fondly of the ramifications of an impending war in Europe in challenging both religion and nihilism: "[A] new kind of free spirit will come, strengthened by the war, ... a spirit equipped with a kind of sublime perversity, ... a new free spirit will triumph over God and over Nothing".[106]
Fascism has been criticized for being ideologically dishonest. Major examples of ideological dishonesty have been identified in Italian fascism's changing relationship with German Nazism.[274][275] Fascist Italy's official foreign policy positions commonly utilized rhetorical ideological hyperbole to justify its actions, although during Dino Grandi's tenure as Italy's foreign minister the country engaged in realpolitik free of such fascist hyperbole.[276] Italian fascism's stance towards German Nazism fluctuated from support between the late 1920s and 1934, when it celebrated Hitler's rise to power and Mussolini met Hitler in 1934; to opposition from 1934 to 1936, after the assassination of Italy's allied leader in Austria, Engelbert Dollfuss, by Austrian Nazis; and back to support after 1936, when Germany was the only significant power that did not denounce Italy's invasion and occupation of Ethiopia.
After antagonism exploded between Nazi Germany and Fascist Italy over the assassination of Austrian Chancellor Dollfuss in 1934, Mussolini and Italian fascists denounced and ridiculed Nazism's racial theories, particularly by denouncing its Nordicism, while promoting Mediterraneanism.[275] Mussolini himself responded to Nordicists' claims of Italy being divided into Nordic and Mediterranean racial areas due to Germanic invasions of Northern Italy by claiming that while Germanic tribes such as the Lombards took control of Italy after the fall of Ancient Rome, they arrived in small numbers (about 8,000) and quickly assimilated into Roman culture and spoke the Latin language within fifty years.[277] Italian fascism was influenced by the tradition of Italian nationalists scornfully looking down upon Nordicists' claims and taking pride in comparing the age and sophistication of ancient Roman civilization as well as the classical revival in the Renaissance to that of Nordic societies that Italian nationalists described as "newcomers" to civilization in comparison.[274] At the height of antagonism between the Nazis and Italian fascists over race, Mussolini claimed that the Germans themselves were not a pure race and noted with irony that the Nazi theory of German racial superiority was based on the theories of non-German foreigners, such as Frenchman Arthur de Gobineau.[278] After the tension in German-Italian relations diminished during the late 1930s, Italian fascism sought to harmonize its ideology with German Nazism and combined Nordicist and Mediterranean racial theories, noting that Italians were members of the Aryan Race, composed of a mixed Nordic-Mediterranean subtype.[275]
In 1938, Mussolini declared upon Italy's adoption of antisemitic laws that Italian fascism had always been antisemitic.[275] In fact, Italian fascism did not endorse antisemitism until the late 1930s, when Mussolini feared alienating antisemitic Nazi Germany, whose power and influence were growing in Europe. Prior to that period there had been notable Jewish Italians who were senior Italian fascist officials, including Margherita Sarfatti, who had also been Mussolini's mistress.[275] Also contrary to Mussolini's claim in 1938, only a small number of Italian fascists were staunchly antisemitic (such as Roberto Farinacci and Giuseppe Preziosi), while others, such as Italo Balbo, who came from Ferrara, which had one of Italy's largest Jewish communities, were disgusted by the antisemitic laws and opposed them.[275] Fascism scholar Mark Neocleous notes that while Italian fascism did not have a clear commitment to antisemitism, there were occasional antisemitic statements issued prior to 1938, such as Mussolini declaring in 1919 that the Jewish bankers in London and New York were connected by race to the Russian Bolsheviks and that eight percent of the Russian Bolsheviks were Jews.[279]
en/1945.html.txt
Falcons (/ˈfɒlkən, ˈfɔːl-, ˈfæl-/) are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.[1]
Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Peregrine falcons have been recorded diving at speeds of 200 mph (320 km/h), making them the fastest-moving creatures on Earth; the fastest recorded dive reached 390 km/h (240 mph).[2] Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad-wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.
The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill prey with their beaks, using a "tooth" on the side of their beaks—unlike the hawks, eagles and other birds of prey in the Accipitridae, which use the talons on their feet.
The largest falcon is the gyrfalcon, at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.[3]
Some small falcons with long, narrow wings are called "hobbies"[4] and some which hover while hunting are called "kestrels".[4][5]
As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human.[6]
The genus name Falco is from Late Latin falx, falcis, a sickle, referring to the claws of the bird.[7] The species name of the red-footed falcon, vespertinus, is Latin for "of evening", from vesper, "evening".[8] In Middle English and Old French, the title faucon refers generically to several captive raptor species.[9]
The traditional term for a male falcon is tercel (British spelling) or tiercel (American spelling), from the Latin tertius (third), because of the belief that only one in three eggs hatched a male bird. Some sources give the etymology as deriving from the fact that a male falcon is about one-third smaller than the female[10][11][12] (Old French: tiercelet). A falcon chick, especially one reared for falconry, still in its downy stage, is known as an eyas[13][14] (sometimes spelled eyass). The word arose by mistaken division of Old French un niais, from a presumed Latin nidiscus ("nestling"), from nidus ("nest"). The technique of hunting with trained captive birds of prey is known as falconry.
Compared to other birds of prey, the fossil record of the falcons is not well distributed in time. The oldest fossils tentatively assigned to this genus are from the Late Miocene, less than 10 million years ago.[citation needed] This coincides with a period in which many modern genera of birds became recognizable in the fossil record. The falcon lineage may, however, be somewhat older than this,[citation needed] and given the distribution of fossil and living Falco taxa, is probably of North American, African, or possibly Middle Eastern or European origin. Falcons are not closely related to other birds of prey, and their nearest relatives are parrots and songbirds.[15]
Falcons are roughly divisible into three or four groups. The first contains the kestrels (probably excepting the American kestrel):[9] usually small and stocky falcons of mainly brown upperside color and sometimes sexually dimorphic; three African species that are generally gray in color stand apart from the typical members of this group. Kestrels feed chiefly on terrestrial vertebrates and invertebrates of appropriate size, such as rodents, reptiles, or insects.
The second group contains slightly larger (on average) species, the hobbies and relatives. These birds are characterized by considerable amounts of dark slate-gray in their plumage; their malar areas are nearly always black. They feed mainly on smaller birds.
Third are the peregrine falcon and its relatives: variably sized, powerful birds that also have a black malar area (except some very light color morphs), and often a black cap as well. Otherwise, they are somewhat intermediate between the other groups, being chiefly medium gray with some lighter or brownish colors on their upper sides. They are, on average, more delicately patterned than the hobbies and, if the hierofalcons are excluded (see below), this group typically contains species with horizontal barring on their undersides. As opposed to the other groups, where tail color varies much in general but little according to evolutionary relatedness[note 1] (for example, the fox and greater kestrels can be told apart at first glance by their tail colors, but not by much else, and they might be very close relatives, probably much closer to each other than the lesser and common kestrels), the tails of the large falcons are quite uniformly dark gray with inconspicuous black banding and small, white tips, though this is probably plesiomorphic. These large Falco species feed on mid-sized birds and terrestrial vertebrates.
Very similar to these, and sometimes included therein, are the four or so species of hierofalcons (literally, "hawk-falcons"). They represent taxa with, usually, more phaeomelanins, which impart reddish or brown colors, and generally more strongly patterned plumage reminiscent of hawks. Their undersides have a lengthwise pattern of blotches, lines, or arrowhead marks.
While these three or four groups, loosely circumscribed, are an informal arrangement, they probably contain several distinct clades in their entirety.
A study of mtDNA cytochrome b sequence data of some kestrels[9] identified a clade containing the common kestrel and related "malar-striped" species, to the exclusion of such taxa as the greater kestrel (which lacks a malar stripe), the lesser kestrel (which is very similar to the common kestrel, but also has no malar stripe) and the American kestrel, which has a malar stripe, but whose color pattern—apart from the brownish back—and black feathers behind the ear (which never occur in the true kestrels) are more reminiscent of some hobbies. The malar-striped kestrels apparently split from their relatives in the Gelasian, roughly 2.0–2.5 million years ago (Mya), and are seemingly of tropical East African origin. The entire "true kestrel" group—excluding the American species—is probably a distinct and quite young clade, as also suggested by their numerous apomorphies.
Other studies[16][17][18][19][20] have confirmed that the hierofalcons are a monophyletic group–and that hybridization is quite frequent at least in the larger falcon species. Initial studies of mtDNA cytochrome b sequence data suggested that the hierofalcons are basal among living falcons.[16][17] The discovery of a NUMT proved this earlier theory erroneous.[18] In reality, the hierofalcons are a rather young group, originating at the same time as the start of the main kestrel radiation, about 2 Mya. Very little fossil history exists for this lineage. However, the present diversity of very recent origin suggests that this lineage may have nearly gone extinct in the recent past.[20][21]
The phylogeny and delimitations of the peregrine and hobbies groups are more problematic. Molecular studies have only been conducted on a few species, and the morphologically ambiguous taxa have often been little researched. The morphology of the syrinx, which contributes well to resolving the overall phylogeny of the Falconidae,[22][23] is not very informative in the present genus. Nonetheless, a core group containing the peregrine and Barbary falcons, which, in turn, group with the hierofalcons and the more distant prairie falcon (which was sometimes placed with the hierofalcons, though it is entirely distinct biogeographically), as well as at least most of the "typical" hobbies, are confirmed to be monophyletic as suspected.[16][17]
Given that the American Falco species of today belong to the peregrine group, or are apparently more basal species, the initially most successful evolutionary radiation seemingly was a Holarctic one that originated possibly around central Eurasia or in (northern) Africa. One or several lineages were present in North America by the Early Pliocene at the latest.
The origin of today's major Falco groups—the "typical" hobbies and kestrels, for example, or the peregrine-hierofalcon complex, or the aplomado falcon lineage—can be quite confidently placed from the Miocene-Pliocene boundary through the Zanclean and Piacenzian and just into the Gelasian, that is from 2.4–8.0 Mya, when the malar-striped kestrels diversified. Some groups of falcons, such as the hierofalcon complex and the peregrine-Barbary superspecies, have only evolved in more recent times; the species of the former seem to be 120,000 years old or so.[20]
The sequence follows the taxonomic order of White et al. (1996),[24] except for adjustments in the kestrel sequence.
Several more paleosubspecies of extant species have also been described; see species accounts for these.
"Sushkinia" pliocaena from the Early Pliocene of Pavlodar (Kazakhstan) appears to be a falcon of some sort. It might belong in this genus or a closely related one.[25] In any case, the genus name Sushkinia is invalid for this animal because it had already been allocated to a prehistoric dragonfly relative. In 2015 the bird genus was renamed Psushkinia.[34]
The supposed "Falco" pisanus was actually a pigeon of the genus Columba, possibly the same as Columba omnisanctorum, which, in that case, would adopt the older species name of the "falcon".[26] The Eocene fossil "Falco" falconellus (or "F." falconella) from Wyoming is a bird of uncertain affiliations, maybe a falconid, maybe not; it certainly does not belong in this genus. "Falco" readei is now considered a paleosubspecies of the yellow-headed caracara (Milvago chimachima).
|
en/1947.html.txt
ADDED
@@ -0,0 +1,105 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
Angelo Fausto Coppi (Italian pronunciation: [ˈfausto ˈkɔppi]; 15 September 1919 – 2 January 1960) was an Italian cyclist, the dominant international cyclist of the years after the Second World War. His successes earned him the title Il Campionissimo ("Champion of Champions"). He was an all-round racing cyclist: he excelled in both climbing and time trialing, and was also a great sprinter. He won the Giro d'Italia five times (1940, 1947, 1949, 1952, 1953), the Tour de France twice (1949 and 1952), and the World Championship in 1953. Other notable results include winning the Giro di Lombardia five times, the Milan–San Remo three times, as well as wins at Paris–Roubaix and La Flèche Wallonne and setting the hour record (45.798 km) in 1942.
Coppi was born in Castellania (now known as Castellania Coppi), near Alessandria, one of five children born to Domenico Coppi and Angiolina Boveri,[1] who married on 29 July 1914. Fausto was the fourth child, born at 5:00 pm on 15 September 1919. His mother wanted to call him Angelo, but his father preferred Fausto. He was named Angelo Fausto but was known most of his life as Fausto.[2]
Coppi had poor health as a child and showed little interest in school. In 1927 he wrote "I ought to be at school, not riding my bicycle" after skipping lessons to spend the day riding a family bike which he had found in a cellar, rusty and without brake blocks.[3] He left school at age 13 to work for Domenico Merlani, a butcher in Novi Ligure more widely known as Signor Ettore.
Cycling to and from the shop and meeting cyclists who came there interested him in racing. The money to buy a bike came from his uncle, also called Fausto Coppi, and his father. Coppi said:
"... [My uncle] was a merchant navy officer on a petrol tanker, and a real cycling fan. He was touched when he heard of my passion for the bike and decided that I deserved a real tool for the job on which I had set my heart, instead of the rusty old crock I was pushing around. I just cried with joy when my kind uncle gave me the 600 lire that were to make my dream come true. I knew from advertisements I had seen in the local papers that for 600 lire I could get a frame built to my measurements in Genoa. Out of my slender savings I took enough for the train fare to Genoa and back, gave my measurements, and handed over the 600 lire. I would have to buy the fittings and tyres from my errand-boy salary. Oh how my legs used to ache at night through climbing all those stairs during the day! But I'm glad I did, because it surely made my legs so strong".[4]
"Come back within a week; your frame will be ready" said the owner of the cycle shop".[4] "But it wasn't ready, and not the next week, and not the next. For eight weeks I threw precious money away taking the train to Genoa and still no made-to-measure bike for me. The fellow just couldn't be bothered making a frame for a skinny country kid who didn't look as if he could pedal a fairy-cycle, let alone a racing bike. I used to cry bitterly as I went back home without the frame. On the ninth journey I took a frame home. But it wasn't a 'made to measure'. The chap just took one down off the rack. I was furious inside, but too shy to do anything about it".[4]
Coppi rode his first race at age 15, among other boys not attached to cycling clubs, and won first prize: 20 lire and a salami sandwich. Coppi took a racing licence at the start of 1938 and won his first race, at Castelleto d'Orba, near the butcher's shop. He won alone, winning an alarm clock. A regular caller at the butcher's shop in Novi Ligure in 1938 was Giuseppe Cavanna, a former boxer who had become a masseur, a job he could do after losing his sight. Cavanna was known to friends as Biagio; Coppi met him that year, recommended by another of Cavanna's riders. Cavanna suggested in 1939 that Coppi should become an independent, a class of semi-professionals who could ride against both amateurs and professionals. He sent Coppi to the Tour of Tuscany that April with the advice: "Follow Gino Bartali!" Coppi was forced to stop with a broken wheel. But at Varzi on 7 May 1939 he won one of the races counting towards the season-long national independent championship. He finished seven minutes clear of the field and won his next race by six minutes.
His first major success came in 1940, winning the Giro d'Italia at the age of 20. In 1942 he set a world hour record (45.798 km at the Velodromo Vigorelli in Milan) which stood for 14 years until it was broken by Jacques Anquetil in 1956. His career was then interrupted by the Second World War. In 1946 he resumed racing and achieved remarkable successes which would be exceeded only by Eddy Merckx. The veteran writer Pierre Chany said that from 1946 to 1954, once Coppi had broken away from the rest, he was never recaught.[5]
|
21 |
+
|
22 |
+
Twice, in 1949 and 1952, Coppi won the Giro d'Italia and the Tour de France in the same year; he was the first rider to achieve the double. He won the Giro five times, a record shared with Alfredo Binda and Eddy Merckx. During the 1949 Giro he dropped Gino Bartali between Cuneo and Pinerolo and finished 11 minutes ahead of him. Coppi won the 1949 Tour de France by almost half an hour over everyone except Bartali. From the start of the mountains in the Pyrenees to their end in the Alps, Coppi took back the 55 minutes by which Jacques Marinelli had led him.[6]
Coppi won the Giro di Lombardia a record five times (1946, 1947, 1948, 1949 and 1954) and Milan–San Remo three times (1946, 1948 and 1949). In the 1946 Milan–San Remo he attacked with nine others five kilometres into a race of 292 km, dropped the rest on the Turchino climb and won by 14 minutes.[7][8] He also won Paris–Roubaix and La Flèche Wallonne (both in 1950), and was world road champion in 1953.
In 1952 Coppi won at Alpe d'Huez, included in the Tour for the first time that year. He attacked six kilometres from the summit to rid himself of the French rider Jean Robic. Coppi said: "I knew he was no longer there when I couldn't hear his breathing any more or the sound of his tyres on the road behind me".[9][10] He rode like "a Martian on a bicycle", said Raphaël Géminiani. "He asked my advice about the gears to use. I was in the French team and he in the Italian, but he was a friend and normally my captain in our everyday team, so I could hardly refuse him. I saw a phenomenal rider that day".[11] Coppi won the Tour by 28m 27s, and the organiser, Jacques Goddet, had to double the prizes for lower placings to keep other riders interested.[12] It was his last Tour; he had ridden three and won two. To conserve energy, he would have soigneurs carry him around his hotel during Grand Tours.[13]
Bill McGann wrote:
Comparing riders from different eras is a risky business subject to the prejudices of the judge. But if Coppi isn't the greatest rider of all time, then he is second only to Eddy Merckx. One can't judge his accomplishments by his list of wins because World War II interrupted his career just as World War I interrupted that of Philippe Thys. Coppi won it all: the world hour record, the world championships, the grands tours, classics as well as time trials. The great French cycling journalist, Pierre Chany says that between 1946 and 1954, once Coppi had broken away from the peloton, the peloton never saw him again. Can this be said of any other racer? Informed observers who saw both ride agree that Coppi was the more elegant rider who won by dint of his physical gifts as opposed to Merckx who drove himself and hammered his competition relentlessly by being the very embodiment of pure will.[14]
Coppi broke the world hour record on the track in Milan on 7 November 1942.[15] He rode a 93.6-inch gear, giving a development of about 7.47 metres per pedal revolution, and pedalled at an average cadence of 103.3 rpm.[16] The bike is on display in the chapel of the Madonna del Ghisallo near Como, Italy.[17] Coppi beat Maurice Archambaud's 45.767 km, set five years earlier on the same track.[18] The record stood until it was beaten by Jacques Anquetil in 1956.[8]
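Those figures hang together to within rounding: the distance covered in an hour is simply cadence × development × 60 minutes. As a rough check, using the quoted (rounded) gear and cadence,

\[ d \approx 103.3\ \text{rev/min} \times 7.47\ \text{m/rev} \times 60\ \text{min} \approx 46.3\ \text{km}, \]

about 1% above the official 45.798 km; conversely, the official distance implies an average cadence of \(45798 / (7.47 \times 60) \approx 102\) rpm, so the small gap is accounted for by rounding in the quoted figures.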
In 1955 Coppi and his lover Giulia Occhini were put on trial for adultery, then illegal in Italy, and received suspended sentences. The scandal rocked conservative, ultra-Catholic Italy and Coppi was disgraced.[19] His career declined after the scandal. He had already been hit in 1951 by the death of his younger brother, Serse Coppi, who crashed in a sprint in the Giro del Piemonte and died of a cerebral haemorrhage.[n 1] Coppi could never match his old successes. Pierre Chany said he was first to be dropped each day in the Vuelta a España in 1959. Criterium organisers frequently cut their races to 45 km to be certain that Coppi could finish, he said. "Physically, he wouldn't have been able to ride even 10 km further. He charged himself [took drugs] before every race". Coppi, said Chany, was "a magnificent and grotesque washout of a man, ironical towards himself; nothing except the warmth of simple friendship could penetrate his melancholia. But I'm talking of the end of his career. The last year! In 1959! I'm not talking about the great era. In 1959, he wasn't a racing cyclist any more. He was just clinging on [il tentait de sauver les meubles]."[20]
Jacques Goddet wrote in an appreciation of Coppi's career in L'Équipe: "We would like to have cried out to him 'Stop!' And as nobody dared to, destiny took care of it."
Raphaël Géminiani said of Coppi's domination:
When Fausto won and you wanted to check the time gap to the man in second place, you didn't need a Swiss stopwatch. The bell of the church clock tower would do the job just as well. Paris–Roubaix? Milan–San Remo? Lombardy? We're talking 10 minutes to a quarter of an hour. That's how Fausto Coppi was.[21]
Coppi's racing days are generally referred to as the beginning of the golden years of cycle racing, and a major factor was the competition between Coppi and Gino Bartali. Italian tifosi (fans) split into coppiani and bartaliani, and the rivalry divided Italy.[23] Bartali, conservative and religious, was venerated in the rural, agrarian south, while Coppi, more worldly and secular, and innovative in diet and training, was the hero of the industrial north. The writer Curzio Malaparte said:
"Bartali belongs to those who believe in tradition ... he is a metaphysical man protected by the saints. Coppi has nobody in heaven to take care of him. His manager, his masseur, have no wings. He is alone, alone on a bicycle ... Bartali prays while he is pedalling: the rational Cartesian and sceptical Coppi is filled with doubts, believes only in his body, his motor".
Their lives came together on 7 January 1940 when Eberardo Pavesi, head of the Legnano team, took on Coppi to ride for Bartali. Their rivalry started when Coppi, the helping hand, won the Giro while Bartali, the star, marshalled the team to chase him. By the 1948 world championship at Valkenburg, Limburg, in the Netherlands, both climbed off rather than help the other. The Italian cycling association said: "They have forgotten to honour the Italian prestige they represent. Thinking only of their personal rivalry, they abandoned the race, to the reprobation of all sportsmen". They were suspended for three months.[24]
Relations partly thawed when the pair shared a bottle on the Col d'Izoard in the 1952 Tour,[n 2] but the two later fell out over who had offered it. "I did", Bartali insisted. "He never gave me anything".[25] Their rivalry was the subject of intense coverage and resulted in epic races.
Coppi joined the army as soldier 7,375 of the 38th Infantry when Italy entered World War II. Officers favoured him at first to keep him riding his bike, but in March 1943 they sent him to North Africa. There he was taken prisoner by the British between Mateur and Medjez-el-Bab on 13 April 1943. He was kept in a prisoner of war camp, where he shared plates with the father of Claudio Chiappucci, who rode the Tour in the 1990s. He was given odd jobs to do. The British cyclist Len Levesley said he was astonished to find Coppi giving him a haircut.[26]
Levesley, who was on a stretcher with polio, said:
"I should think it took me all of a full second to realise who it was. He looked fine, he looked slim, and having been in the desert, he looked tanned. I'd only seen him in cycling magazines but I knew instantly who he was. So he cut away at my hair and I tried to have a conversation with him, but he didn't speak English and I don't speak Italian. But we managed one or two words and I got over to him that I did some club racing. And I gave him a bar of chocolate that I had with me and he was grateful for that and that was the end of it".[n 3]
The British moved Coppi to an RAF base at Caserta in Italy in 1945, where he worked for an officer who had never heard of him. Coppi was allowed liberal terms, the war being as good as over. On release he cycled and hitched lifts home. On Sunday 8 July 1945 he won the Circuit of the Aces in Milan after four years away from racing. The following season he won Milan–San Remo. (These years are also the subject of Viva Coppi!, a historical novel by Filippo Timo.)
Coppi's lover, the "Woman in White", was Giulia Occhini, described by the French broadcaster Jean-Paul Ollivier as "strikingly beautiful, with thick chestnut hair divided into enormous plaits". She was married to an army captain, Enrico Locatelli; Coppi was married to Bruna Ciampolini. Locatelli was a cycling fan. His wife was not, but she joined him on 8 August 1948 to see the Tre Valli Varesine race. Their car was caught beside Coppi's in a traffic jam. That evening Occhini went to Coppi's hotel and asked for a photograph. He wrote "With friendship to ...", asked her name and then added it. From then on the two spent more and more time together.
Italy was a strait-laced country in which adultery was frowned upon. In 1954, Luigi Boccaccini of La Stampa saw Occhini waiting for Coppi at the end of a race in St-Moritz. She and Coppi hugged, and La Stampa printed a picture in which she was described as la dama in bianco di Fausto Coppi, the "woman in white of Fausto Coppi".
It took little time to find out who she was. She and Coppi moved in together, but so great was the scandal that the landlord of their apartment in Tortona demanded they move out. Reporters pursued them to a hotel in Castelletto d'Orba, and again they moved, buying the Villa Carla, a house near Novi Ligure, where police raided them at night to see if they were sharing a bed. Pope Pius XII asked Coppi to return to his wife and refused to bless the Giro d'Italia when Coppi rode it. The Pope also worked through the Italian cycling federation: its president, Bartolo Paschetta, wrote on 8 July 1954: "Dear Fausto, yesterday evening St. Peter made it known to me that the news [of adultery] had caused him great pain".
Bruna Ciampolini refused a divorce; divorce was still illegal in Italy, and ending a marriage was considered shameful. Coppi was shunned, and spectators spat at him. He and Giulia Occhini had a son, Faustino.[27]
In December 1959, Maurice Yaméogo, president of what was then Upper Volta (now Burkina Faso), invited Coppi, Raphaël Géminiani, Jacques Anquetil, Louison Bobet, Roger Hassenforder and Henry Anglade to ride against local riders and then go hunting. Géminiani remembered:
"I slept in the same room as Coppi in a house infested by mosquitos. I'd got used to them but Coppi hadn't. Well, when I say we 'slept', that's an overstatement. It was like the safari had been brought forward several hours, except that for the moment we were hunting mosquitos. Coppi was swiping at them with a towel. Right then, of course, I had no clue of what the tragic consequences of that night would be. Ten times, twenty times, I told Fausto 'Do what I'm doing and get your head under the sheets; they can't bite you there'".[28]
Both caught malaria and fell ill when they got home. Géminiani said:
"My temperature got to 41.6 °C ... I was delirious and I couldn't stop talking. I imagined or maybe saw people all round but I didn't recognise anyone. The doctor treated me for hepatitis, then for yellow fever, finally for typhoid".[28]
Géminiani was diagnosed with Plasmodium falciparum, one of the more lethal strains of malaria. He recovered, but Coppi died, his doctors convinced he had a bronchial complaint. La Gazzetta dello Sport, the Italian daily sports paper, published a Coppi supplement in which the editor wrote that he prayed that God would soon send another Coppi.[29] Coppi was an atheist.[30]
In January 2002 a man identified only as Giovanni, who lived in Burkina Faso until 1964, said Coppi died not of malaria but of an overdose of cocaine. The newspaper Corriere dello Sport said Giovanni had his information from Angelo Bonazzi. Giovanni said: "It is Angelo who told me that Coppi had been killed. I was a supporter of Coppi, and you can imagine my state when he told me that Coppi had been poisoned in Fada Gourma, at the time of a reception organised by the head of the village. Angelo also told me that [Raphael] Géminiani was also present... Fausto's plate fell, they replaced it, and then..."[31]
The story has also been attributed to a 75-year-old Benedictine monk called Brother Adrien. He told Mino Caudullo of the Italian National Olympic Committee: "Coppi was killed with a potion mixed with grass. Here in Burkina Faso this awful phenomenon happens. People are still being killed like that". Coppi's doctor, Ettore Allegri, dismissed the story as "absolute drivel".[32][33]
A court in Tortona opened an investigation and asked toxicologists about exhuming Coppi's body to look for poison. A year later, without exhumation, the case was dismissed.[34]
The Giro commemorates Coppi in its mountain stages: a bonus, the Cima Coppi, is awarded to the first rider to reach the highest summit of each year's race. In 1999, Coppi placed second in balloting for the greatest Italian athlete of the 20th century.
Coppi's life story was depicted in the 1995 TV movie, Il Grande Fausto, written and directed by Alberto Sironi. Coppi was played by Sergio Castellitto and Giulia la 'Dama Bianca' (The Woman in White) was played by Ornella Muti.[35]
A commonly repeated trope is that when Coppi was asked how to become a champion, his reply was: "Just ride. Just ride. Just ride."[36] An Italian restaurant in Belfast, decorated with road-bike parts and pictures, is named Coppi. Asteroid 214820 Faustocoppi was named in his memory in December 2017.
The village of his birth, previously known as Castellania, was renamed Castellania Coppi by the Piedmont regional council in 2019, in preparation for the centenary of his birth.[37]
Coppi was often said to have introduced "modern" methods to cycling, particularly in diet. Gino Bartali maintained that some of those methods included taking drugs, which were not then against the rules.
Bartali and Coppi appeared on television revues and sang together, Bartali singing about "The drugs you used to take" as he looked at Coppi. Coppi also spoke of the subject in a television interview.
Coppi "set the pace" in drug-taking, said his contemporary Wim van Est.[41] Rik Van Steenbergen said Coppi was "the first I knew who took drugs".[42] That did not stop Coppi protesting against drug use by others. He told René de Latour:
"What is the good of having world champions if those boys are worn out before turning professional? Maybe the officials are proud to come back with a rainbow jersey.[n 4] but if this is done at the expense of the boys' futures, then I say it's wrong. Do you think it normal that our best amateurs become nothing but 'gregari'?"[n 5][43]
|
100 |
+
|
101 |
+
en/1948.html.txt
ADDED
@@ -0,0 +1,174 @@
A wheelchair is a chair with wheels, used when walking is difficult or impossible due to illness, injury, problems related to old age, or disability. These can include spinal cord injuries (paraplegia, hemiplegia, and quadriplegia), broken legs, cerebral palsy, brain injury, osteogenesis imperfecta (brittle bone disease), motor neurone disease (MND), multiple sclerosis (MS), muscular dystrophy (MD), spina bifida, and more. Wheelchairs come in a wide variety of formats to meet the specific needs of their users. They may include specialized seating adaptations and individualized controls, and may be specific to particular activities, as seen with sports wheelchairs and beach wheelchairs. The most widely recognized distinction is between powered wheelchairs, where propulsion is provided by batteries and electric motors, and manually propelled wheelchairs, where the propulsive force is provided either by the wheelchair user/occupant pushing the wheelchair by hand ("self-propelled"), by an attendant pushing from the rear using handle(s), or by an attendant pushing from the side using a handle attachment.
The earliest records of wheeled furniture are an inscription found on a stone slate in China and a child's bed depicted in a frieze on a Greek vase, both dating between the 6th and 5th century BCE.[2][3][4][5] The first records of wheeled seats being used for transporting disabled persons date to three centuries later in China; the Chinese used early wheelbarrows to move people as well as heavy objects. A distinction between the two functions was not made for another several hundred years, until around 525 CE, when images of wheeled chairs made specifically to carry people begin to occur in Chinese art.[5]
Although Europeans eventually developed a similar design, this method of transportation did not exist in Europe until 1595,[6] when an unknown inventor from Spain built one for King Philip II. Although it was an elaborate chair with both armrests and leg rests, the design still had shortcomings: it lacked an efficient propulsion mechanism and so required assistance to move. This makes the design more of a modern-day highchair or portable throne for the wealthy than a modern-day wheelchair for the disabled.[2]
In 1655, Stephan Farffler, a 22-year-old paraplegic watchmaker, built the world's first self-propelling chair on a three-wheel chassis using a system of cranks and cogwheels.[6][3] However, the device resembled a hand-cranked cycle more than a wheelchair, since the design included hand cranks mounted at the front wheel.[2]
The invalid carriage or Bath chair brought the technology into more common use from around 1760.[7]
In 1887, wheelchairs ("rolling chairs") were introduced to Atlantic City so invalid tourists could rent them to enjoy the Boardwalk. Soon, many healthy tourists also rented the decorated "rolling chairs" and servants to push them as a show of decadence and treatment they could never experience at home.[8]
In 1933 Harry C. Jennings, Sr. and his disabled friend Herbert Everest, both mechanical engineers, invented the first lightweight, steel, folding, portable wheelchair.[9] Everest had previously broken his back in a mining accident. Everest and Jennings saw the business potential of the invention and went on to become the first mass-market manufacturers of wheelchairs. Their "X-brace" design is still in common use, albeit with updated materials and other improvements. The X-brace idea came to Harry from the men's folding camp chairs and stools, rotated 90 degrees, that Harry and Herbert used outdoors and at the mines.
Wooden wheelchair dating to the early part of the 20th century
19th-century wheelchair
There are a wide variety of types of wheelchair, differing by propulsion method, mechanisms of control, and technology used. Some wheelchairs are designed for general everyday use, others for single activities, or to address specific access needs. Innovation within the wheelchair industry is relatively common, but many innovations ultimately fall by the wayside, either from over-specialization, or from failing to come to market at an accessible price-point. The iBOT is perhaps the best known example of this in recent years.
A self-propelled manual wheelchair incorporates a frame, seat, one or two footplates (footrests) and four wheels: usually two caster wheels at the front and two large wheels at the back. There will generally also be a separate seat cushion. The larger rear wheels usually have push-rims of slightly smaller diameter projecting just beyond the tyre; these allow the user to manoeuvre the chair by pushing on them without having to grasp the tyres. Manual wheelchairs generally have brakes that bear on the tyres of the rear wheels; however, these serve solely as a parking brake, and in-motion braking is provided by the user's palms bearing directly on the push-rims. As this causes friction and heat build-up, particularly on long downslopes, many wheelchair users choose to wear padded wheelchair gloves. Manual wheelchairs often have two push handles at the upper rear of the frame to allow for propulsion by a second person; however, many active wheelchair users remove these to prevent unwanted pushing from people who believe they are being helpful.
Everyday manual wheelchairs come in two major varieties, folding or rigid. Folding chairs are generally low-end designs, whose predominant advantage is being able to fold, generally by bringing the two sides together. However this is largely an advantage for part-time users who may need to store the wheelchair more often than use it. Rigid wheelchairs, which are increasingly preferred by full-time and active users, have permanently welded joints and many fewer moving parts. This reduces the energy required to push the chair by eliminating many points where the chair would flex and absorb energy as it moves. Welded rather than folding joints also reduce the overall weight of the chair. Rigid chairs typically feature instant-release rear wheels and backrests that fold down flat, allowing the user to dismantle the chair quickly for storage in a car. A few wheelchairs attempt to combine the features of both designs by providing a fold-to-rigid mechanism in which the joints are mechanically locked when the wheelchair is in use.
Many rigid models are now made with ultralight materials such as aircraft-grade aluminium and titanium, and wheelchairs of composite materials such as carbon-fibre have started to appear. Ultra lightweight rigid wheelchairs are commonly known as 'active user chairs' as they are ideally suited to independent use. Another innovation in rigid chair design is the installation of shock absorbers, such as Frog Legs, which cushion the bumps over which the chair rolls. These shock absorbers may be added to the front wheels, to the rear wheels, or both. Rigid chairs also have the option for their rear wheels to have a camber, or tilt, which angles the tops of the wheels in toward the chair. This allows for more mechanically efficient propulsion by the user and also makes it easier to hold a straight line while moving across a slope. Sport wheelchairs often have large camber angles to improve stability.
Rigid-framed chairs are generally made to measure, to suit both the specific size of the user and their needs and preferences around areas such as the "tippyness" of the chair, that is, its stability around the rear axle. Experienced users with sufficient upper-body strength can generally balance the chair on its rear wheels, a "wheelie", and the "tippyness" of the chair controls the ease with which this can be initiated. The wheelie allows an independent wheelchair user to climb and descend curbs and move more easily over small obstacles and irregular ground such as cobbles.
The rear wheels of self-propelled wheelchairs typically range from 20–24 inches (51–61 cm) in diameter, and commonly resemble bicycle wheels. Wheels are rubber-tired and may be solid, pneumatic or gel-filled. The wheels of folding chairs may be permanently attached, but those for rigid chairs are commonly fitted with quick-release axles activated by depressing a button at the centre of the wheel.
All major varieties of wheelchair can be highly customized for the user's needs. Such customization may encompass the seat dimensions, height, seat angle, footplates, leg rests, front caster outriggers, adjustable backrests and controls. Various optional accessories are available, such as anti-tip bars or wheels, safety belts, adjustable backrests, tilt and/or recline features, extra support for limbs or head and neck, holders for crutches, walkers or oxygen tanks, drink holders, and mud and wheel-guards as clothing protectors.
Light weight and high cost are related in the manual wheelchair market. At the low-cost end, heavy, folding steel chairs with sling seats and little adaptability dominate. Users may be temporarily disabled, or using such a chair as a loaner, or simply unable to afford better. These chairs are common as "loaners" at large facilities such as airports, amusement parks and shopping centers. A slightly higher price band sees the same folding design produced in aluminium. Price typically then jumps from low to mid hundreds of pounds/dollars/euros to a four figure price range, with individually custom manufactured lightweight chairs with more options. The high end of the market contains ultra-light models, extensive seating options and accessories, all-terrain features, and so forth. The most expensive manual chairs may rival the cost of a small car.
An attendant-propelled wheelchair is generally similar to a self-propelled manual wheelchair, but with small diameter wheels at both front and rear. The chair is maneuvered and controlled by a person standing at the rear and pushing on handles incorporated into the frame. Braking is supplied directly by the attendant who will usually also be provided with a foot- or hand-operated parking brake.
These chairs are common in institutional settings and as loaner-chairs in large public venues. They are usually constructed from steel as light weight is less of a concern when the user is not required to self-propel.
Specially designed transfer chairs are now required features at airports in much of the developed world in order to allow access down narrow airliner aisles and facilitate the transfer of wheelchair-using passengers to and from their seats on the aircraft.
An electric-powered wheelchair, commonly called a "powerchair", is a wheelchair which additionally incorporates batteries and electric motors into the frame and is controlled by either the user or an attendant, most commonly via a small joystick mounted on the armrest or on the upper rear of the frame. Alternatives to the traditional manual joystick exist, including head switches, chin-operated joysticks, sip-and-puff controllers and other specialist controls, which may allow independent operation of the wheelchair for a wider population of users with varying motor impairments.[10] Ranges of over 10 miles (15 km) are commonly available from standard batteries.
Powerchairs are commonly divided by their access capabilities. An indoor-chair may only reliably be able to cross completely flat surfaces, limiting them to household use. An indoor-outdoor chair is less limited, but may have restricted range or ability to deal with slopes or uneven surfaces. An outdoor chair is more capable, but will still have a very restricted ability to deal with rough terrain. A very few specialist designs offer a true cross-country capability.
Powerchairs have access to the full range of wheelchair options, including ones which are difficult to provide in an unpowered manual chair, but have the disadvantage of significant extra weight. Where an ultra-lightweight manual chair may weigh under 10 kg, the largest outdoor power-chairs may weigh 200 kg or more.
Smaller power chairs often have four wheels, with front or rear wheel drive, but large outdoor designs commonly have six wheels, with small wheels at front and rear and somewhat larger powered wheels in the centre.
A power-assisted wheelchair is a recent development that uses the frame and seating of a typical rigid manual chair while replacing the standard rear wheels with wheels of similar size which incorporate batteries and battery-powered motors in the hubs. A floating rim design senses the pressure applied by the user's push and activates the motors proportionately to provide a power assist. This results in the convenience and small size of a manual chair while providing motorised assistance for rough or uneven terrain and steep slopes that would otherwise be difficult or impossible to navigate, especially for those with limited upper-body function. As the wheels carry a weight penalty, it is often possible to exchange them for standard wheels to match the capabilities of the wheelchair to the current activity.
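The control idea behind such hubs is essentially proportional assistance. A minimal sketch in Python, assuming a torque reading from the floating rim and using an illustrative gain and cap (the function name and values are hypothetical, not any manufacturer's algorithm):

def assist_torque(rim_torque_nm, gain=1.5, cap_nm=10.0):
    # Motor torque proportional to what the user applies to the floating
    # push-rim, clamped to the motor's limit (illustrative values).
    return max(-cap_nm, min(cap_nm, gain * rim_torque_nm))

print(assist_torque(3.0))   # a 3 Nm push earns 4.5 Nm of assistance
print(assist_torque(-2.0))  # braking pushes are assisted proportionally too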
Mobility scooters share some features with powerchairs, but primarily address a different market segment, people with a limited ability to walk, but who might not otherwise consider themselves disabled. Smaller mobility scooters are typically three wheeled, with a base on which is mounted a basic seat at the rear, with a control tiller at the front. Larger scooters are frequently four-wheeled, with a much more substantial seat.
Opinions are often polarized as to whether mobility scooters should be considered wheelchairs or not, and negative stereotyping of scooter users is worse than for manual or powerchair users. Some commercial organisations draw a distinction between powerchairs and scooters when making access provisions due to a lack of clarity in the law as to whether scooters fall under the same equality legislation as wheelchairs.
One-arm or single-arm drive enables a user to self-propel a manual wheelchair using only a single arm. The large wheel on the same side as the arm to be used is fitted with two concentric handrims, one of smaller diameter than the other. On most models the outer, smaller rim is connected to the wheel on the opposite side by an inner concentric axle. When both handrims are grasped together, the chair may be propelled forward or backward in a straight line. When either handrim is moved independently, only a single wheel is driven and the chair will turn left or right in response to the handrim used. Some wheelchairs, designed for use by hemiplegics, provide a similar function by linking both wheels rigidly together and using one of the footplates to control steering via a linkage to the front caster.
Reclining or tilt-in-space wheelchairs have seating surfaces which can be tilted to various angles. The original concept was developed by an orthotist, Hugh Barclay, who worked with disabled children and observed that postural deformities such as scoliosis could be supported or partially corrected by allowing the wheelchair user to relax in a tilted position. The feature is also of value to users who are unable to sit upright for extended periods because of pain or other reasons.
In the case of reclining wheelchairs, the seat-back tilts back, and the leg rests can be raised, while the seat base remains in the same position, somewhat similar to a common recliner chair. Some reclining wheelchairs lean back far enough that the user can lie down completely flat. Reclining wheelchairs are preferred in some cases for some medical purposes, such as reducing the risk of pressure sores, providing passive movement of hip and knee joints, and making it easier to perform some nursing procedures, such as intermittent catheterization to empty the bladder and transfers to beds, and also for personal reasons, such as people who like using an attached tray.[11] The use of reclining wheelchairs is particularly common among people with spinal cord injuries such as quadriplegia.[11]
In the case of tilting wheelchairs, the seat-back, seat base, and leg rests tilt back as one unit, somewhat similar to the way a person might tip a four-legged chair backwards to balance it on the back legs. While fully reclining spreads the person's weight over the entire back side of the body, tilting wheelchairs transfer it from only the buttocks and thighs (in the seated position) to partially on the back and head (in the tilted position).[11] Tilting wheelchairs are preferred for people who use molded or contoured seats, who need to maintain a particular posture, who are adversely affected by shear forces (reclining causes the body to slide slightly every time), or who need to keep a communication device, powered wheelchair controls, or other attached device in the same relative position throughout the day.[11] Tilting wheelchairs are commonly used by people with cerebral palsy, people with some muscle diseases, and people with limited range of motion in the hip or knee joints.[11] Tilting options are more common than reclining options in wheelchairs designed for use by children.[11]
A standing wheelchair is one that supports the user in a nearly standing position. They can be used as both a wheelchair and a standing frame, allowing the user to sit or stand in the wheelchair as they wish. Some versions are entirely manual, others add a powered standing mechanism to an otherwise manual chair, while others offer full power, tilt, recline and variations of powered stand functions. The benefits of such a device include, but are not limited to: aiding independence and productivity, raising self-esteem and psychological well-being, heightening social status, extending access, relief of pressure and reduction of pressure sores, improved functional reach, improved respiration, reduced occurrence of urinary tract infections, improved flexibility, help in maintaining bone mineral density, improved passive range of motion, and reduction in abnormal muscle tone, spasticity and skeletal deformities. Other wheelchairs provide some of the same benefits by raising the entire seat to lift the user to standing height.
A range of disabled sports have been developed for disabled athletes, including basketball, rugby, tennis, racing and dancing. The wheelchairs used for each sport have evolved to suit the specific needs of that sport and often no longer resemble their everyday cousins. They are usually non-folding (in order to increase rigidity), with a pronounced negative camber for the wheels (which provides stability and is helpful for making sharp turns), and often are made of composite, lightweight materials. Even seating position may be radically different, with racing wheelchairs generally used in a kneeling position. Sport wheelchairs are rarely suited for everyday use, and are often a 'second' chair specifically for sport use, although some users prefer the sport options for everyday use. Some disabled people, specifically lower-limb amputees, may use a wheelchair for sports, but not for everyday activities.[12]
While most wheelchair sports use manual chairs, some power chair sports, such as powerchair football, exist.
E-hockey is hockey played from electric-powered wheelchairs.
Wheelchair stretchers are a variant of wheeled stretchers/gurneys that can accommodate a sitting patient, or be adjusted to lie flat to help in the lateral (or supine) transfer of a patient from a bed to the chair or back. Once transferred, the stretcher can be adjusted to allow the patient to assume a sitting position.
All-terrain wheelchairs can allow users to access terrain otherwise completely inaccessible to a wheelchair user. Two different formats have been developed. One hybridises wheelchair and mountain bike technology, generally taking the form of a frame within which the user sits and with four mountain bike wheels at the corners. In general there are no push-rims and propulsion/braking is by pushing directly on the tyres.
A more common variant is the beach wheelchair (Beach-Going Wheelchair)[13] which can allow better mobility on beach sand, including in the water, on uneven terrain, and even on snow. The common adaptation among the different designs is that they have extra-wide balloon wheels or tires, to increase stability and decrease ground pressure on uneven or unsteady terrain. Different models are available, both manual and battery-driven. In some countries in Europe, where accessible tourism is well established, many beaches have wheelchairs of this type available for loan/hire.
A smart wheelchair is any powerchair using a control system to augment or replace user control.[14] Its purpose is to reduce or eliminate the user's task of driving a powerchair. Usually, a smart wheelchair is controlled via a computer, has a suite of sensors and applies techniques from mobile robotics, although this is not strictly necessary. The sensors most frequently used by smart wheelchairs are the ultrasonic acoustic range finder (i.e. sonar) and the infrared (IR) range finder.[15] The interface may consist of a conventional wheelchair joystick, a "sip-and-puff" device or a touch-sensitive display. This differs from a conventional powerchair, in which the user exerts manual control over speed and direction without intervention by the wheelchair's control system.
Smart wheelchairs are designed for a variety of user types. Some are designed for users with cognitive impairments, such as dementia; these typically apply collision-avoidance techniques to ensure that users do not accidentally select a drive command that results in a collision. Others focus on users living with severe motor disabilities, such as cerebral palsy, or with quadriplegia, and the role of the smart wheelchair is to interpret small muscular activations as high-level commands and execute them. Such wheelchairs typically employ techniques from artificial intelligence, such as path-planning.
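As a concrete illustration of the collision-avoidance idea, the sketch below blends a joystick command with range readings: forward speed is tapered as the nearest obstacle approaches and vetoed inside a stop distance. Everything here (the function name, the thresholds, the assumption of forward-facing sonar/IR readings in metres) is a hypothetical sketch, not a description of any shipping controller:

def shared_control(fwd, turn, ranges_m, stop_d=0.5, slow_d=1.5):
    # Nearest obstacle reported by the forward-facing sonar/IR sensors.
    nearest = min(ranges_m)
    if nearest <= stop_d:
        scale = 0.0        # imminent collision: veto forward motion
    elif nearest >= slow_d:
        scale = 1.0        # path clear: pass the user's command through
    else:
        # Taper linearly between the stop and slow-down distances.
        scale = (nearest - stop_d) / (slow_d - stop_d)
    fwd_out = fwd * scale if fwd > 0 else fwd  # reversing is not limited
    return fwd_out, turn

# A full-forward command is halved when the nearest reading is 1.0 m:
print(shared_control(1.0, 0.0, [2.3, 1.0, 1.8]))  # (0.5, 0.0)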
Recent technological advances are slowly improving wheelchair and powerchair technology.
A variation on the manually-propelled wheelchair is the Leveraged Freedom Chair (LFC), designed by the MIT Mobility Lab. This wheelchair is designed to be low-cost, constructed with local materials, for users in developing countries. Engineering modifications have added hand-controlled levers to the LFC, to enable users to move the chair over uneven ground and minor obstacles, such as bumpy dirt roads, that are common in developing countries. It is under development, and has been tested in Kenya and India so far.
The addition of geared, all-mechanical wheels for manual wheelchairs is a new development incorporating a hypocycloidal reduction gear into the wheel design. The 2-gear wheels can be added to a manual wheelchair. The geared wheels provide a user with additional assistance by providing leverage through gearing (like a bicycle, not a motor). The two-gear wheels offer two speed ratios: 1:1 (no help, no extra torque) and 2:1, providing 100% more hill-climbing force. The low gear incorporates an automatic "hill hold" function which holds the wheelchair in place on a hill between pushes, but allows the user to override the hill hold to roll the wheels backwards if needed. The low gear also provides downhill control when descending.
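The "100% more force" figure is just the gear ratio at work: neglecting friction, a reduction gear trades speed for force in the same proportion,

\[ F_{\text{out}} = G \, F_{\text{in}}, \qquad v_{\text{out}} = \frac{v_{\text{in}}}{G}, \]

so a ratio of \(G = 2\) delivers twice the propulsive force per push at half the road speed.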
A recent development related to wheelchairs is the handcycle. They come in a variety of forms, from road and track racing models to off-road types modelled after mountain bikes. While dedicated handcycle designs are manufactured, clip-on versions are available that can convert a manual wheelchair to a handcycle in seconds. The general concept is a clip-on front fork with hand-pedals, usually attaching to a mounting on the footplate. A somewhat related concept is the Freewheel, a large dolly wheel attaching to the front of a manual wheelchair, again generally to the footplate mounting, which improves wheelchair performance over rough terrain. Unlike a handcycle, a wheelchair with a Freewheel continues to be propelled via the rear wheels. There are also several types of hybrid-powered handcycles in which hand-pedals are used along with an electric motor that helps on hills and over long distances.
The most recent generation of clip-on handcycles is the fully electric wheelchair power add-on, which uses a lithium-ion battery, a brushless DC electric motor and a lightweight aluminium frame with easy-to-attach clamps to convert almost any manual wheelchair into an electric trike in seconds. This makes long-distance journeys and everyday tasks much easier and keeps the user's hands clean.
There have been significant efforts over the past 20 years to develop stationary wheelchair trainer platforms that could enable wheelchair users to exercise as one would on a treadmill or bicycle trainer.[16][17]
Some devices have been created that could be used in conjunction with virtual travel and interactive gaming similar to an omnidirectional treadmill.[citation needed]
In 2011, British inventor Andrew Slorance developed Carbon Black, the first wheelchair to be made almost entirely of carbon fibre.[18][19]
Recently, EPFL's CNBI project has succeeded in making wheelchairs that can be controlled by brain impulses.[20][21]
Interest in electric-powered wheelchairs that are able to climb stairs has increased over the past twenty years, and many such designs have been developed. Electric-powered wheelchairs with climbing ability need to be stronger and have a greater range of movement than those that cannot climb stairs, and they must also be stable in order to prevent injury to the user. A number of stair-climbing models are currently available to purchase, and technical development in this area is continuing.[22]
Experiments have also been made with unusual variant wheels, like the omniwheel or the mecanum wheel. These allow for a broader spectrum of movement, but have made no mass-market penetration.
One electric wheelchair design (pictured below) is fitted with Mecanum wheels (sometimes known as Ilon wheels), which give it complete freedom of movement. It can be driven forwards, backwards, sideways, and diagonally, and also turned round on the spot or turned around while moving, all operated from a simple joystick.
A beach wheelchair at a public beach in the Netherlands
A snow wheelchair at an outdoor park
A Leveraged Freedom Chair wheelchair user in Kenya. The chair has been engineered to be low-cost and usable on the rough roads common in developing countries.
Wheelchair fitted with Mecanum wheels, photographed at a trade fair in the early 1980s
Foot propulsion of a manual wheelchair by the occupant is possible for users who have limited hand movement capabilities or simply do not wish to use their hands for propulsion. Foot propulsion also allows patients to exercise their legs to increase blood flow and limit further disability. Users who do this commonly may elect to have a lower seat height and no footplate to better suit the wheelchair to their needs.
Wheelbase chairs are powered or manual wheelchairs with specially molded seating systems interfaced with them for users with a more complicated posture. A molded seating system involves taking a cast of a person's best achievable seated position and then either carving the shape from memory foam or forming a plastic mesh around it. This seat is then covered, framed, and attached to a wheelbase.
A bariatric wheelchair is one designed to support larger weights; most standard chairs are designed to support no more than 250 lb (113 kg) on average.
Pediatric wheelchairs are another available subset of wheelchairs. These can address needs such as being able to play on the floor with other children, or cater for children in large hip-spica casts due to problems such as hip dysplasia.
Hemi wheelchairs have lower seats which are designed for easy foot propulsion. The decreased seat height also allows them to be used by children and shorter individuals.
A knee scooter is a related device with some features of a wheelchair and some of walking aids. Unlike wheelchairs they are only suitable for below knee injuries to a single leg. The user rests the injured leg on the scooter, grasps the handlebars, and pushes with the uninjured leg.
Some walkers can be used as a wheelchair. These walkers have a seat and foot plates, so an attendant can push while the patient sits on the walker. This is useful for a person who gets tired while walking with a walker, or who has a limited walking range and risks collapsing after walking too far.
A commode wheelchair is a wheelchair made for the bathroom. It has an opening in the seat so the user does not have to transfer onto the toilet. The opening can sometimes be covered, and sometimes a pan is attached beneath it, so the user can urinate or defecate without having to wheel over the toilet.
Adapting the built environment to make it more accessible to wheelchair users is one of the key campaigns of disability rights movements and of local equality legislation such as the Americans with Disabilities Act of 1990 (ADA). The social model of disability defines "disability" as the discrimination experienced by people with impairments as a result of the failure of society to provide the adaptations needed for them to participate in society as equals. This includes both physical adaptation of the built environment and adaptation of organizational and social structures and attitudes. A core principle of access is universal design: that all people, regardless of disability, are entitled to equal access to all parts of society, such as public transportation and buildings. A wheelchair user is less disabled in an environment without stairs.
Access starts outside of the building, with the provision of reduced height kerb-cuts where wheelchair users may need to cross roads, and the provision of adequate wheelchair parking, which must provide extra space in order to allow wheelchair users to transfer directly from seat to chair. Some tension exists between access provisions for visually impaired pedestrians and wheelchair users and other mobility impaired pedestrians as textured paving, vital for visually impaired people to recognise the edge of features such as light-controlled crossings, is uncomfortable at best, and dangerous at worst, to those with mobility impairments.
For access to public buildings, it is frequently necessary to adapt older buildings with features such as ramps or elevators in order to allow access by wheelchair users and other people with mobility impairments. Other important adaptations can include powered doors, lowered fixtures such as sinks and water fountains, and accessible toilets with adequate space and grab bars to allow the disabled person to transfer out of their wheelchair onto the fixture. Access needs for people with other disabilities, for instance visual impairments, may also be required, such as by provision of high visibility markings on the edges of steps and braille labelling. Increasingly new construction for public use is required by local equality laws to have these features incorporated at the design stage.
The same principles of access that apply to public buildings also apply to private homes and may be required as part of local building regulations. Important adaptations include external access, providing sufficient space for a wheelchair user to move around the home, doorways that are wide enough for convenient use, access to upper floors, where they exist, which can be provided either by dedicated wheelchair lifts, or in some cases by using a stairlift to transfer between wheelchairs on different floors, and by providing accessible bathrooms with showers and/or bathtubs that are designed for accessibility. Accessible bathrooms can permit the use of mobile shower chairs or transfer benches to facilitate bathing for people with disabilities. Wet rooms are bathrooms where the shower floor and bathroom floor are one continuous waterproof surface. Such floor designs allow a wheelchair user using a dedicated shower chair, or transferring onto a shower seat, to enter the shower without needing to overcome a barrier or lip.
The construction of low-floor trams and buses is increasingly required by law, whereas the use of inaccessible features such as paternoster lifts in public buildings without any alternative method of wheelchair access is increasingly deprecated. Modern architecture is increasingly required by law, and recognised as good practice, to incorporate better accessibility at the design stage.
In many countries, such as the UK, the owners of inaccessible buildings who have not provided permanent access measures are still required by local equality legislation to provide 'reasonable adjustments' to ensure that disabled people are able to access their services and are not excluded. These may range from keeping a portable ramp on hand to allow a wheelchair user to cross an inaccessible threshold, to providing personal service to access goods they are not otherwise able to reach.
Public transit vehicles are increasingly required to be accessible to people who use wheelchairs.
In the UK, all single deck buses are required to be accessible to wheelchair users by 2017, all double-deck coaches by 2020. Similar requirements exist for trains, with most trains already incorporating a number of wheelchair-spaces.
The EU has required airline and airport operators to support the use of airports and airliners by wheelchair users and other 'Persons with Reduced Mobility' since the introduction of EU Directive EC1107/2006.
In Los Angeles there is a program to remove a small amount of seating on some trains to make more room for bicycles and wheelchairs.[23]
New York City's entire bus system is wheelchair-accessible, and a multimillion-dollar renovation program is underway to provide elevator access to many of the city's 485 subway stations.
In Adelaide, Australia, all public transport has provision for at least two wheelchairs per bus, tram or train. In addition all trains have space available for bicycles.
The Washington, D.C. Metro system features complete accessibility on all its subways and buses.
In Paris, France, the entire bus network, i.e. 60 lines, has been accessible to wheelchair users since 2010.[24]
In the United States a wheelchair that has been designed and tested for use as a seat in motor vehicles is often referred to as a "WC19 Wheelchair" or a "transit wheelchair". ANSI-RESNA WC19 (officially, SECTION 19 ANSI/RESNA WC/VOL. 1 Wheelchairs for use in Motor Vehicles) is a voluntary standard for wheelchairs designed for use when traveling facing forward in a motor vehicle. ISO 7176/19 is an international transit wheelchair standard that specifies similar design and performance requirements as ANSI/RESNA WC19.
There are special vans equipped for wheelchairs. These vans are large and have a ramp on a side door or the back door, so a wheelchair can get inside the vehicle while the user is still in it. Some of the back seats will be removed and replaced with wheelchair security harnesses. Sometimes wheelchair vans are equipped so the wheelchair user can drive the van without getting out of the wheelchair.
A vehicle can be equipped with hand controls, which are used when a person cannot move their legs to push the pedals: the hand controls operate the pedals instead. Some racecar drivers are paralysed and drive using hand controls.
Several organizations exist that help to give and receive wheelchair equipment. Organizations that accept wheelchair equipment donations typically attempt to identify recipients and match them with the donated equipment they have received. Organizations that accept donations in the form of money for wheelchairs typically have the wheelchairs manufactured and distributed in large numbers, often in developing countries. Organizations focusing on wheelchairs include Direct Relief, the Free Wheelchair Mission, Hope Haven, Personal Energy Transportation, the Wheelchair Foundation and WheelPower.
In the United Kingdom wheelchairs are supplied and maintained free of charge for disabled people whose need for such a chair is permanent.[25]
Wheelchair seating systems are designed both to support the user in the sitting position and to redistribute pressure from areas of the body that are at risk of pressure ulcers.[26] For someone in the sitting position, the parts of the body that are the most at risk for tissue breakdown include the ischial tuberosities, coccyx, sacrum and greater trochanters. Wheelchair cushions are the prime method of delivering this protection and are nearly universally used. Wheelchair cushions are also used to provide stability, comfort, aid posture and absorb shock.[27] Wheelchair cushions range from simple blocks of foam costing a few pounds or dollars, to specifically engineered multilayer designs with costs running into the hundreds of pounds/dollars/euros.
Prior to 1970, little was known about the effectiveness of wheelchair cushions, and no clinical method existed for evaluating wheelchair seat cushions. More recently, pressure imaging (or pressure mapping) has been used to measure each individual's pressure distribution and so properly determine and fit a seating system.[28][29][30]
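By way of illustration, a pressure map is simply a grid of sensor readings, and a fitting session looks for pressure peaks and how the load is spread. A minimal sketch, assuming a frame arrives as a 2-D array of mmHg values and using an arbitrary 5 mmHg noise floor (the function and threshold are hypothetical, not part of any clinical protocol):

import numpy as np

def pressure_summary(mat_mmhg):
    # Peak pressure, its grid location, and the number of loaded cells.
    mat = np.asarray(mat_mmhg, dtype=float)
    peak = mat.max()
    row, col = np.unravel_index(mat.argmax(), mat.shape)
    loaded_cells = int((mat > 5.0).sum())  # cells above the noise floor
    return peak, (row, col), loaded_cells

# A toy 3x3 frame with a hot spot in the centre:
print(pressure_summary([[10, 20, 10],
                        [20, 90, 20],
                        [10, 20, 10]]))  # (90.0, (1, 1), 9)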
While almost all wheelchair users will use a wheelchair cushion, some users need more extensive postural support. This can be provided by adaptions to the back of the wheelchair, which can provide increased rigidity, head/neck rests and lateral support and in some cases by adaptions to the seat such as pommels and knee-blocks. Harnesses may also be required.
There are a wide range of accessories for wheelchairs. There are cushions, cup holders, seatbelts, storage bags, lights, and more.
A wheelchair user may use a seatbelt for security or for posture. Some users want a seatbelt to make sure they never fall out of the wheelchair; others use one for postural support because they cannot sit up straight unaided.
en/1949.html.txt
ADDED
@@ -0,0 +1,172 @@
Apiformes (from Latin 'apis')
Bees are flying insects closely related to wasps and ants, known for their role in pollination and, in the case of the best-known bee species, the western honey bee, for producing honey. Bees are a monophyletic lineage within the superfamily Apoidea. They are presently considered a clade, called Anthophila. There are over 16,000 known species of bees in seven recognized biological families.[1][2] Some species — including honey bees, bumblebees, and stingless bees — live socially in colonies while some species — including mason bees, carpenter bees, leafcutter bees, and sweat bees — are solitary.
Bees are found on every continent except for Antarctica, in every habitat on the planet that contains insect-pollinated flowering plants. The most common bees in the Northern Hemisphere are the Halictidae, or sweat bees, but they are small and often mistaken for wasps or flies. Bees range in size from tiny stingless bee species, whose workers are less than 2 millimetres (0.08 in) long, to Megachile pluto, the largest species of leafcutter bee, whose females can attain a length of 39 millimetres (1.54 in).
Bees feed on nectar and pollen, the former primarily as an energy source and the latter primarily for protein and other nutrients. Most pollen is used as food for their larvae. Vertebrate predators of bees include birds such as bee-eaters; insect predators include beewolves and dragonflies.
Bee pollination is important both ecologically and commercially, and the decline in wild bees has increased the value of pollination by commercially managed hives of honey bees. An analysis of 353 wild bee and hoverfly species across Britain from 1980 to 2013 found the insects have been lost from a quarter of the places they inhabited in 1980.[3]
Human beekeeping or apiculture has been practised for millennia, since at least the times of Ancient Egypt and Ancient Greece. Bees have appeared in mythology and folklore, through all phases of art and literature from ancient times to the present day, although primarily focused in the Northern Hemisphere where beekeeping is far more common.
The ancestors of bees were wasps in the family Crabronidae, which were predators of other insects. The switch from insect prey to pollen may have resulted from the consumption of prey insects which were flower visitors and were partially covered with pollen when they were fed to the wasp larvae. This same evolutionary scenario may have occurred within the vespoid wasps, where the pollen wasps evolved from predatory ancestors. Until recently, the oldest non-compression bee fossil had been found in New Jersey amber, Cretotrigona prisca of Cretaceous age, a corbiculate bee.[4] A bee fossil from the early Cretaceous (~100 mya), Melittosphex burmensis, is considered "an extinct lineage of pollen-collecting Apoidea sister to the modern bees".[5] Derived features of its morphology (apomorphies) place it clearly within the bees, but it retains two unmodified ancestral traits (plesiomorphies) of the legs (two mid-tibial spurs, and a slender hind basitarsus), showing its transitional status.[5] By the Eocene (~45 mya) there was already considerable diversity among eusocial bee lineages.[6][a]
The highly eusocial corbiculate Apidae appeared roughly 87 Mya, and the Allodapini (within the Apidae) around 53 Mya.[9]
The Colletidae appear as fossils only from the late Oligocene (~25 Mya) to early Miocene.[10]
The Melittidae are known from Palaeomacropis eocenicus in the Early Eocene.[11]
The Megachilidae are known from trace fossils (characteristic leaf cuttings) from the Middle Eocene.[12]
The Andrenidae are known from the Eocene-Oligocene boundary, around 34 Mya, of the Florissant shale.[13]
The Halictidae first appear in the Early Eocene[14] with species[15][16] found in amber. The Stenotritidae are known from fossil brood cells of Pleistocene age.[17]
The earliest animal-pollinated flowers were shallow, cup-shaped blooms pollinated by insects such as beetles, so the syndrome of insect pollination was well established before the first appearance of bees. The novelty is that bees are specialized as pollination agents, with behavioral and physical modifications that specifically enhance pollination, and are the most efficient pollinating insects. In a process of coevolution, flowers developed floral rewards[18] such as nectar and longer tubes, and bees developed longer tongues to extract the nectar.[19] Bees also developed structures known as scopal hairs and pollen baskets to collect and carry pollen. The location and type differ among and between groups of bees. Most species have scopal hairs on their hind legs or on the underside of their abdomens. Some species in the family Apidae have pollen baskets on their hind legs, while very few lack these and instead collect pollen in their crops.[2] The appearance of these structures drove the adaptive radiation of the angiosperms, and, in turn, bees themselves.[7] Bees are believed to have coevolved not only with flowers but, in some species, with mites as well. Some provide tufts of hairs called acarinaria that appear to provide lodgings for mites; in return, it is believed that mites eat fungi that attack pollen, so the relationship in this case may be mutualistic.[20][21]
This phylogenetic tree is based on Debevec et al., 2012, which used molecular phylogeny to demonstrate that the bees (Anthophila) arose from deep within the Crabronidae, which is therefore paraphyletic. The placement of the Heterogynaidae is uncertain.[22] The small subfamily Mellininae was not included in this analysis.
Ampulicidae (Cockroach wasps)
Heterogynaidae (possible placement #1)
Sphecidae (sensu stricto)
Crabroninae (part of "Crabronidae")
Bembicini
Nyssonini, Astatinae
Heterogynaidae (possible placement #2)
Pemphredoninae, Philanthinae
Anthophila (bees)
This cladogram of the bee families is based on Hedtke et al., 2013, which places the former families Dasypodaidae and Meganomiidae as subfamilies inside the Melittidae.[23] English names, where available, are given in parentheses.
Melittidae (inc. Dasypodainae, Meganomiinae) at least 50 Mya
Apidae (inc. honeybees, cuckoo bees, carpenter bees) ≈87 Mya
Megachilidae (mason, leafcutter bees) ≈50 Mya
Andrenidae (mining bees) ≈34 Mya
Halictidae (sweat bees) ≈50 Mya
Colletidae (plasterer bees) ≈25 Mya
Stenotritidae (large Australian bees) ≈2 Mya
Bees differ from closely related groups such as wasps by having branched or plume-like setae (hairs), combs on the forelimbs for cleaning their antennae, small anatomical differences in limb structure, and the venation of the hind wings; and in females, by having the seventh dorsal abdominal plate divided into two half-plates.[24]
Bees have the following characteristics:
The largest species of bee is thought to be Wallace's giant bee Megachile pluto, whose females can attain a length of 39 millimetres (1.54 in).[26] The smallest species may be dwarf stingless bees in the tribe Meliponini whose workers are less than 2 millimetres (0.08 in) in length.[27]
According to inclusive fitness theory, organisms can gain fitness not just through increasing their own reproductive output, but also that of close relatives. In evolutionary terms, individuals should help relatives when Cost < Relatedness * Benefit. The requirements for eusociality are more easily fulfilled by haplodiploid species such as bees because of their unusual relatedness structure.[28]
In haplodiploid species, females develop from fertilized eggs and males from unfertilized eggs. Because a male is haploid (has only one copy of each gene), his daughters (which are diploid, with two copies of each gene) share 100% of his genes and 50% of their mother's. Therefore, they share 75% of their genes with each other. This mechanism of sex determination gives rise to what W. D. Hamilton termed "supersisters", more closely related to their sisters than they would be to their own offspring.[29] Workers often do not reproduce, but they can pass on more of their genes by helping to raise their sisters (as queens) than they would by having their own offspring (each of which would only have 50% of their genes), assuming they would produce similar numbers. This unusual situation has been proposed as an explanation of the multiple (at least 9) evolutions of eusociality within Hymenoptera.[30][31]
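As a minimal illustrative sketch (not part of the original article), the relatedness arithmetic behind Hamilton's rule can be written out in Python; the cost and benefit values below are arbitrary assumptions chosen only to show the comparison:

    # Hamilton's rule: helping relatives is favoured when cost < relatedness * benefit.
    def helping_favoured(cost: float, benefit: float, relatedness: float) -> bool:
        return cost < relatedness * benefit

    # Under haplodiploidy, a female shares 1/2 of her genes with her own offspring
    # but 3/4 with a full "supersister" (same haploid father).
    r_offspring = 0.5
    r_supersister = 0.75

    # With the same (hypothetical) cost and benefit, helping to raise sisters can be
    # favoured while raising one's own offspring is not: 0.75 * 1.6 > 1.0 > 0.5 * 1.6.
    print(helping_favoured(cost=1.0, benefit=1.6, relatedness=r_supersister))  # True
    print(helping_favoured(cost=1.0, benefit=1.6, relatedness=r_offspring))    # False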
Haplodiploidy is neither necessary nor sufficient for eusociality. Some eusocial species such as termites are not haplodiploid. Conversely, all bees are haplodiploid but not all are eusocial, and among eusocial species many queens mate with multiple males, creating half-sisters that share only 25% of each other's genes.[32] However, monogamy (queens mating singly) is the ancestral state for all eusocial species so far investigated, so it is likely that haplodiploidy contributed to the evolution of eusociality in bees.[30]
Bees may be solitary or may live in various types of communities. Eusociality appears to have arisen at least three times independently in halictid bees.[33] The most advanced of these are species with eusocial colonies; these are characterised by cooperative brood care and a division of labour into reproductive and non-reproductive adults, plus overlapping generations.[34] This division of labour creates specialized groups within eusocial societies which are called castes. In some species, groups of cohabiting females may be sisters, and if there is a division of labour within the group, they are considered semisocial. The group is called eusocial if, in addition, the group consists of a mother (the queen) and her daughters (workers). When the castes are purely behavioural alternatives, with no morphological differentiation other than size, the system is considered primitively eusocial, as in many paper wasps; when the castes are morphologically discrete, the system is considered highly eusocial.[19]
True honey bees (genus Apis, of which seven species are currently recognized) are highly eusocial, and are among the best known insects. Their colonies are established by swarms, consisting of a queen and several hundred workers. There are 29 subspecies of one of these species, Apis mellifera, native to Europe, the Middle East, and Africa. Africanized bees are a hybrid strain of A. mellifera that escaped from experiments involving crossing European and African subspecies; they are extremely defensive.[35]
Stingless bees are also highly eusocial. They practise mass provisioning, with complex nest architecture and perennial colonies also established via swarming.[36]
Many bumblebees are eusocial, similar to the eusocial Vespidae such as hornets in that the queen initiates a nest on her own rather than by swarming. Bumblebee colonies typically have from 50 to 200 bees at peak population, which occurs in mid to late summer. Nest architecture is simple, limited by the size of the pre-existing nest cavity, and colonies rarely last more than a year.[37] In 2011, the International Union for Conservation of Nature set up the Bumblebee Specialist Group to review the threat status of all bumblebee species worldwide using the IUCN Red List criteria.[38]
There are many more species of primitively eusocial than highly eusocial bees, but they have been studied less often. Most are in the family Halictidae, or "sweat bees". Colonies are typically small, with a dozen or fewer workers, on average. Queens and workers differ only in size, if at all. Most species have a single season colony cycle, even in the tropics, and only mated females hibernate. A few species have long active seasons and attain colony sizes in the hundreds, such as Halictus hesperus.[39] Some species are eusocial in parts of their range and solitary in others,[40] or have a mix of eusocial and solitary nests in the same population.[41] The orchid bees (Apidae) include some primitively eusocial species with similar biology. Some allodapine bees (Apidae) form primitively eusocial colonies, with progressive provisioning: a larva's food is supplied gradually as it develops, as is the case in honey bees and some bumblebees.[42]
Most other bees, including familiar insects such as carpenter bees, leafcutter bees and mason bees are solitary in the sense that every female is fertile, and typically inhabits a nest she constructs herself. There is no division of labor, so these nests lack queens and worker bees. Solitary bees typically produce neither honey nor beeswax.
Bees collect pollen to feed their young, and have the necessary adaptations to do this. However, certain wasp species such as pollen wasps have similar behaviours, and a few species of bee scavenge from carcases to feed their offspring.[24] Solitary bees are important pollinators; they gather pollen to provision their nests with food for their brood. Often it is mixed with nectar to form a paste-like consistency. Some solitary bees have advanced types of pollen-carrying structures on their bodies. Very few species of solitary bee are being cultured for commercial pollination. Most of these species belong to a distinct set of genera which are commonly known by their nesting behavior or preferences, namely: carpenter bees, sweat bees, mason bees, plasterer bees, squash bees, dwarf carpenter bees, leafcutter bees, alkali bees and digger bees.[43]
Most solitary bees nest in the ground in a variety of soil textures and conditions, while others create nests in hollow reeds or twigs, or in holes in wood. The female typically creates a compartment (a "cell") with an egg and some provisions for the resulting larva, then seals it off. A nest may consist of numerous cells. When the nest is in wood, usually the last cells (those closer to the entrance) contain eggs that will become males. The adult does not provide care for the brood once the egg is laid, and usually dies after making one or more nests. The males typically emerge first and are ready for mating when the females emerge. Solitary bees are either stingless or very unlikely to sting (only in self-defense, if ever).[44][45]
While solitary, females each make individual nests. Some species, such as the European mason bee Hoplitis anthocopoides,[46] and Dawson's burrowing bee, Amegilla dawsoni,[47] are gregarious, preferring to make nests near others of the same species, and giving the appearance of being social. Large groups of solitary bee nests are called aggregations, to distinguish them from colonies. In some species, multiple females share a common nest, but each makes and provisions her own cells independently. This type of group is called "communal" and is not uncommon. The primary advantage appears to be that a nest entrance is easier to defend from predators and parasites when there are multiple females using that same entrance on a regular basis.[46]
The life cycle of a bee, be it a solitary or social species, involves the laying of an egg, the development through several moults of a legless larva, a pupation stage during which the insect undergoes complete metamorphosis, followed by the emergence of a winged adult. Most solitary bees and bumble bees in temperate climates overwinter as adults or pupae and emerge in spring when increasing numbers of flowering plants come into bloom. The males usually emerge first and search for females with which to mate. The sex of a bee is determined by whether or not the egg is fertilised; after mating, a female stores the sperm, and determines which sex is required at the time each individual egg is laid, fertilised eggs producing female offspring and unfertilised eggs, males. Tropical bees may have several generations in a year and no diapause stage.[48][49][50][51]
The egg is generally oblong, slightly curved and tapering at one end. Solitary bees lay each egg in a separate cell with a supply of mixed pollen and nectar next to it. This may be rolled into a pellet or placed in a pile and is known as mass provisioning. Social bee species provision progressively, that is, they feed the larva regularly while it grows. The nest varies from a hole in the ground or in wood, in solitary bees, to a substantial structure with wax combs in bumblebees and honey bees.[52]
In most species, larvae are whitish grubs, roughly oval and bluntly-pointed at both ends. They have 15 segments and spiracles in each segment for breathing. They have no legs but move within the cell, helped by tubercles on their sides. They have short horns on the head, jaws for chewing food and an appendage on either side of the mouth tipped with a bristle. There is a gland under the mouth that secretes a viscous liquid which solidifies into the silk they use to produce a cocoon. The cocoon is semi-transparent and the pupa can be seen through it. Over the course of a few days, the larva undergoes metamorphosis into a winged adult. When ready to emerge, the adult splits its skin dorsally and climbs out of the exuviae and breaks out of the cell.[52]
Nest of common carder bumblebee, wax canopy removed to show winged workers and pupae in irregularly placed wax cells
Carpenter bee nests in a cedar wood beam (sawn open)
Honeybees on brood comb with eggs and larvae in cells
Antoine Magnan's 1934 book Le vol des insectes says that he and André Sainte-Laguë had applied the equations of air resistance to insects and found that their flight could not be explained by fixed-wing calculations, but that "One shouldn't be surprised that the results of the calculations don't square with reality".[53] This has led to a common misconception that bees "violate aerodynamic theory". In fact it merely confirms that bees do not engage in fixed-wing flight, and that their flight is explained by other mechanics, such as those used by helicopters.[54] In 1996 it was shown that vortices created by many insects' wings helped to provide lift.[55] High-speed cinematography[56] and robotic mock-up of a bee wing[57] showed that lift was generated by "the unconventional combination of short, choppy wing strokes, a rapid rotation of the wing as it flops over and reverses direction, and a very fast wing-beat frequency". Wing-beat frequency normally increases as size decreases, but as the bee's wing beat covers such a small arc, it flaps approximately 230 times per second, faster than a fruitfly (200 times per second) which is 80 times smaller.[58]
The ethologist Karl von Frisch studied navigation in the honey bee. He showed that honey bees communicate by the waggle dance, in which a worker indicates the location of a food source to other workers in the hive. He demonstrated that bees can recognize a desired compass direction in three different ways: by the sun, by the polarization pattern of the blue sky, and by the earth's magnetic field. He showed that the sun is the preferred or main compass; the other mechanisms are used under cloudy skies or inside a dark beehive.[59] Bees navigate using spatial memory with a "rich, map-like organization".[60]
The gut of bees is relatively simple, but multiple metabolic strategies exist in the gut microbiota.[61] Pollinating bees consume nectar and pollen, which require different digestion strategies by somewhat specialized bacteria. While nectar is a liquid of mostly monosaccharide sugars and so easily absorbed, pollen contains complex polysaccharides: branching pectin and hemicellulose.[62] Approximately five groups of bacteria are involved in digestion. Three groups specialize in simple sugars (Snodgrassella and two groups of Lactobacillus), and two other groups in complex sugars (Gilliamella and Bifidobacterium). Digestion of pectin and hemicellulose is dominated by bacterial clades Gilliamella and Bifidobacterium respectively. Bacteria that cannot digest polysaccharides obtain enzymes from their neighbors, and bacteria that lack certain amino acids do the same, creating multiple ecological niches.[63]
Although most bee species are nectarivorous and palynivorous, some are not. Particularly unusual are vulture bees in the genus Trigona, which consume carrion and wasp brood, turning meat into a honey-like substance.[64]
Most bees are polylectic (generalist) meaning they collect pollen from a range of flowering plants, but some are oligoleges (specialists), in that they only gather pollen from one or a few species or genera of closely related plants.[65] Specialist pollinators also include bee species which gather floral oils instead of pollen, and male orchid bees, which gather aromatic compounds from orchids (one of the few cases where male bees are effective pollinators). Bees are able to sense the presence of desirable flowers through ultraviolet patterning on flowers, floral odors,[66] and even electromagnetic fields.[67] Once landed, a bee then uses nectar quality[66] and pollen taste[68] to determine whether to continue visiting similar flowers.
In rare cases, a plant species may only be effectively pollinated by a single bee species, and some plants are endangered at least in part because their pollinator is also threatened. However, there is a pronounced tendency for oligolectic bees to be associated with common, widespread plants visited by multiple pollinator species. For example, the creosote bush in the arid parts of the United States southwest is associated with some 40 oligoleges.[69]
Many bees are aposematically coloured, typically orange and black, warning of their ability to defend themselves with a powerful sting. As such they are models for Batesian mimicry by non-stinging insects such as bee-flies, robber flies and hoverflies,[70] all of which gain a measure of protection by superficially looking and behaving like bees.[70]
Bees are themselves Müllerian mimics of other aposematic insects with the same colour scheme, including wasps, lycid and other beetles, and many butterflies and moths (Lepidoptera) which are themselves distasteful, often through acquiring bitter and poisonous chemicals from their plant food. All the Müllerian mimics, including bees, benefit from the reduced risk of predation that results from their easily recognised warning coloration.[71]
Bees are also mimicked by plants such as the bee orchid which imitates both the appearance and the scent of a female bee; male bees attempt to mate (pseudocopulation) with the furry lip of the flower, thus pollinating it.[72]
Brood parasites occur in several bee families including the apid subfamily Nomadinae.[73] Females of these species lack pollen collecting structures (the scopa) and do not construct their own nests. They typically enter the nests of pollen collecting species, and lay their eggs in cells provisioned by the host bee. When the "cuckoo" bee larva hatches, it consumes the host larva's pollen ball, and often the host egg also.[74] In particular, the Arctic bee species, Bombus hyperboreus is an aggressive species that attacks and enslaves other bees of the same subgenus. However, unlike many other bee brood parasites, they have pollen baskets and often collect pollen.[75]
In Southern Africa, hives of African honeybees (A. mellifera scutellata) are being destroyed by parasitic workers of the Cape honeybee, A. m. capensis. These lay diploid eggs ("thelytoky"), escaping normal worker policing, leading to the colony's destruction; the parasites can then move to other hives.[76]
The cuckoo bees in the Bombus subgenus Psithyrus are closely related to, and resemble, their hosts in looks and size. This common pattern gave rise to the ecological principle "Emery's rule". Others parasitize bees in different families, like Townsendiella, a nomadine apid, two species of which are cleptoparasites of the dasypodaid genus Hesperapis,[77] while the other species in the same genus attacks halictid bees.[78]
Four bee families (Andrenidae, Colletidae, Halictidae, and Apidae) contain some species that are crepuscular. Most are tropical or subtropical, but some live in arid regions at higher latitudes. These bees have greatly enlarged ocelli, which are extremely sensitive to light and dark, though incapable of forming images. Some have refracting superposition compound eyes: these combine the output of many elements of their compound eyes to provide enough light for each retinal photoreceptor. Their ability to fly by night enables them to avoid many predators, and to exploit flowers that produce nectar only or also at night.[79]
Vertebrate predators of bees include bee-eaters, shrikes and flycatchers, which make short sallies to catch insects in flight.[80] Swifts and swallows[80] fly almost continually, catching insects as they go. The honey buzzard attacks bees' nests and eats the larvae.[81] The greater honeyguide interacts with humans by guiding them to the nests of wild bees. The humans break open the nests and take the honey and the bird feeds on the larvae and the wax.[82] Among mammals, predators such as the badger dig up bumblebee nests and eat both the larvae and any stored food.[83]
Specialist ambush predators of visitors to flowers include crab spiders, which wait on flowering plants for pollinating insects; predatory bugs, and praying mantises,[80] some of which (the flower mantises of the tropics) wait motionless, aggressive mimics camouflaged as flowers.[84] Beewolves are large wasps that habitually attack bees;[80] the ethologist Niko Tinbergen estimated that a single colony of the beewolf Philanthus triangulum might kill several thousand honeybees in a day: all the prey he observed were honeybees.[85] Other predatory insects that sometimes catch bees include robber flies and dragonflies.[80] Honey bees are affected by parasites including acarine and Varroa mites.[86] However, some bees are believed to have a mutualistic relationship with mites.[21]
Homer's Hymn to Hermes describes three bee-maidens with the power of divination and thus speaking truth, and identifies the food of the gods as honey. Sources associated the bee maidens with Apollo and, until the 1980s, scholars followed Gottfried Hermann (1806) in incorrectly identifying the bee-maidens with the Thriae.[87] Honey, according to a Greek myth, was discovered by a nymph called Melissa ("Bee"); and honey was offered to the Greek gods from Mycenean times. Bees were also associated with the Delphic oracle and the prophetess was sometimes called a bee.[88]
The image of a community of honey bees has been used from ancient to modern times, in Aristotle and Plato; in Virgil and Seneca; in Erasmus and Shakespeare; Tolstoy, and by political and social theorists such as Bernard Mandeville and Karl Marx as a model for human society.[89] In English folklore, bees would be told of important events in the household, in a custom known as "Telling the bees".[90]
Some of the oldest examples of bees in art are rock paintings in Spain which have been dated to 15,000 BC.[91]
W. B. Yeats's poem The Lake Isle of Innisfree (1888) contains the couplet "Nine bean rows will I have there, a hive for the honey bee, / And live alone in the bee-loud glade." At the time he was living in Bedford Park in the West of London.[92] Beatrix Potter's illustrated book The Tale of Mrs Tittlemouse (1910) features Babbity Bumble and her brood. Kit Williams' treasure hunt book The Bee on the Comb (1984) uses bees and beekeeping as part of its story and puzzle. Sue Monk Kidd's The Secret Life of Bees (2004), and the 2009 film starring Dakota Fanning, tells the story of a girl who escapes her abusive home and finds her way to live with a family of beekeepers, the Boatwrights.
The humorous 2007 animated film Bee Movie used Jerry Seinfeld's first script and was his first work for children; he starred as a bee named Barry B. Benson, alongside Renée Zellweger. Critics found its premise awkward and its delivery tame.[93] Dave Goulson's A Sting in the Tale (2014) describes his efforts to save bumblebees in Britain, as well as much about their biology. The playwright Laline Paull's fantasy The Bees (2015) tells the tale of a hive bee named Flora 717 from hatching onwards.[94]
Humans have kept honey bee colonies, commonly in hives, for millennia. Beekeepers collect honey, beeswax, propolis, pollen, and royal jelly from hives; bees are also kept to pollinate crops and to produce bees for sale to other beekeepers.
Depictions of humans collecting honey from wild bees date to 15,000 years ago; efforts to domesticate them are shown in Egyptian art around 4,500 years ago.[95] Simple hives and smoke were used;[96][97] jars of honey were found in the tombs of pharaohs such as Tutankhamun. From the 18th century, European understanding of the colonies and biology of bees allowed the construction of the moveable comb hive so that honey could be harvested without destroying the colony.[98][99] Among Classical Era authors, beekeeping with the use of smoke is described in Aristotle's History of Animals Book 9.[100] The account mentions that bees die after stinging; that workers remove corpses from the hive, and guard it; castes including workers and non-working drones, but "kings" rather than queens; predators including toads and bee-eaters; and the waggle dance, with the "irresistible suggestion" of ἀποσείονται ("aposeiontai", they waggle) and παρακολουθούσιν ("parakolouthousin", they watch).[101][b]
Beekeeping is described in detail by Virgil in his Georgics; it is also mentioned in his Aeneid, and in Pliny's Natural History.[101]
Bees play an important role in pollinating flowering plants, and are the major type of pollinator in many ecosystems that contain flowering plants. It is estimated that one third of the human food supply depends on pollination by insects, birds and bats, most of which is accomplished by bees, whether wild or domesticated.[102][103] Over the last half century, there has been a general decline in the species richness of wild bees and other pollinators, probably attributable to stress from increased parasites and disease, the use of pesticides, and a general decrease in the number of wild flowers. Climate change probably exacerbates the problem.[104]
Contract pollination has overtaken the role of honey production for beekeepers in many countries. After the introduction of Varroa mites, feral honey bees declined dramatically in the US, though their numbers have since recovered.[105][106] The number of colonies kept by beekeepers declined slightly, through urbanization, systematic pesticide use, tracheal and Varroa mites, and the closure of beekeeping businesses. In 2006 and 2007 the rate of attrition increased, and was described as colony collapse disorder.[107] In 2010 invertebrate iridescent virus and the fungus Nosema ceranae were shown to be in every killed colony, and deadly in combination.[108][109][110][111] Winter losses increased to about 1/3.[112][113] Varroa mites were thought to be responsible for about half the losses.[114]
Apart from colony collapse disorder, losses outside the US have been attributed to causes including pesticide seed dressings using neonicotinoids such as Clothianidin, Imidacloprid and Thiamethoxam.[115][116] From 2013 the European Union restricted some pesticides to stop bee populations from declining further.[117] In 2014 the Intergovernmental Panel on Climate Change report warned that bees faced increased risk of extinction because of global warming.[118] In 2018 the European Union decided to ban field use of all three major neonicotinoids; they remain permitted in veterinary, greenhouse, and vehicle transport usage.[119]
Farmers have focused on alternative solutions to mitigate these problems. By raising native plants, they provide food for native bee pollinators like Lasioglossum vierecki[120] and L. leucozonium,[121] leading to less reliance on honey bee populations.
Honey is a natural product produced by bees and stored for their own use, but its sweetness has always appealed to humans. Before domestication of bees was even attempted, humans were raiding their nests for their honey. Smoke was often used to subdue the bees and such activities are depicted in rock paintings in Spain dated to 15,000 BC.[91]
Honey bees are used commercially to produce honey.[122] They also produce some substances used as dietary supplements with possible health benefits, pollen,[123] propolis,[124] and royal jelly,[125] though all of these can also cause allergic reactions.
Bees are considered edible insects in some cultures. Indigenous people in many countries eat insects, including the larvae and pupae of bees, mostly stingless species. They also gather larvae, pupae and surrounding cells, known as bee brood, for consumption.[126] In the Indonesian dish botok tawon from Central and East Java, bee larvae are eaten as a companion to rice, after being mixed with shredded coconut, wrapped in banana leaves, and steamed.[127][128]
Bee brood (pupae and larvae), although low in calcium, has been found to be high in protein and carbohydrate, and a useful source of phosphorus, magnesium, potassium, and trace minerals iron, zinc, copper, and selenium. In addition, while bee brood was high in fat, it contained no fat-soluble vitamins (such as A, D, and E) but it was a good source of most of the water-soluble B-vitamins including choline as well as vitamin C. The fat was composed mostly of saturated and monounsaturated fatty acids with 2.0% being polyunsaturated fatty acids.[129][130]
Apitherapy is a branch of alternative medicine that uses honey bee products, including raw honey, royal jelly, pollen, propolis, beeswax and apitoxin (Bee venom).[131] The claim that apitherapy treats cancer, which some proponents of apitherapy make, remains unsupported by evidence-based medicine.[132][133]
The painful stings of bees are mostly associated with the poison gland and the Dufour's gland, which are abdominal exocrine glands containing various chemicals. In Lasioglossum leucozonium, the Dufour's gland mostly contains octadecanolide as well as some eicosanolide. There is also evidence of n-tricosane, n-heptacosane,[134] and 22-docosanolide.[135] However, the secretions of these glands could also be used for nest construction.[134]
en/195.html.txt
ADDED
@@ -0,0 +1,186 @@
North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere. It can also be described as a northern subcontinent of the Americas, or America,[5][6] in models that use fewer than seven continents. It is bordered to the north by the Arctic Ocean, to the east by the Atlantic Ocean, to the west and south by the Pacific Ocean, and to the southeast by South America and the Caribbean Sea.
North America covers an area of about 24,709,000 square kilometers (9,540,000 square miles), about 16.5% of the earth's land area and about 4.8% of its total surface.
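As a quick arithmetic check (not from the source), the quoted shares follow from commonly used reference figures for Earth's land area (roughly 149 million km²) and total surface area (roughly 510 million km²); both reference values are assumptions, so the result matches the article's percentages only to rounding:

    # Verify North America's share of Earth's land and total surface area.
    north_america_km2 = 24_709_000
    earth_land_km2 = 149_000_000     # assumed reference value
    earth_surface_km2 = 510_000_000  # assumed reference value

    print(f"{north_america_km2 / earth_land_km2:.1%}")     # ~16.6% of land area
    print(f"{north_america_km2 / earth_surface_km2:.1%}")  # ~4.8% of total surface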
North America is the third-largest continent by area, following Asia and Africa,[7] and the fourth by population after Asia, Africa, and Europe.[8] In 2013, its population was estimated at nearly 579 million people in 23 independent states, or about 7.5% of the world's population, if nearby islands (most notably around the Caribbean) are included.
North America was reached by its first human populations during the last glacial period, via the Bering land bridge approximately 40,000 to 17,000 years ago. The so-called Paleo-Indian period is taken to have lasted until about 10,000 years ago (the beginning of the Archaic or Meso-Indian period). The Classic stage spans roughly the 6th to 13th centuries. The Pre-Columbian era ended in 1492, with the beginning of the transatlantic migrations—the arrival of European settlers during the Age of Discovery and the Early Modern period. Present-day cultural and ethnic patterns reflect interactions between European colonists, indigenous peoples, African slaves and all of their descendants.
Owing to Europe's colonization of the Americas, most North Americans speak European languages such as English, Spanish or French, and their states' cultures commonly reflect Western traditions.
The Americas are usually accepted as having been named after the Italian explorer Amerigo Vespucci by the German cartographers Martin Waldseemüller and Matthias Ringmann.[9] Vespucci, who explored South America between 1497 and 1502, was the first European to suggest that the Americas were not the East Indies, but a different landmass previously unknown by Europeans. In 1507, Waldseemüller produced a world map, in which he placed the word "America" on the continent of South America, in the middle of what is today Brazil. He explained the rationale for the name in the accompanying book Cosmographiae Introductio:[10]
... ab Americo inventore ... quasi Americi terram sive Americam (from Americus the discoverer ... as if it were the land of Americus, thus America).
For Waldseemüller, no one should object to the naming of the land after its discoverer. He used the Latinized version of Vespucci's name (Americus Vespucius), but in its feminine form "America", following the examples of "Europa", "Asia" and "Africa". Later, other mapmakers extended the name America to the northern continent. In 1538, Gerard Mercator used America on his map of the world for all the Western Hemisphere.[11]
Some argue that because the convention is to use the surname for naming discoveries (except in the case of royalty), the derivation from "Amerigo Vespucci" could be put in question.[12] In 1874, Thomas Belt proposed a derivation from the Amerrique mountains of Central America; the next year, Jules Marcou suggested that the name of the mountain range stemmed from indigenous American languages.[13] Marcou corresponded with Augustus Le Plongeon, who wrote: "The name AMERICA or AMERRIQUE in the Mayan language means, a country of perpetually strong wind, or the Land of the Wind, and ... the [suffixes] can mean ... a spirit that breathes, life itself."[11]
The United Nations formally recognizes "North America" as comprising three areas: Northern America, Central America, and The Caribbean. This has been formally defined by the UN Statistics Division.[14][15][16]
"Northern America", as a term distinct from "North America", excludes Central America, which itself may or may not include Mexico (see Central America § Different definitions). In the limited context of the North American Free Trade Agreement, the term covers Canada, the United States, and Mexico, which are the three signatories of that treaty.
France, Italy, Portugal, Spain, Romania, Greece, and the countries of Latin America use a six-continent model, with the Americas viewed as a single continent and North America designating a subcontinent comprising Canada, the United States, and Mexico, and often Greenland, Saint Pierre et Miquelon, and Bermuda.[17][18][19][20][21]
North America has been historically referred to by other names. Spanish North America (New Spain) was often referred to as Northern America, and this was the first official name given to Mexico.[22]
Geographically the North American continent has many regions and subregions. These include cultural, economic, and geographic regions. Economic regions include those formed by trade blocs, such as the North American Free Trade Agreement bloc and the Central American Free Trade Agreement. Linguistically and culturally, the continent could be divided into Anglo-America and Latin America. Anglo-America includes most of Northern America, Belize, and Caribbean islands with English-speaking populations (though sub-national entities, such as Louisiana and Quebec, have large Francophone populations; in Quebec, French is the sole official language[23]).
The southern North American continent is composed of two regions. These are Central America and the Caribbean.[24][25] The north of the continent maintains recognized regions as well. In contrast to the common definition of "North America", which encompasses the whole continent, the term "North America" is sometimes used to refer only to Mexico, Canada, the United States, and Greenland.[26][27][28][29][30]
The term Northern America refers to the northern-most countries and territories of North America: the United States, Bermuda, St. Pierre and Miquelon, Canada and Greenland.[31][32] Although the term does not refer to a unified region,[33] Middle America—not to be confused with the Midwestern United States—groups the regions of Mexico, Central America, and the Caribbean.[34]
The largest countries of the continent, Canada and the United States, also contain well-defined and recognized regions. In the case of Canada these are (from east to west) Atlantic Canada, Central Canada, Canadian Prairies, the British Columbia Coast, and Northern Canada. These regions also contain many subregions. In the case of the United States – and in accordance with the US Census Bureau definitions – these regions are: New England, Mid-Atlantic, South Atlantic States, East North Central States, West North Central States, East South Central States, West South Central States, Mountain States, and Pacific States. Regions shared between both nations include the Great Lakes Region. Megalopolises have formed between both nations in the case of the Pacific Northwest and the Great Lakes Megaregion.
Laurentia is an ancient craton which forms the geologic core of North America; it formed between 1.5 and 1.0 billion years ago during the Proterozoic eon.[44] The Canadian Shield is the largest exposure of this craton. From the Late Paleozoic to Early Mesozoic eras, North America was joined with the other modern-day continents as part of the supercontinent Pangaea, with Eurasia to its east. One of the results of the formation of Pangaea was the Appalachian Mountains, which formed some 480 million years ago, making it among the oldest mountain ranges in the world. When Pangaea began to rift around 200 million years ago, North America became part of Laurasia, before it separated from Eurasia as its own continent during the mid-Cretaceous period.[45] The Rockies and other western mountain ranges began forming around this time from a period of mountain building called the Laramide orogeny, between 80 and 55 million years ago. The formation of the Isthmus of Panama that connected the continent to South America arguably occurred approximately 12 to 15 million years ago,[46] and the Great Lakes (as well as many other northern freshwater lakes and rivers) were carved by receding glaciers about 10,000 years ago.
North America is the source of much of what humanity knows about geologic time periods.[47] The geographic area that would later become the United States has been the source of more varieties of dinosaurs than any other modern country.[47] According to paleontologist Peter Dodson, this is primarily due to stratigraphy, climate and geography, human resources, and history.[47] Much of the Mesozoic Era is represented by exposed outcrops in the many arid regions of the continent.[47] The most significant Late Jurassic dinosaur-bearing fossil deposit in North America is the Morrison Formation of the western United States.[48]
The indigenous peoples of the Americas have many creation myths by which they assert that they have been present on the land since its creation,[49] but there is no evidence that humans evolved there.[50] The specifics of the initial settlement of the Americas by ancient Asians are subject to ongoing research and discussion.[51] The traditional theory has been that hunters entered the Beringia land bridge between eastern Siberia and present-day Alaska from 27,000 to 14,000 years ago.[52][53][h] A growing viewpoint is that the first American inhabitants sailed from Beringia some 13,000 years ago,[55] with widespread habitation of the Americas during the end of the Last Glacial Period, in what is known as the Late Glacial Maximum, around 12,500 years ago.[56] The oldest petroglyphs in North America date from 15,000 to 10,000 years before present.[57][i] Genetic research and anthropology indicate additional waves of migration from Asia via the Bering Strait during the Early-Middle Holocene.[59][60][61]
Before contact with Europeans, the natives of North America were divided into many different polities, from small bands of a few families to large empires. They lived in several "culture areas", which roughly correspond to geographic and biological zones and give a good indication of the main way of life of the people who lived there (e.g., the bison hunters of the Great Plains, or the farmers of Mesoamerica). Native groups can also be classified by their language family (e.g., Athapascan or Uto-Aztecan). Peoples with similar languages did not always share the same material culture, nor were they always allies. Anthropologists think that the Inuit people of the high Arctic came to North America much later than other native groups, as evidenced by the disappearance of Dorset culture artifacts from the archaeological record, and their replacement by the Thule people.
During the thousands of years of native habitation on the continent, cultures changed and shifted. One of the oldest yet discovered is the Clovis culture (c. 9550–9050 BCE) in modern New Mexico.[58] Later groups include the Mississippian culture and related Mound building cultures, found in the Mississippi river valley and the Pueblo culture of what is now the Four Corners. The more southern cultural groups of North America were responsible for the domestication of many common crops now used around the world, such as tomatoes, squash, and maize. As a result of the development of agriculture in the south, many other cultural advances were made there. The Mayans developed a writing system, built huge pyramids and temples, had a complex calendar, and developed the concept of zero around 400 CE.[62]
The first recorded European references to North America are in Norse sagas where it is referred to as Vinland.[63] The earliest verifiable instance of pre-Columbian trans-oceanic contact by any European culture with the North America mainland has been dated to around 1000 CE.[64] The site, situated at the northernmost extent of the island named Newfoundland, has provided unmistakable evidence of Norse settlement.[65] Norse explorer Leif Erikson (c. 970–1020 CE) is thought to have visited the area.[j] Erikson was the first European to make landfall on the continent (excluding Greenland).[67][68]
The Mayan culture was still present in southern Mexico and Guatemala when the Spanish conquistadors arrived, but political dominance in the area had shifted to the Aztec Empire, whose capital city Tenochtitlan was located further north in the Valley of Mexico. The Aztecs were conquered in 1521 by Hernán Cortés.[69]
During the Age of Discovery, Europeans explored and staked claims to various parts of North America. Upon their arrival in the "New World", the Native American population declined substantially, because of violent conflicts with the invaders and the introduction of European diseases to which the Native Americans lacked immunity.[70] Native culture changed drastically and their affiliation with political and cultural groups also changed. Several linguistic groups died out, and others changed quite quickly. The names and cultures that Europeans recorded were not necessarily the same as the names they had used a few generations before, or the ones in use today.
Britain, Spain, and France took over extensive territories in North America. In the late 18th and early 19th century, independence movements sprang up across the continent, leading to the founding of the modern countries in the area. The 13 British Colonies on the North Atlantic coast declared independence in 1776, becoming the United States of America. Canada was formed from the unification of northern territories controlled by Britain and France. New Spain, a territory that stretched from the modern-day southern US to Central America, declared independence in 1810, becoming the First Mexican Empire. In 1823 the former Captaincy General of Guatemala, then part of the Mexican Empire, became the first independent state in Central America, officially changing its name to the United Provinces of Central America.
Over three decades of work on the Panama Canal led to the connection of Atlantic and Pacific waters in 1913, physically making North America a separate continent.
North America occupies the northern portion of the landmass generally referred to as the New World, the Western Hemisphere, the Americas, or simply America (which, less commonly, is considered by some as a single continent[71][72][73] with North America a subcontinent).[74] North America's only land connection to South America is at the Isthmus of Darién/Isthmus of Panama. The continent is delimited on the southeast by most geographers at the Darién watershed along the Colombia-Panama border, placing almost all of Panama within North America.[75][76][77] Alternatively, some geologists physiographically locate its southern limit at the Isthmus of Tehuantepec, Mexico, with Central America extending southeastward to South America from this point.[78] The Caribbean islands, or West Indies, are considered part of North America.[5] The continental coastline is long and irregular. The Gulf of Mexico is the largest body of water indenting the continent, followed by Hudson Bay. Others include the Gulf of Saint Lawrence and the Gulf of California.
Before the Central American isthmus formed, the region had been underwater. The islands of the West Indies delineate a submerged former land bridge, which had connected North and South America via what are now Florida and Venezuela.
There are numerous islands off the continent's coasts; principally, the Arctic Archipelago, the Bahamas, Turks & Caicos, the Greater and Lesser Antilles, the Aleutian Islands (some of which are in the Eastern Hemisphere proper), the Alexander Archipelago, the many thousand islands of the British Columbia Coast, and Newfoundland. Greenland, a self-governing Danish island, and the world's largest, is on the same tectonic plate (the North American Plate) and is part of North America geographically. In a geologic sense, Bermuda is not part of the Americas, but an oceanic island which was formed on the fissure of the Mid-Atlantic Ridge over 100 million years ago. The nearest landmass to it is Cape Hatteras, North Carolina. However, Bermuda is often thought of as part of North America, especially given its historical, political and cultural ties to Virginia and other parts of the continent.
The vast majority of North America is on the North American Plate. Parts of western Mexico, including Baja California, and of California, including the cities of San Diego, Los Angeles, and Santa Cruz, lie on the eastern edge of the Pacific Plate, with the two plates meeting along the San Andreas fault. The southernmost portion of the continent and much of the West Indies lie on the Caribbean Plate, whereas the Juan de Fuca and Cocos plates border the North American Plate on its western frontier.
The continent can be divided into four great regions (each of which contains many subregions): the Great Plains stretching from the Gulf of Mexico to the Canadian Arctic; the geologically young, mountainous west, including the Rocky Mountains, the Great Basin, California and Alaska; the raised but relatively flat plateau of the Canadian Shield in the northeast; and the varied eastern region, which includes the Appalachian Mountains, the coastal plain along the Atlantic seaboard, and the Florida peninsula. Mexico, with its long plateaus and cordilleras, falls largely in the western region, although the eastern coastal plain does extend south along the Gulf.
The western mountains are split in the middle into the main range of the Rockies and the coast ranges in California, Oregon, Washington, and British Columbia, with the Great Basin—a lower area containing smaller ranges and low-lying deserts—in between. The highest peak is Denali in Alaska.
The United States Geological Survey (USGS) states that the geographic center of North America is "6 miles [10 km] west of Balta, Pierce County, North Dakota" at about 48°10′N 100°10′W, about 24 kilometres (15 mi) from Rugby, North Dakota. The USGS further states that "No marked or monumented point has been established by any government agency as the geographic center of either the 50 States, the conterminous United States, or the North American continent." Nonetheless, there is a 4.6-metre (15 ft) field stone obelisk in Rugby claiming to mark the center. The North American continental pole of inaccessibility is located 1,650 km (1,030 mi) from the nearest coastline, between Allen and Kyle, South Dakota, at 43°22′N 101°58′W.[79]
Geologically, Canada is one of the oldest regions in the world, with more than half of the region consisting of Precambrian rocks that have been above sea level since the beginning of the Palaeozoic era.[80] Canada's mineral resources are diverse and extensive.[80] Across the Canadian Shield and in the north there are large iron, nickel, zinc, copper, gold, lead, molybdenum, and uranium reserves. Large diamond concentrations have been recently developed in the Arctic,[81] making Canada one of the world's largest producers. Throughout the Shield there are many mining towns extracting these minerals. The largest, and best known, is Sudbury, Ontario. Sudbury is an exception to the normal process of forming minerals in the Shield since there is significant evidence that the Sudbury Basin is an ancient meteorite impact crater. The nearby but lesser-known Temagami Magnetic Anomaly has striking similarities to the Sudbury Basin. Its magnetic anomalies are very similar to the Sudbury Basin, and so it could be a second metal-rich impact crater.[82] The Shield is also covered by vast boreal forests that support an important logging industry.
The lower 48 US states can be divided into roughly five physiographic provinces:
The geology of Alaska is typical of that of the cordillera, while the major islands of Hawaii consist of Neogene volcanics erupted over a hot spot.
Central America is geologically active with volcanic eruptions and earthquakes occurring from time to time. In 1976 Guatemala was hit by a major earthquake, killing 23,000 people; Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972, the last one killing about 5,000 people; three earthquakes devastated El Salvador, one in 1986 and two in 2001; one earthquake devastated northern and central Costa Rica in 2009, killing at least 34 people; in Honduras a powerful earthquake killed seven people in 2009.
Volcanic eruptions are common in the region. In 1968 the Arenal Volcano, in Costa Rica, erupted and killed 87 people. Fertile soils from weathered volcanic lavas have made it possible to sustain dense populations in the agriculturally productive highland areas.
Central America has many mountain ranges; the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia, and the Cordillera de Talamanca. Between the mountain ranges lie fertile valleys that are suitable for settlement; in fact, most of the population of Honduras, Costa Rica, and Guatemala lives in valleys. Valleys are also suitable for the production of coffee, beans, and other crops.
North America is a very large continent, extending beyond both the Arctic Circle and the Tropic of Cancer. Greenland, along with the Canadian Shield, is tundra with average temperatures ranging from 10 to 20 °C (50 to 68 °F), though central Greenland is covered by a very large ice sheet. This tundra extends throughout Canada, ending near the Rocky Mountains (but still including Alaska) and at the edge of the Canadian Shield, near the Great Lakes.
The climate west of the Cascades is described as temperate, with average annual precipitation of 20 inches (510 mm).[83]
The climate of coastal California is described as Mediterranean, with average temperatures in cities like San Francisco ranging from 57 to 70 °F (14 to 21 °C) over the course of the year.[84]
Stretching from the East Coast to eastern North Dakota, and south to Kansas, is the humid continental climate, featuring intense seasons and a large amount of annual precipitation, with places like New York City averaging 50 inches (1,300 mm).[85]
Starting at the southern border of the continental-humid climate and stretching to the Gulf of Mexico (whilst encompassing the eastern half of Texas) is the subtropical climate. This area has the wettest cities in the contiguous U.S. with annual precipitation reaching 67 inches (1,700 mm) in Mobile, Alabama.[86]
Stretching from the borders of the humid continental and subtropical climates, west to the Cascades and the Sierra Nevada, south to the southern tip of Durango, and north to the border with the tundra climate, the steppe/desert climate is the driest climate in the U.S.[87] Highland climates cut from north to south of the continent, where subtropical or temperate climates occur just below the tropics, as in central Mexico and Guatemala. Tropical climates appear in the island regions and in the subcontinent's bottleneck. These are usually of the savannah type, with rains and high temperatures constant throughout the year, and are found in countries and states bathed by the Caribbean Sea or lying south of the Gulf of Mexico and along the Pacific Ocean.[88]
Notable North American fauna include the bison, black bear, prairie dog, turkey, pronghorn, raccoon, coyote and monarch butterfly.
Notable plants that were domesticated in North America include tobacco, maize, squash, tomato, sunflower, blueberry, avocado, cotton, chile pepper and vanilla.
Economically, Canada and the United States are the wealthiest and most developed nations on the continent, followed by Mexico, a newly industrialized country.[89] The countries of Central America and the Caribbean are at various levels of economic and human development. For example, small Caribbean island-nations, such as Barbados, Trinidad and Tobago, and Antigua and Barbuda, have a higher GDP (PPP) per capita than Mexico due to their smaller populations. Panama and Costa Rica have a significantly higher Human Development Index and GDP than the rest of the Central American nations.[90] Additionally, despite Greenland's vast oil and mineral resources, most of them remain untapped, and the island is economically dependent on fishing, tourism, and subsidies from Denmark. Nevertheless, the island is highly developed.[91]
Demographically, North America is ethnically diverse. Its three main groups are Caucasians, Mestizos and Blacks.[citation needed] There is a significant minority of Indigenous Americans and Asians among other less numerous groups.[citation needed]
The dominant languages in North America are English, Spanish, and French. Danish is prevalent in Greenland alongside Greenlandic, and Dutch is spoken alongside local languages in the Dutch Caribbean. The term Anglo-America is used to refer to the anglophone countries of the Americas: namely Canada (where English and French are co-official) and the United States, but also sometimes Belize and parts of the tropics, especially the Commonwealth Caribbean. Latin America refers to the other areas of the Americas (generally south of the United States) where the Romance languages Spanish and Portuguese predominate (French-speaking countries are not usually included): the other republics of Central America (but not always Belize), part of the Caribbean (not the Dutch-, English-, or French-speaking areas), Mexico, and most of South America (except Guyana, Suriname, French Guiana (France), and the Falkland Islands (UK)).
The French language has historically played a significant role in North America and now retains a distinctive presence in some regions. Canada is officially bilingual. French is the official language of the Province of Quebec, where 95% of the people speak it as either their first or second language, and it is co-official with English in the Province of New Brunswick. Other French-speaking locales include the Province of Ontario (the official language is English, but there are an estimated 600,000 Franco-Ontarians), the Province of Manitoba (where French is de jure co-official with English), the French West Indies and Saint-Pierre et Miquelon, as well as the US state of Louisiana, where French is also an official language. Haiti is included in this group based on historical association, but Haitians speak both Creole and French. Similarly, French and French Antillean Creole are spoken in Saint Lucia and the Commonwealth of Dominica alongside English.
Christianity is the largest religion in the United States, Canada and Mexico. According to a 2012 Pew Research Center survey, 77% of the population considered themselves Christians.[92] Christianity also is the predominant religion in the 23 dependent territories in North America.[93] The United States has the largest Christian population in the world, with nearly 247 million Christians (70%), although other countries have higher percentages of Christians among their populations.[94] Mexico has the world's second largest number of Catholics, surpassed only by Brazil.[95] A 2015 study estimates about 493,000 Christian believers from a Muslim background in North America, most of them belonging to some form of Protestantism.[96]
According to the same study, the religiously unaffiliated (including agnostics and atheists) make up about 17% of the populations of Canada and the United States.[97] Those reporting no religion make up about 24% of the United States population and 24% of Canada's total population.[98]
Canada, the United States and Mexico host communities of Jews (6 million or about 1.8%),[99] Buddhists (3.8 million or 1.1%)[100] and Muslims (3.4 million or 1.0%).[101] The largest numbers of Jewish individuals are found in the United States (5.4 million),[102] Canada (375,000)[103] and Mexico (67,476).[104] The United States hosts the largest Muslim population in North America with 2.7 million or 0.9%,[105][106] while Canada hosts about one million Muslims, or 3.2% of its population.[107] Mexico, by contrast, had some 3,700 Muslims.[108] In 2012, U-T San Diego estimated U.S. practitioners of Buddhism at 1.2 million people, of whom 40% were living in Southern California.[109]
The predominant religion in Central America is Christianity (96%).[110] Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism was the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and in individuals identifying themselves as having no religion. Christianity is also the predominant religion in the Caribbean (85%).[110] Other religious groups in the region are Hinduism, Islam, Rastafari (in Jamaica), and Afro-American religions such as Santería and Vodou.
The most populous country in North America is the United States with 329.7 million persons. The second largest country is Mexico with a population of 112.3 million.[111] Canada is the third most populous country with 37.0 million.[112] The majority of Caribbean island-nations have national populations under a million, though Cuba, Dominican Republic, Haiti, Puerto Rico (a territory of the United States), Jamaica, and Trinidad and Tobago each have populations higher than a million.[113][114][115][116][117] Greenland has a small population of 55,984 for its massive size (2,166,000 km² or 836,300 mi²), and therefore, it has the world's lowest population density at 0.026 pop./km² (0.067 pop./mi²).[118]
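The density figure quoted for Greenland follows directly from the population and area given above; a back-of-the-envelope check:

```python
# Back-of-the-envelope check of Greenland's population density, using the
# population and area figures quoted in the text.
population = 55_984
area_km2 = 2_166_000

density_km2 = population / area_km2
density_mi2 = density_km2 * 2.58999  # 1 square mile = 2.58999 km^2

print(f"{density_km2:.3f} pop./km^2")  # ~0.026
print(f"{density_mi2:.3f} pop./mi^2")  # ~0.067
```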
While the United States, Canada, and Mexico maintain the largest populations, large city populations are not restricted to those nations; there are also large cities in the Caribbean. The largest cities in North America, by far, are Mexico City and New York. These are the only cities on the continent to exceed eight million, and two of the three such cities in the Americas. Next in size are Los Angeles, Toronto,[119] Chicago, Havana, Santo Domingo, and Montreal. Cities in the sun belt regions of the United States, such as those in Southern California and in Houston, Phoenix, Miami, Atlanta, and Las Vegas, are experiencing rapid growth. The causes include warm temperatures, the retirement of Baby Boomers, large industry, and the influx of immigrants. Cities near the United States border, particularly in Mexico, are also experiencing large amounts of growth. Most notable is Tijuana, a city bordering San Diego that receives immigrants from all over Latin America and parts of Europe and Asia. Yet as cities grow in these warmer regions of North America, they are increasingly forced to deal with the major issue of water shortages.[120]
Eight of the top ten metropolitan areas are located in the United States. These metropolitan areas all have populations above 5.5 million and include the New York City metropolitan area, the Los Angeles metropolitan area, the Chicago metropolitan area, and the Dallas–Fort Worth metroplex.[121] Whilst the majority of the largest metropolitan areas are within the United States, Mexico is host to the largest metropolitan area by population in North America: Greater Mexico City.[122] Canada also breaks into the top ten largest metropolitan areas with the Toronto metropolitan area, home to six million people.[123] The proximity of cities to each other on the Canada–United States and Mexico–United States borders has led to the rise of international metropolitan areas. These urban agglomerations are at their largest and most productive in Detroit–Windsor and San Diego–Tijuana, with large commercial, economic, and cultural activity. The metropolitan areas are responsible for millions of dollars of trade dependent on international freight. In Detroit–Windsor, the Border Transportation Partnership study in 2004 concluded that US$13 billion in trade was dependent on the Detroit–Windsor international border crossing, while in San Diego–Tijuana freight at the Otay Mesa Port of Entry was valued at US$20 billion.[124][125]
North America has also witnessed the growth of megapolitan areas. Eleven megaregions have been identified in the United States, some of which transcend international borders and comprise Canadian and Mexican metropolitan regions. These are the Arizona Sun Corridor, Cascadia, Florida, Front Range, Great Lakes Megaregion, Gulf Coast Megaregion, Northeast, Northern California, Piedmont Atlantic, Southern California, and the Texas Triangle.[126] Canada and Mexico are also home to megaregions. These include the Quebec City – Windsor Corridor and the Golden Horseshoe – both of which are considered part of the Great Lakes Megaregion – and the megalopolis of Central Mexico. Traditionally the largest megaregion has been considered the Boston–Washington, DC Corridor, or the Northeast, as the region is one massive contiguous area. Yet megaregion criteria have allowed the Great Lakes Megalopolis to maintain status as the most populated region, being home to 53,768,125 people in 2000.[127]
The top ten largest North American metropolitan areas by population as of 2013, based on national census numbers from the United States and census estimates from Canada and Mexico.
†2011 Census figures.
North America's GDP per capita was evaluated in October 2016 by the International Monetary Fund (IMF) to be $41,830, making it the richest continent in the world,[129] followed by Oceania.[130]
Canada, Mexico, and the United States have significant and multifaceted economic systems. The United States has the largest economy of all three countries and in the world.[130] In 2016, the U.S. had an estimated per capita gross domestic product (PPP) of $57,466 according to the World Bank, and is the most technologically developed economy of the three.[131] The United States' services sector comprises 77% of the country's GDP (estimated in 2010), industry comprises 22% and agriculture comprises 1.2%.[130] The U.S. economy is also the fastest growing economy in North America and the Americas as a whole,[132][129] with the highest GDP per capita in the Americas as well.[129]
Canada shows significant growth in the sectors of services, mining and manufacturing.[133] Canada's per capita GDP (PPP) was estimated at $44,656 and it had the 11th largest GDP (nominal) in 2014.[133] Canada's services sector comprises 78% of the country's GDP (estimated in 2010), industry comprises 20% and agriculture comprises 2%.[133] Mexico has a per capita GDP (PPP) of $16,111 and as of 2014 is the 15th largest GDP (nominal) in the world.[134] Being a newly industrialized country,[89] Mexico maintains both modern and outdated industrial and agricultural facilities and operations.[135] Its main sources of income are oil, industrial exports, manufactured goods, electronics, heavy industry, automobiles, construction, food, banking and financial services.[136]
The North American economy is well defined and structured in three main economic areas.[137] These areas are the North American Free Trade Agreement (NAFTA), Caribbean Community and Common Market (CARICOM), and the Central American Common Market (CACM).[137] Of these trade blocs, the United States takes part in two. In addition to the larger trade blocs there is the Canada-Costa Rica Free Trade Agreement among numerous other free trade relations, often between the larger, more developed countries and Central American and Caribbean countries.
The North American Free Trade Agreement (NAFTA) forms one of the four largest trade blocs in the world.[138] Its implementation in 1994 was designed for economic homogenization, with hopes of eliminating barriers to trade and foreign investment between Canada, the United States and Mexico.[139] While Canada and the United States already conducted the world's largest bilateral trade relationship – and to the present day still do – and Canada–United States trade relations already allowed trade without national taxes and tariffs,[140] NAFTA allowed Mexico to experience similar duty-free trade. The free trade agreement eliminated tariffs that had previously applied to United States–Mexico trade. Trade volume has steadily increased annually, and in 2010 surface trade between the three NAFTA nations grew 24.3% over the previous year, reaching a record US$791 billion.[141] The NAFTA trade bloc GDP (PPP) is the world's largest, at US$17.617 trillion.[142] This is in part attributed to the fact that the economy of the United States is the world's largest national economy; the country had a nominal GDP of approximately $14.7 trillion in 2010.[143] The countries of NAFTA are also some of each other's largest trade partners. The United States is the largest trade partner of Canada and Mexico,[144] while Canada and Mexico are each other's third-largest trade partners.[145][146]
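As a quick arithmetic aside, the 24.3% year-over-year figure implies the prior year's surface trade volume; the short sketch below backs it out from the numbers quoted above (an illustration, not an official statistic).

```python
# Back out the implied 2009 surface trade volume from the 2010 figure and
# the 24.3% year-over-year increase quoted above.
trade_2010_billion_usd = 791.0
yoy_growth = 0.243

trade_2009_billion_usd = trade_2010_billion_usd / (1 + yoy_growth)
print(f"~US${trade_2009_billion_usd:.0f} billion")  # ~US$636 billion
```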
The Caribbean trade bloc – CARICOM – came into being in 1973, when its founding agreement was signed by 15 Caribbean nations. As of 2000, CARICOM trade volume was US$96 billion. CARICOM also allowed for the creation of a common passport for associated nations. In the past decade the trade bloc has focused largely on free trade agreements, and under the CARICOM Office of Trade Negotiations (OTN) free trade agreements have been signed into effect.
Integration of the Central American economies began with the signing of the Central American Common Market agreement in 1961; this was the first attempt to engage the nations of this area in stronger financial cooperation. Recent implementation of the Central American Free Trade Agreement (CAFTA) has left the future of the CACM unclear.[147] The Central American Free Trade Agreement was signed by five Central American countries, the Dominican Republic, and the United States. The focal point of CAFTA is to create a free trade area similar to NAFTA. In addition to the United States, Canada also has relations with Central American trade blocs. Currently under proposal, the Canada – Central American Free Trade Agreement (CA4) would operate in much the same way with Canada as CAFTA does with the United States.
These nations also take part in intercontinental trade blocs. Mexico takes part in the G3 Free Trade Agreement with Colombia and Venezuela and has a trade agreement with the EU. The United States has proposed and maintained trade agreements under the Transatlantic Free Trade Area between itself and the European Union; the US–Middle East Free Trade Area between numerous Middle Eastern nations and itself; and the Trans-Pacific Strategic Economic Partnership between Southeast Asian nations, Australia, and New Zealand.
The Pan-American Highway route in the Americas is the portion of a network of roads nearly 48,000 km (30,000 mi) in length which travels through the mainland nations. No definitive length of the Pan-American Highway exists because the US and Canadian governments have never officially defined any specific routes as being part of the Pan-American Highway, and Mexico officially has many branches connecting to the US border. However, the total length of the portion from Mexico to the northern extremity of the highway is roughly 26,000 km (16,000 mi).
The First Transcontinental Railroad in the United States was built in the 1860s, linking the railroad network of the eastern US with California on the Pacific coast. Finished on 10 May 1869 at the famous golden spike event at Promontory Summit, Utah, it created a nationwide mechanized transportation network that revolutionized the population and economy of the American West, catalyzing the transition from the wagon trains of previous decades to a modern transportation system.[148] Although an accomplishment, it achieved the status of first transcontinental railroad by connecting myriad eastern US railroads to the Pacific and was not the largest single railroad system in the world. The Canadian Grand Trunk Railway (GTR) had, by 1867, already accumulated more than 2,055 km (1,277 mi) of track by connecting Ontario with the Canadian Atlantic provinces west as far as Port Huron, Michigan, through Sarnia, Ontario.
A shared telephone system known as the North American Numbering Plan (NANP) is an integrated telephone numbering plan of 24 countries and territories: the United States and its territories, Canada, Bermuda, and 17 Caribbean nations.
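Numbers under the NANP share a fixed ten-digit structure: a three-digit area code and seven-digit local number, conventionally written NXX-NXX-XXXX, where N is 2–9 and X is 0–9. The sketch below is a simplified, illustrative check of that shape only; real NANP assignment rules include further reservations (such as N11 service codes) that are not modeled here.

```python
# Simplified, illustrative check of the ten-digit NANP shape NXX-NXX-XXXX
# (N = 2-9, X = 0-9). Real NANP rules are more involved.
import re

NANP_PATTERN = re.compile(r"^\(?([2-9]\d{2})\)?[-. ]?([2-9]\d{2})[-. ]?(\d{4})$")

def looks_like_nanp(number: str) -> bool:
    return NANP_PATTERN.match(number) is not None

print(looks_like_nanp("212-555-0123"))  # True
print(looks_like_nanp("112-555-0123"))  # False: area codes cannot begin with 0 or 1
```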
Canada and the United States are both former British colonies, and there is frequent cultural interplay between the United States and English-speaking Canada.
Greenland has experienced many immigration waves from Northern Canada, e.g. the Thule People. Therefore, Greenland shares some cultural ties with the indigenous people of Canada. Greenland is also considered Nordic and has strong Danish ties due to centuries of colonization by Denmark.[149]
Spanish-speaking North America shares a common past as former Spanish colonies. In Mexico and the Central American countries where civilizations like the Maya developed, indigenous people preserve traditions across modern boundaries. Central American and Spanish-speaking Caribbean nations have historically had more in common due to geographical proximity.
Northern Mexico, particularly in the cities of Monterrey, Tijuana, Ciudad Juárez, and Mexicali, is strongly influenced by the culture and way of life of the United States. Of these cities, Monterrey has been regarded as the most Americanized city in Mexico.[150] Immigration to the United States and Canada remains a significant attribute of many nations close to the southern border of the US. The Anglophone Caribbean states have witnessed the decline of the British Empire and its influence on the region, and its replacement by the economic influence of Northern America. This is partly due to the relatively small populations of the English-speaking Caribbean countries, and also because many of them now have more people living abroad than remain at home. Northern Mexico, the Western United States and Alberta, Canada share a cowboy culture.
Canada, Mexico and the US submitted a joint bid to host the 2026 FIFA World Cup.
The following table shows the most prominent sports leagues in North America, in order of average revenue.[151][152]
en/1950.html.txt
A slum is usually a highly populated urban residential area consisting mostly of closely packed, decrepit housing units in a situation of deteriorated or incomplete infrastructure, inhabited primarily by impoverished persons.[1] Although slums, especially in America, are usually located in urban areas, in other countries they can be located in suburban areas where housing quality is low and living conditions are poor.[2] While slums differ in size and other characteristics, most lack reliable sanitation services, supply of clean water, reliable electricity, law enforcement, and other basic services. Slum residences vary from shanty houses to professionally built dwellings which, because of poor-quality construction and/or lack of basic maintenance, have deteriorated.[3]
Due to increasing urbanization of the general populace, slums became common in the 18th to late 20th centuries in the United States and Europe.[4][5] Slums are still predominantly found in urban regions of developing countries, but are also still found in developed economies.[6][7]
According to UN-Habitat, around 33% of the urban population in the developing world in 2012, or about 863 million people, lived in slums.[8] The proportion of urban population living in slums in 2012 was highest in Sub-Saharan Africa (62%), followed by Southern Asia (35%), Southeastern Asia (31%), Eastern Asia (28%), Western Asia (25%), Oceania (24%), Latin America (24%), the Caribbean (24%), and North Africa (13%).[8]:127 Among individual countries, the proportion of urban residents living in slum areas in 2009 was highest in the Central African Republic (95.9%). Between 1990 and 2010, the percentage of people living in slums dropped, even as the total urban population increased.[8] The world's largest slum city is found in the Neza-Chalco-Ixtapaluca area, located in the State of Mexico.[9][10][11]
Slums form and grow in different parts of the world for many different reasons. Causes include rapid rural-to-urban migration, economic stagnation and depression, high unemployment, poverty, the informal economy, forced or manipulated ghettoization, poor planning, politics, natural disasters, and social conflicts.[1][12][13] Strategies that have been tried to reduce and transform slums in different countries, with varying degrees of success, include a combination of slum removal, slum relocation, slum upgrading, urban planning with citywide infrastructure development, and public housing.[14][15]
It is thought[16] that slum is a British slang word from the East End of London meaning "room", which evolved to "back slum" around 1845 meaning 'back alley, street of poor people.'
Numerous other non-English terms are often used interchangeably with slum: shanty town, favela, rookery, gecekondu, skid row, barrio, ghetto, bidonville, taudis, bandas de miseria, barrio marginal, morro, loteamento, barraca, musseque, tugurio, solares, mudun safi, kawasan kumuh, karyan, medina achouaia, brarek, ishash, galoos, tanake, baladi, trushchoby, chalis, katras, zopadpattis, basti, estero, looban, dagatan, umjondolo, watta, udukku, and chereka bete.[17]
The word slum has negative connotations, and using this label for an area can be seen as an attempt to delegitimize that land use when hoping to repurpose it.[18]
Before the 19th century, rich and poor people lived in the same districts, with the wealthy living on the high streets and the poor in the service streets behind them. But in the 19th century, wealthy and upper-middle-class people began to move out of the central parts of rapidly growing cities, leaving poorer residents behind.[19]
Slums were common in the United States and Europe before the early 20th century. London's East End is generally considered the locale where the term originated in the 19th century, where massive and rapid urbanization of the dockside and industrial areas led to intensive overcrowding in a warren of post-medieval streetscape. The suffering of the poor was described in popular fiction by moralist authors such as Charles Dickens – most famously Oliver Twist (1837-9) and echoed the Christian Socialist values of the time, which soon found legal expression in the Public Health Act of 1848. As the slum clearance movement gathered pace, deprived areas such as Old Nichol were fictionalised to raise awareness in the middle classes in the form of moralist novels such as A Child of the Jago (1896) resulting in slum clearance and reconstruction programmes such as the Boundary Estate (1893-1900) and the creation of charitable trusts such as the Peabody Trust founded in 1862 and Joseph Rowntree Foundation (1904) which still operate to provide decent housing today.
Slums are often associated with Victorian Britain, particularly in industrial English towns, lowland Scottish towns and Dublin City in Ireland. Engels described these British neighborhoods as "cattle-sheds for human beings".[20] These were generally still inhabited until the 1940s, when the British government started slum clearance and built new council houses.[21] There are still examples left of slum housing in the UK, but many have been removed by government initiative, redesigned and replaced with better public housing.
In Europe, slums were common.[22][23] By the 1820s slum had become a common slang expression in England, meaning either various taverns and eating houses, "loose talk" or gypsy language, or a room with "low goings-on". In Life in London (1821), Pierce Egan used the word in the context of the "back slums" of Holy Lane or St Giles. A footnote defined slum to mean "low, unfrequented parts of the town". Charles Dickens used the word slum in a similar way in 1840, writing "I mean to take a great, London, back-slum kind of walk tonight". Slum began to be used to describe bad housing soon after and was used as an alternative expression for rookeries.[24] In 1850 the Catholic Cardinal Wiseman described the area known as Devil's Acre in Westminster, London as follows:
Close under the Abbey of Westminster there lie concealed labyrinths of lanes and courts, and alleys and slums, nests of ignorance, vice, depravity, and crime, as well as of squalor, wretchedness, and disease; whose atmosphere is typhus, whose ventilation is cholera; in which swarms a huge and almost countless population, nominally at least, Catholic; haunts of filth, which no sewage committee can reach – dark corners, which no lighting board can brighten.[25]
This passage was widely quoted in the national press,[26] leading to the popularization of the word slum to describe bad housing.[24][27]
In France, as in most industrialised European capitals, slums were widespread in Paris and other urban areas in the 19th century, and many continued through the first half of the 20th century. The first cholera epidemic of 1832 triggered a political debate, and Louis René Villermé's study[30] of various arrondissements of Paris demonstrated the differences and the connection between slums, poverty and poor health.[31] The Melun Law, first passed in 1849 and revised in 1851, followed by the establishment of the Paris Commission on Unhealthful Dwellings in 1852, began the social process of identifying the worst housing inside slums, but did not remove or replace slums. After World War II, the French population began a mass migration from rural to urban areas of France. This demographic and economic trend rapidly raised rents for existing housing and expanded slums. The French government passed laws to block increases in housing rents, which inadvertently made many housing projects unprofitable and increased slums. In 1950, France launched its Habitation à Loyer Modéré[32][33] initiative to finance and build public housing and remove slums, managed by techniciens – urban technocrats –[34] and financed by Livret A[35] – a tax-free savings account for the French public.
New York City is believed to have created the United States' first slum, named Five Points, in 1825, as it evolved into a large urban settlement.[5][36] Five Points was named for a lake called the Collect,[36][37] which, by the late 1700s, was surrounded by slaughterhouses and tanneries that emptied their waste directly into its waters. Trash piled up as well, and by the early 1800s the lake had been filled in and was dry. On this foundation was built Five Points, the United States' first slum. Five Points was occupied by successive waves of freed slaves and of Irish, then Italian, then Chinese immigrants. It housed the poor, rural people leaving farms for opportunity, and persecuted people from Europe pouring into New York City. Bars, bordellos, and squalid, lightless tenements lined its streets. Violence and crime were commonplace. Politicians and the social elite discussed it with derision. Slums like Five Points triggered discussions of affordable housing and slum removal. As of the start of the 21st century, the Five Points slum had been transformed into the Little Italy and Chinatown neighborhoods of New York City through that city's campaign of massive urban renewal.[4][36]
Five Points was not the only slum in America.[38][39] Jacob Riis, Walker Evans, Lewis Hine and others photographed many before World War II. Slums were found in every major urban region of the United States throughout most of the 20th century, long after the Great Depression. Most of these slums had been ignored by the cities and states which encompassed them until the 1960s' War on Poverty was undertaken by the Federal government of the United States.
A type of slum housing, sometimes called poorhouses, crowded the Boston Commons, and later the fringes of the city.[40]
Rio de Janeiro documented its first slum in the 1920 census. By the 1960s, slums housed over 33% of the population of Rio, 45% in Mexico City and Ankara, 65% in Algiers, 35% in Caracas, 25% in Lima and Santiago, and 15% in Singapore. By 1980, across various cities and towns of Latin America alone, there were about 25,000 slums.[45]
Slums sprout and continue for a combination of demographic, social, economic, and political reasons. Common causes include rapid rural-to-urban migration, poor planning, economic stagnation and depression, poverty, high unemployment, informal economy, colonialism and segregation, politics, natural disasters and social conflicts.
Rural–urban migration is one of the causes attributed to the formation and expansion of slums.[1] Since 1950, world population has increased at a far greater rate than the total amount of arable land, even as agriculture contributes a much smaller percentage of the total economy. For example, in India, agriculture accounted for 52% of GDP in 1954 and only 19% in 2004;[49] in Brazil, agriculture's share of GDP has likewise fallen to about one-fifth of its 1951 contribution.[50] Agriculture, meanwhile, has also become higher yielding, less disease prone, less physically harsh and more efficient with tractors and other equipment. The proportion of people working in agriculture has declined by 30% over the last 50 years, while the global population has increased by 250%.[1]
Many people move to urban areas primarily because cities promise more jobs, better schools for their children, and more diverse income opportunities than subsistence farming in rural areas.[51] For example, in 1995, 95.8% of migrants to Surabaya, Indonesia reported that jobs were their primary motivation for moving to the city.[52] However, some rural migrants may not find jobs immediately because of their lack of skills and increasingly competitive job markets, leaving them short of money.[53] Many cities, on the other hand, do not provide enough low-cost housing for the large number of rural–urban migrant workers. Those who cannot afford housing in cities eventually settle in slums, the only housing affordable to them.[54] Further, rural migrants, mainly lured by higher incomes, continue to flood into cities, and thus expand the existing urban slums.[53]
According to Ali and Toran, social networks might also explain rural–urban migration and people's ultimate settlement in slums. In addition to migrating for jobs, some people migrate to cities because of their connections with relatives or families. If that family support in urban areas is located in slums, those rural migrants tend to live with them in slums.[55]
The formation of slums is closely linked to urbanization.[56] In 2008, more than 50% of the world's population lived in urban areas. In China, for example, it is estimated that the population living in urban areas will increase by 10% within a decade according to its current rates of urbanization.[57] The UN-Habitat reports that 43% of urban population in developing countries and 78% of those in the least developed countries are slum dwellers.[7]
Some scholars suggest that urbanization creates slums because local governments are unable to manage urbanization, and migrant workers, without an affordable place to live, dwell in slums.[58] Rapid urbanization drives economic growth and causes people to seek working and investment opportunities in urban areas.[59][60] However, as evidenced by poor urban infrastructure and insufficient housing, local governments are sometimes unable to manage this transition.[61][62] This incapacity can be attributed to insufficient funds and to inexperience in handling and organizing the problems brought by migration and urbanization.[60] In some cases, local governments ignore the flux of immigrants during the process of urbanization.[59] Such examples can be found in many African countries. In the early 1950s, many African governments believed that slums would finally disappear with economic growth in urban areas. They neglected rapidly spreading slums due to increased rural–urban migration caused by urbanization.[63] Some governments, moreover, mapped the land that slums occupied as undeveloped land.[64]
Another type of urbanization does not involve economic growth but economic stagnation or low growth, mainly contributing to slum growth in Sub-Saharan Africa and parts of Asia. This type of urbanization involves a high rate of unemployment, insufficient financial resources and inconsistent urban planning policy.[65] In these areas, an increase of 1% in urban population will result in an increase of 1.84% in slum prevalence.[66]
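The 1%-to-1.84% relationship quoted above is an elasticity; below is a minimal sketch of that relationship, assuming a constant-elasticity form purely for illustration.

```python
# Sketch of the elasticity cited above: slum prevalence responding to urban
# population growth with an elasticity of about 1.84. Constant elasticity is
# an assumed functional form for illustration only.
def slum_prevalence_change(urban_pop_change_pct: float, elasticity: float = 1.84) -> float:
    """Approximate % change in slum prevalence for a given % change in urban population."""
    return elasticity * urban_pop_change_pct

print(slum_prevalence_change(1.0))  # 1.84% increase in slum prevalence
print(slum_prevalence_change(3.0))  # 5.52% increase
```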
Urbanization might also force some people to live in slums when it influences land use by transforming agricultural land into urban areas and increases land value. During the process of urbanization, some agricultural land is used for additional urban activities. More investment will come into these areas, which increases the land value.[67] Before some land is completely urbanized, there is a period when the land can be used for neither urban activities nor agriculture. The income from the land will decline, which decreases the people's incomes in that area. The gap between people's low income and the high land price forces some people to look for and construct cheap informal settlements, which are known as slums in urban areas.[62] The transformation of agricultural land also provides surplus labor, as peasants have to seek jobs in urban areas as rural-urban migrant workers.[58]
Many slums are part of economies of agglomeration in which there is an emergence of economies of scale at the firm level, transport costs and the mobility of the industrial labour force.[68] The increase in returns of scale will mean that the production of each good will take place in a single location.[68] And even though an agglomerated economy benefits these cities by bringing in specialization and multiple competing suppliers, the conditions of slums continue to lag behind in terms of quality and adequate housing. Alonso-Villar argues that the existence of transport costs implies that the best locations for a firm will be those with easy access to markets, and the best locations for workers, those with easy access to goods. The concentration is the result of a self-reinforcing process of agglomeration.[68] Concentration is a common trend of the distribution of population. Urban growth is dramatically intense in the less developed countries, where a large number of huge cities have started to appear; which means high poverty rates, crime, pollution and congestion.[68]
Lack of affordable low cost housing and poor planning encourages the supply side of slums.[69] The Millennium Development Goals proposes that member nations should make a "significant improvement in the lives of at least 100 million slum dwellers" by 2020.[70] If member nations succeed in achieving this goal, 90% of the world total slum dwellers may remain in the poorly housed settlements by 2020.[71] Choguill claims that the large number of slum dwellers indicates a deficiency of practical housing policy.[71] Whenever there is a significant gap in growing demand for housing and insufficient supply of affordable housing, this gap is typically met in part by slums.[69] The Economist summarizes this as, "good housing is obviously better than a slum, but a slum is better than none".[72]
Insufficient financial resources[73] and lack of coordination in government bureaucracy[66] are two main causes of poor housing planning. Financial deficiency in some governments may explain the lack of affordable public housing for the poor, since any improvement of tenancy in slums and any expansion of public housing programs involve a great increase in government expenditure.[73] The problem can also lie in the failure of coordination among the different departments in charge of economic development, urban planning, and land allocation. In some cities, governments assume that the housing market will adjust the supply of housing to a change in demand. However, with little economic incentive, the housing market is more likely to develop middle-income housing rather than low-cost housing. The urban poor gradually become marginalized in the housing market, where few houses are built to sell to them.[66][74]
Some of the slums in today's world are a product of urbanization brought by colonialism. For instance, the Europeans arrived in Kenya in the nineteenth century and created urban centers such as Nairobi mainly to serve their financial interests. They regarded the Africans as temporary migrants and needed them only for supply of labor. The housing policy aiming to accommodate these workers was not well enforced and the government built settlements in the form of single-occupancy bedspaces. Due to the cost of time and money in their movement back and forth between rural and urban areas, their families gradually migrated to the urban centre. As they could not afford to buy houses, slums were thus formed.[78]
Others were created because of segregation imposed by the colonialists. For example, Dharavi slum of Mumbai – now one of the largest slums in India – used to be a village referred to as Koliwadas, and Mumbai used to be referred to as Bombay. In 1887, the British colonial government expelled all tanneries, other noxious industry and poor natives who worked in the peninsular part of the city and colonial housing area to what was then the northern fringe of the city – a settlement now called Dharavi. This settlement attracted no colonial supervision or investment in terms of road infrastructure, sanitation, public services or housing. The poor moved into Dharavi, found work as servants in colonial offices and homes and in the foreign-owned tanneries and other polluting industries near Dharavi. To live, the poor built shanty towns within easy commute to work. By 1947, the year India became an independent nation of the commonwealth, Dharavi had blossomed into Bombay's largest slum.[75]
Similarly, some of the slums of Lagos, Nigeria sprouted because of neglect and policies of the colonial era.[79] During the apartheid era of South Africa, under the pretext of sanitation and plague epidemic prevention, racial and ethnic segregation was pursued and people of color were moved to the fringes of the city, policies that created Soweto and other slums – officially called townships.[80] Large slums started at the fringes of segregation-conscious colonial city centers of Latin America.[81] Marcuse suggests that ghettoes in the United States, and elsewhere, have been created and maintained by the segregationist policies of the state and the regionally dominant group.[82][83]
Social exclusion and poor infrastructure force the poor to adapt to conditions beyond their control. Poor families that cannot afford transportation, or those who simply lack any form of affordable public transportation, generally end up in squat settlements within walking distance, or close enough, to the place of their formal or informal employment.[69] Ben Arimah cites this social exclusion and poor infrastructure as a cause of numerous slums in African cities.[66] Poor-quality, unpaved streets encourage slums; a 1% increase in paved all-season roads, claims Arimah, reduces the slum incidence rate by about 0.35%. Affordable public transport and economic infrastructure empower poor people to move and to consider housing options other than their current slums.[85][86]
A growing economy that creates jobs at a rate faster than population growth offers people opportunities and incentives to relocate from poor slums to more developed neighborhoods. Economic stagnation, in contrast, creates uncertainties and risks for the poor, encouraging people to stay in the slums. Economic stagnation in a nation with a growing population reduces per capita disposable income in urban and rural areas, increasing urban and rural poverty. Rising rural poverty also encourages migration to urban areas. A poorly performing economy, in other words, increases poverty and rural-to-urban migration, thereby increasing slums.[87][88]
Many slums grow because of a growing informal economy, which creates demand for workers. The informal economy is that part of an economy that is neither registered as a business nor licensed, one that does not pay taxes and is not monitored by local, state or federal government.[89] The informal economy grows faster than the formal economy when government laws and regulations are opaque and excessive, government bureaucracy is corrupt and abusive of entrepreneurs, labor laws are inflexible, or law enforcement is poor.[90] The urban informal sector is between 20 and 60% of most developing economies' GDP; in Kenya, 78 per cent of non-agricultural employment is in the informal sector, making up 42 per cent of GDP.[1] In many cities the informal sector accounts for as much as 60 per cent of employment of the urban population. For example, in Benin, slum dwellers comprise 75 per cent of informal sector workers, while in Burkina Faso, the Central African Republic, Chad and Ethiopia, they make up 90 per cent of the informal labour force.[91] Slums thus create an informal alternative economic ecosystem that demands low-paid, flexible workers, something impoverished residents of slums deliver. In other words, countries where starting, registering and running a formal business is difficult tend to encourage informal businesses and slums.[92][93][94] Without a sustainable formal economy that raises incomes and creates opportunities, squalid slums are likely to continue.[95]
The World Bank and UN-Habitat estimate that, assuming no major economic reforms are undertaken, more than 80% of additional jobs in urban areas of the developing world may be low-paying jobs in the informal sector. All else remaining the same, this explosive growth in the informal sector is likely to be accompanied by a rapid growth of slums.[1]
Labour and work
Research based on ethnographic studies of slums, conducted since 2008 and first published in 2017, has found that labour is of primary importance as the main cause of the emergence, rural–urban migration, consolidation and growth of informal settlements.[96][97] It also showed that work plays a crucial role in the self-construction of houses, alleys and the overall informal planning of slums, and that it constitutes a central concern for residents when their communities undergo upgrading schemes or when they are resettled to formal housing.[96]
For example, it was recently shown that in a small favela in the northeast of Brazil (Favela Sururu de Capote), the migration of dismissed sugar cane factory workers to the city of Maceió (who initiated the self-construction of the favela) was driven by the necessity of finding work in the city.[97] The same observation applies to the new migrants who contribute to the consolidation and growth of the slum. Also, the choice of the terrain for the construction of the favela (the margins of a lagoon) followed the rationale that it could provide the means of work. About 80% of residents of that community live from the fishery of a mussel, work that is divided within the community by gender and age.[97] Alleys and houses were planned to facilitate these working activities, which provide subsistence and livelihood to the community. When resettled, the main reason residents altered the formal housing units was the lack of possibilities to perform their work in the new houses designed according to formal architecture principles, or the distances they had to travel to work in the slum where they originally lived; residents responded by self-constructing, within the formal housing units, spaces to shelter the work originally performed in the slum.[96] Similar observations were made in other slums.[96] Residents also reported that their work constitutes their dignity, citizenship, and self-esteem in the underprivileged settings in which they live.[96] This recent research was made possible by participatory observation and by the fact that the author of the research lived in a slum to verify the socioeconomic practices that shaped, planned and governed space in the slums.[96]
Urban poverty encourages the formation of and demand for slums.[3] With the rapid shift from rural to urban life, poverty migrates to urban areas. The urban poor arrive with hope, and very little of anything else. They typically have no access to shelter, basic urban services or social amenities. Slums are often the only option for the urban poor.[98]
Many local and national governments have, for political interests, subverted efforts to remove, reduce or upgrade slums into better housing options for the poor.[13] Throughout the second half of the 19th century, for example, French political parties relied on votes from slum population and had vested interests in maintaining that voting block. Removal and replacement of slum created a conflict of interest, and politics prevented efforts to remove, relocate or upgrade the slums into housing projects that are better than the slums. Similar dynamics are cited in favelas of Brazil,[99] slums of India,[100][101] and shanty towns of Kenya.[102]
Scholars[13][103] claim politics also drives rural-urban migration and subsequent settlement patterns. Pre-existing patronage networks, sometimes in the form of gangs and other times in the form of political parties or social activists, inside slums seek to maintain their economic, social and political power. These social and political groups have vested interests to encourage migration by ethnic groups that will help maintain the slums, and reject alternate housing options even if the alternate options are better in every aspect than the slums they seek to replace.[101][104]
Millions of Lebanese people formed slums during the Lebanese Civil War from 1975 to 1990.[105][106] Similarly, in recent years, numerous slums have sprung around Kabul to accommodate rural Afghans escaping Taliban violence.[107]
Major natural disasters in poor nations often lead to migration of disaster-affected families from areas crippled by the disaster to unaffected areas, the creation of temporary tent city and slums, or expansion of existing slums.[108] These slums tend to become permanent because the residents do not want to leave, as in the case of slums near Port-au-Prince after the 2010 Haiti earthquake,[109][110] and slums near Dhaka after 2007 Bangladesh Cyclone Sidr.[111]
Slums typically begin at the outskirts of a city. Over time, the city may expand past the original slums, enclosing the slums inside the urban perimeter. New slums sprout at the new boundaries of the expanding city, usually on publicly owned lands, thereby creating an urban sprawl mix of formal settlements, industry, retail zones and slums. This makes the original slums valuable property, densely populated with many conveniences attractive to the poor.[112]
At their start, slums are typically located in least desirable lands near the town or city, that are state owned or philanthropic trust owned or religious entity owned or have no clear land title. In cities located over a mountainous terrain, slums begin on difficult to reach slopes or start at the bottom of flood prone valleys, often hidden from plain view of city center but close to some natural water source.[112] In cities located near lagoons, marshlands and rivers, they start at banks or on stilts above water or the dry river bed; in flat terrain, slums begin on lands unsuitable for agriculture, near city trash dumps, next to railway tracks,[113] and other shunned undesirable locations.
These strategies shield slums from the risk of being noticed and removed when they are small and most vulnerable to local government officials. Initial homes tend to be tents and shacks that are quick to install, but as the slum grows and becomes established, and newcomers pay the informal association or gang for the right to live in the slum, its construction materials switch to more lasting ones such as bricks and concrete, suited to the slum's topography.[114][115]
The original slums, over time, get established next to centers of economic activity, schools, hospitals, sources of employment, which the poor rely on.[96] Established old slums, surrounded by the formal city infrastructure, cannot expand horizontally; therefore, they grow vertically by stacking additional rooms, sometimes for a growing family and sometimes as a source of rent from new arrivals in slums.[116] Some slums name themselves after founders of political parties, locally respected historical figures, current politicians or politician's spouse to garner political backing against eviction.[117]
Informality of land tenure is a key characteristic of urban slums.[1] At their start, slums are typically located in the least desirable lands near the town or city, that are owned by the state, a philanthropic trust, or a religious entity, or that have no clear land title.[112] Some immigrants regard unoccupied land as land without owners and therefore occupy it.[118] In some cases the local community or the government allots land to people, which later develops into slums and over which the dwellers have no property rights.[60] Informal land tenure also includes occupation of land belonging to someone else.[119] According to Flood, 51 percent of slums are based on invasion of private land in sub-Saharan Africa, 39 percent in North Africa and West Asia, 10 percent in South Asia, 40 percent in East Asia, and 40 percent in Latin America and the Caribbean.[120] In some cases, once the slum has many residents, the early residents form a social group, an informal association or a gang that controls newcomers, charges a fee for the right to live in the slum, and dictates where and how new homes get built within it. The newcomers, having paid for the right, feel they have a commercial right to the home in that slum.[112][121] The slum dwellings, built earlier or later as the slum grows, are constructed without checking land ownership rights or building codes, are not registered with the city, and are often not recognized by city or state governments.[122][123]
Secure land tenure is important for slum dwellers as an authentic recognition of their residential status in urban areas. It also encourages them to upgrade their housing facilities, which will give them protection against natural and unnatural hazards.[60] Undocumented ownership with no legal title to the land also prevents slum settlers from applying for mortgage, which might worsen their financial situations. In addition, without registration of the land ownership, the government has difficulty in upgrading basic facilities and improving the living environment.[118] Insecure tenure of the slum, as well as lack of socially and politically acceptable alternatives to slums, also creates difficulty in citywide infrastructure development such as rapid mass transit, electrical line and sewer pipe layout, highways and roads.[124]
Slum areas are characterized by substandard housing structures.[125][126] Shanty homes are often built hurriedly, on an ad hoc basis, with materials unsuitable for housing. Often the construction quality is inadequate to withstand heavy rains, high winds, or other local climate and location conditions. Paper, plastic, earthen floors, mud-and-wattle walls, wood held together by ropes, and straw or torn metal pieces as roofs are some of the construction materials. In some cases, brick and cement are used, but without attention to proper design and structural engineering requirements.[127] Bylaws governing spacing and dwelling placement, as well as local building codes, may also be extensively violated.[3][128]
Overcrowding is another characteristic of slums. Many dwellings are single-room units with high occupancy rates. Each dwelling may be cohabited by multiple families. Five or more persons may share a one-room unit; the room is used for cooking, sleeping and living. Overcrowding is also seen near sources of drinking water, cleaning, and sanitation, where one toilet may serve dozens of families.[129][130][131] In a slum of Kolkata, India, over 10 people sometimes share a 45 m2 room.[132] In Kibera slum of Nairobi, Kenya, population density is estimated at 2,000 people per hectare, or about 500,000 people in one square mile.[133]
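As a quick check on the unit conversion (one square mile is about 259 hectares): 2,000 people per hectare × 259 hectares per square mile ≈ 518,000 people per square mile, consistent with the rounded figure above.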
However, the density and neighbourhood effects of slum populations may also offer an opportunity to target health interventions.[134]
One of the identifying characteristics of slums is the lack of, or inadequate, public infrastructure.[135][136] From safe drinking water to electricity, from basic health care to police services, from affordable public transport to fire and ambulance services, from sanitation sewers to paved roads, new slums usually lack all of these. Established, old slums sometimes garner official support and get some of this infrastructure, such as paved roads and an unreliable electricity or water supply.[137] Slums usually lack street addresses, which creates further problems.[138]
Slums often have very narrow alleys that do not allow vehicles (including emergency vehicles) to pass. The lack of services such as routine garbage collection allows rubbish to accumulate in huge quantities.[139] The lack of infrastructure is caused by the informal nature of settlement and the absence of planning for the poor by government officials. Fires are often a serious problem.[140]
In many countries, local and national governments often refuse to recognize slums, because the slums sit on disputed land, or out of fear that quick official recognition will encourage more slum formation and illegal seizure of land. Recognizing and notifying slums often triggers the creation of property rights and requires the government to provide public services and infrastructure to slum residents.[141][142] With poverty and an informal economy, slums do not generate tax revenue for the government and therefore tend to get minimal or slow attention. In other cases, the narrow and haphazard layout of slum streets, houses and substandard shacks, along with the persistent threat of crime and violence against infrastructure workers, makes it difficult to lay out reliable, safe, cost-effective and efficient infrastructure. In yet others, the demand far exceeds the government bureaucracy's ability to deliver.[143][144]
Low socioeconomic status is another common characteristic attributed to slum residents.[145]
Slums are often located among places vulnerable to natural disasters such as landslides[146] and floods.[147][148] In cities located in mountainous terrain, slums begin on hard-to-reach slopes or at the bottom of flood-prone valleys, often hidden from plain view of the city center but close to some natural water source.[112] In cities located near lagoons, marshlands and rivers, they start at the banks or on stilts above the water or the dry river bed; in flat terrain, slums begin on land unsuitable for agriculture, near city trash dumps, next to railway tracks,[113] and in other shunned, undesirable locations. These strategies shield slums from the risk of being noticed and removed by local government officials when they are small and most vulnerable.[112] However, the ad hoc construction, lack of quality control on building materials, poor maintenance and uncoordinated spatial design make them prone to extensive damage during earthquakes as well as from decay.[149][150] These risks will be intensified by climate change.[151]
Some slums risk man-made hazards such as toxic industries, traffic congestion and collapsing infrastructure.[56] Fires are another major risk to slums and their inhabitants,[153][154] with streets too narrow to allow proper and quick access for fire trucks.[152][155]
Due to lack of skills and education, as well as competitive job markets,[156] many slum dwellers face high rates of unemployment.[157] Limited job opportunities cause many of them to work in the informal economy, inside the slum or in developed urban areas near the slum. This work may be in the licit informal economy or the illicit informal economy, without a work contract or any social security. Some seek formal jobs at the same time, and some of them eventually find jobs in formal economies after gaining professional skills in informal sectors.[156]
Examples of the licit informal economy include street vending, household enterprises, product assembly and packaging, making garlands and embroideries, domestic work, shoe polishing or repair, driving tuk-tuks or manual rickshaws, construction work or manually driven logistics, and handicrafts production.[158][159][97] In some slums, people sort and recycle trash of different kinds (from household garbage to electronics) for a living, selling either the odd usable goods or stripping broken goods for parts or raw materials.[97] Typically these licit informal economies require the poor to regularly pay a bribe to local police and government officials.[160]
Examples of the illicit informal economy include illegal substance and weapons trafficking, drug or moonshine/changaa production, prostitution and gambling, all sources of risk to the individual, families and society.[162][163][164] Recent reports reflecting illicit informal economies include the drug trade and distribution in Brazil's favelas, production of fake goods in the colonias of Tijuana, smuggling in the katchi abadis and slums of Karachi, and production of synthetic drugs in the townships of Johannesburg.[165]
Slum dwellers in informal economies run many risks. The informal sector, by its very nature, means income insecurity and lack of social mobility. Informal employment also lacks legal contracts, protection of labor rights, regulation and bargaining power.[166]
Some scholars suggest that crime is one of the main concerns in slums.[167] Empirical data suggest crime rates are higher in some slums than in non-slums, with slum homicides alone reducing the life expectancy of a resident of a Brazilian slum by seven years compared with a resident of a nearby non-slum area.[7][168] In some countries like Venezuela, officials have sent in the military to control slum criminal violence involving drugs and weapons.[169] Rape is another serious issue related to crime in slums. In Nairobi slums, for example, one fourth of all teenage girls are raped each year.[170]
On the other hand, while UN-Habitat reports some slums are more exposed to crime, with higher crime rates (for instance, the traditional inner-city slums), crime is not a direct result of block layout in many slums. Rather, crime is one of the symptoms of slum dwelling; slums hold more victims than criminals. Consequently, slums overall do not have consistently high crime rates; crime rates are worst in slums where the illicit economy, such as drug trafficking, brewing, prostitution and gambling, holds influence. Often in such circumstances, multiple gangs fight for control over revenue.[171][172]
Slum crime rates correlate with insufficient law enforcement and inadequate public policing. In the main cities of developing countries, law enforcement lags behind urban growth and slum expansion. Often police cannot reduce crime because ineffective city planning and governance leave slums with inefficient crime prevention systems. Such problems are not primarily due to community indifference: leads and intelligence from slums are rare, streets are narrow and potential death traps to patrol, and many in the slum community have an inherent distrust of authorities, with fears ranging from eviction to the collection of unpaid utility bills to general law and order.[173] Lack of formal recognition by governments also leads to few formal policing and public justice institutions in slums.[7]
Women in slums are at greater risk of physical and sexual violence.[174] Factors such as unemployment that lead to insufficient resources in the household can increase marital stress and therefore exacerbate domestic violence.[175]
Slums are often non-secured areas, and women often risk sexual violence when they walk alone in slums late at night. Violence against women and women's security in slums emerge as recurrent issues.[176]
Another prevalent form of violence in slums is armed violence (gun violence), mostly present in African and Latin American slums. It leads to homicide and the emergence of criminal gangs.[177] Typical victims are male slum residents.[178][179] Violence often leads to retaliatory and vigilante violence within the slum.[180] Gang and drug wars are endemic in some slums, predominantly between male residents.[181][182] The police sometimes participate in gender-based violence against men as well, by picking up some men, beating them and putting them in jail. Domestic violence against men also exists in slums, including verbal abuse and even physical violence within households.[182]
Cohen as well as Merton theorized that the cycle of slum violence does not mean slums are inevitably criminogenic; rather, in some cases violence expresses frustration with slum life and is a consequence of the denial of opportunities for slum residents to leave the slum.[183][184][185] Further, crime rates are not uniformly high in the world's slums; the highest crime rates in slums are seen where the illicit economy, such as drug trafficking, brewing, prostitution and gambling, is strong and multiple gangs are fighting for control.[186][187]
Slum dwellers usually experience a high rate of disease.[188][134] Diseases that have been reported in slums include cholera,[189][190] HIV/AIDS,[191][192] measles,[193] malaria,[194] dengue,[195] typhoid,[196] drug-resistant tuberculosis,[197][198] and other epidemics.[199][200] Studies focusing on children's health in slums find that cholera and diarrhea are especially common among young children.[201][202] Besides children's vulnerability to disease, many scholars also focus on the high HIV/AIDS prevalence in slums among women.[203][204] Throughout slum areas in various parts of the world, infectious diseases are a significant contributor to high mortality rates.[205] For example, according to a study in Nairobi's slums, HIV/AIDS and tuberculosis accounted for about 50% of the mortality burden.[206]
Factors that have been attributed to a higher rate of disease transmission in slums include high population densities, poor living conditions, low vaccination rates, insufficient health-related data and inadequate health services.[207] Overcrowding leads to faster and wider spread of diseases due to the limited space in slum housing.[188][134] Poor living conditions also make slum dwellers more vulnerable to certain diseases. Poor water quality, one manifest example, is a cause of many major illnesses including malaria, diarrhea and trachoma.[208] Improving living conditions, such as the introduction of better sanitation and access to basic facilities, can ameliorate the effects of diseases such as cholera.[201][209]
Slums have been historically linked to epidemics, and this trend has continued in modern times.[210][211][212] For example, the slums of West African nations such as Liberia were both crippled by and contributed to the outbreak and spread of Ebola in 2014.[213][214] Slums are considered a major public health concern and potential breeding grounds of drug-resistant diseases for the entire city, the nation, and the global community.[215][216]
Child malnutrition is more common in slums than in non-slum areas.[217]
In Mumbai and New Delhi, 47% and 51% of slum children under the age of five are stunted, and 35% and 36% of them are underweight, respectively. These children all suffer from third-degree malnutrition, the most severe level, according to WHO standards.[218] A study conducted by Tada et al. in Bangkok slums shows that, in terms of weight-for-age, 25.4% of the children who participated in the survey suffered from malnutrition, compared with a national malnutrition prevalence of around 8% in Thailand.[219] In Ethiopia and Niger, rates of child malnutrition in urban slums are around 40%.[220]
The major nutritional problems in slums are protein-energy malnutrition (PEM), vitamin A deficiency (VAD), iron deficiency anemia (IDA) and iodine deficiency disorders (IDD).[217] Malnutrition can sometimes lead to death among children.[221] Dr. Abhay Bang's report shows that malnutrition kills 56,000 children annually in urban slums in India.[222]
Widespread child malnutrition in slums is closely related to family income, mothers' food practices, mothers' educational level, and maternal employment or housewifery.[219] Poverty may result in inadequate food intake when people cannot afford to buy and store enough food, which leads to malnutrition.[223] Another common cause is mothers' faulty feeding practices, including inadequate breastfeeding and improper preparation of food for children.[217] Tada et al.'s study in Bangkok slums shows that around 64% of the mothers sometimes fed their children instant food instead of a normal meal, and about 70% of the mothers did not provide their children with three meals every day. Mothers' lack of education leads to these faulty feeding practices; many mothers in slums lack knowledge about food nutrition for children.[219] Maternal employment also influences children's nutritional status: children of mothers who work outside the home are more prone to malnutrition, as they are likely to be neglected by their mothers or sometimes not carefully looked after by female relatives.[217] A recent study has shown improvements in health awareness among the adolescent age group of a rural slum area.[224]
A multitude of non-contagious diseases also impacts the health of slum residents. Examples of prevalent non-infectious diseases include cardiovascular disease, diabetes, chronic respiratory disease, neurological disorders, and mental illness.[225] In some slum areas of India, diarrhea is a significant health problem among children. Factors like poor sanitation, low literacy rates, and limited awareness make diarrhea and other dangerous diseases extremely prevalent and burdensome on the community.[226]
Lack of reliable data also has a negative impact on slum dwellers' health. A number of slum families do not report cases or seek professional medical care, which results in insufficient data.[227] This may prevent appropriate allocation of health care resources in slum areas, since many countries base their health care plans on data from clinics, hospitals, or national mortality registries.[228] Moreover, health services are insufficient or inadequate in most of the world's slums.[228] Emergency ambulance and urgent care services are typically unavailable, as health service providers sometimes avoid servicing slums.[229][228] A study shows that more than half of slum dwellers are prone to visiting private practitioners or self-medicating with medicines available at home.[230] Private practitioners in slums are usually unlicensed or poorly trained, and they run clinics and pharmacies mainly for the sake of money.[228] The categorization of slum health by the government and census data also affects the distribution and allocation of health resources in inner city areas. A significant portion of city populations face challenges in access to health care but do not live in locations described as within the "slum" area.[231]
Overall, a complex network of physical, social, and environmental factors contributes to the health threats faced by slum residents.[232]
Recent years have seen a dramatic growth in the number of slums as urban populations have increased in developing countries.[233] Nearly a billion people worldwide live in slums, and some project the figure may grow to 2 billion by 2030 if governments and the global community ignore slums and continue current urban policies. The United Nations Habitat group believes change is possible. To achieve the goal of "cities without slums", the UN claims that governments must undertake vigorous urban planning, city management, infrastructure development, slum upgrading and poverty reduction.[14]
Some city and state officials have simply sought to remove slums.[234][235] This strategy for dealing with slums is rooted in the fact that slums typically start illegally on someone else's land and are not recognized by the state. As the slum started by violating another's property rights, the residents have no legal claim to the land.[236][237]
Critics argue that slum removal by force tends to ignore the social problems that cause slums. The poor children as well as working adults of a city's informal economy need a place to live. Slum clearance removes the slum, but it does not remove the causes that create and maintain the slum.[238][239]
Slum relocation strategies rely on removing the slums and relocating the slum poor to semi-rural peripheries of cities, sometimes in free housing. This strategy ignores several dimensions of slum life. It sees the slum merely as a place where the poor live; in reality, slums are often integrated with every aspect of a slum resident's life, including sources of employment, distance from work and social life.[240] Slum relocation that displaces the poor from opportunities to earn a livelihood generates economic insecurity.[241] In some cases, slum residents oppose relocation even if the replacement land and housing on the outskirts of the city are free and of better quality than their current housing. Examples include the Zone One Tondo Organization of Manila, Philippines and Abahlali baseMjondolo of Durban, South Africa.[242] In other cases, such as the Ennakhil slum relocation project in Morocco, systematic social mediation has worked: slum residents were convinced that their current location was a health hazard or prone to natural disaster, or that the alternative location was well connected to employment opportunities.[243]
Some governments have begun to approach slums as a possible opportunity for urban development through slum upgrading. This approach was inspired in part by the theoretical writings of John Turner in 1972.[244][245] The approach seeks to upgrade the slum with basic infrastructure such as sanitation, safe drinking water, safe electricity distribution, paved roads, a rain water drainage system, and bus/metro stops.[246] The assumption behind this approach is that if slums are given basic services and tenure security (that is, the slum will not be destroyed and slum residents will not be evicted), then the residents will rebuild their own housing, engage their slum community to live better, and over time attract investment from government organizations and businesses. Turner argued not to demolish the housing but to improve the environment: if governments can clear existing slums of unsanitary human waste, polluted water, litter and muddy unlit lanes, they do not have to worry about the shanty housing.[247] "Squatters" have shown great organizational skills in terms of land management, and they will maintain the infrastructure that is provided.[247]
In Mexico City, for example, the government attempted to upgrade and urbanize settled slums in the periphery during the 1970s and 1980s by adding basic amenities such as concrete roads, parks, illumination and sewage. Currently, most slums in Mexico City share, to some extent, the basic characteristics of traditional slums in housing, population density, crime and poverty; however, the vast majority of their inhabitants have access to basic amenities, and most areas are connected to major roads and completely urbanized. Nevertheless, smaller settlements that lack these can still be found in the periphery of the city; their inhabitants are known as "paracaidistas".
Another example of this approach is the slum upgrade in the Tondo slum near Manila, Philippines.[248] The project was anticipated to be complete in four years but took nine, with a large increase in cost, numerous delays, re-engineering of details to address political disputes, and other complications. Despite these failures, the project reaffirmed the core assumption: Tondo families did build their own houses of far better quality than originally assumed, and Tondo residents became property owners with a stake in their neighborhood. A more recent example of the slum-upgrading approach is the PRIMED initiative in Medellin, Colombia, where streets, Metrocable transportation and other public infrastructure have been added. These slum infrastructure upgrades were combined with citywide infrastructure upgrades, such as the addition of a metro, paved roads and highways, to give all city residents, including the poor, reliable access throughout the city.[249]
Most slum upgrading projects, however, have produced mixed results. While initial evaluations were promising and success stories were widely reported by the media, evaluations done 5 to 10 years after project completion have been disappointing. Herbert Werlin[247] notes that the initial benefits of slum upgrading efforts have been ephemeral. The slum upgrading projects in the kampungs of Jakarta, Indonesia, for example, looked promising in the first few years after the upgrade, but thereafter returned to a condition worse than before, particularly in terms of sanitation, environmental problems and drinking water safety. Communal toilets provided under the slum upgrading effort were poorly maintained and abandoned by slum residents of Jakarta.[250] Similarly, slum upgrading efforts in the Philippines,[251][252] India[253] and Brazil[254][255] have proven far more expensive than initially estimated, and the condition of the slums 10 years after completion has remained slum-like. The anticipated benefits of slum upgrading, claims Werlin, have proven to be a myth.[247] There is limited but consistent evidence that slum upgrading may prevent diarrhoeal diseases and reduce water-related expenditure.[256]
Slum upgrading is largely a government-controlled, funded and run process, rather than a competitive market-driven one. Krueckeberg and Paulsen note[259] that conflicting politics, government corruption and street violence in the slum regularization process are part of the reality. Slum upgrading and tenure regularization also upgrade and regularize the slum bosses and political agendas, while threatening the influence and power of municipal officials and ministries. Slum upgrading does not address poverty, low-paying jobs in the informal economy, or other characteristics of slums.[96] Recent research shows that the lack of these options leads residents to undertake measures to secure their working needs.[97] In one example in northeast Brazil, Vila S. Pedro, residents altered the formal housing through informal self-construction to restore the working opportunities they had in the original informal settlement.[96] It is unclear whether slum upgrading can lead to long-term sustainable improvement of slums.[260]
Urban infrastructure, such as reliable high-speed mass transit systems, motorways/interstates, and public housing projects, has been cited[261][262] as responsible for the disappearance of major slums in the United States and Europe from the 1960s through the 1970s. Charles Pearson argued in the UK Parliament that mass transit would enable London to reduce slums and relocate slum dwellers. His proposal was initially rejected for lack of land and other reasons, but Pearson and others persisted with creative proposals, such as building the mass transit under major roads already in use and owned by the city. The London Underground was born, and its expansion has been credited with reducing slums in the city,[263] as has, to an extent, the New York City Subway's smaller expansion.[264]
As cities expanded and business parks scattered due to cost ineffectiveness, people moved to live in the suburbs; thus retail, logistics, house maintenance and other businesses followed demand patterns. City governments used infrastructure investments and urban planning to distribute work, housing, green areas, retail, schools and population densities. Affordable public mass transit in cities such as New York City, London and Paris allowed the poor to reach areas where they could earn a livelihood. Public and council housing projects cleared slums and provided more sanitary housing options than what existed before the 1950s.[265]
Slum clearance became a priority policy in Europe between the 1950s and 1970s, and one of the biggest state-led programs. In the UK, the slum clearance effort was bigger in scale than the formation of British Railways, the National Health Service and other state programs. UK Government data suggest the clearances after 1955 demolished about 1.5 million slum properties, resettling about 15% of the UK's population out of these properties.[266] Similarly, after 1950, Denmark and others pursued parallel initiatives to clear slums and resettle slum residents.[257]
The US and European governments additionally created procedures by which the poor could apply directly to the government for housing assistance, thus becoming a partner in identifying and meeting the housing needs of their citizens.[267][268] One historically effective approach to reducing and preventing slums has been citywide infrastructure development combined with affordable, reliable public mass transport and public housing projects.[269]
In Brazil, as of 2014, the government had built about 2 million houses around the country for lower-income families under a public program named "Minha casa, minha vida", which means "My house, my life".[citation needed] The program has 2 million more houses under construction.[citation needed]
However, slum relocation in the name of urban development is criticized for uprooting communities without consultation or consideration of ongoing livelihoods. For example, the Sabarmati Riverfront Project, a recreational development in Ahmedabad, India, forcibly relocated over 19,000 families from shacks along the river to 13 public housing complexes located an average of 9 km away from the families' original dwellings.[270]
Slums exist in many countries and have become a global phenomenon.[272] A UN-Habitat report states that in 2006 there were nearly 1 billion people settling in slum settlements in most cities of Latin America, Asia, and Africa, and a smaller number in the cities of Europe and North America.[273] In 2012, according to UN-Habitat, about 863 million people in the developing world lived in slums. Of these, the urban slum population at mid-year was around 213 million in Sub-Saharan Africa, 207 million in East Asia, 201 million in South Asia, 113 million in Latin America and Caribbean, 80 million in Southeast Asia, 36 million in West Asia, and 13 million in North Africa.[8]:127 Among individual countries, the proportion of urban residents living in slum areas in 2009 was highest in the Central African Republic (95.9%), Chad (89.3%), Niger (81.7%), and Mozambique (80.5%).[8]
The distribution of slums within a city varies throughout the world. In most developed countries, it is easier to distinguish slum areas from non-slum areas. In the United States, slum dwellers are usually found in city neighborhoods and inner suburbs, while in Europe they are more common in high-rise housing on the urban outskirts. In many developing countries, slums are prevalent as distributed pockets or as urban orbits of densely constructed informal settlements.[272] In some cities, especially in countries in Southern Asia and Sub-Saharan Africa, slums are not just marginalized neighborhoods holding a small population; slums are widespread and home to a large part of the urban population. These are sometimes called slum cities.[274]
The percentage of the developing world's urban population living in slums has been dropping with economic development, even as the total urban population has increased. In 1990, 46 percent of the urban population lived in slums; by 2000, the percentage had dropped to 39 percent, and by 2010 it had fallen further to 32 percent.[275]
en/1951.html.txt
ADDED
Fertility is the natural capability to produce offspring. As a measure, the fertility rate is the number of offspring born per mating pair, individual or population. Fertility differs from fecundity, which is defined as the potential for reproduction (influenced by gamete production, fertilization and carrying a pregnancy to term).[1] A lack of fertility is infertility, while a lack of fecundity would be called sterility.
Human fertility depends on factors of nutrition, sexual behavior, consanguinity, culture, instinct, endocrinology, timing, economics, way of life, and emotions.
In demographic contexts, fertility refers to the actual production of offspring, rather than the physical capability to produce, which is termed fecundity.[2][3] While fertility can be measured, fecundity cannot. Demographers measure the fertility rate in a variety of ways, which can be broadly broken into "period" measures and "cohort" measures. "Period" measures refer to a cross-section of the population in one year. "Cohort" data, on the other hand, follow the same people over a period of decades. Both period and cohort measures are widely used.[4]
A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have.[6] Factors generally associated with increased fertility include religiosity,[7] intention to have children,[8] and maternal support.[9] Factors generally associated with decreased fertility include wealth, education,[10] female labor participation,[11] urban residence,[12] cost of housing,[13] intelligence, increased female age and (to a lesser degree) increased male age.
The "Three-step Analysis" of the fertility process was introduced by Kingsley Davis and Judith Blake in 1956 and makes use of three proximate determinants:[14][15] The economic analysis of fertility is part of household economics, a field that has grown out of the New Home Economics. Influential economic analyses of fertility include Becker (1960),[16] Mincer (1963),[17] and Easterlin (1969).[18] The latter developed the Easterlin hypothesis to account for the Baby Boom.
Bongaarts proposed a model in which the total fertility rate of a population can be calculated from four proximate determinants and the total fecundity (TF): the index of marriage (Cm), the index of contraception (Cc), the index of induced abortion (Ca) and the index of postpartum infecundability (Ci). These indices range from 0 to 1. The higher each index, the higher the resulting TFR; for example, a population with no induced abortions would have a Ca of 1, while a country where everybody used infallible contraception would have a Cc of 0.
TFR = TF × Cm × Ci × Ca × Cc
These four indices can also be used to calculate the total marital fertility (TMFR) and the total natural fertility (TN).
TFR = TMFR × Cm
TMFR = TN × Cc × Ca
TN = TF × Ci
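To make the decomposition concrete, here is a minimal sketch in Python of the Bongaarts relationships above. The index values are hypothetical illustrations rather than empirical estimates, and the total fecundity figure of roughly 15.3 births per woman is the commonly cited approximation.

# Minimal sketch of the Bongaarts proximate-determinants model.
# The four indices each lie in [0, 1]; the values below are hypothetical.
TF = 15.3                                # total fecundity, ~15.3 births per woman
Cm, Ci, Ca, Cc = 0.70, 0.75, 0.95, 0.60  # marriage, infecundability, abortion, contraception

TN = TF * Ci             # total natural fertility
TMFR = TN * Cc * Ca      # total marital fertility rate
TFR = TMFR * Cm          # total fertility rate

print(f"TN = {TN:.2f}, TMFR = {TMFR:.2f}, TFR = {TFR:.2f}")
# With these illustrative indices: TN ≈ 11.5, TMFR ≈ 6.5, TFR ≈ 4.6

Chaining the three equations reproduces the first formula, TFR = TF × Cm × Ci × Ca × Cc, since the order of multiplication does not matter.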
Women have hormonal cycles which determine when they can achieve pregnancy. The cycle is approximately twenty-eight days long, with a fertile period of five days per cycle, but can deviate greatly from this norm. Men are fertile continuously, but their sperm quality is affected by their health, frequency of ejaculation, and environmental factors.
Fertility declines with age in both sexes. In women the decline is more rapid, with complete infertility normally occurring around the age of 50.
Pregnancy rates for sexual intercourse are highest when it is done every 1 or 2 days,[19] or every 2 or 3 days.[20] Studies have shown no significant difference between different sex positions and pregnancy rate, as long as it results in ejaculation into the vagina.[21]
By convention, a woman's menstrual cycle begins with menses. Next is the follicular phase, where estrogen levels build as an ovum matures (due to follicle-stimulating hormone, or FSH) within the ovary. When estrogen levels peak, they spur a surge of luteinizing hormone (LH), which completes the maturation of the ovum and enables it to break through the ovary wall.[23] This is ovulation. During the luteal phase, which follows ovulation, LH and FSH cause the post-ovulation ovary to develop into the corpus luteum, which produces progesterone. The production of progesterone inhibits the LH and FSH hormones, which (in a cycle without pregnancy) causes the corpus luteum to atrophy and menses to begin the cycle again.
Peak fertility occurs during just a few days of the cycle: usually two days before and two days after the ovulation date.[24] This fertile window varies from woman to woman, just as the ovulation date often varies from cycle to cycle for the same woman.[25] The ovule is usually capable of being fertilized for up to 48 hours after it is released from the ovary. Sperm survive inside the uterus between 48 and 72 hours on average, with the maximum being 120 hours (5 days).
These periods and intervals are important factors for couples using the rhythm method of contraception.
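As a rough illustration of how these intervals combine, here is a minimal Python sketch that estimates the fertile window for a regular cycle. It assumes ovulation about 14 days before the next menses and uses the approximate survival times quoted above (sperm up to about 5 days, the ovum up to about 2 days); this is a simplification for illustration, not clinical guidance.

# Estimate the fertile window of a regular menstrual cycle.
# Day 1 is the first day of menses; the offsets are illustrative averages.
def fertile_window(cycle_length_days):
    ovulation_day = cycle_length_days - 14   # luteal phase assumed ~14 days
    first_fertile = ovulation_day - 5        # sperm can survive up to ~5 days
    last_fertile = ovulation_day + 2         # ovum viable for up to ~48 hours
    return first_fertile, last_fertile

print(fertile_window(28))   # (9, 16): roughly days 9 to 16 of a 28-day cycle

Since real cycles deviate greatly from this norm, as noted above, calendar-based estimates of this kind are unreliable for any individual cycle.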
The average age of menarche in the United States is about 12.5 years.[26] In postmenarchal girls, about 80% of the cycles are anovulatory (ovulation does not actually take place) in the first year after menarche, 50% in the third and 10% in the sixth year.[27]
Menopause occurs during a woman's midlife (between ages 48 and 55).[28][29] During menopause, hormonal production by the ovaries is reduced, eventually causing a permanent cessation of the primary function of the ovaries, particularly the creation of the uterine lining (period). This is considered the end of the fertile phase of a woman's life.
Effects of age on female fertility have been measured in women trying to get pregnant, without using fertility drugs or in vitro fertilization (data from 1670 to 1830).[30]
Studies of actual couples trying to conceive have come up with higher results: one 2004 study of 770 European women found that 82% of 35- to 39-year-old women conceived within a year,[31] while another in 2013 of 2,820 Danish women saw 78% of 35- to 40-year-olds conceive within a year.[32]
The use of fertility drugs and/or in vitro fertilization can increase the chances of becoming pregnant at a later age.[33] Successful pregnancies facilitated by fertility treatment have been documented in women as old as 67.[34] Studies since 2004 suggest that mammals may continue to produce new eggs throughout their lives, rather than being born with a finite number as previously thought. Researchers at the Massachusetts General Hospital in Boston, US, say that if eggs are newly created each month in humans as well, all current theories about the aging of the female reproductive system will have to be overhauled, although at this time this is simply conjecture.[35][36]
According to the March of Dimes, "about 9 percent of recognized pregnancies for women aged 20 to 24 ended in miscarriage. The risk rose to about 20 percent at age 35 to 39, and more than 50 percent by age 42".[37] Birth defects, especially those involving chromosome number and arrangement, also increase with the age of the mother. According to the March of Dimes, "At age 25, your risk of having a baby with Down syndrome is 1 in 1,340. At age 30, your risk is 1 in 940. At age 35, your risk is 1 in 353. At age 40, your risk is 1 in 85. At age 45, your risk is 1 in 35."[38]
Some research suggests that increased male age is associated with a decline in semen volume, sperm motility, and sperm morphology.[39] In studies that controlled for female age, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates between 23% and 38%.[39] It is suggested that sperm count declines with age, with men aged 50–80 years producing sperm at an average rate of 75% compared with men aged 20–50 years, and that larger differences are seen in how many of the seminiferous tubules in the testes contain mature sperm.[39]
Decline in male fertility is influenced by many factors, including lifestyle, environment and psychological factors.[41]
Some research also suggests increased risks for health problems for children of older fathers, but no clear association has been proven.[42] A large-scale study in Israel suggested that the children of men 40 or older were 5.75 times more likely than children of men under 30 to have an autism spectrum disorder, controlling for year of birth, socioeconomic status, and maternal age.[43] Increased paternal age has been suggested by some to correlate directly with schizophrenia, but this is not proven.[44][45][46][47][48]
Australian researchers have found evidence to suggest that obesity may cause subtle damage to sperm and prevent a healthy pregnancy. They say fertilization was 40% less likely to succeed when the father was overweight.[49]
The American Fertility Society recommends an age limit for sperm donors of 50 years or less,[50] and many fertility clinics in the United Kingdom will not accept donations from men over 40 or 45 years of age.[51]
The French pronatalist movement from 1919 to 1945 failed to convince French couples that they had a patriotic duty to help increase their country's birthrate. Even the government was reluctant in its support of the movement. It was only between 1938 and 1939 that the French government became directly and permanently involved in the pronatalist effort. Although the birthrate started to surge in late 1941, the trend was not sustained. The falling birthrate once again became a major concern among demographers and government officials beginning in the 1970s.[52]
From 1800 to 1940, fertility fell in the US. There was a marked decline in fertility in the early 1900s, associated with improved contraceptives, greater access to contraceptives and sexuality information, and the "first" sexual revolution in the 1920s.
After 1940 fertility suddenly started going up again, reaching a new peak in 1957. After 1960, fertility started declining rapidly. In the Baby Boom years (1946–1964), women married earlier and had their babies sooner; the number of children born to mothers after age 35 did not increase.[54]
After 1960, new methods of contraception became available, and the ideal family size fell from three to two children. Couples postponed marriage and first births, and they sharply reduced the number of third and fourth births.[55]
Infertility primarily refers to the biological inability of a person to contribute to conception. Infertility may also refer to the state of a woman who is unable to carry a pregnancy to full term. There are many biological causes of infertility, including some that medical intervention can treat.[56]
This article incorporates material from the Citizendium article "Fertility (demography)", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
en/1952.html.txt
ADDED
The diff for this file is too large to render.
en/1953.html.txt
ADDED
Federico Fellini, Cavaliere di Gran Croce OMRI (Italian: [fedeˈriːko felˈliːni]; 20 January 1920 – 31 October 1993) was an Italian film director and screenwriter known for his distinctive style, which blends fantasy and baroque images with earthiness. He is recognized as one of the greatest and most influential filmmakers of all time. His films have ranked highly in polls such as those of Cahiers du cinéma and Sight & Sound, the latter of which lists his 1963 film 8 1⁄2 as the 10th-greatest film.
Fellini won the Palme d'Or for La Dolce Vita, was nominated for twelve Academy Awards, and won four in the category of Best Foreign Language Film, the most for any director in the history of the Academy. He received an honorary award for Lifetime Achievement at the 65th Academy Awards in Los Angeles. His other well-known films include La Strada (1954), Nights of Cabiria (1957), Juliet of the Spirits (1965), Satyricon (1969), Roma (1972), Amarcord (1973), and Fellini's Casanova (1976).
Fellini was born on 20 January 1920, to middle-class parents in Rimini, then a small town on the Adriatic Sea. On 25 January, at the San Nicolò church he was baptized Federico Domenico Marcello Fellini.[1] His father, Urbano Fellini (1894–1956), born to a family of Romagnol peasants and small landholders from Gambettola, moved to Rome in 1915 as a baker apprenticed to the Pantanella pasta factory. His mother, Ida Barbiani (1896–1984), came from a bourgeois Catholic family of Roman merchants. Despite her family's vehement disapproval, she had eloped with Urbano in 1917 to live at his parents' home in Gambettola.[2] A civil marriage followed in 1918 with the religious ceremony held at Santa Maria Maggiore in Rome a year later.
The couple settled in Rimini where Urbano became a traveling salesman and wholesale vendor. Fellini had two siblings: Riccardo (1921–1991), a documentary director for RAI Television, and Maria Maddalena (m. Fabbri; 1929–2002).
In 1924, Fellini started primary school in an institute run by the nuns of San Vincenzo in Rimini, attending the Carlo Tonni public school two years later. An attentive student, he spent his leisure time drawing, staging puppet shows and reading Il corriere dei piccoli, the popular children's magazine that reproduced traditional American cartoons by Winsor McCay, George McManus and Frederick Burr Opper. (Opper's Happy Hooligan would provide the visual inspiration for Gelsomina in Fellini's 1954 film La Strada; McCay's Little Nemo would directly influence his 1980 film City of Women.)[3] In 1926, he discovered the world of Grand Guignol, the circus with Pierino the Clown and the movies. Guido Brignone’s Maciste all’Inferno (1926), the first film he saw, would mark him in ways linked to Dante and the cinema throughout his entire career.[4]
Enrolled at the Ginnasio Giulio Cesare in 1929, he made friends with Luigi Titta Benzi, later a prominent Rimini lawyer (and the model for young Titta in Amarcord (1973)). In Mussolini’s Italy, Fellini and Riccardo became members of the Avanguardista, the compulsory Fascist youth group for males. He visited Rome with his parents for the first time in 1933, the year of the maiden voyage of the transatlantic ocean liner SS Rex (which is shown in Amarcord). The sea creature found on the beach at the end of La Dolce Vita (1960) has its basis in a giant fish marooned on a Rimini beach during a storm in 1934.
Although Fellini adapted key events from his childhood and adolescence in films such as I Vitelloni (1953), 8 1⁄2 (1963), and Amarcord (1973), he insisted that such autobiographical memories were inventions:
It is not memory that dominates my films. To say that my films are autobiographical is an overly facile liquidation, a hasty classification. It seems to me that I have invented almost everything: childhood, character, nostalgias, dreams, memories, for the pleasure of being able to recount them.[5]
In 1937, Fellini opened Febo, a portrait shop in Rimini, with the painter Demos Bonini. His first humorous article appeared in the "Postcards to Our Readers" section of Milan's Domenica del Corriere. Deciding on a career as a caricaturist and gag writer, Fellini travelled to Florence in 1938, where he published his first cartoon in the weekly 420. According to a biographer, Fellini found school "exasperating"[6] and, in one year, had 67 absences.[7] Having failed his military culture exam, he graduated from high school in July 1938 after retaking the exam.
In September 1939, he enrolled in law school at the University of Rome to please his parents. Biographer Hollis Alpert reports that "there is no record of his ever having attended a class".[8] Installed in a family pensione, he met another lifelong friend, the painter Rinaldo Geleng. Desperately poor, they unsuccessfully joined forces to draw sketches of restaurant and café patrons. Fellini eventually found work as a cub reporter on the dailies Il Piccolo and Il Popolo di Roma, but quit after a short stint, bored by the local court news assignments.
Four months after publishing his first article in Marc’Aurelio, the highly influential biweekly humour magazine, he joined the editorial board, achieving success with a regular column titled But Are You Listening?[9] Described as “the determining moment in Fellini’s life”,[10] the magazine gave him steady employment between 1939 and 1942, when he interacted with writers, gagmen, and scriptwriters. These encounters eventually led to opportunities in show business and cinema. Among his collaborators on the magazine's editorial board were the future director Ettore Scola, Marxist theorist and scriptwriter Cesare Zavattini, and Bernardino Zapponi, a future Fellini screenwriter. Conducting interviews for CineMagazzino also proved congenial: when asked to interview Aldo Fabrizi, Italy's most popular variety performer, he established such immediate personal rapport with the man that they collaborated professionally. Specializing in humorous monologues, Fabrizi commissioned material from his young protégé.[11]
Retained on business in Rimini, Urbano sent his wife and family to Rome in 1940 to share an apartment with his son. Fellini and Ruggero Maccari, also on the staff of Marc’Aurelio, began writing radio sketches and gags for films.
Not yet twenty and with Fabrizi's help, Fellini obtained his first screen credit as a comedy writer on Mario Mattoli’s Il pirata sono io (The Pirate's Dream). Progressing rapidly to numerous collaborations on films at Cinecittà, his circle of professional acquaintances widened to include novelist Vitaliano Brancati and scriptwriter Piero Tellini. In the wake of Mussolini’s declaration of war against France and Britain on 10 June 1940, Fellini discovered Kafka’s The Metamorphosis, Gogol, John Steinbeck and William Faulkner along with French films by Marcel Carné, René Clair, and Julien Duvivier.[12] In 1941 he published Il mio amico Pasqualino, a 74-page booklet in ten chapters describing the absurd adventures of Pasqualino, an alter ego.[13]
Writing for radio while attempting to avoid the draft, Fellini met his future wife Giulietta Masina in a studio office at the Italian public radio broadcaster EIAR in the autumn of 1942. Well-paid as the voice of Pallina in Fellini's radio serial, Cico and Pallina, Masina was also well known for her musical-comedy broadcasts which cheered an audience depressed by the war.
Giulietta is practical, and likes the fact that she earns a handsome fee for her radio work, whereas theater never pays well. And of course the fame counts for something too. Radio is a booming business and comedy reviews have a broad and devoted public.[14]
In November 1942, Fellini was sent to Libya, occupied by Fascist Italy, to work on the screenplay of I cavalieri del deserto (Knights of the Desert, 1942), directed by Osvaldo Valenti and Gino Talamo. Fellini welcomed the assignment as it allowed him "to secure another extension on his draft order".[15] Responsible for emergency re-writing, he also directed the film's first scenes. When Tripoli fell under siege by British forces, he and his colleagues made a narrow escape by boarding a German military plane flying to Sicily. His African adventure, later published in Marc’Aurelio as "The First Flight", marked “the emergence of a new Fellini, no longer just a screenwriter, working and sketching at his desk, but a filmmaker out in the field”.[16]
The apolitical Fellini was finally freed of the draft when an Allied air raid over Bologna destroyed his medical records. Fellini and Giulietta hid in her aunt's apartment until Mussolini's fall on 25 July 1943. After dating for nine months, the couple were married on 30 October 1943. Several months later, Masina fell down the stairs and suffered a miscarriage. She gave birth to a son, Pierfederico, on 22 March 1945, but the child died of encephalitis a month later on 24 April 1945.[17] The tragedy had enduring emotional and artistic repercussions.[18]
After the Allied liberation of Rome on 4 June 1944, Fellini and Enrico De Seta opened the Funny Face Shop where they survived the postwar recession drawing caricatures of American soldiers. He became involved with Italian Neorealism when Roberto Rossellini, at work on Stories of Yesteryear (later Rome, Open City), met Fellini in his shop, and proposed he contribute gags and dialogue for the script. Aware of Fellini's reputation as Aldo Fabrizi's “creative muse”,[19] Rossellini also requested that he try to convince the actor to play the role of Father Giuseppe Morosini, the parish priest executed by the SS on 4 April 1944.
In 1947, Fellini and Sergio Amidei received an Oscar nomination for the screenplay of Rome, Open City.
Working as both screenwriter and assistant director on Rossellini's Paisà (Paisan) in 1946, Fellini was entrusted with filming the Sicilian scenes in Maiori. In February 1948, he was introduced to Marcello Mastroianni, then a young theatre actor appearing in a play with Giulietta Masina.[20] Establishing a close working relationship with Alberto Lattuada, Fellini co-wrote the director's Senza pietà (Without Pity) and Il mulino del Po (The Mill on the Po). Fellini also worked with Rossellini on the anthology film L'Amore (1948), co-writing the screenplay and, in a segment titled "The Miracle", acting opposite Anna Magnani. To play the role of a vagabond rogue mistaken by Magnani for a saint, Fellini had to bleach his black hair blond.
In 1950, Fellini co-produced and co-directed, with Alberto Lattuada, Variety Lights (Luci del varietà), his first feature film. A backstage comedy set in the world of small-time travelling performers, it featured Giulietta Masina and Lattuada's wife, Carla Del Poggio. Its release to poor reviews and limited distribution proved disastrous for all concerned. The production company went bankrupt, leaving both Fellini and Lattuada with debts to pay for over a decade.[21] In February 1950, Paisà received an Oscar nomination for the screenplay by Rossellini, Sergio Amidei, and Fellini.
After travelling to Paris for a script conference with Rossellini on Europa '51, Fellini began production on The White Sheik in September 1951, his first solo-directed feature. Starring Alberto Sordi in the title role, the film is a revised version of a treatment first written by Michelangelo Antonioni in 1949 and based on the fotoromanzi, the photographed cartoon strip romances popular in Italy at the time. Producer Carlo Ponti commissioned Fellini and Tullio Pinelli to write the script but Antonioni rejected the story they developed. With Ennio Flaiano, they re-worked the material into a light-hearted satire about newlywed couple Ivan and Wanda Cavalli (Leopoldo Trieste, Brunella Bovo) in Rome to visit the Pope. Ivan's prissy mask of respectability is soon demolished by his wife's obsession with the White Sheik. Highlighting the music of Nino Rota, the film was selected at Cannes (among the films in competition was Orson Welles’s Othello) and then retracted. Screened at the 13th Venice International Film Festival, it was razzed by critics in "the atmosphere of a soccer match”.[22] One reviewer declared that Fellini had “not the slightest aptitude for cinema direction".
In 1953, I Vitelloni found favour with the critics and public. Winning the Silver Lion Award in Venice, it secured Fellini his first international distributor.
Fellini directed La Strada based on a script completed in 1952 with Pinelli and Flaiano. During the last three weeks of shooting, Fellini experienced the first signs of severe clinical depression.[24] Aided by his wife, he undertook a brief period of therapy with Freudian psychoanalyst Emilio Servadio.[24]
Fellini cast American actor Broderick Crawford to interpret the role of an aging swindler in Il Bidone. Based partly on stories told to him by a petty thief during production of La Strada, Fellini developed the script into a con man's slow descent towards a solitary death. To incarnate the role's "intense, tragic face", Fellini's first choice had been Humphrey Bogart,[25] but after learning of the actor's lung cancer, he chose Crawford after seeing his face on the theatrical poster of All the King’s Men (1949).[26] The film shoot was fraught with difficulties stemming from Crawford's alcoholism.[27] Savaged by critics at the 16th Venice International Film Festival, the film did miserably at the box office and did not receive international distribution until 1964.
During the autumn, Fellini researched and developed a treatment based on a film adaptation of Mario Tobino’s novel, The Free Women of Magliano. Set in a mental institution for women, the project was abandoned when financial backers considered the subject had no potential.[28]
While preparing Nights of Cabiria in spring 1956, Fellini learned of his father’s death by cardiac arrest at the age of sixty-two. Produced by Dino De Laurentiis and starring Giulietta Masina, the film took its inspiration from news reports of a woman’s severed head retrieved in a lake and stories by Wanda, a shantytown prostitute Fellini met on the set of Il Bidone.[29] Pier Paolo Pasolini was hired to translate Flaiano and Pinelli’s dialogue into Roman dialect and to supervise researches in the vice-afflicted suburbs of Rome. The movie won the Academy Award for Best Foreign Language Film at the 30th Academy Awards and brought Masina the Best Actress Award at Cannes for her performance.[30]
With Pinelli, he developed Journey with Anita for Sophia Loren and Gregory Peck. An "invention born out of intimate truth", the script was based on Fellini's return to Rimini with a mistress to attend his father's funeral.[31] Due to Loren's unavailability, the project was shelved and resurrected twenty-five years later as Lovers and Liars (1981), a comedy directed by Mario Monicelli with Goldie Hawn and Giancarlo Giannini. For Eduardo De Filippo, he co-wrote the script of Fortunella, tailoring the lead role to accommodate Masina's particular sensibility.[citation needed]
The Hollywood on the Tiber phenomenon of 1958, in which American studios profited from the cheap studio labour available in Rome, provided the backdrop for photojournalists to steal shots of celebrities on the via Veneto.[32] The scandal provoked by Turkish dancer Haish Nana's improvised striptease at a nightclub captured Fellini's imagination: he decided to end his latest script-in-progress, Moraldo in the City, with an all-night "orgy" at a seaside villa. Pierluigi Praturlon's photos of Anita Ekberg wading fully dressed in the Trevi Fountain provided further inspiration for Fellini and his scriptwriters.[citation needed]
Changing the title of the screenplay to La Dolce Vita, Fellini soon clashed with his producer on casting: the director insisted on the relatively unknown Mastroianni while De Laurentiis wanted Paul Newman as a hedge on his investment. Reaching an impasse, De Laurentiis sold the rights to publishing mogul Angelo Rizzoli. Shooting began on 16 March 1959 with Anita Ekberg climbing the stairs to the cupola of Saint Peter's in a mammoth décor constructed at Cinecittà. The statue of Christ flown by helicopter over Rome to Saint Peter's Square was inspired by an actual media event on 1 May 1956, which Fellini had witnessed. The film wrapped on 15 August on a deserted beach at Passo Oscuro with a bloated mutant fish designed by Piero Gherardi.[citation needed]
La Dolce Vita broke all box office records. Despite scalpers selling tickets at 1000 lire,[33] crowds queued for hours to see an "immoral movie" before the censors banned it. At an exclusive Milan screening on 5 February 1960, one outraged patron spat on Fellini while others hurled insults. After right-wing conservatives denounced the film in parliament, undersecretary Domenico Magrì of the Christian Democrats demanded tolerance for its controversial themes.[34] The Vatican's official press organ, l'Osservatore Romano, lobbied for censorship while the Board of Roman Parish Priests and the Genealogical Board of Italian Nobility attacked the film. In one documented instance, the Jesuits of San Fedele suffered severe consequences for writing favourable reviews in defence of La Dolce Vita.[35] In competition at Cannes alongside Antonioni's L'Avventura, the film won the Palme d'Or awarded by presiding juror Georges Simenon. The Belgian writer was promptly "hissed at" by the disapproving festival crowd.[36]
A major discovery for Fellini after his Italian neorealism period (1950–1959) was the work of Carl Jung. After meeting Jungian psychoanalyst Dr. Ernst Bernhard in early 1960, he read Jung's autobiography, Memories, Dreams, Reflections (1963) and experimented with LSD.[37] Bernhard also recommended that Fellini consult the I Ching and keep a record of his dreams. What Fellini formerly accepted as "his extrasensory perceptions"[38] was now interpreted as psychic manifestations of the unconscious. Bernhard's focus on Jungian depth psychology proved to be the single greatest influence on Fellini's mature style and marked the turning point in his work from neorealism to filmmaking that was "primarily oneiric".[39] As a consequence, Jung's seminal ideas on the anima and the animus, the role of archetypes and the collective unconscious directly influenced such films as 8 1⁄2 (1963), Juliet of the Spirits (1965), Fellini Satyricon (1969), Casanova (1976), and City of Women (1980).[40] Other key influences on his work include Luis Buñuel,[a] Charlie Chaplin,[b] Sergei Eisenstein,[c] Buster Keaton,[41] Laurel and Hardy,[41] the Marx Brothers,[41] and Roberto Rossellini.[d]
Exploiting La Dolce Vita's success, financier Angelo Rizzoli set up Federiz, an independent film company, in 1960 for Fellini and production manager Clemente Fracassi to discover and produce new talent. Despite the best intentions, their overcautious editorial and business practices forced the company to close down soon after cancelling Pasolini's project, Accattone (1961).[42]
Condemned as a "public sinner"[43] for La Dolce Vita, Fellini responded with The Temptations of Doctor Antonio, a segment in the omnibus Boccaccio '70. His second colour film, it was the sole project green-lighted at Federiz. Infused with the surrealistic satire that characterized the young Fellini's work at Marc'Aurelio, the film ridiculed a crusader against vice, played by Peppino De Filippo, who goes insane trying to censor a billboard of Anita Ekberg espousing the virtues of milk.[44]
In an October 1960 letter to his colleague Brunello Rondi, Fellini first outlined his film ideas about a man suffering creative block: "Well then - a guy (a writer? any kind of professional man? a theatrical producer?) has to interrupt the usual rhythm of his life for two weeks because of a not-too-serious disease. It’s a warning bell: something is blocking up his system."[45] Unclear about the script, its title, and his protagonist's profession, he scouted locations throughout Italy “looking for the film”,[46] in the hope of resolving his confusion. Flaiano suggested La bella confusione (literally The Beautiful Confusion) as the movie's title. Under pressure from his producers, Fellini finally settled on 8 1⁄2, a self-referential title referring principally (but not exclusively)[47] to the number of films he had directed up to that time.
Giving the order to start production in spring 1962, Fellini signed deals with his producer Rizzoli, fixed dates, had sets constructed, cast Mastroianni, Anouk Aimée, and Sandra Milo in lead roles, and did screen tests at the Scalera Studios in Rome. He hired cinematographer Gianni Di Venanzo, among other key personnel. But apart from naming his hero Guido Anselmi, he still couldn't decide what his character did for a living.[48] The crisis came to a head in April when, sitting in his Cinecittà office, he began a letter to Rizzoli confessing he had "lost his film" and had to abandon the project. Interrupted by the chief machinist requesting he celebrate the launch of 8 1⁄2, Fellini put aside the letter and went on the set. Raising a toast to the crew, he "felt overwhelmed by shame… I was in a no exit situation. I was a director who wanted to make a film he no longer remembers. And lo and behold, at that very moment everything fell into place. I got straight to the heart of the film. I would narrate everything that had been happening to me. I would make a film telling the story of a director who no longer knows what film he wanted to make".[49] This self-mirroring structure makes the entire film inseparable from its reflexive construction.
Shooting began on 9 May 1962. Perplexed by the seemingly chaotic, incessant improvisation on the set, Deena Boyer, the director's American press officer at the time, asked for a rationale. Fellini told her that he hoped to convey the three levels "on which our minds live: the past, the present, and the conditional - the realm of fantasy".[50] After shooting wrapped on 14 October, Nino Rota composed various circus marches and fanfares that would later become signature tunes of the maestro's cinema.[51] Nominated for four Oscars, 8 1⁄2 won awards for best foreign language film and best costume design in black-and-white. In California for the ceremony, Fellini toured Disneyland with Walt Disney the day after.
Increasingly attracted to parapsychology, Fellini met the Turin magician Gustavo Rol in 1963. Rol, a former banker, introduced him to the world of Spiritism and séances. In 1964, Fellini took LSD[52] under the supervision of Emilio Servadio, his psychoanalyst during the 1954 production of La Strada.[53] For years reserved about what actually occurred that Sunday afternoon, he admitted in 1992 that
objects and their functions no longer had any significance. All I perceived was perception itself, the hell of forms and figures devoid of human emotion and detached from the reality of my unreal environment. I was an instrument in a virtual world that constantly renewed its own meaningless image in a living world that was itself perceived outside of nature. And since the appearance of things was no longer definitive but limitless, this paradisiacal awareness freed me from the reality external to my self. The fire and the rose, as it were, became one.[54]
Fellini's hallucinatory insights were given full flower in his first colour feature Juliet of the Spirits (1965), depicting Giulietta Masina as Juliet, a housewife who rightly suspects her husband's infidelity and succumbs to the voices of spirits summoned during a séance at her home. Her sexually voracious next-door neighbour Suzy (Sandra Milo) introduces Juliet to a world of uninhibited sensuality, but Juliet is haunted by childhood memories of her Catholic guilt and a teenaged friend who committed suicide. Complex and filled with psychological symbolism, the film is set to a jaunty score by Nino Rota.
To help promote Satyricon in the United States, Fellini flew to Los Angeles in January 1970 for interviews with Dick Cavett and David Frost. He also met with film director Paul Mazursky, who wanted to cast him alongside Donald Sutherland in his new film, Alex in Wonderland.[55] In February, Fellini scouted locations in Paris for The Clowns, a docufiction both for cinema and television, based on his childhood memories of the circus and a "coherent theory of clowning."[56] As he saw it, the clown "was always the caricature of a well-established, ordered, peaceful society. But today all is temporary, disordered, grotesque. Who can still laugh at clowns?... All the world plays a clown now."[57]
In March 1971, Fellini began production on Roma, a seemingly random collection of episodes informed by the director's memories and impressions of Rome. The "diverse sequences," writes Fellini scholar Peter Bondanella, "are held together only by the fact that they all ultimately originate from the director’s fertile imagination."[58] The film's opening scene anticipates Amarcord while its most surreal sequence involves an ecclesiastical fashion show in which nuns and priests roller skate past shipwrecks of cobwebbed skeletons.
Over a period of six months between January and June 1973, Fellini shot the Oscar-winning Amarcord. Loosely based on the director's 1968 autobiographical essay My Rimini,[59] the film depicts the adolescent Titta and his friends working out their sexual frustrations against the religious and Fascist backdrop of a provincial town in Italy during the 1930s. Produced by Franco Cristaldi, the seriocomic movie became Fellini's second biggest commercial success after La Dolce Vita.[60] Circular in form, Amarcord avoids plot and linear narrative in a way similar to The Clowns and Roma.[61] The director's overriding concern with developing a poetic form of cinema was first outlined in a 1965 interview he gave to The New Yorker journalist Lillian Ross: "I am trying to free my work from certain constrictions – a story with a beginning, a development, an ending. It should be more like a poem with metre and cadence."[62]
Organized by his publisher Diogenes Verlag in 1982, the first major exhibition of 63 drawings by Fellini was held in Paris, Brussels, and the Pierre Matisse Gallery in New York.[63] A gifted caricaturist, Fellini derived much of the inspiration for his sketches from his own dreams, while the films-in-progress both originated from and stimulated drawings for characters, decor, costumes and set designs. Under the title I disegni di Fellini (Fellini's Designs), he published 350 drawings executed in pencil, watercolours, and felt pens.[64]
On 6 September 1985 Fellini was awarded the Golden Lion for lifetime achievement at the 42nd Venice Film Festival. That same year, he became the first non-American to receive the Film Society of Lincoln Center’s annual award for cinematic achievement.[3]
Long fascinated by Carlos Castaneda’s The Teachings of Don Juan: A Yaqui Way of Knowledge, Fellini accompanied the Peruvian author on a journey to the Yucatán to assess the feasibility of a film. After first meeting Castaneda in Rome in October 1984, Fellini drafted a treatment with Pinelli titled Viaggio a Tulun. Producer Alberto Grimaldi, prepared to buy film rights to all of Castaneda's work, then paid for pre-production research taking Fellini and his entourage from Rome to Los Angeles and the jungles of Mexico in October 1985.[65] When Castaneda inexplicably disappeared and the project fell through, Fellini's mystico-shamanic adventures were scripted with Pinelli and serialized in Corriere della Sera in May 1986. A barely veiled satirical interpretation of Castaneda's work,[66] Viaggio a Tulun was published in 1989 as a graphic novel with artwork by Milo Manara and as Trip to Tulum in America in 1990.
For Intervista, produced by Ibrahim Moussa and RAI Television, Fellini intercut memories of the first time he visited Cinecittà in 1939 with present-day footage of himself at work on a screen adaptation of Franz Kafka’s Amerika. A meditation on the nature of memory and film production, it won the special 40th Anniversary Prize at Cannes and the 15th Moscow International Film Festival Golden Prize. In Brussels later that year, a panel of thirty professionals from eighteen European countries named Fellini the world’s best director and 8 1⁄2 the best European film of all time.[67]
In early 1989 Fellini began production on The Voice of the Moon, based on Ermanno Cavazzoni’s novel, Il poema dei lunatici (The Lunatics' Poem). A small town was built at Empire Studios on the via Pontina outside Rome. Starring Roberto Benigni as Ivo Salvini, a madcap poetic figure newly released from a mental institution, the character is a combination of La Strada's Gelsomina, Pinocchio, and Italian poet Giacomo Leopardi.[68] Fellini improvised as he filmed, using as a guide a rough treatment written with Pinelli.[69] Despite its modest critical and commercial success in Italy, and its warm reception by French critics, it failed to interest North American distributors.[citation needed]
Fellini won the Praemium Imperiale, the equivalent of the Nobel Prize in the visual arts, awarded by the Japan Art Association in 1990.[70]
In July 1991 and April 1992, Fellini worked in close collaboration with Canadian filmmaker Damian Pettigrew to establish "the longest and most detailed conversations ever recorded on film".[71] Described as the "Maestro's spiritual testament" by his biographer Tullio Kezich,[72] excerpts culled from the conversations later served as the basis of their feature documentary, Fellini: I'm a Born Liar (2002) and the book, I'm a Born Liar: A Fellini Lexicon. Finding it increasingly difficult to secure financing for feature films, Fellini developed a suite of television projects whose titles reflect their subjects: Attore, Napoli, L'Inferno, L'opera lirica, and L'America.[citation needed]
In April 1993 Fellini received his fifth Oscar, for lifetime achievement, "in recognition of his cinematic accomplishments that have thrilled and entertained audiences worldwide". On 16 June, he entered the Cantonal Hospital in Zürich for an angioplasty on his femoral artery[73] but suffered a stroke at the Grand Hotel in Rimini two months later. Partially paralyzed, he was first transferred to Ferrara for rehabilitation and then to the Policlinico Umberto I in Rome to be near his wife, also hospitalized. He suffered a second stroke and fell into an irreversible coma.[74]
Fellini died in Rome on 31 October 1993 at the age of 73, a day after his 50th wedding anniversary, following a heart attack he had suffered a few weeks earlier.[75] The memorial service, in Studio 5 at Cinecittà, was attended by an estimated 70,000 people.[76] At Giulietta Masina's request, trumpeter Mauro Maur played Nino Rota's "Improvviso dell'Angelo" during the ceremony.[77]
Five months later, on 23 March 1994, Masina died of lung cancer. Fellini, Masina and their son, Pierfederico, are buried in a bronze sepulchre sculpted by Arnaldo Pomodoro. Designed as a ship's prow, the tomb is at the main entrance to the Cemetery of Rimini. The Federico Fellini Airport in Rimini is named in his honour.
Fellini was raised in a Roman Catholic family and considered himself a Catholic, but avoided formal activity in the Catholic Church. Fellini's films include Catholic themes; some celebrate Catholic teachings, while others criticize or ridicule church dogma.[78]
While Fellini was for the most part indifferent to politics,[79] he had a general dislike of authoritarian institutions, and is interpreted by Bondanella as believing in "the dignity and even the nobility of the individual human being".[80] In a 1966 interview, he said, "I make it a point to see if certain ideologies or political attitudes threaten the private freedom of the individual. But for the rest, I am not prepared nor do I plan to become interested in politics."[81]
Despite various famous Italian actors favouring the Communists, Fellini was not left-wing. It is rumoured that he supported Christian Democracy (DC).[82] Bondanella writes that DC "was far too aligned with an extremely conservative and even reactionary pre-Vatican II church to suit Fellini's tastes",[80] but Fellini opposed the '68 Movement and befriended Giulio Andreotti.[83]
Apart from satirizing Silvio Berlusconi and mainstream television in Ginger and Fred,[84] Fellini rarely expressed political views in public and never directed an overtly political film. He directed two electoral television spots during the 1990s: one for DC and another for the Italian Republican Party (PRI).[85] His slogan "Non si interrompe un'emozione" (Don't interrupt an emotion) was directed against the excessive use of TV advertisements. The Democratic Party of the Left also used the slogan in the referendums of 1995.[86]
Personal and highly idiosyncratic visions of society, Fellini's films are a unique combination of memory, dreams, fantasy and desire. The adjectives "Fellinian" and "Felliniesque" are "synonymous with any kind of extravagant, fanciful, even baroque image in the cinema and in art in general".[10] La Dolce Vita contributed the term paparazzi to the English language, derived from Paparazzo, the photographer friend of journalist Marcello Rubini (Marcello Mastroianni).[87]
Contemporary filmmakers such as Tim Burton,[88] Terry Gilliam,[89] Emir Kusturica,[90] and David Lynch[91] have cited Fellini's influence on their work.
Polish director Wojciech Has, whose two best-received films, The Saragossa Manuscript (1965) and The Hour-Glass Sanatorium (1973), are examples of modernist fantasies, has been compared to Fellini for the sheer "luxuriance of his images".[92]
I Vitelloni inspired European directors Juan Antonio Bardem, Marco Ferreri, and Lina Wertmüller and influenced Martin Scorsese's Mean Streets (1973), George Lucas's American Graffiti (1973), Joel Schumacher's St. Elmo's Fire (1985), and Barry Levinson's Diner (1982), among many others.[93] When the American magazine Cinema asked Stanley Kubrick in 1963 to name his ten favorite films, he ranked I Vitelloni number one.[94]
Nights of Cabiria was adapted as the Broadway musical Sweet Charity and the movie Sweet Charity (1969) by Bob Fosse starring Shirley MacLaine. City of Women was adapted for the Berlin stage by Frank Castorf in 1992.[95]
8 1⁄2 inspired, among others, Mickey One (Arthur Penn, 1965), Alex in Wonderland (Paul Mazursky, 1970), Beware of a Holy Whore (Rainer Werner Fassbinder, 1971), Day for Night (François Truffaut, 1973), All That Jazz (Bob Fosse, 1979), Stardust Memories (Woody Allen, 1980), Sogni d'oro (Nanni Moretti, 1981), Parad Planet (Vadim Abdrashitov, 1984), La Pelicula del rey (Carlos Sorin, 1986), Living in Oblivion (Tom DiCillo, 1995), 8 1⁄2 Women (Peter Greenaway, 1999), Falling Down (Joel Schumacher, 1993), and the Broadway musical Nine (Maury Yeston and Arthur Kopit, 1982).[96] Yo-Yo Boing! (1998), a Spanish novel by Puerto Rican writer Giannina Braschi, features a dream sequence with Fellini inspired by 8 1⁄2.[97]
Fellini's work is referenced on the albums Fellini Days (2001) by Fish, Another Side of Bob Dylan (1964) by Bob Dylan with the song "Motorpsycho Nitemare", and Funplex (2008) by the B-52's with the song "Juliet of the Spirits", as well as in the opening traffic jam of R.E.M.'s music video for "Everybody Hurts".[98] American singer Lana Del Rey has cited Fellini as an influence.[99] His work also influenced the American TV shows Northern Exposure and Third Rock from the Sun.[100] Wes Anderson's short film Castello Cavalcanti (2013) is in many places a direct homage to Fellini.[101]
Various film-related material and personal papers of Fellini are in the Wesleyan University Cinema Archives, to which scholars and media experts have full access.[102] In October 2009, the Jeu de Paume in Paris opened an exhibit devoted to Fellini that included ephemera, television interviews, behind-the-scenes photographs, Book of Dreams (based on 30 years of the director's illustrated dreams and notes), along with excerpts from La dolce vita and 8 1⁄2.[103]
In 2014, the Blue Devils Drum and Bugle Corps of Concord, California, performed "Felliniesque", a show themed around Fellini's work, with which they won a record 16th Drum Corps International World Class championship with a record score of 99.650.[104] That same year, the weekly entertainment-trade magazine Variety announced that French director Sylvain Chomet was moving forward with The Thousand Miles, a project based on various Fellini works, including his unpublished drawings and writings.[105]
Television commercials
en/1954.html.txt
ADDED
@@ -0,0 +1,111 @@
Federico del Sagrado Corazón de Jesús García Lorca[1] (Spanish pronunciation: [feðeˈɾiko ðel saˈɣɾaðo koɾaˈθon de xeˈsuz ɣaɾˈθi.a ˈloɾka]; 5 June 1898 – 19 August 1936), known as Federico García Lorca[a] (English: /ɡɑːrˌsiːə ˈlɔːrkə/ gar-SEE-ə LOR-kə), was a Spanish poet, playwright, and theatre director.
García Lorca achieved international recognition as an emblematic member of the Generation of '27, a group consisting of mostly poets who introduced the tenets of European movements (such as symbolism, futurism, and surrealism) into Spanish literature.[2][3] He is believed to have been killed by Nationalist forces at the beginning of the Spanish Civil War.[4][5][6][7][8] His remains have never been found.
García Lorca was born on 5 June 1898, in Fuente Vaqueros, a small town 17 km west of Granada, southern Spain.[9] His father, Federico García Rodríguez, was a prosperous landowner with a farm in the fertile vega (valley) surrounding Granada and a comfortable villa in the heart of the city. García Rodríguez saw his fortunes rise with a boom in the sugar industry. García Lorca's mother, Vicenta Lorca Romero, was a teacher. After Fuente Vaqueros, the family moved in 1905 to the nearby town of Valderrubio (at the time named Asquerosa). In 1909, when the boy was 11, his family moved to the regional capital of Granada, where there was the equivalent of a high school; their best known residence there is the summer home called the Huerta de San Vicente, on what were then the outskirts of the city of Granada. For the rest of his life, he maintained the importance of living close to the natural world, praising his upbringing in the country.[9] All three of these homes—Fuente Vaqueros, Valderrubio, and Huerta de San Vicente—are today museums.[10][11][12]
In 1915, after graduating from secondary school, García Lorca attended the University of Granada. During this time his studies included law, literature and composition. Throughout his adolescence he felt a deeper affinity for music than for literature. When he was 11 years old, he began six years of piano lessons with Antonio Segura Mesa, a harmony teacher in the local conservatory and a composer. It was Segura who inspired Federico's dream of developing a career in music.[13] His first artistic inspirations arose from the scores of Claude Debussy, Frédéric Chopin and Ludwig van Beethoven.[13] Later, with his friendship with composer Manuel de Falla, Spanish folklore became his muse. García Lorca did not begin a career in writing until Segura died in 1916, and his first prose works such as "Nocturne", "Ballade", and "Sonata" drew on musical forms.[14] His milieu of young intellectuals gathered in El Rinconcillo at the Café Alameda in Granada. During 1916 and 1917, García Lorca traveled throughout Castile, León, and Galicia, in northern Spain, with one of his university professors, who also encouraged him to write his first book, Impresiones y paisajes (Impressions and Landscapes—printed at his father's expense in 1918). Fernando de los Ríos persuaded García Lorca's parents to let him move to the progressive, Oxbridge-inspired Residencia de Estudiantes in Madrid in 1919, while nominally attending classes at the University of Madrid.[14]
At the Residencia de Estudiantes in Madrid, García Lorca befriended Luis Buñuel and Salvador Dalí and many other creative artists who were, or would become, influential across Spain.[14] He was taken under the wing of the poet Juan Ramón Jiménez, becoming close to playwright Eduardo Marquina and Gregorio Martínez Sierra, the Director of Madrid's Teatro Eslava.[14]
In 1919–20, at Sierra's invitation, he wrote and staged his first play, The Butterfly's Evil Spell. It was a verse play dramatising the impossible love between a cockroach and a butterfly, with a supporting cast of other insects; it was laughed off the stage by an unappreciative public after only four performances and influenced García Lorca's attitude to the theatre-going public for the rest of his career. He would later claim that Mariana Pineda, written in 1927, was, in fact, his first play. During the time at the Residencia de Estudiantes, he pursued degrees in law and philosophy, though he had more interest in writing than study.[14]
García Lorca's first book of poems, Libro de poemas, was published in 1921, collecting work written from 1918 and selected with the help of his brother Francisco (nicknamed Paquito). They concern the themes of religious faith, isolation, and nature that had filled his prose reflections.[15] Early in 1922 at Granada García Lorca joined the composer Manuel de Falla in order to promote the Concurso de Cante Jondo, a festival dedicated to enhancing flamenco performance. The year before, Lorca had begun to write his Poema del cante jondo ("Poem of the Deep Song," not published until 1931), so he naturally composed an essay on the art of flamenco,[16] and began to speak publicly in support of the Concurso. At the music festival in June he met the celebrated Manuel Torre, a flamenco cantaor. The next year in Granada he also collaborated with Falla and others on the musical production of a play for children, La niña que riega la albahaca y el príncipe preguntón (The Girl that Waters the Basil and the Inquisitive Prince), adapted by Lorca from an Andalusian story.[17] Inspired by the same structural form of sequence as "Deep Song," his collection Suites (1923) was never finished and was not published until 1983.[15]
Over the next few years, García Lorca became increasingly involved in Spain's avant-garde. He published a poetry collection called Canciones (Songs), although it did not contain songs in the usual sense. Shortly after, Lorca was invited to exhibit a series of drawings at the Galeries Dalmau in Barcelona, from 25 June to 2 July 1927.[18] Lorca's sketches were a blend of popular and avant-garde styles, complementing Canciones. Both his poetry and drawings reflected the influence of traditional Andalusian motifs, Cubist syntax, and a preoccupation with sexual identity. Several drawings consisted of superimposed dreamlike faces (or shadows). He later described the double faces as self-portraits, showing "man's capacity for crying as well as winning," in line with his conviction that sorrow and joy were inseparable, just as life and death are.[19]
Romancero Gitano (Gypsy Ballads, 1928), part of his Canción series, became his best known book of poetry.[20] It was a highly stylised imitation of the ballads and poems that were still being told throughout the Spanish countryside. García Lorca describes the work as a "carved altar piece" of Andalusia with "gypsies, horses, archangels, planets, its Jewish and Roman breezes, rivers, crimes, the everyday touch of the smuggler and the celestial note of the naked children of Córdoba. A book that hardly expresses visible Andalusia at all, but where the hidden Andalusia trembles."[20] In 1928, the book brought him fame across Spain and the Hispanic world, and it was only much later that he gained notability as a playwright. For the rest of his life, the writer would search for the elements of Andalusian culture, trying to find its essence without resorting to the "picturesque" or the clichéd use of "local colour."[21]
His second play, Mariana Pineda, with stage settings by Salvador Dalí, opened to great acclaim in Barcelona in 1927.[14] In 1926, García Lorca wrote the play The Shoemaker's Prodigious Wife, which would not be shown until the early 1930s. It was a farce about fantasy, based on the relationship between a flirtatious, petulant wife and a hen-pecked shoemaker.
From 1925 to 1928, he was passionately involved with Dalí.[22] Although Dalí's friendship with Lorca had a strong element of mutual passion,[b] Dalí said he rejected the erotic advances of the poet.[23] With the success of Gypsy Ballads came an estrangement from Dalí and the breakdown of a love affair with sculptor Emilio Aladrén Perojo. These brought on an increasing depression, a situation exacerbated by his anguish over his homosexuality. He felt he was trapped between the persona of the successful author, which he was forced to maintain in public, and the tortured, authentic self, which he could acknowledge only in private. He also had the sense that he was being pigeon-holed as a "gypsy poet". He wrote: "The gypsies are a theme. And nothing more. I could just as well be a poet of sewing needles or hydraulic landscapes. Besides, this gypsyism gives me the appearance of an uncultured, ignorant and primitive poet that you know very well I'm not. I don't want to be typecast."[21]
Growing estrangement between García Lorca and his closest friends reached its climax when surrealists Dalí and Luis Buñuel collaborated on their 1929 film Un Chien Andalou (An Andalusian Dog). García Lorca interpreted it, perhaps erroneously, as a vicious attack upon himself.[24] At this time Dalí also met his future wife Gala. Aware of these problems (though not perhaps of their causes), García Lorca's family arranged for him to make a lengthy visit to the United States in 1929–30.
Green wind. Green branches.
The ship out on the sea
and the horse on the mountain.
With the shadow at the waist
she dreams on her balcony,
green flesh, green hair,
with eyes of cold silver.

From "Romance Sonámbulo" ("Sleepwalking Romance"), García Lorca
In June 1929, García Lorca travelled to the US with Fernando de los Ríos on the RMS Olympic, a sister liner to the RMS Titanic.[25] They stayed mostly in New York City, where de los Ríos started a lecture tour and García Lorca enrolled at Columbia University School of General Studies, funded by his parents. He studied English but, as before, was more absorbed by writing than study. He also spent time in Vermont and later in Havana, Cuba.
His collection Poeta en Nueva York (Poet in New York, published posthumously in 1942) explores alienation and isolation through some graphically experimental poetic techniques and was influenced by the Wall Street crash which he personally witnessed.[26][27][28]
This condemnation of urban capitalist society and materialistic modernity was a sharp departure from his earlier work and label as a folklorist.[25] His play of this time, El público (The Public), was not published until the late 1970s and has never been published in its entirety, the complete manuscript apparently lost. However, the Hispanic Society of America in New York City retains several of his personal letters.[29][30]
García Lorca's return to Spain in 1930 coincided with the fall of the dictatorship of Primo de Rivera and the establishment of the liberal, leftist Second Spanish Republic.[25] In 1931, García Lorca was appointed director of a student theatre company, Teatro Universitario La Barraca (The Shack). It was funded by the Second Republic's Ministry of Education, and it was charged with touring Spain's rural areas in order to introduce audiences to classical Spanish theatre free of charge. With a portable stage and little equipment, they sought to bring theatre to people who had never seen any, with García Lorca directing as well as acting. He commented: "Outside of Madrid, the theatre, which is in its very essence a part of the life of the people, is almost dead, and the people suffer accordingly, as they would if they had lost their two eyes, or ears, or a sense of taste. We [La Barraca] are going to give it back to them."[25] His experiences traveling through impoverished rural Spain and New York (particularly amongst the disenfranchised African-American population), transformed him into a passionate advocate of the theatre of social action.[25] He wrote "The theatre is a school of weeping and of laughter, a free forum, where men can question norms that are outmoded or mistaken and explain with living example the eternal norms of the human heart."[25]
While touring with La Barraca, García Lorca wrote his now best-known plays, the "Rural Trilogy" of Blood Wedding, Yerma and The House of Bernarda Alba, which all rebelled against the norms of bourgeois Spanish society.[25] He called for a rediscovery of the roots of European theatre and the questioning of comfortable conventions such as the popular drawing-room comedies of the time. His work challenged the accepted role of women in society and explored taboo issues of homoeroticism and class. García Lorca wrote little poetry in this last period of his life, declaring in 1936, "theatre is poetry that rises from the book and becomes human enough to talk and shout, weep and despair."[31]
Travelling to Buenos Aires in 1933 to give lectures and direct the Argentine premiere of Blood Wedding, García Lorca spoke of his distilled theories on artistic creation and performance in the famous lecture Play and Theory of the Duende. This attempted to define a schema of artistic inspiration, arguing that great art depends upon a vivid awareness of death, connection with a nation's soil, and an acknowledgment of the limitations of reason.[31][32]
As well as returning to the classical roots of theatre, García Lorca also turned to traditional forms in poetry. His last poetic work, Sonetos de amor oscuro (Sonnets of Dark Love, 1936), was long thought to have been inspired by his passion for Rafael Rodríguez Rapún, secretary of La Barraca. Documents and mementos revealed in 2012 suggest that the actual inspiration was Juan Ramírez de Lucas, a 19-year-old with whom Lorca hoped to emigrate to Mexico.[33] The love sonnets are inspired by the 16th-century poet San Juan de la Cruz.[34] La Barraca's subsidy was cut in half by the rightist government elected in 1934, and its last performance was given in April 1936.
Lorca spent summers at the Huerta de San Vicente from 1926 to 1936. Here he wrote, totally or in part, some of his major works, among them When Five Years Pass (Así que pasen cinco años) (1931), Blood Wedding (1932), Yerma (1934) and Diván del Tamarit (1931–1936). The poet lived in the Huerta de San Vicente in the days just before his arrest and assassination in August 1936.[35]
Although García Lorca's drawings do not often receive attention, he was also a talented artist.[36][37]
Political and social tensions had greatly intensified after the murder of prominent monarchist and anti-Popular Front spokesman José Calvo Sotelo by Republican Assault Guards (Guardias de asalto).[38] García Lorca knew that he would be suspect to the rising right wing for his outspoken socialist views.[34] Granada was so tumultuous that it had not had a mayor for months; no one dared accept the job. When Lorca's brother-in-law, Manuel Fernández-Montesinos, agreed to accept the position, he was assassinated within a week. On 18 August, the same day Fernández-Montesinos was shot, Lorca was arrested.[39]
It is thought that García Lorca was shot and killed by Nationalist militia[40][41] on 19 August 1936.[42] The author Ian Gibson in his book The Assassination of García Lorca argues that he was shot with three others (Joaquín Arcollas Cabezas, Francisco Galadí Melgar and Dióscoro Galindo González) at a place known as the Fuente Grande ('Great Spring') which is on the road between Víznar and Alfacar.[43] Police reports released by radio station Cadena SER in April 2015 conclude that Lorca was executed by fascist forces. The Franco-era report, dated 9 July 1965, describes the writer as a "socialist" and "freemason belonging to the Alhambra lodge", who engaged in "homosexual and abnormal practices".[44][45][46]
Significant controversy exists about the motives and details of Lorca's murder. Personal, non-political motives have been suggested. García Lorca's biographer, Stainton, states that his killers made remarks about his sexual orientation, suggesting that it played a role in his death.[47] Ian Gibson suggests that García Lorca's assassination was part of a campaign of mass killings intended to eliminate supporters of the Leftist Popular Front.[39] However, Gibson proposes that rivalry between the right-wing Spanish Confederation of the Autonomous Right (CEDA) and the fascist Falange was a major factor in Lorca's death. Former CEDA Parliamentary Deputy Ramón Ruiz Alonso arrested García Lorca at the Rosales's home, and was the one responsible for the original denunciation that led to the arrest warrant being issued.
Then I realized I had been murdered.
They looked for me in cafes, cemeteries and churches
.... but they did not find me.
They never found me?
No. They never found me.

From "The Fable and Round of the Three Friends", Poet in New York (1929), García Lorca
It has been argued that García Lorca was apolitical and had many friends in both Republican and Nationalist camps. Gibson disputes this in his 1978 book about the poet's death.[39] He cites, for example, Mundo Obrero's published manifesto, which Lorca later signed, and alleges that Lorca was an active supporter of the Popular Front.[48] Lorca read out this manifesto at a banquet in honour of fellow poet Rafael Alberti on 9 February 1936.
Many anti-communists were sympathetic to Lorca or assisted him. In the days before his arrest he found shelter in the house of the artist and leading Falange member Luis Rosales. Indeed, evidence suggests that Rosales was very nearly shot as well by the Civil Governor Valdés for helping García Lorca. Poet Gabriel Celaya wrote in his memoirs that he once found García Lorca in the company of Falangist José Maria Aizpurua. Celaya further wrote that Lorca dined every Friday with Falangist founder and leader José Antonio Primo de Rivera.[49] On 11 March 1937 an article appeared in the Falangist press denouncing the murder and lionizing García Lorca; the article opened: "The finest poet of Imperial Spain has been assassinated."[50] Jean-Louis Schonberg also put forward the 'homosexual jealousy' theory.[51] The dossier on the murder, compiled in 1936 at Franco's request and cited by Gibson and others who have never seen it, has yet to surface. The first published account of an attempt to locate Lorca's grave can be found in British traveller and Hispanist Gerald Brenan's book The Face of Spain.[52] Despite early attempts such as Brenan's in 1949, the site remained undiscovered throughout the Franco era.
In 2008, a Spanish judge opened an investigation into Lorca's death. The García Lorca family dropped objections to the excavation of a potential gravesite near Alfacar, but no human remains were found.[53][54] The investigation was dropped. A further investigation was begun in 2016, to no avail.[55]
In late October 2009, a team of archaeologists and historians from the University of Granada began excavations outside Alfacar.[56] The site was identified three decades previously by a man who said he had helped dig Lorca's grave.[57][58] Lorca was thought to be buried with at least three other men beside a winding mountain road that connects the villages of Víznar and Alfacar.[59]
The excavations began at the request of another victim's family.[60] Following a long-standing objection, the Lorca family also gave their permission.[60] In October 2009 Francisco Espínola, a spokesman for the Justice Ministry of the Andalusian regional government, said that after years of pressure García Lorca's body would "be exhumed in a matter of weeks."[61] Lorca's relatives, who had initially opposed an exhumation, said they might provide a DNA sample in order to identify his remains.[60]
In late November 2009, after two weeks of excavating the site, organic material believed to be human bones was recovered. The remains were taken to the University of Granada for examination.[62] But in mid-December 2009, doubts were raised as to whether the poet's remains would be found.[63] The dig produced "not one bone, item of clothing or bullet shell", said Begoña Álvarez, justice minister of Andalusia. She added, "the soil was only 40 cm (16 in) deep, making it too shallow for a grave."[64][65] The failed excavation cost €70,000.[66]
In January 2012, a local historian, Miguel Caballero Pérez, author of "The last 13 hours of García Lorca",[67] applied for permission to excavate another area less than half a kilometre from the site, where he believes Lorca's remains are located.[68]
Claims in 2016, by Stephen Roberts, an associate professor in Spanish literature at Nottingham University, and others that the poet's body was buried in a well in Alfacar have not been substantiated.[69]
Francisco Franco's Falangist regime placed a general ban on García Lorca's work, which was not rescinded until 1953. That year, a (censored) Obras completas (Complete Works) was released. Following this, Blood Wedding, Yerma and The House of Bernarda Alba were successfully played on the main Spanish stages. Obras completas did not include his late heavily homoerotic Sonnets of Dark Love, written in November 1935 and shared only with close friends. They were lost until 1983/4 when they were finally published in draft form. (No final manuscripts have ever been found.) It was only after Franco's death that García Lorca's life and death could be openly discussed in Spain. This was due not only to political censorship, but also to the reluctance of the García Lorca family to allow publication of unfinished poems and plays prior to the publication of a critical edition of his works.
South African Roman Catholic poet Roy Campbell, who enthusiastically supported the Nationalists both during and after the Civil War, later produced acclaimed translations of Lorca's work. In his poem, The Martyrdom of F. Garcia Lorca, Campbell wrote,
Not only did he lose his life
By shots assassinated:
But with a hammer and a knife
Was after that
– translated.[70]
In Granada, the city of his birth, the Parque Federico García Lorca is dedicated to his memory and includes the Huerta de San Vicente, the Lorca family summer home, opened as a museum in 1995. The grounds, including nearly two hectares of land, the two adjoining houses, works of art, and the original furnishings have been preserved.[71] There is a statue of Lorca on the Avenida de la Constitución in the city center, and a cultural center bearing his name is under construction[when?] and will play a major role in preserving and disseminating his works.
The Parque Federico García Lorca, in Alfacar, is near Fuente Grande; in 2009 excavations in it failed to locate Lorca's body. Close to the olive tree indicated by some as marking the location of the grave, there is a stone memorial to Federico García Lorca and all other victims of the Civil War, 1936–39. Flowers are laid at the memorial every year on the anniversary of his death, and a commemorative event including music and readings of the poet's works is held every year in the park to mark the anniversary. On 17 August 2011, to remember the 75th anniversary of Lorca's assassination and to celebrate his life and legacy, this event included dance, song, poetry and dramatic readings and attracted hundreds of spectators.
At the Barranco de Viznar, between Viznar and Alfacar, there is a memorial stone bearing the words "Lorca eran todos, 18-8-2002" ("All were Lorca"). The Barranco de Viznar is the site of mass graves and has been proposed as another possible location of the poet's remains.
García Lorca is honored by a statue prominently located in Madrid's Plaza de Santa Ana. Political philosopher David Crocker reports that "the statue, at least, is still an emblem of the contested past: each day, the Left puts a red kerchief on the neck of the statue, and someone from the Right comes later to take it off."[72]
In Paris, France, the memory of García Lorca is honored in the Federico García Lorca Garden, in the center of the French capital, on the Seine.
The Fundación Federico García Lorca, directed by Lorca's niece Laura García Lorca, sponsors the celebration and dissemination of the writer's work and is currently[when?] building the Centro Federico García Lorca in Madrid. The Lorca family deposited all of Federico's documents with the foundation, which holds them on the family's behalf.[73]
In the Hotel Castelar in Buenos Aires, Argentina, where Lorca lived for six months in 1933, the room where he lived has been kept as a shrine and contains original writings and drawings of his.
In 2014 Lorca was one of the inaugural honorees in the Rainbow Honor Walk, a walk of fame in San Francisco's Castro neighborhood noting LGBTQ people who have "made significant contributions in their fields."[74][75][76]
en/1955.html.txt
ADDED
@@ -0,0 +1,111 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
Federico del Sagrado Corazón de Jesús García Lorca[1] (Spanish pronunciation: [feðeˈɾiko ðel saˈɣɾaðo koɾaˈθon de xeˈsuz ɣaɾˈθi.a ˈloɾka]; 5 June 1898 – 19 August 1936), known as Federico García Lorca[a] (English: /ɡɑːrˌsiːə ˈlɔːrkə/ gar-SEE-ə LOR-kə), was a Spanish poet, playwright, and theatre director.
|
4 |
+
|
5 |
+
García Lorca achieved international recognition as an emblematic member of the Generation of '27, a group consisting of mostly poets who introduced the tenets of European movements (such as symbolism, futurism, and surrealism) into Spanish literature.[2][3] He is believed to have been killed by Nationalist forces at the beginning of the Spanish Civil War.[4][5][6][7][8] His remains have never been found.
|
6 |
+
|
7 |
+
García Lorca was born on 5 June 1898, in Fuente Vaqueros, a small town 17 km west of Granada, southern Spain.[9] His father, Federico García Rodríguez, was a prosperous landowner with a farm in the fertile vega (valley) surrounding Granada and a comfortable villa in the heart of the city. García Rodríguez saw his fortunes rise with a boom in the sugar industry. García Lorca's mother, Vicenta Lorca Romero, was a teacher. After Fuente Vaqueros, the family moved in 1905 to the nearby town of Valderrubio (at the time named Asquerosa). In 1909, when the boy was 11, his family moved to the regional capital of Granada, where there was the equivalent of a high school; their best known residence there is the summer home called the Huerta de San Vicente, on what were then the outskirts of the city of Granada. For the rest of his life, he maintained the importance of living close to the natural world, praising his upbringing in the country.[9] All three of these homes—Fuente Vaqueros, Valderrubio, and Huerta de San Vicente—are today museums.[10][11][12]
|
8 |
+
|
9 |
+
In 1915, after graduating from secondary school, García Lorca attended the University of Granada. During this time his studies included law, literature and composition. Throughout his adolescence he felt a deeper affinity for music than for literature. When he was 11 years old, he began six years of piano lessons with Antonio Segura Mesa, a harmony teacher in the local conservatory and a composer. It was Segura who inspired Federico's dream of developing a career in music.[13] His first artistic inspirations arose from the scores of Claude Debussy, Frédéric Chopin and Ludwig van Beethoven.[13] Later, with his friendship with composer Manuel de Falla, Spanish folklore became his muse. García Lorca did not begin a career in writing until Segura died in 1916, and his first prose works such as "Nocturne", "Ballade", and "Sonata" drew on musical forms.[14] His milieu of young intellectuals gathered in El Rinconcillo at the Café Alameda in Granada. During 1916 and 1917, García Lorca traveled throughout Castile, León, and Galicia, in northern Spain, with a professor of his university, who also encouraged him to write his first book, Impresiones y paisajes (Impressions and Landscapes—printed at his father's expense in 1918). Fernando de los Rios persuaded García Lorca's parents to let him move to the progressive, Oxbridge-inspired Residencia de Estudiantes in Madrid in 1919, while nominally attending classes at the University of Madrid.[14]
|
10 |
+
|
11 |
+
At the Residencia de Estudiantes in Madrid, García Lorca befriended Luis Buñuel and Salvador Dalí and many other creative artists who were, or would become, influential across Spain.[14] He was taken under the wing of the poet Juan Ramón Jiménez, becoming close to playwright Eduardo Marquina and Gregorio Martínez Sierra, the Director of Madrid's Teatro Eslava.[14]
|
12 |
+
|
13 |
+
In 1919–20, at Sierra's invitation, he wrote and staged his first play, The Butterfly's Evil Spell. It was a verse play dramatising the impossible love between a cockroach and a butterfly, with a supporting cast of other insects; it was laughed off the stage by an unappreciative public after only four performances and influenced García Lorca's attitude to the theatre-going public for the rest of his career. He would later claim that Mariana Pineda, written in 1927, was, in fact, his first play. During the time at the Residencia de Estudiantes, he pursued degrees in law and philosophy, though he had more interest in writing than study.[14]
|
14 |
+
|
15 |
+
García Lorca's first book of poems, Libro de poemas, was published in 1921, collecting work written from 1918 and selected with the help of his brother Francisco (nicknamed Paquito). They concern the themes of religious faith, isolation, and nature that had filled his prose reflections.[15] Early in 1922 at Granada García Lorca joined the composer Manuel de Falla in order to promote the Concurso de Cante Jondo, a festival dedicated to enhance flamenco performance. The year before Lorca had begun to write his Poema del cante jondo ("Poem of the Deep Song," not published until 1931), so he naturally composed an essay on the art of flamenco,[16] and began to speak publicly in support of the Concurso. At the music festival in June he met the celebrated Manuel Torre, a flamenco cantaor. The next year in Granada he also collaborated with Falla and others on the musical production of a play for children, La niña que riega la albahaca y el príncipe preguntón (The Girl that Waters the Basil and the Inquisitive Prince) adapted by Lorca from an Andalusian story.[17] Inspired by the same structural form of sequence as "Deep Song," his collection Suites (1923) was never finished and not published until 1983.[15]
|
16 |
+
|
17 |
+
Over the next few years, García Lorca became increasingly involved in Spain's avant-garde. He published a poetry collection called Canciones (Songs), although it did not contain songs in the usual sense. Shortly after, Lorca was invited to exhibit a series of drawings at the Galeries Dalmau in Barcelona, from 25 June to 2 July 1927.[18] Lorca's sketches were a blend of popular and avant-garde styles, complementing Canción. Both his poetry and drawings reflected the influence of traditional Andalusian motifs, Cubist syntax, and a preoccupation with sexual identity. Several drawings consisted of superimposed dreamlike faces (or shadows). He later described the double faces as self-portraits, showing "man's capacity for crying as well as winning," inline with his conviction that sorrow and joy were inseparable, just as life and death.[19]
|
18 |
+
|
19 |
+
Romancero Gitano (Gypsy Ballads, 1928), part of his Cancion series, became his best known book of poetry.[20] It was a highly stylised imitation of the ballads and poems that were still being told throughout the Spanish countryside. García Lorca describes the work as a "carved altar piece" of Andalusia with "gypsies, horses, archangels, planets, its Jewish and Roman breezes, rivers, crimes, the everyday touch of the smuggler and the celestial note of the naked children of Córdoba. A book that hardly expresses visible Andalusia at all, but where the hidden Andalusia trembles."[20] In 1928, the book brought him fame across Spain and the Hispanic world, and it was only much later that he gained notability as a playwright. For the rest of his life, the writer would search for the elements of Andaluce culture, trying to find its essence without resorting to the "picturesque" or the cliched use of "local colour."[21]
|
20 |
+
|
21 |
+
His second play, Mariana Pineda, with stage settings by Salvador Dalí, opened to great acclaim in Barcelona in 1927.[14] In 1926, García Lorca wrote the play The Shoemaker's Prodigious Wife, which would not be shown until the early 1930s. It was a farce about fantasy, based on the relationship between a flirtatious, petulant wife and a hen-pecked shoemaker.
From 1925 to 1928, he was passionately involved with Dalí.[22] Although Dalí's friendship with Lorca had a strong element of mutual passion,[b] Dalí said he rejected the erotic advances of the poet.[23] With the success of "Gypsy Ballads" came an estrangement from Dalí and the breakdown of a love affair with the sculptor Emilio Aladrén Perojo. These brought on an increasing depression, a situation exacerbated by his anguish over his homosexuality. He felt he was trapped between the persona of the successful author, which he was forced to maintain in public, and the tortured, authentic self, which he could acknowledge only in private. He also had the sense that he was being pigeon-holed as a "gypsy poet". He wrote: "The gypsies are a theme. And nothing more. I could just as well be a poet of sewing needles or hydraulic landscapes. Besides, this gypsyism gives me the appearance of an uncultured, ignorant and primitive poet that you know very well I'm not. I don't want to be typecast."[21]
Growing estrangement between García Lorca and his closest friends reached its climax when surrealists Dalí and Luis Buñuel collaborated on their 1929 film Un Chien Andalou (An Andalusian Dog). García Lorca interpreted it, perhaps erroneously, as a vicious attack upon himself.[24] At this time Dalí also met his future wife Gala. Aware of these problems (though not perhaps of their causes), García Lorca's family arranged for him to make a lengthy visit to the United States in 1929–30.
Green wind. Green branches.
The ship out on the sea
and the horse on the mountain.
With the shadow at the waist
she dreams on her balcony,
green flesh, green hair,
with eyes of cold silver.
From "Romance Sonámbulo", ("Sleepwalking Romance"), García Lorca
In June 1929, García Lorca travelled to the US with Fernando de los Ríos on the RMS Olympic, a sister liner to the RMS Titanic.[25] They stayed mostly in New York City, where de los Ríos started a lecture tour and García Lorca enrolled at Columbia University School of General Studies, funded by his parents. He studied English but, as before, was more absorbed by writing than study. He also spent time in Vermont and later in Havana, Cuba.
His collection Poeta en Nueva York (Poet in New York, published posthumously in 1942) explores alienation and isolation through some graphically experimental poetic techniques and was influenced by the Wall Street Crash, which he personally witnessed.[26][27][28]
This condemnation of urban capitalist society and materialistic modernity was a sharp departure from his earlier work and label as a folklorist.[25] His play of this time, El público (The Public), was not published until the late 1970s and has never been published in its entirety, the complete manuscript apparently lost. However, the Hispanic Society of America in New York City retains several of his personal letters.[29][30]
García Lorca's return to Spain in 1930 coincided with the fall of the dictatorship of Primo de Rivera and the establishment of the liberal, leftist Second Spanish Republic.[25] In 1931, García Lorca was appointed director of a student theatre company, Teatro Universitario La Barraca (The Shack). It was funded by the Second Republic's Ministry of Education, and it was charged with touring Spain's rural areas in order to introduce audiences to classical Spanish theatre free of charge. With a portable stage and little equipment, they sought to bring theatre to people who had never seen any, with García Lorca directing as well as acting. He commented: "Outside of Madrid, the theatre, which is in its very essence a part of the life of the people, is almost dead, and the people suffer accordingly, as they would if they had lost their two eyes, or ears, or a sense of taste. We [La Barraca] are going to give it back to them."[25] His experiences travelling through impoverished rural Spain and New York (particularly amongst the disenfranchised African-American population) transformed him into a passionate advocate of the theatre of social action.[25] He wrote: "The theatre is a school of weeping and of laughter, a free forum, where men can question norms that are outmoded or mistaken and explain with living example the eternal norms of the human heart."[25]
While touring with La Barraca, García Lorca wrote his now best-known plays, the "Rural Trilogy" of Blood Wedding, Yerma and The House of Bernarda Alba, which all rebelled against the norms of bourgeois Spanish society.[25] He called for a rediscovery of the roots of European theatre and the questioning of comfortable conventions such as the popular drawing-room comedies of the time. His work challenged the accepted role of women in society and explored taboo issues of homoeroticism and class. García Lorca wrote little poetry in this last period of his life, declaring in 1936, "theatre is poetry that rises from the book and becomes human enough to talk and shout, weep and despair."[31]
Travelling to Buenos Aires in 1933 to give lectures and direct the Argentine premiere of Blood Wedding, García Lorca spoke of his distilled theories on artistic creation and performance in the famous lecture Play and Theory of the Duende. This attempted to define a schema of artistic inspiration, arguing that great art depends upon a vivid awareness of death, connection with a nation's soil, and an acknowledgment of the limitations of reason.[31][32]
As well as returning to the classical roots of theatre, García Lorca also turned to traditional forms in poetry. His last poetic work, Sonetos de amor oscuro (Sonnets of Dark Love, 1936), was long thought to have been inspired by his passion for Rafael Rodríguez Rapún, secretary of La Barraca. Documents and mementos revealed in 2012 suggest that the actual inspiration was Juan Ramírez de Lucas, a 19-year-old with whom Lorca hoped to emigrate to Mexico.[33] The love sonnets are inspired by the 16th-century poet San Juan de la Cruz.[34] La Barraca's subsidy was cut in half by the rightist government elected in 1934, and its last performance was given in April 1936.
Lorca spent summers at the Huerta de San Vicente from 1926 to 1936. Here he wrote, totally or in part, some of his major works, among them When Five Years Pass (Así que pasen cinco años) (1931), Blood Wedding (1932), Yerma (1934) and Diván del Tamarit (1931–1936). The poet lived in the Huerta de San Vicente in the days just before his arrest and assassination in August 1936.[35]
Although García Lorca's drawings do not often receive attention, he was also a talented artist.[36][37]
Political and social tensions had greatly intensified after the murder of prominent monarchist and anti-Popular Front spokesman José Calvo Sotelo by Republican Assault Guards (Guardias de asalto).[38] García Lorca knew that he would be suspect to the rising right wing for his outspoken socialist views.[34] Granada was so tumultuous that it had not had a mayor for months; no one dared accept the job. When Lorca's brother-in-law, Manuel Fernández-Montesinos, agreed to accept the position, he was assassinated within a week. On the day Fernández-Montesinos was shot, 18 August, Lorca was arrested.[39]
It is thought that García Lorca was shot and killed by Nationalist militia[40][41] on 19 August 1936.[42] The author Ian Gibson in his book The Assassination of García Lorca argues that he was shot with three others (Joaquín Arcollas Cabezas, Francisco Galadí Melgar and Dióscoro Galindo González) at a place known as the Fuente Grande ('Great Spring') which is on the road between Víznar and Alfacar.[43] Police reports released by radio station Cadena SER in April 2015 conclude that Lorca was executed by fascist forces. The Franco-era report, dated 9 July 1965, describes the writer as a "socialist" and "freemason belonging to the Alhambra lodge", who engaged in "homosexual and abnormal practices".[44][45][46]
Significant controversy exists about the motives and details of Lorca's murder. Personal, non-political motives have been suggested. García Lorca's biographer, Stainton, states that his killers made remarks about his sexual orientation, suggesting that it played a role in his death.[47] Ian Gibson suggests that García Lorca's assassination was part of a campaign of mass killings intended to eliminate supporters of the leftist Popular Front.[39] However, Gibson proposes that rivalry between the right-wing Spanish Confederation of the Autonomous Right (CEDA) and the fascist Falange was a major factor in Lorca's death. Former CEDA parliamentary deputy Ramón Ruiz Alonso arrested García Lorca at the Rosales family's home, and was responsible for the original denunciation that led to the arrest warrant being issued.
Then I realized I had been murdered.
They looked for me in cafes, cemeteries and churches
.... but they did not find me.
They never found me?
No. They never found me.
From "The Fable And Round of the Three Friends", Poet in New York (1929), García Lorca
It has been argued that García Lorca was apolitical and had many friends in both Republican and Nationalist camps. Gibson disputes this in his 1978 book about the poet's death.[39] He cites, for example, Mundo Obrero's published manifesto, which Lorca later signed, and alleges that Lorca was an active supporter of the Popular Front.[48] Lorca read out this manifesto at a banquet in honour of fellow poet Rafael Alberti on 9 February 1936.
Many anti-communists were sympathetic to Lorca or assisted him. In the days before his arrest he found shelter in the house of the artist and leading Falange member Luis Rosales. Indeed, evidence suggests that Rosales was very nearly shot as well by the Civil Governor Valdés for helping García Lorca. The poet Gabriel Celaya wrote in his memoirs that he once found García Lorca in the company of the Falangist José Maria Aizpurua. Celaya further wrote that Lorca dined every Friday with the Falangist founder and leader José Antonio Primo de Rivera.[49] On 11 March 1937 an article appeared in the Falangist press denouncing the murder and lionizing García Lorca; the article opened: "The finest poet of Imperial Spain has been assassinated."[50] Jean-Louis Schonberg also put forward the 'homosexual jealousy' theory.[51] The dossier on the murder, compiled in 1936 at Franco's request and referred to by Gibson and others who have not seen it, has yet to surface. The first published account of an attempt to locate Lorca's grave appears in the British traveller and Hispanist Gerald Brenan's book The Face of Spain.[52] Despite early attempts such as Brenan's in 1949, the site remained undiscovered throughout the Franco era.
In 2008, a Spanish judge opened an investigation into Lorca's death. The García Lorca family dropped objections to the excavation of a potential gravesite near Alfacar, but no human remains were found.[53][54] The investigation was dropped. A further investigation was begun in 2016, to no avail.[55]
In late October 2009, a team of archaeologists and historians from the University of Granada began excavations outside Alfacar.[56] The site was identified three decades previously by a man who said he had helped dig Lorca's grave.[57][58] Lorca was thought to be buried with at least three other men beside a winding mountain road that connects the villages of Víznar and Alfacar.[59]
The excavations began at the request of another victim's family.[60] Following a long-standing objection, the Lorca family also gave their permission.[60] In October 2009 Francisco Espínola, a spokesman for the Justice Ministry of the Andalusian regional government, said that after years of pressure García Lorca's body would "be exhumed in a matter of weeks."[61] Lorca's relatives, who had initially opposed an exhumation, said they might provide a DNA sample in order to identify his remains.[60]
In late November 2009, after two weeks of excavating the site, organic material believed to be human bones was recovered. The remains were taken to the University of Granada for examination.[62] But in mid-December 2009, doubts were raised as to whether the poet's remains would be found.[63] The dig produced "not one bone, item of clothing or bullet shell", said Begoña Álvarez, justice minister of Andalucía. She added, "the soil was only 40 cm (16 in) deep, making it too shallow for a grave."[64][65] The failed excavation cost €70,000.[66]
In January 2012, a local historian, Miguel Caballero Pérez, author of "The last 13 hours of García Lorca",[67] applied for permission to excavate another area less than half a kilometre from the site, where he believes Lorca's remains are located.[68]
Claims in 2016, by Stephen Roberts, an associate professor in Spanish literature at Nottingham University, and others that the poet's body was buried in a well in Alfacar have not been substantiated.[69]
Francisco Franco's Falangist regime placed a general ban on García Lorca's work, which was not rescinded until 1953. That year, a (censored) Obras completas (Complete Works) was released. Following this, Blood Wedding, Yerma and The House of Bernarda Alba were successfully staged in Spain's main theatres. Obras completas did not include his late, heavily homoerotic Sonnets of Dark Love, written in November 1935 and shared only with close friends. They were lost until 1983–84, when they were finally published in draft form. (No final manuscripts have ever been found.) It was only after Franco's death that García Lorca's life and death could be openly discussed in Spain. This was due not only to political censorship, but also to the reluctance of the García Lorca family to allow publication of unfinished poems and plays prior to the publication of a critical edition of his works.
South African Roman Catholic poet Roy Campbell, who enthusiastically supported the Nationalists both during and after the Civil War, later produced acclaimed translations of Lorca's work. In his poem, The Martyrdom of F. Garcia Lorca, Campbell wrote,
Not only did he lose his life
By shots assassinated:
But with a hammer and a knife
Was after that
– translated.[70]
In Granada, the city of his birth, the Parque Federico García Lorca is dedicated to his memory and includes the Huerta de San Vicente, the Lorca family summer home, opened as a museum in 1995. The grounds, including nearly two hectares of land, the two adjoining houses, works of art, and the original furnishings have been preserved.[71] There is a statue of Lorca on the Avenida de la Constitución in the city center, and a cultural center bearing his name is under construction and will play a major role in preserving and disseminating his works.
The Parque Federico García Lorca, in Alfacar, is near Fuente Grande; in 2009 excavations in it failed to locate Lorca's body. Close to the olive tree indicated by some as marking the location of the grave, there is a stone memorial to Federico García Lorca and all other victims of the Civil War, 1936–39. Flowers are laid at the memorial every year on the anniversary of his death, and a commemorative event including music and readings of the poet's works is held every year in the park to mark the anniversary. On 17 August 2011, to remember the 75th anniversary of Lorca's assassination and to celebrate his life and legacy, this event included dance, song, poetry and dramatic readings and attracted hundreds of spectators.
At the Barranco de Viznar, between Viznar and Alfacar, there is a memorial stone bearing the words "Lorca eran todos, 18-8-2002" ("All were Lorca"). The Barranco de Viznar is the site of mass graves and has been proposed as another possible location of the poet's remains.
García Lorca is honored by a statue prominently located in Madrid's Plaza de Santa Ana. Political philosopher David Crocker reports that "the statue, at least, is still an emblem of the contested past: each day, the Left puts a red kerchief on the neck of the statue, and someone from the Right comes later to take it off."[72]
In Paris, France, the memory of García Lorca is honored in the Federico García Lorca Garden, in the center of the French capital, on the banks of the Seine.
The Fundación Federico García Lorca, directed by Lorca's niece Laura García Lorca, sponsors the celebration and dissemination of the writer's work and is currently building the Centro Federico García Lorca in Madrid. The Lorca family deposited all of Federico's documents with the foundation, which holds them on the family's behalf.[73]
In the Hotel Castelar in Buenos Aires, Argentina, where Lorca lived for six months in 1933, the room where he lived has been kept as a shrine and contains original writings and drawings of his.
In 2014 Lorca was one of the inaugural honorees in the Rainbow Honor Walk, a walk of fame in San Francisco's Castro neighborhood noting LGBTQ people who have "made significant contributions in their fields."[74][75][76]
en/1956.html.txt
ADDED
@@ -0,0 +1,119 @@
A fairy (also fay, fae, fair folk) is a type of mythical being or legendary creature found in the folklore of multiple European cultures (including Celtic, Slavic, German, English, and French folklore), a form of spirit, often described as metaphysical, supernatural, or preternatural.
Myths and stories about fairies do not have a single origin, but are rather a collection of folk beliefs from disparate sources. Various folk theories about the origins of fairies include casting them as either demoted angels or demons in a Christian tradition, as minor deities in Pagan belief systems, as spirits of the dead, as prehistoric precursors to humans, or as elementals.
The label of fairy has at times applied only to specific magical creatures with human appearance, small stature, magical powers, and a penchant for trickery. At other times it has been used to describe any magical creature, such as goblins and gnomes. Fairy has at times been used as an adjective, with a meaning equivalent to "enchanted" or "magical".
A recurring motif of legends about fairies is the need to ward off fairies using protective charms. Common examples of such charms include church bells, wearing clothing inside out, four-leaf clover, and food. Fairies were also sometimes thought to haunt specific locations, and to lead travelers astray using will-o'-the-wisps. Before the advent of modern medicine, fairies were often blamed for sickness, particularly tuberculosis and birth deformities.
In addition to their folkloric origins, fairies were a common feature of Renaissance literature and Romantic art, and were especially popular in the United Kingdom during the Victorian and Edwardian eras. The Celtic Revival also saw fairies established as a canonical part of Celtic cultural heritage.
The English fairy derives from the Early Modern English faerie, meaning "realm of the fays". Faerie, in turn, derives from the Old French form faierie, a derivation from faie (from Vulgar Latin fata) with the abstract noun suffix -erie.
In Old French romance, a faie or fee was a woman skilled in magic, and who knew the power and virtue of words, of stones, and of herbs.[1]
"Fairy" was used to represent: an illusion or enchantment; the land of the Faes; collectively the inhabitants thereof; an individual such as a fairy knight.[1]
Faie became Modern English fay, while faierie became fairy, but this spelling almost exclusively refers to one individual (the same meaning as fay). In the sense of "land where fairies dwell", archaic spellings faery and faerie are still in use.
Latinate fay is not related to the Germanic fey (from Old English fǣġe), meaning "fated to die".[2] Yet this unrelated Germanic word "fey" may have been influenced by Old French fae (fay or fairy), as its meaning had shifted slightly to "fated" from the earlier "doomed" or "accursed".[3]
Various folklore traditions refer to fairies euphemistically as wee folk, good folk, people of peace, fair folk (Welsh: Tylwyth Teg), etc.[4]
The term fairy is sometimes used to describe any magical creature, including goblins and gnomes, while at other times, the term describes only a specific type of ethereal creature or sprite.[5] The concept of "fairy" in the narrower sense is unique to English folklore, later made diminutive in accordance with prevailing tastes of the Victorian era, as in "fairy tales" for children.
Historical origins include various traditions of Brythonic (Bretons, Welsh, Cornish), Gaelic (Irish, Scots, Manx), and Germanic peoples, and of Middle French medieval romances. Fairie was used adjectivally, meaning "enchanted" (as in fairie knight, fairie queene), but also became a generic term for various "enchanted" creatures during the Late Middle English period. Literature of the Elizabethan era conflated elves with the fairies of Romance culture, rendering these terms somewhat interchangeable.
The Victorian and Edwardian eras saw a marked increase of interest in fairies. The Celtic Revival cast fairies as part of Ireland's cultural heritage. Carole Silvers and others suggested that this fascination among English antiquarians arose from a reaction to greater industrialization and the loss of older folk ways.[6]
Fairies are generally described as human in appearance and having magical powers. Diminutive fairies of various kinds have been reported through centuries, ranging from quite tiny to the size of a human child.[7] These small sizes could be magically assumed, rather than constant.[8] Some smaller fairies could expand their figures to imitate humans.[9] On Orkney, fairies were described as short in stature, dressed in dark grey, and sometimes seen in armour.[10] In some folklore, fairies have green eyes. Some depictions of fairies show them with footwear, others as barefoot. Wings, while common in Victorian and later artworks, are rare in folklore; fairies flew by means of magic, sometimes perched on ragwort stems or the backs of birds.[11] Modern illustrations often include dragonfly or butterfly wings.[12]
The early modern concept of fairies does not derive from a single origin; the term is a conflation of disparate elements from folk belief sources, influenced by literature and speculation. In the folklore of Ireland, the mythic aes sídhe, or 'little folk', have come to a modern meaning somewhat inclusive of fairies. The Scandinavian elves also served as an influence. Folklorists and mythologists have variously depicted fairies as: the unworthy dead, the children of Eve, a kind of demon, a species independent of humans, an older race of humans, and fallen angels.[13] The folkloristic or mythological elements combine Celtic, Germanic and Greco-Roman elements. Folklorists have suggested that 'fairies' arose from various earlier beliefs, which lost currency with the advent of Christianity.[14] These disparate explanations are not necessarily incompatible, as 'fairies' may be traced to multiple sources.
King James, in his dissertation Daemonologie, stated the term "faries" referred to illusory spirits (demonic entities) that prophesied to, consorted with, and transported the individuals they served; in medieval times, a witch or sorcerer who had a pact with a familiar spirit might receive these services.[15]
A Christian tenet held that fairies were a class of "demoted" angels.[16] One story described a group of angels revolting, and God ordering the gates of heaven shut; those still in heaven remained angels, those in hell became demons, and those caught in between became fairies.[17] Others wrote that some angels, not being godly enough, yet not evil enough for hell, were thrown out of heaven.[18] This concept may explain the tradition of paying a "teind" or tithe to hell; as fallen angels, although not quite devils, they could be viewed as subjects of Satan.[19]
In England's Theosophist circles of the 19th century, a belief in the "angelic" nature of fairies was reported.[20] Entities referred to as Devas were said to guide many processes of nature, such as evolution of organisms, growth of plants, etc., many of which resided inside the Sun (Solar Angels). The more Earthbound Devas included nature spirits, elementals, and fairies,[21] which were described as appearing in the form of colored flames, roughly the size of a human.[22]
Arthur Conan Doyle, in his The Coming of the Fairies; The Theosophic View of Fairies, reported that eminent theosophist E. L. Gardner had likened fairies to butterflies, whose function was to provide an essential link between the energy of the sun and the plants of Earth, describing them as having no clean-cut shape ... small, hazy, and somewhat luminous clouds of colour with a brighter sparkish nucleus. "That growth of a plant which we regard as the customary and inevitable result of associating the three factors of sun, seed, and soil would never take place if the fairy builders were absent."[23]
For a similar concept in Persian mythology, see Peri.
At one time it was thought that fairies were originally worshiped as minor deities, such as nymphs and tree spirits,[24] and with the burgeoning predominance of the Christian Church, reverence for these deities carried on, but in a dwindling state of perceived power. Many deprecated deities of older folklore and myth were repurposed as fairies in Victorian fiction (See the works of W. B. Yeats for examples).
A recorded Christian belief of the 17th century cast all fairies as demons.[25] This perspective grew more popular with the rise of Puritanism among the Reformed Church of England (See: Anglicanism).[26] The hobgoblin, once a friendly household spirit, became classed as a wicked goblin.[27] Dealing with fairies was considered a form of witchcraft, and punished as such.[28] In William Shakespeare's A Midsummer Night's Dream, Oberon, king of the faeries, states that neither he nor his court fear the church bells, which the renowned author and Christian apologist C. S. Lewis cast as a politic disassociation from faeries.[29]
In an era of intellectual and religious upheaval, some Victorian reappraisals of mythology cast deities in general as metaphors for natural events,[30] which was later refuted by other authors (See: The Triumph of the Moon, by Ronald Hutton). This contentious environment of thought contributed to the modern meaning of 'fairies'.
One belief held that fairies were spirits of the dead.[31] This derived from many factors common in various folklore and myths: the same or similar tales were told of both ghosts and fairies; the Irish sídhe, origin of their term for fairies, were ancient burial mounds; it was deemed dangerous to eat food in both Fairyland and Hades; and both the dead and fairies were depicted as living underground.[32] Diane Purkiss observed an equating of fairies with the untimely dead who left "unfinished lives".[33] One tale recounted a man caught by the fairies, who found that whenever he looked steadily at a fairy, it appeared as a dead neighbor of his.[34] This theory was among the more common traditions related, although many informants also expressed doubts.[35]
There is a theory that fairy folklore evolved from folk memories of a prehistoric race: newcomers superseded a body of earlier human or humanoid peoples, and the memories of this defeated race developed into modern conceptions of fairies. Proponents find support in the tradition of cold iron as a charm against fairies, viewed as a cultural memory of invaders with iron weapons displacing peoples who had just stone, bone, wood, etc., at their disposal, and were easily defeated. 19th-century archaeologists uncovered underground rooms in the Orkney islands that resembled the Elfland described in Childe Rowland,[36] which lent additional support. In folklore, flint arrowheads from the Stone Age were attributed to the fairies as "elfshot",[37] while their green clothing and underground homes spoke to a need for camouflage and covert shelter from hostile humans, their magic a necessary skill for combating those with superior weaponry. In a Victorian tenet of evolution, mythic cannibalism among ogres was attributed to memories of more savage races, practising alongside "superior" races of more refined sensibilities.[38]
Another theory held that fairies were an intelligent species, distinct from humans and angels.[39] The alchemist Paracelsus classed gnomes and sylphs as elementals, meaning magical entities who personify a particular force of nature and exert power over it.[40] Folklore accounts have described fairies as "spirits of the air".[41]
Much folklore of fairies involves methods of protecting oneself from their malice, by means such as cold iron, charms (see amulet, talisman) of rowan trees or various herbs, or simply shunning locations "known" to be theirs, ergo avoiding offending any fairies.[42] Less harmful pranks ascribed to fairies include: tangling the hair of sleepers into fairy-locks (aka elf-locks), stealing small items, and leading a traveler astray. More dangerous behaviors were also attributed to fairies; any form of sudden death might have stemmed from a fairy kidnapping, the evident corpse a magical replica of wood.[43] Consumption (tuberculosis) was sometimes blamed on fairies who forced young men and women to dance at revels every night, causing them to waste away from lack of rest.[44] Rowan trees were considered sacred to fairies,[45] and a charm tree to protect one's home.[46]
In Scottish folklore, fairies are divided into the Seelie Court (more beneficently inclined, but still dangerous), and the Unseelie Court (more malicious). While fairies of the Seelie Court enjoyed playing generally harmless pranks on humans, those of the Unseelie Court often brought harm to humans for entertainment.[37]
Trooping fairies refers to those who appear in groups and might form settlements, as opposed to solitary fairies, who do not live or associate with others of their kind. In this context, the term fairy is usually held in a wider sense, including various similar beings, such as dwarves and elves of Germanic folklore.[47]
A considerable amount of lore about fairies revolves around changelings, fairy children left in the place of stolen human babies.[6] In particular, folklore describes how to prevent the fairies from stealing babies and substituting changelings, and abducting older people as well.[48] The theme of the swapped child is common in medieval literature and reflects concern over infants thought to be afflicted with unexplained diseases, disorders, or developmental disabilities. In pre-industrial Europe, a peasant family's subsistence frequently depended upon the productive labor of each member, and a person who was a permanent drain on the family's scarce resources could pose a threat to the survival of the entire family.[49]
In terms of protective charms, wearing clothing inside out,[50] church bells, St. John's wort, and four-leaf clovers are regarded as effective. In Newfoundland folklore, the most popular type of fairy protection is bread, varying from stale bread to hard tack or a slice of fresh homemade bread. Bread is associated with the home and the hearth, as well as with industry and the taming of nature, and as such, seems to be disliked by some types of fairies. On the other hand, in much of the Celtic folklore, baked goods are a traditional offering to the folk, as are cream and butter.[20] “The prototype of food, and therefore a symbol of life, bread was one of the commonest protections against fairies. Before going out into a fairy-haunted place, it was customary to put a piece of dry bread in one’s pocket.”[51] In County Wexford, Ireland, in 1882, it was reported that “if an infant is carried out after dark a piece of bread is wrapped in its bib or dress, and this protects it from any witchcraft or evil.”[52]
Bells also have an ambiguous role; while they protect against fairies, the fairies riding on horseback — such as the fairy queen — often have bells on their harness. This may be a distinguishing trait between the Seelie Court and the Unseelie Court, such that fairies use them to protect themselves from more wicked members of their race.[53] Another ambiguous piece of folklore revolves around poultry: a cock's crow drove away fairies, but other tales recount fairies keeping poultry.[54]
While many fairies will confuse travelers on the path, the will-o'-the-wisp can be avoided by not following it. Certain locations, known to be haunts of fairies, are to be avoided; C. S. Lewis reported hearing of a cottage more feared for its reported fairies than its reported ghost.[55] In particular, digging in fairy hills was unwise. Paths that the fairies travel are also wise to avoid. Home-owners have knocked corners from houses because the corner blocked the fairy path,[56] and cottages have been built with the front and back doors in line, so that the owners could, in need, leave them both open and let the fairies troop through all night.[57] Locations such as fairy forts were left undisturbed; even cutting brush on fairy forts was reputed to be the death of those who performed the act.[58] Fairy trees, such as thorn trees, were dangerous to chop down; one such tree was left alone in Scotland, though it prevented a road from being widened for seventy years.[59]
Other actions were believed to offend fairies. Brownies were known to be driven off by being given clothing, though some folktales recounted that they were offended by the inferior quality of the garments given, others merely stated that they left once given clothes, and some even recounted that the brownie was delighted with the gift and left with it.[60] Other brownies left households or farms because they heard a complaint, or a compliment.[61] People who saw the fairies were advised not to look closely, because they resented infringements on their privacy.[62] The need not to offend them could lead to problems: one farmer found that fairies threshed his corn, but the threshing continued after all his corn was gone, and he concluded that they were stealing from his neighbors, leaving him the choice between offending them, dangerous in itself, and profiting by the theft.[63]
Millers were thought by the Scots to be "no canny", owing to their ability to control the forces of nature, such as fire in the kiln, water in the burn, and for being able to set machinery a-whirring. Superstitious communities sometimes believed that the miller must be in league with the fairies. In Scotland, fairies were often mischievous and to be feared. No one dared to set foot in the mill or kiln at night, as it was known that the fairies brought their corn to be milled after dark. So long as the locals believed this, the miller could sleep secure in the knowledge that his stores were not being robbed. John Fraser, the miller of Whitehill, claimed to have hidden and watched the fairies trying unsuccessfully to work the mill. He said he decided to come out of hiding and help them, upon which one of the fairy women gave him a gowpen (double handful of meal) and told him to put it in his empty girnal (store), saying that the store would remain full for a long time, no matter how much he took out.[64]
It is also believed that to know the name of a particular fairy, a person could summon it and force it to do their bidding. The name could be used as an insult towards the fairy in question, but it could also rather contradictorily be used to grant powers and gifts to the user.[citation needed]
Before the advent of modern medicine, many physiological conditions were untreatable and when children were born with abnormalities, it was common to blame the fairies.[65]
Sometimes fairies are described as assuming the guise of an animal.[66] In Scotland, it was peculiar to the fairy women to assume the shape of deer; while witches became mice, hares, cats, gulls, or black sheep. In "The Legend of Knockshigowna", in order to frighten a farmer who pastured his herd on fairy ground, a fairy queen took on the appearance of a great horse, with the wings of an eagle, and a tail like a dragon, hissing loud and spitting fire. Then she would change into a little man lame of a leg, with a bull's head, and a lambent flame playing round it.[67]
In the 19th-century child ballad "Lady Isabel and the Elf-Knight", the elf-knight is a Bluebeard figure, and Isabel must trick and kill him to preserve her life.[68] The child ballad "Tam Lin" reveals that the title character, though living among the fairies and having fairy powers, was, in fact, an "earthly knight" and though his life was pleasant now, he feared that the fairies would pay him as their teind (tithe) to hell.[68]
"Sir Orfeo" tells how Sir Orfeo's wife was kidnapped by the King of Faerie and only by trickery and an excellent harping ability was he able to win her back. "Sir Degare" narrates the tale of a woman overcome by her fairy lover, who in later versions of the story is unmasked as a mortal. "Thomas the Rhymer" shows Thomas escaping with less difficulty, but he spends seven years in Elfland.[69] Oisín is harmed not by his stay in Faerie but by his return; when he dismounts, the three centuries that have passed catch up with him, reducing him to an aged man.[70] King Herla (O.E. "Herla cyning"), originally a guise of Woden but later Christianised as a king in a tale by Walter Map, was said, by Map, to have visited a dwarf's underground mansion and returned three centuries later; although only some of his men crumbled to dust on dismounting, Herla and his men who did not dismount were trapped on horseback, this being one account of the origin of the Wild Hunt of European folklore.[71][72]
A common feature of the fairies is the use of magic to disguise their appearance. Fairy gold is notoriously unreliable, appearing as gold when paid but soon thereafter revealing itself to be leaves, gorse blossoms, gingerbread cakes, or a variety of other comparatively worthless things.[73]
These illusions are also implicit in the tales of fairy ointment. Many tales from Northern Europe[74][75] tell of a mortal woman summoned to attend a fairy birth — sometimes attending a mortal, kidnapped woman's childbed. Invariably, the woman is given something for the child's eyes, usually an ointment; through mischance, or sometimes curiosity, she uses it on one or both of her own eyes. At that point, she sees where she is; one midwife realizes that she was not attending a great lady in a fine house but her own runaway maid-servant in a wretched cave. She escapes without making her ability known but sooner or later betrays that she can see the fairies. She is invariably blinded in that eye or in both if she used the ointment on both.[76]
Some people in the past, such as William Blake, claimed to have seen fairy funerals. Allan Cunningham, in his Lives of Eminent British Painters, records that William Blake claimed to have seen a fairy funeral. "'Did you ever see a fairy's funeral, madam?' said Blake to a lady who happened to sit next to him. 'Never, sir!' said the lady. 'I have,' said Blake, 'but not before last night.' And he went on to tell how, in his garden, he had seen 'a procession of creatures of the size and colour of green and grey grasshoppers, bearing a body laid out on a rose-leaf, which they buried with songs, and then disappeared.'" Fairy funerals were believed to be an omen of death.
The Tuath(a) Dé Danann are a race of supernaturally-gifted people in Irish mythology. They are thought to represent the main deities of pre-Christian Gaelic Ireland. Many of the Irish tales of the Tuatha Dé Danann refer to these beings as fairies, though in more ancient times they were regarded as goddesses and gods. The Tuatha Dé Danann were spoken of as having come from islands in the north of the world or, in other sources, from the sky. After being defeated in a series of battles with other otherworldly beings, and then by the ancestors of the current Irish people, they were said to have withdrawn to the sídhe (fairy mounds), where they lived on in popular imagination as "fairies."[citation needed]
They are associated with several Otherworld realms including Mag Mell (the Pleasant Plain), Emain Ablach (the Fortress of Apples, the Land of Promise or the Isle of Women), and Tir na nÓg (the Land of Youth).
The aos sí is the Irish term for a supernatural race in Irish and Scottish mythology, comparable to the fairies or elves. They are variously said to be ancestors, the spirits of nature, or goddesses and gods.[77] A common theme found among the Celtic nations describes a race of diminutive people who had been driven into hiding by invading humans. In old Celtic fairy lore the Aos Sí (fairy folk) are immortals living in the ancient barrows and cairns. The Irish banshee (Irish Gaelic bean sí or Scottish Gaelic bean shìth, both meaning "woman of the fairy mound") is sometimes described as a ghost.[78]
In the 1691 The Secret Commonwealth of Elves, Fauns and Fairies, Reverend Robert Kirk, minister of the Parish of Aberfoyle, Stirling, Scotland, wrote:
These Siths or Fairies they call Sleagh Maith or the Good People...are said to be of middle nature between Man and Angel, as were Daemons thought to be of old; of intelligent fluidous Spirits, and light changeable bodies (lyke those called Astral) somewhat of the nature of a condensed cloud, and best seen in twilight. These bodies be so pliable through the sublety of Spirits that agitate them, that they can make them appear or disappear at pleasure[79]
The word "fairy" was used to describe an individual inhabitant of Faerie before the time of Chaucer.[1]
Fairies appeared in medieval romances as one of the beings that a knight errant might encounter. A fairy lady appeared to Sir Launfal and demanded his love; like the fairy bride of ordinary folklore, she imposed a prohibition on him that in time he violated. Sir Orfeo's wife was carried off by the King of Faerie. Huon of Bordeaux is aided by King Oberon.[80] These fairy characters dwindled in number as the medieval era progressed; the figures became wizards and enchantresses.[81]
The oldest fairies on record in England were first described by the historian Gervase of Tilbury in the 13th century.[82]
Morgan le Fay, whose connection to the realm of Faerie is implied in her name, in Le Morte d'Arthur is a woman whose magic powers stem from study.[83] While somewhat diminished with time, fairies never completely vanished from the tradition. Sir Gawain and the Green Knight is a late tale, but the Green Knight himself is an otherworldly being.[81] Edmund Spenser featured fairies in The Faerie Queene.[84] In many works of fiction, fairies are freely mixed with the nymphs and satyrs of classical tradition,[85] while in others (e.g., Lamia), they were seen as displacing the Classical beings. 15th-century poet and monk John Lydgate wrote that King Arthur was crowned in "the land of the fairy" and taken in his death by four fairy queens, to Avalon, where he lies under a "fairy hill", until he is needed again.[86]
Fairies appear as significant characters in William Shakespeare's A Midsummer Night's Dream, which is set simultaneously in the woodland and in the realm of Fairyland, under the light of the moon[87] and in which a disturbance of nature caused by a fairy dispute creates tension underlying the plot and informing the actions of the characters. According to Maurice Hunt, Chair of the English Department at Baylor University, the blurring of the identities of fantasy and reality makes possible “that pleasing, narcotic dreaminess associated with the fairies of the play”.[88]
Shakespeare's contemporary Michael Drayton features fairies in his Nimphidia; from these stem Alexander Pope's sylphs of The Rape of the Lock, and in the mid-17th century, précieuses took up the oral tradition of such tales to write fairy tales; Madame d'Aulnoy invented the term contes de fée ("fairy tale").[89] While the tales told by the précieuses included many fairies, they were less common in other countries' tales; indeed, the Brothers Grimm included fairies in their first edition but decided this was not authentically German and altered the language in later editions, changing each Fee ("fairy") to an enchantress or wise woman.[90] J. R. R. Tolkien described these tales as taking place in the land of Faerie.[91] Additionally, not all folktales that feature fairies are generally categorized as fairy tales.
The modern depiction of fairies was shaped in the literature of Romanticism during the Victorian era. Writers such as Walter Scott and James Hogg were inspired by folklore which featured fairies, such as the Border ballads. This era saw an increase in the popularity of collecting fairy folklore and an increase in the creation of original works with fairy characters.[92] In Rudyard Kipling's Puck of Pook's Hill, Puck holds to scorn the moralizing fairies of other Victorian works.[93] The period also saw a revival of older themes in fantasy literature, such as C.S. Lewis's Narnia books, which, while featuring many such classical beings as fauns and dryads, mingle them freely with hags, giants, and other creatures of the folkloric fairy tradition.[94] Victorian flower fairies were popularized in part by Queen Mary's keen interest in fairy art and by British illustrator and poet Cicely Mary Barker's series of eight books published from 1923 through 1948. Imagery of fairies in literature became prettier and smaller as time progressed.[95] Andrew Lang, complaining of "the fairies of polyanthuses and gardenias and apple blossoms" in the introduction to The Lilac Fairy Book, observed that "These fairies try to be funny, and fail; or they try to preach, and succeed."[96]
A story of the origin of fairies appears in a chapter about Peter Pan in J. M. Barrie's 1902 novel The Little White Bird, and was incorporated into his later works about the character. Barrie wrote, "When the first baby laughed for the first time, his laugh broke into a million pieces, and they all went skipping about. That was the beginning of fairies."[97] Fairies are seen in Neverland, in Peter and Wendy, the novel version of J. M. Barrie's famous Peter Pan stories, published in 1911, and its character Tinker Bell has become a pop culture icon. When Peter Pan is guarding Wendy from pirates, the story says, "After a time he fell asleep, and some unsteady fairies had to climb over him on their way home from an orgy. Any of the other boys obstructing the fairy path at night they would have mischiefed, but they just tweaked Peter's nose and passed on."[98]
Images of fairies have appeared as illustrations, often in books of fairy tales, as well as in photographic media and sculpture. Some artists known for their depictions of fairies include Cicely Mary Barker, Amy Brown, David Delamare, Meredith Dillman, Gustave Doré, Brian Froud, Warwick Goble, Jasmine Becket-Griffith, Rebecca Guay, Florence Harrison, Kylie InGold, Greta James, Alan Lee, Ida Rentoul Outhwaite, Myrea Pettit, Arthur Rackham, Suza Scalora, and Nene Thomas.[99]
The Fairy Doors of Ann Arbor, Michigan, are small doors installed into local buildings. Local children believe these are the front doors of fairy houses, and in some cases, small furniture, dishes, and various other things can be seen beyond the doors.
The Victorian era was particularly noted for fairy paintings. The Victorian painter Richard Dadd created paintings of fairy-folk with a sinister and malign tone. Other Victorian artists who depicted fairies include John Anster Fitzgerald, John Atkinson Grimshaw, Daniel Maclise, and Joseph Noel Paton.[100] Interest in fairy-themed art enjoyed a brief renaissance following the publication of the Cottingley Fairies photographs in 1917, and a number of artists turned to painting fairy themes.[citation needed]
en/1957.html.txt
ADDED
@@ -0,0 +1,119 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
A fairy (also fay, fae, fair folk) is a type of mythical being or legendary creature found in the folklore of multiple European cultures (including Celtic, Slavic, German, English, and French folklore), a form of spirit, often described as metaphysical, supernatural, or preternatural.
|
4 |
+
|
5 |
+
Myths and stories about fairies do not have a single origin, but are rather a collection of folk beliefs from disparate sources. Various folk theories about the origins of fairies include casting them as either demoted angels or demons in a Christian tradition, as minor deities in Pagan belief systems, as spirits of the dead, as prehistoric precursors to humans, or as elementals.
|
6 |
+
|
7 |
+
The label of fairy has at times applied only to specific magical creatures with human appearance, small stature, magical powers, and a penchant for trickery. At other times it has been used to describe any magical creature, such as goblins and gnomes. Fairy has at times been used as an adjective, with a meaning equivalent to "enchanted" or "magical".
|
8 |
+
|
9 |
+
A recurring motif of legends about fairies is the need to ward off fairies using protective charms. Common examples of such charms include church bells, wearing clothing inside out, four-leaf clover, and food. Fairies were also sometimes thought to haunt specific locations, and to lead travelers astray using will-o'-the-wisps. Before the advent of modern medicine, fairies were often blamed for sickness, particularly tuberculosis and birth deformities.
|
10 |
+
|
11 |
+
In addition to their folkloric origins, fairies were a common feature of Renaissance literature and Romantic art, and were especially popular in the United Kingdom during the Victorian and Edwardian eras. The Celtic Revival also saw fairies established as a canonical part of Celtic cultural heritage.
|
12 |
+
|
13 |
+
The English fairy derives from the Early Modern English faerie, meaning "realm of the fays". Faerie, in turn, derives from the Old French form faierie, a derivation from faie (from Vulgar Latin fata) with the abstract noun suffix -erie.
|
14 |
+
|
15 |
+
In Old French romance, a faie or fee was a woman skilled in magic, and who knew the power and virtue of words, of stones, and of herbs.[1]
|
16 |
+
|
17 |
+
"Fairy" was used to represent: an illusion or enchantment; the land of the Faes; collectively the inhabitants thereof; an individual such as a fairy knight.[1]
|
18 |
+
Faie became Modern English fay, while faierie became fairy, but this spelling almost exclusively refers to one individual (the same meaning as fay). In the sense of "land where fairies dwell", archaic spellings faery and faerie are still in use.
|
19 |
+
|
20 |
+
Latinate fay is not related the Germanic fey (from Old English fǣġe), meaning "fated to die",[2]. Yet, this unrelated Germanic word "fey" may have been influenced by Old French fae (fay or fairy) as the meaning had shifted slightly to "fated" from the earlier "doomed" or "accursed".[3]
|
21 |
+
|
22 |
+
Various folklore traditions refer to fairies euphemistically as wee folk, good folk, people of peace, fair folk (Welsh: Tylwyth Teg), etc.[4]
|
23 |
+
|
24 |
+
The term fairy is sometimes used to describe any magical creature, including goblins and gnomes, while at other times, the term describes only a specific type of ethereal creature or sprite.[5] The concept of "fairy" in the narrower sense is unique to English folklore, later made diminutive in accordance with prevailing tastes of the Victorian era, as in "fairy tales" for children.
|
25 |
+
|
26 |
+
Historical origins include various traditions of Brythonic (Bretons, Welsh, Cornish), Gaelic (Irish, Scots, Manx), and Germanic peoples, and of Middle French medieval romances. Fairie was used adjectivally, meaning "enchanted" (as in fairie knight, fairie queene), but also became a generic term for various "enchanted" creatures during the Late Middle English period. Literature of the Elizabethan era conflated elves with the fairies of Romance culture, rendering these terms somewhat interchangeable.
|
27 |
+
|
28 |
+
The Victorian era and Edwardian era saw a heightened increase of interest in fairies. The Celtic Revival cast fairies as part of Ireland's cultural heritage. Carole Silvers and others suggested this fascination of English antiquarians arose from a reaction to greater industrialization and loss of older folk ways.[6]
|
29 |
+
|
30 |
+
Fairies are generally described as human in appearance and having magical powers. Diminutive fairies of various kinds have been reported through centuries, ranging from quite tiny to the size of a human child.[7] These small sizes could be magically assumed, rather than constant.[8] Some smaller fairies could expand their figures to imitate humans.[9] On Orkney, fairies were described as short in stature, dressed in dark grey, and sometimes seen in armour.[10] In some folklore, fairies have green eyes. Some depictions of fairies show them with footwear, others as barefoot. Wings, while common in Victorian and later artworks, are rare in folklore; fairies flew by means of magic, sometimes perched on ragwort stems or the backs of birds.[11] Modern illustrations often include dragonfly or butterfly wings.[12]
|
31 |
+
|
32 |
+
Early modern fairies does not derive from a single origin; the term is a conflation of disparate elements from folk belief sources, influenced by literature and speculation. In folklore of Ireland, the mythic aes sídhe, or 'little folk', have come to a modern meaning somewhat inclusive of fairies. The Scandinavian elves also served as an influence. Folklorists and mythologists have variously depicted fairies as: the unworthy dead, the children of Eve, a kind of demon, a species independent of humans, an older race of humans, and fallen angels.[13] The folkloristic or mythological elements combine Celtic, Germanic and Greco-Roman elements. Folklorists have suggested that 'fairies' arose from various earlier beliefs, which lost currency with the advent of Christianity.[14] These disparate explanations are not necessarily incompatible, as 'fairies' may be traced to multiple sources.
|
33 |
+
|
34 |
+
King James, in his dissertation Daemonologie, stated the term "faries" referred to illusory spirits (demonic entities) that prophesied to, consorted with, and transported the individuals they served; in medieval times, a witch or sorcerer who had a pact with a familiar spirit might receive these services.[15]
A Christian tenet held that fairies were a class of "demoted" angels.[16] One story described a group of angels revolting, and God ordering the gates of heaven shut; those still in heaven remained angels, those in hell became demons, and those caught in between became fairies.[17] Others wrote that some angels, not being godly enough, yet not evil enough for hell, were thrown out of heaven.[18] This concept may explain the tradition of paying a "teind" or tithe to hell; as fallen angels, although not quite devils, they could be viewed as subjects of Satan.[19]
In England's Theosophist circles of the 19th century, a belief in the "angelic" nature of fairies was reported.[20] Entities referred to as Devas were said to guide many processes of nature, such as evolution of organisms, growth of plants, etc., many of which resided inside the Sun (Solar Angels). The more Earthbound Devas included nature spirits, elementals, and fairies,[21] which were described as appearing in the form of colored flames, roughly the size of a human.[22]
Arthur Conan Doyle, in his The Coming of the Fairies; The Theosophic View of Fairies, reported that the eminent theosophist E. L. Gardner had likened fairies to butterflies, whose function was to provide an essential link between the energy of the sun and the plants of Earth, describing them as having "no clean-cut shape ... small, hazy, and somewhat luminous clouds of colour with a brighter sparkish nucleus". "That growth of a plant which we regard as the customary and inevitable result of associating the three factors of sun, seed, and soil would never take place if the fairy builders were absent."[23]
For a similar concept in Persian mythology, see Peri.
At one time it was thought that fairies were originally worshiped as minor deities, such as nymphs and tree spirits,[24] and with the burgeoning predominance of the Christian Church, reverence for these deities carried on, but in a dwindling state of perceived power. Many deprecated deities of older folklore and myth were repurposed as fairies in Victorian fiction (See the works of W. B. Yeats for examples).
A recorded Christian belief of the 17th century cast all fairies as demons.[25] This perspective grew more popular with the rise of Puritanism among the Reformed Church of England (See: Anglicanism).[26] The hobgoblin, once a friendly household spirit, became classed as a wicked goblin.[27] Dealing with fairies was considered a form of witchcraft, and punished as such.[28] In William Shakespeare's A Midsummer Night's Dream, Oberon, king of the faeries, states that neither he nor his court fear the church bells, which the renowned author and Christian apologist C. S. Lewis cast as a politic disassociation from faeries.[29]
In an era of intellectual and religious upheaval, some Victorian reappraisals of mythology cast deities in general as metaphors for natural events,[30] which was later refuted by other authors (See: The Triumph of the Moon, by Ronald Hutton). This contentious environment of thought contributed to the modern meaning of 'fairies'.
One belief held that fairies were spirits of the dead.[31] This derived from several factors common to various folklore traditions and myths: the same or similar tales were told of both ghosts and fairies; the Irish sídhe, the origin of their term for fairies, were ancient burial mounds; it was deemed dangerous to eat food in both Fairyland and Hades; and both the dead and fairies were depicted as living underground.[32] Diane Purkiss observed an equating of fairies with the untimely dead who left "unfinished lives".[33] One tale recounted a man caught by the fairies, who found that whenever he looked steadily at a fairy, it appeared as a dead neighbor of his.[34] This theory was among the more common traditions related, although many informants also expressed doubts.[35]
There is a theory that fairy folklore evolved from folk memories of a prehistoric race: newcomers superseded a body of earlier human or humanoid peoples, and the memories of this defeated race developed into modern conceptions of fairies. Proponents find support in the tradition of cold iron as a charm against fairies, viewed as a cultural memory of invaders with iron weapons displacing peoples who had just stone, bone, wood, etc., at their disposal, and were easily defeated. 19th-century archaeologists uncovered underground rooms in the Orkney islands that resembled the Elfland described in Childe Rowland,[36] which lent additional support. In folklore, flint arrowheads from the Stone Age were attributed to the fairies as "elfshot",[37] while their green clothing and underground homes spoke to a need for camouflage and covert shelter from hostile humans, their magic a necessary skill for combating those with superior weaponry. In a Victorian tenet of evolution, mythic cannibalism among ogres was attributed to memories of more savage races, practising alongside "superior" races of more refined sensibilities.[38]
Another theory held that fairies and similar beings were an intelligent species distinct from humans and angels.[39] The alchemist Paracelsus classed gnomes and sylphs as elementals, meaning magical entities who personify a particular force of nature and exert powers over these forces.[40] Folklore accounts have described fairies as "spirits of the air".[41]
Much folklore of fairies involves methods of protecting oneself from their malice, by means such as cold iron, charms (see amulet, talisman) of rowan trees or various herbs, or simply shunning locations "known" to be theirs, thereby avoiding offending any fairies.[42] Less harmful pranks ascribed to fairies include: tangling the hair of sleepers into fairy-locks (also known as elf-locks), stealing small items, and leading a traveler astray. More dangerous behaviors were also attributed to fairies; any form of sudden death might have stemmed from a fairy kidnapping, the evident corpse being a magical replica of wood.[43] Consumption (tuberculosis) was sometimes blamed on fairies who forced young men and women to dance at revels every night, causing them to waste away from lack of rest.[44] Rowan trees were considered sacred to fairies,[45] and a charm tree to protect one's home.[46]
In Scottish folklore, fairies are divided into the Seelie Court (more beneficently inclined, but still dangerous), and the Unseelie Court (more malicious). While fairies of the Seelie Court enjoyed playing generally harmless pranks on humans, those of the Unseelie Court often brought harm to humans for entertainment.[37]
Trooping fairies refers to those who appear in groups and might form settlements, as opposed to solitary fairies, who do not live or associate with others of their kind. In this context, the term fairy is usually held in a wider sense, including various similar beings, such as dwarves and elves of Germanic folklore.[47]
A considerable amount of lore about fairies revolves around changelings, fairy children left in the place of stolen human babies.[6] In particular, folklore describes how to prevent the fairies from stealing babies and substituting changelings, and abducting older people as well.[48] The theme of the swapped child is common in medieval literature and reflects concern over infants thought to be afflicted with unexplained diseases, disorders, or developmental disabilities. In pre-industrial Europe, a peasant family's subsistence frequently depended upon the productive labor of each member, and a person who was a permanent drain on the family's scarce resources could pose a threat to the survival of the entire family.[49]
In terms of protective charms, wearing clothing inside out,[50] church bells, St. John's wort, and four-leaf clovers are regarded as effective. In Newfoundland folklore, the most popular type of fairy protection is bread, varying from stale bread to hard tack or a slice of fresh homemade bread. Bread is associated with the home and the hearth, as well as with industry and the taming of nature, and as such, seems to be disliked by some types of fairies. On the other hand, in much of the Celtic folklore, baked goods are a traditional offering to the folk, as are cream and butter.[20] “The prototype of food, and therefore a symbol of life, bread was one of the commonest protections against fairies. Before going out into a fairy-haunted place, it was customary to put a piece of dry bread in one’s pocket.”[51] In County Wexford, Ireland, in 1882, it was reported that “if an infant is carried out after dark a piece of bread is wrapped in its bib or dress, and this protects it from any witchcraft or evil.”[52]
Bells also have an ambiguous role; while they protect against fairies, the fairies riding on horseback, such as the fairy queen, often have bells on their harness. This may be a trait distinguishing the Seelie Court from the Unseelie Court, such that fairies use them to protect themselves from more wicked members of their race.[53] Another ambiguous piece of folklore revolves around poultry: a cock's crow drove away fairies, but other tales recount fairies keeping poultry.[54]
While many fairies will confuse travelers on the path, the will-o'-the-wisp can be avoided by not following it. Certain locations, known to be haunts of fairies, are to be avoided; C. S. Lewis reported hearing of a cottage more feared for its reported fairies than its reported ghost.[55] In particular, digging in fairy hills was unwise. It was also considered wise to avoid the paths that fairies travel. Home-owners have knocked corners from houses because the corner blocked the fairy path,[56] and cottages have been built with the front and back doors in line, so that the owners could, in need, leave them both open and let the fairies troop through all night.[57] Locations such as fairy forts were left undisturbed; even cutting brush on fairy forts was reputed to be the death of those who performed the act.[58] Fairy trees, such as thorn trees, were dangerous to chop down; one such tree was left alone in Scotland, though it prevented a road from being widened for seventy years.[59]
Other actions were believed to offend fairies. Brownies were known to be driven off by being given clothing, though some folktales recounted that they were offended by the inferior quality of the garments given, others simply stated that clothing drove them off, and some even recounted that the brownie was delighted with the gift and left with it.[60] Other brownies left households or farms because they heard a complaint, or a compliment.[61] People who saw the fairies were advised not to look closely, because they resented infringements on their privacy.[62] The need to not offend them could lead to problems: one farmer found that fairies threshed his corn, but the threshing continued after all his corn was gone, and he concluded that they were stealing from his neighbors, leaving him the choice between offending them, dangerous in itself, and profiting by the theft.[63]
Millers were thought by the Scots to be "no canny", owing to their ability to control the forces of nature, such as fire in the kiln, water in the burn, and for being able to set machinery a-whirring. Superstitious communities sometimes believed that the miller must be in league with the fairies. In Scotland, fairies were often mischievous and to be feared. No one dared to set foot in the mill or kiln at night, as it was known that the fairies brought their corn to be milled after dark. So long as the locals believed this, the miller could sleep secure in the knowledge that his stores were not being robbed. John Fraser, the miller of Whitehill, claimed to have hidden and watched the fairies trying unsuccessfully to work the mill. He said he decided to come out of hiding and help them, upon which one of the fairy women gave him a gowpen (double handful of meal) and told him to put it in his empty girnal (store), saying that the store would remain full for a long time, no matter how much he took out.[64]
It was also believed that knowing the name of a particular fairy allowed a person to summon it and force it to do their bidding. The name could be used as an insult towards the fairy in question, but it could also rather contradictorily be used to grant powers and gifts to the user.[citation needed]
Before the advent of modern medicine, many physiological conditions were untreatable and when children were born with abnormalities, it was common to blame the fairies.[65]
Sometimes fairies are described as assuming the guise of an animal.[66] In Scotland, it was peculiar to the fairy women to assume the shape of deer; while witches became mice, hares, cats, gulls, or black sheep. In "The Legend of Knockshigowna", in order to frighten a farmer who pastured his herd on fairy ground, a fairy queen took on the appearance of a great horse, with the wings of an eagle, and a tail like a dragon, hissing loud and spitting fire. Then she would change into a little man lame of a leg, with a bull's head, and a lambent flame playing round it.[67]
In the 19th-century child ballad "Lady Isabel and the Elf-Knight", the elf-knight is a Bluebeard figure, and Isabel must trick and kill him to preserve her life.[68] The child ballad "Tam Lin" reveals that the title character, though living among the fairies and having fairy powers, was, in fact, an "earthly knight" and though his life was pleasant now, he feared that the fairies would pay him as their teind (tithe) to hell.[68]
"Sir Orfeo" tells how Sir Orfeo's wife was kidnapped by the King of Faerie and only by trickery and an excellent harping ability was he able to win her back. "Sir Degare" narrates the tale of a woman overcome by her fairy lover, who in later versions of the story is unmasked as a mortal. "Thomas the Rhymer" shows Thomas escaping with less difficulty, but he spends seven years in Elfland.[69] Oisín is harmed not by his stay in Faerie but by his return; when he dismounts, the three centuries that have passed catch up with him, reducing him to an aged man.[70] King Herla (O.E. "Herla cyning"), originally a guise of Woden but later Christianised as a king in a tale by Walter Map, was said, by Map, to have visited a dwarf's underground mansion and returned three centuries later; although only some of his men crumbled to dust on dismounting, Herla and his men who did not dismount were trapped on horseback, this being one account of the origin of the Wild Hunt of European folklore.[71][72]
A common feature of the fairies is the use of magic to disguise their appearance. Fairy gold is notoriously unreliable, appearing as gold when paid but soon thereafter revealing itself to be leaves, gorse blossoms, gingerbread cakes, or a variety of other comparatively worthless things.[73]
These illusions are also implicit in the tales of fairy ointment. Many tales from Northern Europe[74][75] tell of a mortal woman summoned to attend a fairy birth — sometimes attending a mortal, kidnapped woman's childbed. Invariably, the woman is given something for the child's eyes, usually an ointment; through mischance, or sometimes curiosity, she uses it on one or both of her own eyes. At that point, she sees where she is; one midwife realizes that she was not attending a great lady in a fine house but her own runaway maid-servant in a wretched cave. She escapes without making her ability known but sooner or later betrays that she can see the fairies. She is invariably blinded in that eye or in both if she used the ointment on both.[76]
Some people in the past, such as William Blake, claimed to have seen fairy funerals. Allan Cunningham, in his Lives of Eminent British Painters, records that William Blake claimed to have seen one. "'Did you ever see a fairy's funeral, madam?' said Blake to a lady who happened to sit next to him. 'Never, sir!' said the lady. 'I have,' said Blake, 'but not before last night.' And he went on to tell how, in his garden, he had seen 'a procession of creatures of the size and colour of green and grey grasshoppers, bearing a body laid out on a rose-leaf, which they buried with songs, and then disappeared.' Such fairy funerals are believed to be an omen of death.
The Tuath(a) Dé Danann are a race of supernaturally-gifted people in Irish mythology. They are thought to represent the main deities of pre-Christian Gaelic Ireland. Many of the Irish tales of the Tuatha Dé Danann refer to these beings as fairies, though in more ancient times they were regarded as goddesses and gods. The Tuatha Dé Danann were spoken of as having come from islands in the north of the world or, in other sources, from the sky. After being defeated in a series of battles with other otherworldly beings, and then by the ancestors of the current Irish people, they were said to have withdrawn to the sídhe (fairy mounds), where they lived on in popular imagination as "fairies."[citation needed]
They are associated with several Otherworld realms including Mag Mell (the Pleasant Plain), Emain Ablach (the Fortress of Apples, the Land of Promise or the Isle of Women), and Tir na nÓg (the Land of Youth).
The aos sí is the Irish term for a supernatural race in Irish and Scottish mythology, comparable to the fairies or elves. They are variously said to be ancestors, the spirits of nature, or goddesses and gods.[77] A common theme found among the Celtic nations describes a race of diminutive people who had been driven into hiding by invading humans. In old Celtic fairy lore, the aos sí (fairy folk) are immortals living in the ancient barrows and cairns. The Irish banshee (Irish Gaelic bean sí or Scottish Gaelic bean shìth, both meaning "woman of the fairy mound") is sometimes described as a ghost.[78]
In the 1691 The Secret Commonwealth of Elves, Fauns and Fairies, Reverend Robert Kirk, minister of the Parish of Aberfoyle, Stirling, Scotland, wrote:
These Siths or Fairies they call Sleagh Maith or the Good People...are said to be of middle nature between Man and Angel, as were Daemons thought to be of old; of intelligent fluidous Spirits, and light changeable bodies (lyke those called Astral) somewhat of the nature of a condensed cloud, and best seen in twilight. These bodies be so pliable through the sublety of Spirits that agitate them, that they can make them appear or disappear at pleasure.[79]
The word "fairy" was used to describe an individual inhabitant of Faerie before the time of Chaucer.[1]
Fairies appeared in medieval romances as one of the beings that a knight errant might encounter. A fairy lady appeared to Sir Launfal and demanded his love; like the fairy bride of ordinary folklore, she imposed a prohibition on him that in time he violated. Sir Orfeo's wife was carried off by the King of Faerie. Huon of Bordeaux is aided by King Oberon.[80] These fairy characters dwindled in number as the medieval era progressed; the figures became wizards and enchantresses.[81]
The oldest fairies on record in England were first described by the historian Gervase of Tilbury in the 13th century.[82]
Morgan le Fay, whose connection to the realm of Faerie is implied in her name, in Le Morte d'Arthur is a woman whose magic powers stem from study.[83] While somewhat diminished with time, fairies never completely vanished from the tradition. Sir Gawain and the Green Knight is a late tale, but the Green Knight himself is an otherworldly being.[81] Edmund Spenser featured fairies in The Faerie Queene.[84] In many works of fiction, fairies are freely mixed with the nymphs and satyrs of classical tradition,[85] while in others (e.g., Lamia), they were seen as displacing the Classical beings. 15th-century poet and monk John Lydgate wrote that King Arthur was crowned in "the land of the fairy" and taken in his death by four fairy queens, to Avalon, where he lies under a "fairy hill", until he is needed again.[86]
Fairies appear as significant characters in William Shakespeare's A Midsummer Night's Dream, which is set simultaneously in the woodland and in the realm of Fairyland, under the light of the moon[87] and in which a disturbance of nature caused by a fairy dispute creates tension underlying the plot and informing the actions of the characters. According to Maurice Hunt, Chair of the English Department at Baylor University, the blurring of the identities of fantasy and reality makes possible “that pleasing, narcotic dreaminess associated with the fairies of the play”.[88]
Shakespeare's contemporary Michael Drayton features fairies in his Nimphidia; from these stem Alexander Pope's sylphs of The Rape of the Lock, and in the mid-17th century, précieuses took up the oral tradition of such tales to write fairy tales; Madame d'Aulnoy invented the term contes de fée ("fairy tale").[89] While the tales told by the précieuses included many fairies, they were less common in other countries' tales; indeed, the Brothers Grimm included fairies in their first edition but decided this was not authentically German and altered the language in later editions, changing each Fee ("fairy") to an enchantress or wise woman.[90] J. R. R. Tolkien described these tales as taking place in the land of Faerie.[91] Additionally, not all folktales that feature fairies are generally categorized as fairy tales.
The modern depiction of fairies was shaped in the literature of Romanticism during the Victorian era. Writers such as Walter Scott and James Hogg were inspired by folklore which featured fairies, such as the Border ballads. This era saw an increase in the popularity of collecting fairy folklore and an increase in the creation of original works with fairy characters.[92] In Rudyard Kipling's Puck of Pook's Hill, Puck holds to scorn the moralizing fairies of other Victorian works.[93] The period also saw a revival of older themes in fantasy literature, such as C. S. Lewis's Narnia books, which, while featuring many such classical beings as fauns and dryads, mingle them freely with hags, giants, and other creatures of the folkloric fairy tradition.[94] Victorian flower fairies were popularized in part by Queen Mary's keen interest in fairy art and by British illustrator and poet Cicely Mary Barker's series of eight books published from 1923 through 1948. Imagery of fairies in literature became prettier and smaller as time progressed.[95] Andrew Lang, complaining of "the fairies of polyanthuses and gardenias and apple blossoms" in the introduction to The Lilac Fairy Book, observed that "These fairies try to be funny, and fail; or they try to preach, and succeed."[96]
A story of the origin of fairies appears in a chapter about Peter Pan in J. M. Barrie's 1902 novel The Little White Bird, and was incorporated into his later works about the character. Barrie wrote, "When the first baby laughed for the first time, his laugh broke into a million pieces, and they all went skipping about. That was the beginning of fairies."[97] Fairies are seen in Neverland, in Peter and Wendy, the novel version of J. M. Barrie's famous Peter Pan stories, published in 1911, and its character Tinker Bell has become a pop culture icon. When Peter Pan is guarding Wendy from pirates, the story says, "After a time he fell asleep, and some unsteady fairies had to climb over him on their way home from an orgy. Any of the other boys obstructing the fairy path at night they would have mischiefed, but they just tweaked Peter's nose and passed on."[98]
Images of fairies have appeared as illustrations, often in books of fairy tales, as well as in photographic media and sculpture. Some artists known for their depictions of fairies include Cicely Mary Barker, Amy Brown, David Delamare, Meredith Dillman, Gustave Doré, Brian Froud, Warwick Goble, Jasmine Becket-Griffith, Rebecca Guay, Florence Harrison, Kylie InGold, Greta James, Alan Lee, Ida Rentoul Outhwaite, Myrea Pettit, Arthur Rackham, Suza Scalora, and Nene Thomas.[99]
The Fairy Doors of Ann Arbor, MI are small doors installed into local buildings. Local children believe these are the front doors of fairy houses, and in some cases, small furniture, dishes, and various other things can be seen beyond the doors.
The Victorian era was particularly noted for fairy paintings. The Victorian painter Richard Dadd created paintings of fairy-folk with a sinister and malign tone. Other Victorian artists who depicted fairies include John Anster Fitzgerald, John Atkinson Grimshaw, Daniel Maclise, and Joseph Noel Paton.[100] Interest in fairy-themed art enjoyed a brief renaissance following the publication of the Cottingley Fairies photographs in 1917, and a number of artists turned to painting fairy themes.[citation needed]
en/1958.html.txt
ADDED
@@ -0,0 +1,172 @@
The cat (Felis catus) is a domestic species of small carnivorous mammal.[1][2] It is the only domesticated species in the family Felidae and is often referred to as the domestic cat to distinguish it from the wild members of the family.[4] A cat can either be a house cat, a farm cat or a feral cat; the latter ranges freely and avoids human contact.[5] Domestic cats are valued by humans for companionship and their ability to hunt rodents. About 60 cat breeds are recognized by various cat registries.[6]
The cat is similar in anatomy to the other felid species: it has a strong flexible body, quick reflexes, sharp teeth and retractable claws adapted to killing small prey. Its night vision and sense of smell are well developed. Cat communication includes vocalizations like meowing, purring, trilling, hissing, growling and grunting as well as cat-specific body language. It is a solitary hunter but a social species. It can hear sounds too faint or too high in frequency for human ears, such as those made by mice and other small mammals. It is a predator that is most active at dawn and dusk.[7] It secretes and perceives pheromones.[8]
Female domestic cats can have kittens from spring to late autumn, with litter sizes often ranging from two to five kittens.[9] Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Failure to control breeding of pet cats by spaying and neutering, as well as abandonment of pets, resulted in large numbers of feral cats worldwide, contributing to the extinction of entire bird, mammal, and reptile species and prompting efforts at population control.[10]
Cats were first domesticated in the Near East around 7500 BC.[11] It was long thought that cat domestication began in ancient Egypt, where cats were venerated from around 3100 BC.[12][13]
As of 2017[update], the domestic cat was the second-most popular pet in the United States by number of pets owned, after freshwater fish,[14] with 95 million cats owned.[15][16] In the United Kingdom, around 7.3 million cats lived in more than 4.8 million households as of 2019[update].[17]
The origin of the English word 'cat', Old English catt, is thought to be the Late Latin word cattus, which was first used at the beginning of the 6th century.[18] It was suggested that the word 'cattus' is derived from an Egyptian precursor of Coptic ϣⲁⲩ šau, "tomcat", or its feminine form suffixed with -t.[19]
The Late Latin word is also thought to be derived from Afro-Asiatic languages.[20] The Nubian word kaddîska "wildcat" and Nobiin kadīs are possible sources or cognates.[21] The Nubian word may be a loan from Arabic قَطّ qaṭṭ ~ قِطّ qiṭṭ. It is "equally likely that the forms might derive from an ancient Germanic word, imported into Latin and thence to Greek and to Syriac and Arabic".[22] The word may be derived from Germanic and Northern European languages, and ultimately be borrowed from Uralic, cf. Northern Sami gáđfi, "female stoat", and Hungarian hölgy, "stoat"; from Proto-Uralic *käďwä, "female (of a furred animal)".[23]
The English puss, extended as pussy and pussycat, is attested from the 16th century and may have been introduced from Dutch poes or from Low German puuskatte, related to Swedish kattepus, or Norwegian pus, pusekatt. Similar forms exist in Lithuanian puižė and Irish puisín or puiscín. The etymology of this word is unknown, but it may have simply arisen from a sound used to attract a cat.[24][25]
A male cat is called a tom or tomcat[26] (or a gib,[27] if neutered). An unspayed female is called a queen,[28] especially in a cat-breeding context. A juvenile cat is referred to as a kitten. In Early Modern English, the word kitten was interchangeable with the now-obsolete word catling.[29] A group of cats can be referred to as a clowder or a glaring.[30]
The scientific name Felis catus was proposed by Carl Linnaeus in 1758 for a domestic cat.[1][2]
Felis catus domesticus was proposed by Johann Christian Polycarp Erxleben in 1777.[3]
Felis daemon proposed by Konstantin Alekseevich Satunin in 1904 was a black cat from the Transcaucasus, later identified as a domestic cat.[31][32]
In 2003, the International Commission on Zoological Nomenclature ruled that the domestic cat is a distinct species, namely Felis catus.[33][34]
In 2007, it was considered a subspecies of the European wildcat, F. silvestris catus, following results of phylogenetic research.[35][36] In 2017, the IUCN Cat Classification Taskforce followed the recommendation of the ICZN in regarding the domestic cat as a distinct species, Felis catus.[37]
The domestic cat is a member of the Felidae, a family that had a common ancestor about 10–15 million years ago.[38]
The genus Felis diverged from other Felidae around 6–7 million years ago.[39]
Results of phylogenetic research confirm that the wild Felis species evolved through sympatric or parapatric speciation, whereas the domestic cat evolved through artificial selection.[40] The domesticated cat and its closest wild ancestor are both diploid organisms that possess 38 chromosomes[41] and roughly 20,000 genes.[42]
The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BCE. This line of partially domesticated cats leaves no trace in the domestic cat populations of today.[43]
The earliest known indication for the taming of an African wildcat (F. lybica) was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 9,200 to 9,500 years before present. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland.[44] Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus), and were tamed by Neolithic farmers. This commensal relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats.[11][6] Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time.[45]
The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BCE. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe.[46]
During the Roman Empire they were introduced to Corsica and Sardinia before the beginning of the 1st millennium.[47]
By the 5th century BCE, they were familiar animals around settlements in Magna Graecia and Etruria.[48]
By the end of the Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany.[45]
During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play and relatively high intelligence. Captive Leopardus cats may also display affectionate behavior toward humans, but were not domesticated.[49] House cats often mate with feral cats,[50] producing hybrids such as the Kellas cat in Scotland.[51] Hybridisation between domestic and other Felinae species is also possible.[52]
Development of cat breeds started in the mid-19th century.[53]
An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds.[54] Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders.[55]
The domestic cat has a smaller skull and shorter bones than the European wildcat.[56]
It averages about 46 cm (18 in) in head-to-body length and 23–25 cm (9–10 in) in height, with about 30 cm (12 in) long tails. Males are larger than females.[57]
Adult domestic cats typically weigh between 4 and 5 kg (9 and 11 lb).[40]
Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only vestigial caudal vertebrae, fused into an internal coccyx).[58]:11 The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis.[58]:16 Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head.[59]
The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw.[60]:35 Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death.[61] Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae.[61] The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication.[60]:37 Although cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar, they are nonetheless subject to occasional tooth loss and infection.[62]
The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg.[63] Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously.[64]
Cats have protractable and retractable claws.[65] In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows the silent stalking of prey. The claws on the fore feet are typically sharper than those on the hind feet.[66] Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces.[67]
Most cats have five claws on their front paws, and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws, on the inside of the wrists, has no function in normal walking but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits ("polydactyly").[68] Polydactylous cats occur along North America's northeast coast and in Great Britain.[69]
Cats have excellent night vision and can see at only one-sixth the light level required for human vision.[60]:43 This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light.[70] Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration.[71] At low light, a cat's pupils expand to cover most of the exposed surface of its eyes.[72] However, the domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited.[73] A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. However, this appears to be an adaptation to low light levels rather than representing true trichromatic vision.[74]
The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz.[75] It can detect an extremely broad range of frequencies ranging from 55 Hz to 79,000 Hz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves.[76][77]
Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey.[78][79]
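As a quick arithmetic check, not drawn from the cited sources, the octave figure follows from the base-2 logarithm of the ratio of the upper and lower frequency limits quoted above (written here as f_max and f_min for illustration):

\[
\log_2\!\left(\frac{f_{\mathrm{max}}}{f_{\mathrm{min}}}\right) = \log_2\!\left(\frac{79{,}000\ \mathrm{Hz}}{55\ \mathrm{Hz}}\right) \approx \log_2(1436) \approx 10.5\ \text{octaves}
\]

Applying the same formula to narrower frequency limits yields correspondingly fewer octaves, consistent with the smaller human and canine figures cited above.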
Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about 5.8 square centimetres (29⁄32 square inch) in area, which is about twice that of humans.[80] Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol,[81] which they use to communicate through urine spraying and marking with scent glands.[82] Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion.[83] About 70–80% of cats are affected by nepetalactone.[84] This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors.[85]
Cats have relatively few taste buds compared to humans (470 or so versus more than 9,000 on the human tongue).[86] Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness.[87] Their taste buds instead respond to acids, amino acids like protein, and bitter tastes.[88] Cats also have a distinct temperature preference for their food, preferring food with a temperature around 38 °C (100 °F) which is similar to that of a fresh kill and routinely rejecting food presented cold or refrigerated (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing).[86]
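As a side note, the rounded Fahrenheit figure quoted above can be verified with the standard temperature conversion (editorial arithmetic, not from the cited sources):

\[
T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{C}} + 32 = \tfrac{9}{5}\times 38 + 32 = 100.4\ ^{\circ}\mathrm{F} \approx 100\ ^{\circ}\mathrm{F}
\]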
To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage.[60]:47
Most breeds of cat have a noted fondness for sitting in high places, or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from heights of up to 3 meters can right itself and land on its paws.[89]
During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex.[90]
An individual cat always rights itself in the same way during a fall, provided it has sufficient time to do so. The height required for this to occur is around 90 cm (2 ft 11 in).[91]
Several explanations have been proposed for this phenomenon since the late 19th century.
Outdoor cats are active both day and night, although they tend to be slightly more active at night.[95] Domestic cats spend the majority of their time in the vicinity of their homes, but can range many hundreds of meters from this central point. They establish territories that vary considerably in size, in one study ranging from 7 to 28 hectares (17–69 acres).[96] The timing of cats' activity is quite flexible and varied, which means house cats may be more active in the morning and evening, as a response to greater human activity at these times.[97]
Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 and 14 being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming.[98]
The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females.[99][100] Within such groups, one cat is usually dominant over the others.[101] Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation.[82] Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short but noisy and violent attacks. Despite some cats cohabiting in colonies, they do not have a social survival strategy, or a pack mentality and always hunt alone.[102]
However, some pet cats are poorly socialized. In particular, older cats show aggressiveness towards newly arrived kittens, which may include biting and scratching; this type of behavior is known as feline asocial aggression.[103]
Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, the human keeper of a cat functions as a sort of surrogate for the cat's mother.[104] Adult cats live their lives in a kind of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore.[105]
Domestic cats' scent rubbing behavior towards humans or other cats is thought to be a feline means for social bonding.[106]
Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing.[7] Their body language, including position of ears and tail, relaxation of the whole body, and kneading of the paws, are all indicators of mood. The tail and ears are particularly important social signal mechanisms in cats. A raised tail indicates a friendly greeting, and flattened ears indicates hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones.[107] Feral cats are generally silent.[108]:208 Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head.[100]
Purring may have developed as an evolutionary advantage as a signalling mechanism of reassurance between mother cats and nursing kittens. Post-nursing cats often purr as a sign of contentment: when being petted, becoming relaxed,[109][110] or eating. The mechanism by which cats purr is elusive. The cat has no unique anatomical feature that is clearly responsible for the sound.[111]
Cats are known for spending considerable amounts of time licking their coats to keep them clean.[112] The cat's tongue has backwards-facing spines about 500 μm long, which are called papillae. These contain keratin which makes them rigid[113] so the papillae act like a hairbrush. Some cats, particularly longhaired cats, occasionally regurgitate hairballs of fur that have collected in their stomachs from grooming. These clumps of fur are usually sausage-shaped and about 2–3 cm (3⁄4–1 1⁄4 in) long. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush.[112]
Among domestic cats, males are more likely to fight than females.[114] Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male.[115] Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home.[114] Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones.[116]
When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways and hissing or spitting.[117] Often, the ears are pointed down and back to avoid damage to the inner ear and potentially listen for any changes behind them while focused forward. They may also vocalize loudly and bare their teeth in an effort to further intimidate their opponent. Fights usually consist of grappling and delivering powerful slaps to the face and body with the forepaws as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their powerful hind legs.[118]
Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. However, fights for mating rights are typically more severe and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections of scratches and bites, though these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of feline immunodeficiency virus.[119] Sexually active males are usually involved in many fights during their lives, and often have decidedly battered faces with obvious scars and cuts to their ears and nose.[120]
The shape and structure of cats' cheeks is insufficient to allow them to suck. They lap with the tongue to draw liquid upwards into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upwards.[121][122]
Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals varies between individuals. They select food based on its temperature, smell and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past.[102][123] They also avoid sweet food and milk. Most adult cats are lactose intolerant; the sugar in milk is not easily digested and may cause soft stools or diarrhea.[124] Some also develop odd eating habits and like to eat or chew on things like wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten.[125]
Cats hunt small prey, primarily birds and rodents,[126] and are often used as a form of pest control.[127][128] Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured.[129] The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds.[130]:153 Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.4 to 3.7 billion birds and 6.9 to 20.7 billion mammals annually.[131]
Certain species appear more susceptible than others; for example, 30% of house sparrow mortality is linked to the domestic cat.[132] In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis), 31% of deaths were a result of cat predation.[133] In parts of North America, the presence of larger carnivores such as coyotes which prey on cats and other small predators reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety.[134]
Perhaps the best-known element of cats' hunting behavior, which is commonly misunderstood and often appalls cat owners because it looks like torture, is that cats often appear to "play" with prey by releasing it after capture. This cat and mouse behavior is due to an instinctive imperative to ensure that the prey is weak enough to be killed without endangering the cat.[135]
Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are reacted to as if they are at, or near, the top.[136] Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding "an elderly cat, or an inept kitten".[137] This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens.[130]:153
On islands, birds can contribute as much as 60% of a cat's diet.[138] In nearly all cases, however, the cat cannot be identified as the sole cause for reducing the numbers of island birds, and in some instances, eradication of cats has caused a "mesopredator release" effect;[139] where the suppression of top carnivores creates an abundance of smaller predators that cause a severe decline in their shared prey. Domestic cats are, however, known to be a contributing factor to the decline of many species, a factor that has ultimately led, in some cases, to extinction. The South Island piopio, Chatham rail,[133] and the New Zealand merganser[140] are a few from a long list, with the most extreme case being the flightless Lyall's wren, which was driven to extinction only a few years after its discovery.[141][142]
One feral cat in New Zealand killed 102 New Zealand lesser short-tailed bats in seven days.[143] In the US, feral and free-ranging domestic cats kill an estimated 6.3–22.3 billion mammals annually.[131]
In Australia, the impact of cats on mammal populations is even greater than the impact of habitat loss.[144] More than one million reptiles are killed by feral cats each day, representing 258 species.[145] Cats have contributed to the extinction of the Navassa curly-tailed lizard and Chioninia coctei.[146]
Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey.[147] Cats also engage in play fighting, with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals.[148]
Cats also tend to play with toys more when they are hungry.[149] Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but rapidly lose interest. They become habituated to a toy they have played with before.[150] String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death.[151] Owing to the risks posed by cats eating string, it is sometimes replaced with a laser pointer's dot, which cats may chase.[152]
Female cats called queens are polyestrous with several estrus cycles during a year, lasting usually 21 days. They are usually ready to mate between early February and August.[153]
Several males, called tomcats, are attracted to a female in heat. They fight over her, and the victor wins the right to mate. At first, the female rejects the male, but eventually, the female allows the male to mate. The female utters a loud yowl as the male pulls out of her because a male cat's penis has a band of about 120–150 backward-pointing penile spines, which are about 1 mm (1⁄32 in) long; upon withdrawal of the penis, the spines rake the walls of the female's vagina, which acts to induce ovulation. This act also occurs to clear the vagina of other sperm in the context of a second (or more) mating, thus giving the later males a larger chance of conception.[154]
After mating, the female cleans her vulva thoroughly. If a male attempts to mate with her at this point, the female attacks him. After about 20 to 30 minutes, once the female is finished grooming, the cycle will repeat.[155]
Because ovulation is not always triggered by a single mating, females may not be impregnated by the first male with which they mate.[156] Furthermore, cats are superfecund; that is, a female may mate with more than one male when she is in heat, with the result that different kittens in a litter may have different fathers.[155]
The morula forms 124 hours after conception. At 148 hours, early blastocysts form. At 10–12 days, implantation occurs.[157]
The gestation of queens lasts between 64 and 67 days, with an average of 65 days.[153][158]
Data on the reproductive capacity of more than 2,300 free-ranging queens were collected during a study between May 1998 and October 2000. They had one to six kittens per litter, with an average of three kittens. They produced a mean of 1.4 litters per year, but a maximum of three litters in a year. Of 169 kittens, 127 died before they were six months old, due to trauma caused in most cases by dog attacks and road accidents.[9]
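Taken at face value, these figures imply an average of about 3 × 1.4 ≈ 4.2 kittens per free-ranging queen per year, and a kitten mortality before six months of roughly 127/169 ≈ 75% among the litters followed in that study.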
The first litter is usually smaller than subsequent litters. Kittens are weaned between six and seven weeks of age. Queens normally reach sexual maturity at 5–10 months, and males at 5–7 months. This varies depending on breed.[155] Kittens reach puberty at the age of 9–10 months.[153]
Cats are ready to go to new homes at about 12 weeks of age, when they are ready to leave their mother.[159] They can be surgically sterilized (spayed or castrated) as early as seven weeks to limit unwanted reproduction.[160] This surgery also prevents undesirable sex-related behavior, such as aggression, territory marking (spraying urine) in males and yowling (calling) in females. Traditionally, this surgery was performed at around six to nine months of age, but it is increasingly being performed before puberty, at about three to six months.[161] In the United States, about 80% of household cats are neutered.[162]
The average lifespan of pet cats has risen in recent decades. In the early 1980s, it was about seven years,[163]:33[164] rising to 9.4 years in 1995[163]:33 and 15.1 years in 2018.[165] Some cats have been reported as surviving into their 30s,[166] with the oldest known cat, Creme Puff, dying at a verified age of 38.[167]
Spaying or neutering increases life expectancy: one study found neutered male cats live twice as long as intact males, while spayed female cats live 62% longer than intact females.[163]:35 Having a cat neutered confers health benefits, because castrated males cannot develop testicular cancer, spayed females cannot develop uterine or ovarian cancer, and both have a reduced risk of mammary cancer.[168]
Despite widespread concern about the welfare of free-roaming cats, the lifespans of neutered feral cats in managed colonies compare favorably with those of pet cats.[169]:45[170]:1358[171][172][173][174]
About 250 heritable genetic disorders have been identified in cats, many similar to human inborn errors of metabolism.[175] The high level of similarity among the metabolism of mammals allows many of these feline diseases to be diagnosed using genetic tests that were originally developed for use in humans, as well as the use of cats as animal models in the study of human diseases.[176][177]
Diseases affecting domestic cats include acute infections, parasitic infestations, injuries, and chronic diseases such as kidney disease, thyroid disease, and arthritis. Vaccinations are available for many infectious diseases, as are treatments to eliminate parasites such as worms and fleas.[178]
The domestic cat is a cosmopolitan species and occurs across much of the world.[55] It is adaptable and now present on all continents except Antarctica, and on 118 of the 131 main groups of islands—even on isolated islands such as the Kerguelen Islands.[179][180]
Due to its ability to thrive in almost any terrestrial habitat, it is among the world's most invasive species.[181]
Because the domestic cat is little altered from the wildcat, the two readily interbreed. This hybridization poses a danger to the genetic distinctiveness of some wildcat populations, particularly in Scotland and Hungary, and possibly also the Iberian Peninsula.[52] The domestic cat also lives on small islands with no human inhabitants.[182]
Feral cats can live in forests, grasslands, tundra, coastal areas, agricultural land, scrublands, urban areas, and wetlands.[183]
Feral cats are domestic cats that were born in or have reverted to a wild state. They are unfamiliar with and wary of humans and roam freely in urban and rural areas.[10] The number of feral cats is not known, but estimates of the United States feral population range from 25 to 60 million.[10] Feral cats may live alone, but most are found in large colonies, which occupy a specific territory and are usually associated with a source of food.[184] Famous feral cat colonies are found in Rome around the Colosseum and Forum Romanum, with cats at some of these sites being fed and given medical attention by volunteers.[185]
Public attitudes towards feral cats vary widely, ranging from seeing them as free-ranging pets, to regarding them as vermin.[186] One common approach to reducing the feral cat population is termed "trap-neuter-return", where the cats are trapped, neutered, immunized against diseases such as rabies and the feline panleukopenia and leukemia viruses, and then released.[187] Before releasing them back into their feral colonies, the attending veterinarian often nips the tip off one ear to mark it as neutered and inoculated, since these cats may be trapped again. Volunteers continue to feed and give care to these cats throughout their lives. Given this support, their lifespans are increased, and behavior and nuisance problems caused by competition for food are reduced.[184]
Some feral cats can be successfully socialised and 're-tamed' for adoption; young cats, especially kittens,[188] and cats that have had prior experience and contact with humans are the most receptive to these efforts.
Cats are common pets throughout the world, and their worldwide population exceeds 500 million as of 2007.[189] Cats have been used for millennia to control rodents, notably around grain stores and aboard ships, and both uses extend to the present day.[190][191]
As well as being kept as pets, cats are also used in the international fur[192] and leather industries for making coats, hats, blankets, and stuffed toys;[193] and shoes, gloves, and musical instruments respectively[194] (about 24 cats are needed to make a cat-fur coat).[195] This use was outlawed in the United States, Australia, and the European Union in 2007.[196] Cat pelts have been used for superstitious purposes as part of the practice of witchcraft,[197] and are still made into blankets in Switzerland as folk remedies believed to help rheumatism.[198] In the Western intellectual tradition, the idea of cats as everyday objects has served to illustrate problems of quantum mechanics in the Schrödinger's cat thought experiment.
A few attempts to build a cat census have been made over the years, both through associations or national and international organizations (such as that of the Canadian Federation of Humane Societies[199]) and over the Internet,[200][201] but such a task does not seem easy to accomplish. General estimates for the global population of domestic cats range widely, from 200 million to 600 million.[202][203][204][205][206]
Walter Chandoha made his career photographing cats after his 1949 images of Loco, an especially charming stray he had taken in, were published around the world. He is reported to have photographed 90,000 cats during his career and maintained an archive of 225,000 images that he drew from for publications during his lifetime.[207]
A cat show is a judged event in which the owners of cats compete to win titles in various cat-registering organizations by entering their cats to be judged against a breed standard.[208][209] Both pedigreed and non-purebred companion ("moggy") cats are admissible, although the rules differ from organization to organization. Competing cats are compared to the applicable breed standard,[210] and assessed for temperament and apparent health; the owners of those judged to be most ideal are awarded a prize. Moggies are judged based on their temperament and healthy appearance. Some events also include activity judging, such as trained navigation of an obstacle course. Often, at the end of the year, all of the points accrued at various shows are added up and further national and regional titles are awarded to champion cats.
Cats can be infected or infested with viruses, bacteria, fungi, protozoans, arthropods or worms that can transmit diseases to humans.[211] In some cases, the cat exhibits no symptoms of the disease;[212] however, the same disease can then become evident in a human. The likelihood that a person will become diseased depends on the age and immune status of the person. Humans who have cats living in their home or in close association are more likely to become infected; however, those who do not keep cats as pets might also acquire infections from cat feces and parasites exiting the cat's body.[211][213] Some of the infections of most concern include salmonella, cat-scratch disease and toxoplasmosis.[212]
In ancient Egypt, cats were worshipped, and the goddess Bastet was often depicted in cat form, sometimes taking on the war-like aspect of a lioness. The Greek historian Herodotus reported that killing a cat was forbidden, and when a household cat died, the entire family mourned and shaved their eyebrows. Families took their dead cats to the sacred city of Bubastis, where they were embalmed and buried in sacred repositories. Herodotus expressed astonishment at the domestic cats in Egypt, because he had only ever seen wildcats.[214]
Ancient Greeks and Romans kept weasels as pets, which were seen as the ideal rodent-killers. The earliest unmistakable evidence of the Greeks having domestic cats comes from two coins from Magna Graecia dating to the mid-fifth century BC showing Iokastos and Phalanthos, the legendary founders of Rhegion and Taras respectively, playing with their pet cats. The usual ancient Greek word for 'cat' was ailouros, meaning 'thing with the waving tail'. Cats are rarely mentioned in ancient Greek literature. Aristotle remarked in his History of Animals that "female cats are naturally lecherous." The Greeks later syncretized their own goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. In Ovid's Metamorphoses, when the deities flee to Egypt and take animal forms, the goddess Diana turns into a cat.[215][216] Cats eventually displaced ferrets as the pest control of choice because they were more pleasant to have around the house and were more enthusiastic hunters of mice.
During the Middle Ages, many of Artemis's associations with cats were grafted onto the Virgin Mary. Cats are often shown in icons of Annunciation and of the Holy Family and, according to Italian folklore, on the same night that Mary gave birth to Jesus, a cat in Bethlehem gave birth to a kitten.[217] Domestic cats were spread throughout much of the rest of the world during the Age of Discovery, as ships' cats were carried on sailing ships to control shipboard rodents and as good-luck charms.[46]
Several ancient religions believed cats are exalted souls, companions or guides for humans, that are all-knowing but mute so they cannot influence decisions made by humans. In Japan, the maneki neko cat is a symbol of good fortune.[218] In Norse mythology, Freyja, the goddess of love, beauty, and fertility, is depicted as riding a chariot drawn by cats.[219] In Jewish legend, the first cat lived in the house of Adam, the first man, as a pet that got rid of mice. The cat was once a partner of the first dog, until the dog broke an oath they had made, which resulted in enmity between the descendants of these two animals. It is also written that neither cats nor foxes are represented in the water, while every other animal has an incarnation species in the water.[220] Although no species are sacred in Islam, cats are revered by Muslims. Some Western writers have stated Muhammad had a favorite cat, Muezza.[221] He is reported to have loved cats so much that "he would do without his cloak rather than disturb one that was sleeping on it".[222] The story has no origin in early Muslim writers, and seems to confuse a story of a later Sufi saint, Ahmed ar-Rifa'i, centuries after Muhammad.[223] One of the companions of Muhammad was known as Abu Hurayrah ("father of the kitten"), in reference to his documented affection for cats.[224]
Many cultures have negative superstitions about cats. An example would be the belief that a black cat "crossing one's path" leads to bad luck, or that cats are witches' familiars used to augment a witch's powers and skills. The killing of cats in Medieval Ypres, Belgium, is commemorated in the innocuous present-day Kattenstoet (cat parade).[225] In medieval France, cats would be burnt alive as a form of entertainment. According to Norman Davies, the assembled people "shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized".[226]
"It was the custom to burn a basket, barrel, or sack full of live cats, which was hung from a tall mast in the midst of the bonfire; sometimes a fox was burned. The people collected the embers and ashes of the fire and took them home, believing that they brought good luck. The French kings often witnessed these spectacles and even lit the bonfire with their own hands. In 1648 Louis XIV, crowned with a wreath of roses and carrying a bunch of roses in his hand, kindled the fire, danced at it and partook of the banquet afterwards in the town hall. But this was the last occasion when a monarch presided at the midsummer bonfire in Paris. At Metz midsummer fires were lighted with great pomp on the esplanade, and a dozen cats, enclosed in wicker cages, were burned alive in them, to the amusement of the people. Similarly at Gap, in the department of the Hautes-Alpes, cats used to be roasted over the midsummer bonfire."[227]
According to a myth in many cultures, cats have multiple lives. In many countries, they are believed to have nine lives, but in Italy, Germany, Greece, Brazil and some Spanish-speaking regions, they are said to have seven lives,[228][229] while in Turkish and Arabic traditions, the number of lives is six.[230] The myth is attributed to the natural suppleness and swiftness cats exhibit to escape life-threatening situations. Also lending credence to this myth is the fact that falling cats often land on their feet, using an instinctive righting reflex to twist their bodies around. Nonetheless, cats can still be injured or killed by a high fall.[231]
en/1959.html.txt
Feminism is a range of social movements, political movements, and ideologies that aim to define, establish, and achieve the political, economic, personal, and social equality of the sexes.[a][2][3][4][5] Feminism incorporates the position that societies prioritize the male point of view, and that women are treated unjustly within those societies.[6] Efforts to change that include fighting against gender stereotypes and establishing educational, professional, and interpersonal opportunities and outcomes for women that are equal to those for men.
Feminist movements have campaigned and continue to campaign for women's rights, including the right to: vote, hold public office, work, earn equal pay, own property, receive education, enter contracts, have equal rights within marriage, and maternity leave. Feminists have also worked to ensure access to legal abortions and social integration and to protect women and girls from rape, sexual harassment, and domestic violence.[7] Changes in dress and acceptable physical activity have often been part of feminist movements.[8]
Some scholars consider feminist campaigns to be a main force behind major historical societal changes for women's rights, particularly in the West, where they are near-universally credited with achieving women's suffrage, gender-neutral language, reproductive rights for women (including access to contraceptives and abortion), and the right to enter into contracts and own property.[9] Although feminist advocacy is, and has been, mainly focused on women's rights, some feminists argue for the inclusion of men's liberation within its aims, because they believe that men are also harmed by traditional gender roles.[10]
Feminist theory, which emerged from feminist movements, aims to understand the nature of gender inequality by examining women's social roles and lived experience; it has developed theories in a variety of disciplines in order to respond to issues concerning gender.[11][12]
Numerous feminist movements and ideologies have developed over the years and represent different viewpoints and aims. Some forms of feminism have been criticized for taking into account only white, middle class, or college-educated perspectives. This criticism led to the creation of ethnically specific or multicultural forms of feminism, such as black feminism and intersectional feminism.[13]
Charles Fourier, a utopian socialist and French philosopher, is credited with having coined the word "féminisme" in 1837.[14] The words "féminisme" ("feminism") and "féministe" ("feminist") first appeared in France and the Netherlands in 1872,[15] Great Britain in the 1890s, and the United States in 1910.[16][17] The Oxford English Dictionary lists 1852 as the year of the first appearance of "feminist"[18] and 1895 for "feminism".[19] Depending on the historical moment, culture and country, feminists around the world have had different causes and goals. Most western feminist historians contend that all movements working to obtain women's rights should be considered feminist movements, even when they did not (or do not) apply the term to themselves.[20][21][22][23][24][25] Other historians assert that the term should be limited to the modern feminist movement and its descendants. Those historians use the label "protofeminist" to describe earlier movements.[26]
The history of the modern western feminist movement is divided into four "waves".[27][28][29] The first comprised women's suffrage movements of the 19th and early-20th centuries, promoting women's right to vote. The second wave, the women's liberation movement, began in the 1960s and campaigned for legal and social equality for women. In or around 1992, a third wave was identified, characterized by a focus on individuality and diversity.[30] The fourth wave, from around 2012, used social media to combat sexual harassment, violence against women and rape culture; it is best known for the Me Too movement.[31]
First-wave feminism was a period of activity during the 19th and early-20th centuries. In the UK and US, it focused on the promotion of equal contract, marriage, parenting, and property rights for women. New legislation included the Custody of Infants Act 1839 in the UK, which introduced the tender years doctrine for child custody and gave women the right of custody of their children for the first time.[32][33][34] Other legislation, such as the Married Women's Property Act 1870 in the UK and extended in the 1882 Act,[35] became models for similar legislation in other British territories. Victoria passed legislation in 1884 and New South Wales in 1889; the remaining Australian colonies passed similar legislation between 1890 and 1897. With the turn of the 19th century, activism focused primarily on gaining political power, particularly the right of women's suffrage, though some feminists were active in campaigning for women's sexual, reproductive, and economic rights too.[36]
Women's suffrage (the right to vote and stand for parliamentary office) began in Britain's Australasian colonies at the close of the 19th century, with the self-governing colony of New Zealand granting women the right to vote in 1893; South Australia followed suit in 1895. This was followed by Australia granting female suffrage in 1902.[37][38]
In Britain, the suffragettes and suffragists campaigned for the women's vote, and in 1918 the Representation of the People Act was passed granting the vote to women over the age of 30 who owned property. In 1928 this was extended to all women over 21.[39] Emmeline Pankhurst was the most notable activist in England. Time named her one of the 100 Most Important People of the 20th Century, stating: "she shaped an idea of women for our time; she shook society into a new pattern from which there could be no going back."[40] In the US, notable leaders of this movement included Lucretia Mott, Elizabeth Cady Stanton, and Susan B. Anthony, who each campaigned for the abolition of slavery before championing women's right to vote. These women were influenced by the Quaker theology of spiritual equality, which asserts that men and women are equal under God.[41] In the US, first-wave feminism is considered to have ended with the passage of the Nineteenth Amendment to the United States Constitution (1919), granting women the right to vote in all states. The term first wave was coined retroactively when the term second-wave feminism came into use.[36][42][43][44][45]
During the late Qing period and reform movements such as the Hundred Days' Reform, Chinese feminists called for women's liberation from traditional roles and Neo-Confucian gender segregation.[46][47][48] Later, the Chinese Communist Party created projects aimed at integrating women into the workforce, and claimed that the revolution had successfully achieved women's liberation.[49]
According to Nawar al-Hassan Golley, Arab feminism was closely connected with Arab nationalism. In 1899, Qasim Amin, considered the "father" of Arab feminism, wrote The Liberation of Women, which argued for legal and social reforms for women.[50] He drew links between women's position in Egyptian society and nationalism, leading to the development of Cairo University and the National Movement.[51] In 1923 Hoda Shaarawi founded the Egyptian Feminist Union, became its president and a symbol of the Arab women's rights movement.[51]
The Iranian Constitutional Revolution in 1905 triggered the Iranian women's movement, which aimed to achieve women's equality in education, marriage, careers, and legal rights.[52] However, during the Iranian revolution of 1979, many of the rights that women had gained from the women's movement were systematically abolished, such as the Family Protection Law.[53]
In France, women obtained the right to vote only with the Provisional Government of the French Republic of 21 April 1944. The Consultative Assembly of Algiers of 1944 proposed on 24 March 1944 to grant eligibility to women, but following an amendment by Fernand Grenier, they were given full citizenship, including the right to vote. Grenier's proposition was adopted 51 to 16. In May 1947, following the November 1946 elections, the sociologist Robert Verdier minimized the "gender gap", stating in Le Populaire that women had not voted in a consistent way, dividing themselves, as men did, according to social class. During the baby boom period, feminism waned in importance. Wars (both World War I and World War II) had seen the provisional emancipation of some women, but post-war periods signalled the return to conservative roles.[54]
By the mid-20th century, women still lacked significant rights. In Switzerland, women gained the right to vote in federal elections in 1971;[55] but in the canton of Appenzell Innerrhoden women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland.[56] In Liechtenstein, women were given the right to vote by the women's suffrage referendum of 1984. Three prior referendums held in 1968, 1971 and 1973 had failed to secure women's right to vote.
Feminists continued to campaign for the reform of family laws which gave husbands control over their wives. Although by the 20th century coverture had been abolished in the UK and US, in many continental European countries married women still had very few rights. For instance, in France, married women did not receive the right to work without their husband's permission until 1965.[57][58] Feminists have also worked to abolish the "marital exemption" in rape laws which precluded the prosecution of husbands for the rape of their wives.[59] Earlier efforts by first-wave feminists such as Voltairine de Cleyre, Victoria Woodhull and Elizabeth Clarke Wolstenholme Elmy to criminalize marital rape in the late 19th century had failed;[60][61] this was only achieved a century later in most Western countries, but is still not achieved in many other parts of the world.[62]
French philosopher Simone de Beauvoir provided a Marxist solution and an existentialist view on many of the questions of feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949.[63] The book expressed feminists' sense of injustice. Second-wave feminism is a feminist movement beginning in the early 1960s[64] and continuing to the present; as such, it coexists with third-wave feminism. Second-wave feminism is largely concerned with issues of equality beyond suffrage, such as ending gender discrimination.[36]
Second-wave feminists see women's cultural and political inequalities as inextricably linked and encourage women to understand aspects of their personal lives as deeply politicized and as reflecting sexist power structures. The feminist activist and author Carol Hanisch coined the slogan "The Personal is Political", which became synonymous with the second wave.[7][65]
Second- and third-wave feminism in China has been characterized by a reexamination of women's roles during the communist revolution and other reform movements, and new discussions about whether women's equality has actually been fully achieved.[49]
In 1956, President Gamal Abdel Nasser of Egypt initiated "state feminism", which outlawed discrimination based on gender and granted women's suffrage, but also blocked political activism by feminist leaders.[66] During Sadat's presidency, his wife, Jehan Sadat, publicly advocated further women's rights, though Egyptian policy and society began to move away from women's equality with the new Islamist movement and growing conservatism.[67] However, some activists proposed a new feminist movement, Islamic feminism, which argues for women's equality within an Islamic framework.[68]
In Latin America, revolutions brought changes in women's status in countries such as Nicaragua, where feminist ideology during the Sandinista Revolution aided women's quality of life but fell short of achieving a social and ideological change.[69]
In 1963, Betty Friedan's book The Feminine Mystique helped voice the discontent that American women felt. The book is widely credited with sparking the beginning of second-wave feminism in the United States.[70] Within ten years, women made up over half the First World workforce.[71]
Third-wave feminism is traced to the emergence of the Riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s,[72][73] and to Anita Hill's televised testimony in 1991—to an all-male, all-white Senate Judiciary Committee—that Clarence Thomas, nominated for the Supreme Court of the United States, had sexually harassed her. The term third wave is credited to Rebecca Walker, who responded to Thomas's appointment to the Supreme Court with an article in Ms. magazine, "Becoming the Third Wave" (1992).[74][75] She wrote:
So I write this as a plea to all women, especially women of my generation: Let Thomas’ confirmation serve to remind you, as it did me, that the fight is far from over. Let this dismissal of a woman's experience move you to anger. Turn that outrage into political power. Do not vote for them unless they work for us. Do not have sex with them, do not break bread with them, do not nurture them if they don't prioritize our freedom to control our bodies and our lives. I am not a post-feminism feminist. I am the Third Wave.[74]
Third-wave feminism also sought to challenge or avoid what it deemed the second wave's essentialist definitions of femininity, which, third-wave feminists argued, over-emphasized the experiences of upper middle-class white women. Third-wave feminists often focused on "micro-politics" and challenged the second wave's paradigm as to what was, or was not, good for women, and tended to use a post-structuralist interpretation of gender and sexuality.[36][76][77][78] Feminist leaders rooted in the second wave, such as Gloria Anzaldúa, bell hooks, Chela Sandoval, Cherríe Moraga, Audre Lorde, Maxine Hong Kingston, and many other non-white feminists, sought to negotiate a space within feminist thought for consideration of race-related subjectivities.[77][79][80] Third-wave feminism also contained internal debates between difference feminists, who believe that there are important psychological differences between the sexes, and those who believe that there are no inherent psychological differences between the sexes and contend that gender roles are due to social conditioning.[81]
Standpoint theory is a feminist theoretical point of view stating that a person's social position influences their knowledge. This perspective argues that research and theory treat women and the feminist movement as insignificant and refuses to see traditional science as unbiased.[82] Since the 1980s, standpoint feminists have argued that the feminist movement should address global issues (such as rape, incest, and prostitution) and culturally specific issues (such as female genital mutilation in some parts of Africa and Arab societies, as well as glass ceiling practices that impede women's advancement in developed economies) in order to understand how gender inequality interacts with racism, homophobia, classism and colonization in a "matrix of domination".[83][84]
Fourth-wave feminism refers to a resurgence of interest in feminism that began around 2012 and is associated with the use of social media.[85] According to feminist scholar Prudence Chamberlain, the focus of the fourth wave is justice for women and opposition to sexual harassment and violence against women. Its essence, she writes, is "incredulity that certain attitudes can still exist".[86]
Fourth-wave feminism is "defined by technology", according to Kira Cochrane, and is characterized particularly by the use of Facebook, Twitter, Instagram, YouTube, Tumblr, and blogs such as Feministing to challenge misogyny and further gender equality.[85][87][88][85]
Issues that fourth-wave feminists focus on include street and workplace harassment, campus sexual assault and rape culture. Scandals involving the harassment, abuse, and murder of women and girls have galvanized the movement. These have included the 2012 Delhi gang rape, 2012 Jimmy Savile allegations, the Bill Cosby allegations, 2014 Isla Vista killings, 2016 trial of Jian Ghomeshi, 2017 Harvey Weinstein allegations and subsequent Weinstein effect, and the 2017 Westminster sexual scandals.[89]
Examples of fourth-wave feminist campaigns include the Everyday Sexism Project, No More Page 3, Stop Bild Sexism, Mattress Performance, 10 Hours of Walking in NYC as a Woman, #YesAllWomen, Free the Nipple, One Billion Rising, the 2017 Women's March, the 2018 Women's March, and the #MeToo movement. In December 2017, Time magazine chose several prominent female activists involved in the #MeToo movement, dubbed "the silence breakers", as Person of the Year.[90][91]
The term postfeminism is used to describe a range of viewpoints reacting to feminism since the 1980s. While not being "anti-feminist", postfeminists believe that women have achieved second wave goals while being critical of third- and fourth-wave feminist goals. The term was first used to describe a backlash against second-wave feminism, but it is now a label for a wide range of theories that take critical approaches to previous feminist discourses and includes challenges to the second wave's ideas.[92] Other postfeminists say that feminism is no longer relevant to today's society.[93] Amelia Jones has written that the postfeminist texts which emerged in the 1980s and 1990s portrayed second-wave feminism as a monolithic entity.[94] Dorothy Chunn notes a "blaming narrative" under the postfeminist moniker, where feminists are undermined for continuing to make demands for gender equality in a "post-feminist" society, where "gender equality has (already) been achieved." According to Chunn, "many feminists have voiced disquiet about the ways in which rights and equality discourses are now used against them."[95]
Feminist theory is the extension of feminism into theoretical or philosophical fields. It encompasses work in a variety of disciplines, including anthropology, sociology, economics, women's studies, literary criticism,[96][97] art history,[98] psychoanalysis[99] and philosophy.[100][101] Feminist theory aims to understand gender inequality and focuses on gender politics, power relations, and sexuality. While providing a critique of these social and political relations, much of feminist theory also focuses on the promotion of women's rights and interests. Themes explored in feminist theory include discrimination, stereotyping, objectification (especially sexual objectification), oppression, and patriarchy.[11][12]
In the field of literary criticism, Elaine Showalter describes the development of feminist theory as having three phases. The first she calls "feminist critique", in which the feminist reader examines the ideologies behind literary phenomena. The second Showalter calls "gynocriticism", in which the "woman is producer of textual meaning". The last phase she calls "gender theory", in which the "ideological inscription and the literary effects of the sex/gender system are explored".[102]
This was paralleled in the 1970s by French feminists, who developed the concept of écriture féminine (which translates as 'female or feminine writing').[92] Hélène Cixous argues that writing and philosophy are phallocentric and, along with other French feminists such as Luce Irigaray, emphasizes "writing from the body" as a subversive exercise.[92] The work of Julia Kristeva, a feminist psychoanalyst and philosopher, and Bracha Ettinger,[103] artist and psychoanalyst, has influenced feminist theory in general and feminist literary criticism in particular. However, as the scholar Elizabeth Wright points out, "none of these French feminists align themselves with the feminist movement as it appeared in the Anglophone world".[92][104] More recent feminist theory, such as that of Lisa Lucile Owens,[105] has concentrated on characterizing feminism as a universal emancipatory movement.
Many overlapping feminist movements and ideologies have developed over the years.
Some branches of feminism closely track the political leanings of the larger society, such as liberalism and conservatism, or focus on the environment. Liberal feminism seeks individualistic equality of men and women through political and legal reform without altering the structure of society. Catherine Rottenberg has argued that the neoliberal shift in liberal feminism has led to that form of feminism being individualized rather than collectivized and becoming detached from social inequality.[106] She argues that, as a result, liberal feminism cannot offer any sustained analysis of the structures of male dominance, power, or privilege.[106]
Radical feminism considers the male-controlled capitalist hierarchy to be the defining feature of women's oppression, and holds that the total uprooting and reconstruction of society is necessary.[7] Conservative feminism is conservative relative to the society in which it resides. Libertarian feminism conceives of people as self-owners and therefore as entitled to freedom from coercive interference.[107] Separatist feminism does not support heterosexual relationships; lesbian feminism is thus closely related. Other feminists criticize separatist feminism as sexist.[10] Ecofeminists see men's control of land as responsible for the oppression of women and the destruction of the natural environment; ecofeminism has been criticized for focusing too much on a mystical connection between women and nature.[108]
Rosemary Hennessy and Chrys Ingraham say that materialist forms of feminism grew out of Western Marxist thought and have inspired a number of different (but overlapping) movements, all of which are involved in a critique of capitalism and are focused on ideology's relationship to women.[109] Marxist feminism argues that capitalism is the root cause of women's oppression, and that discrimination against women in domestic life and employment is an effect of capitalist ideologies.[110] Socialist feminism distinguishes itself from Marxist feminism by arguing that women's liberation can only be achieved by working to end both the economic and cultural sources of women's oppression.[111] Anarcha-feminists believe that class struggle and anarchy against the state[112] require struggling against patriarchy, which comes from involuntary hierarchy.
Sara Ahmed argues that Black and Postcolonial feminisms pose a challenge "to some of the organizing premises of Western feminist thought."[113] During much of its history, feminist movements and theoretical developments were led predominantly by middle-class white women from Western Europe and North America.[79][83][114] However, women of other races have proposed alternative feminisms.[83] This trend accelerated in the 1960s with the civil rights movement in the United States and the collapse of European colonialism in Africa, the Caribbean, parts of Latin America, and Southeast Asia. Since that time, women in developing nations and former colonies and who are of colour or various ethnicities or living in poverty have proposed additional feminisms.[114] Womanism[115][116] emerged after early feminist movements were largely white and middle-class.[79] Postcolonial feminists argue that colonial oppression and Western feminism marginalized postcolonial women but did not turn them passive or voiceless.[13] Third-world feminism and Indigenous feminism are closely related to postcolonial feminism.[114] These ideas also correspond with ideas in African feminism, motherism,[117] Stiwanism,[118] negofeminism,[119] femalism, transnational feminism, and Africana womanism.[120]
In the late twentieth century various feminists began to argue that gender roles are socially constructed,[121][122] and that it is impossible to generalize women's experiences across cultures and histories.[123] Post-structural feminism draws on the philosophies of post-structuralism and deconstruction in order to argue that the concept of gender is created socially and culturally through discourse.[124] Postmodern feminists also emphasize the social construction of gender and the discursive nature of reality;[121] however, as Pamela Abbott et al. note, a postmodern approach to feminism highlights "the existence of multiple truths (rather than simply men and women's standpoints)".[125]
Feminist views on transgender people differ. Some feminists do not view trans women as women,[126][127] believing that they have male privilege due to their sex assignment at birth.[128] Additionally, some feminists reject the concept of transgender identity due to views that all behavioral differences between genders are a result of socialization.[129] In contrast, other feminists and transfeminists believe that the liberation of trans women is a necessary part of feminist goals.[130] Third-wave feminists are overall more supportive of trans rights.[131][132] A key concept in transfeminism is that of transmisogyny,[133] which is the irrational fear of, aversion to, or discrimination against transgender women or feminine gender-nonconforming people.[134][135]
Riot grrrls took an anti-corporate stance of self-sufficiency and self-reliance.[136] Riot grrrl's emphasis on universal female identity and separatism often appears more closely allied with second-wave feminism than with the third wave.[137] The movement encouraged and made "adolescent girls' standpoints central", allowing them to express themselves fully.[138] Lipstick feminism is a cultural feminist movement that attempts to respond to the backlash of second-wave radical feminism of the 1960s and 1970s by reclaiming symbols of "feminine" identity such as make-up, suggestive clothing and having a sexual allure as valid and empowering personal choices.[139][140]
According to a 2014 Ipsos poll covering 15 developed countries, 53 percent of respondents identified as feminists, and 87 percent agreed that "women should be treated equally to men in all areas based on their competency, not their gender". However, only 55 percent of women agreed that they have "full equality with men and the freedom to reach their full dreams and aspirations".[141] Taken together, these studies reflect the importance of differentiating between claiming a "feminist identity" and holding "feminist attitudes or beliefs".[142]
According to a 2015 poll, 18 percent of Americans consider themselves feminists, while 85 percent reported they believe in "equality for women". Despite the popular belief in equal rights, 52 percent did not identify as feminist, 26 percent were unsure, and four percent provided no response.[143]
Sociological research shows that, in the US, increased educational attainment is associated with greater support for feminist issues. In addition, politically liberal people are more likely to support feminist ideals compared to those who are conservative.[144][145]
According to numerous polls, 7% of Britons consider themselves feminists, with 83% saying they support equality of opportunity for women – this included even higher support from men (86%) than women (81%).[146][147]
Feminist views on sexuality vary, and have differed by historical period and by cultural context. Feminist attitudes to female sexuality have taken a few different directions. Matters such as the sex industry, sexual representation in the media, and issues regarding consent to sex under conditions of male dominance have been particularly controversial among feminists. This debate culminated in the late 1970s and the 1980s, in what came to be known as the feminist sex wars, which pitted anti-pornography feminism against sex-positive feminism; parts of the feminist movement were deeply divided by these debates.[148][149][150][151][152] Feminists have taken a variety of positions on different aspects of the sexual revolution from the 1960s and 70s. Over the course of the 1970s, a large number of influential women accepted lesbian and bisexual women as part of feminism.[153]
Opinions on the sex industry are diverse. Feminists critical of the sex industry generally see it as the exploitative result of patriarchal social structures which reinforce sexual and cultural attitudes complicit in rape and sexual harassment. Alternately, feminists who support at least part of the sex industry argue that it can be a medium of feminist expression and a means for women to take control of their sexuality. For the views of feminism on male prostitutes see the article on male prostitution.
Feminist views of pornography range from condemnation of pornography as a form of violence against women, to an embracing of some forms of pornography as a medium of feminist expression.[148][149][150][151][152] Similarly, feminists' views on prostitution vary, ranging from critical to supportive.[154]
For feminists, a woman's right to control her own sexuality is a key issue. Feminists such as Catharine MacKinnon argue that women have very little control over their own bodies, with female sexuality being largely controlled and defined by men in patriarchal societies. Feminists argue that sexual violence committed by men is often rooted in ideologies of male sexual entitlement and that these systems grant women very few legitimate options to refuse sexual advances.[155][156] Feminists argue that all cultures are, in one way or another, dominated by ideologies that largely deny women the right to decide how to express their sexuality, because men under patriarchy feel entitled to define sex on their own terms. This entitlement can take different forms, depending on the culture. In conservative and religious cultures marriage is regarded as an institution which requires a wife to be sexually available at all times, virtually without limit; thus, forcing or coercing sex on a wife is not considered a crime or even an abusive behaviour.[157][158] In more liberal cultures, this entitlement takes the form of a general sexualization of the whole culture. This is played out in the sexual objectification of women, with pornography and other forms of sexual entertainment creating the fantasy that all women exist solely for men's sexual pleasure and that women are readily available and desiring to engage in sex at any time, with any man, on a man's terms.[159]
Sandra Harding says that the "moral and political insights of the women's movement have inspired social scientists and biologists to raise critical questions about the ways traditional researchers have explained gender, sex and relations within and between the social and natural worlds."[160] Some feminists, such as Ruth Hubbard and Evelyn Fox Keller, criticize traditional scientific discourse as being historically biased towards a male perspective.[161] A part of the feminist research agenda is the examination of the ways in which power inequities are created or reinforced in scientific and academic institutions.[162] Physicist Lisa Randall, appointed to a task force at Harvard by then-president Lawrence Summers after his controversial discussion of why women may be underrepresented in science and engineering, said, "I just want to see a whole bunch more women enter the field so these issues don't have to come up anymore."[163]
Lynn Hankinson Nelson notes that feminist empiricists find fundamental differences between the experiences of men and women. Thus, they seek to obtain knowledge through the examination of the experiences of women and to "uncover the consequences of omitting, misdescribing, or devaluing them" to account for a range of human experience.[164] Another part of the feminist research agenda is the uncovering of ways in which power inequities are created or reinforced in society and in scientific and academic institutions.[162] Furthermore, despite calls for greater attention to be paid to structures of gender inequity in the academic literature, structural analyses of gender bias rarely appear in highly cited psychological journals, especially in the commonly studied areas of psychology and personality.[165]
One criticism of feminist epistemology is that it allows social and political values to influence its findings.[166] Susan Haack also points out that feminist epistemology reinforces traditional stereotypes about women's thinking (as intuitive and emotional, etc.); Meera Nanda further cautions that this may in fact trap women within "traditional gender roles and help justify patriarchy".[167]
Modern feminism challenges the essentialist view of gender as biologically intrinsic.[168][169] For example, Anne Fausto-Sterling's book, Myths of Gender, explores the assumptions embodied in scientific research that support a biologically essentialist view of gender.[170] In Delusions of Gender, Cordelia Fine disputes scientific evidence that suggests that there is an innate biological difference between men's and women's minds, asserting instead that cultural and societal beliefs are the reason for differences between individuals that are commonly perceived as sex differences.[171]
Feminism in psychology emerged as a critique of the dominant male outlook on psychological research where only male perspectives were studied with all male subjects. As women earned doctorates in psychology, females and their issues were introduced as legitimate topics of study. Feminist psychology emphasizes social context, lived experience, and qualitative analysis.[172] Projects such as Psychology's Feminist Voices have emerged to catalogue the influence of feminist psychologists on the discipline.[173]
Gender-based inquiries into and conceptualization of architecture have also come about, leading to feminism in modern architecture. Piyush Mathur coined the term "archigenderic". Claiming that "architectural planning has an inextricable link with the defining and regulation of gender roles, responsibilities, rights, and limitations", Mathur came up with that term "to explore ... the meaning of 'architecture' in terms of gender" and "to explore the meaning of 'gender' in terms of architecture".[174]
Feminist activists have established a range of feminist businesses, including women's bookstores, feminist credit unions, feminist presses, feminist mail-order catalogs, and feminist restaurants. These businesses flourished as part of the second and third-waves of feminism in the 1970s, 1980s, and 1990s.[175][176]
Corresponding with general developments within feminism, and often including such self-organizing tactics as the consciousness-raising group, the movement began in the 1960s and flourished throughout the 1970s.[177] Jeremy Strick, director of the Museum of Contemporary Art in Los Angeles, described the feminist art movement as "the most influential international movement of any during the postwar period", and Peggy Phelan says that it "brought about the most far-reaching transformations in both artmaking and art writing over the past four decades".[177] Feminist artist Judy Chicago, who created The Dinner Party, a set of vulva-themed ceramic plates in the 1970s, said in 2009 to ARTnews, "There is still an institutional lag and an insistence on a male Eurocentric narrative. We are trying to change the future: to get girls and boys to realize that women's art is not an exception—it's a normal part of art history."[178] A feminist approach to the visual arts has most recently developed through Cyberfeminism and the posthuman turn, giving voice to the ways "contemporary female artists are dealing with gender, social media and the notion of embodiment".[179]
The feminist movement produced feminist fiction, feminist non-fiction, and feminist poetry, which created new interest in women's writing. It also prompted a general reevaluation of women's historical and academic contributions in response to the belief that women's lives and contributions have been underrepresented as areas of scholarly interest.[180] There has also been a close link between feminist literature and activism, with feminist writing typically voicing key concerns or ideas of feminism in a particular era.
Much of the early period of feminist literary scholarship was given over to the rediscovery and reclamation of texts written by women. In Western feminist literary scholarship, studies like Dale Spender's Mothers of the Novel (1986) and Jane Spencer's The Rise of the Woman Novelist (1986) were ground-breaking in their insistence that women have always been writing.
Commensurate with this growth in scholarly interest, various presses began the task of reissuing long-out-of-print texts. Virago Press began to publish its large list of 19th and early-20th-century novels in 1975 and became one of the first commercial presses to join in the project of reclamation. In the 1980s Pandora Press, responsible for publishing Spender's study, issued a companion line of 18th-century novels written by women.[181] More recently, Broadview Press continues to issue 18th- and 19th-century novels, many hitherto out of print, and the University of Kentucky has a series of republications of early women's novels.
Particular works of literature have come to be known as key feminist texts. A Vindication of the Rights of Woman (1792) by Mary Wollstonecraft is one of the earliest works of feminist philosophy. A Room of One's Own (1929) by Virginia Woolf is noted for its argument for both a literal and figurative space for women writers within a literary tradition dominated by patriarchy.
The widespread interest in women's writing is related to a general reassessment and expansion of the literary canon. Interest in post-colonial literatures, gay and lesbian literature, writing by people of colour, working people's writing, and the cultural productions of other historically marginalized groups has resulted in a whole scale expansion of what is considered "literature", and genres hitherto not regarded as "literary", such as children's writing, journals, letters, travel writing, and many others are now the subjects of scholarly interest.[180][182][183] Most genres and subgenres have undergone a similar analysis, so literary studies have entered new territories such as the "female gothic"[184] or women's science fiction.
According to Elyce Rae Helford, "Science fiction and fantasy serve as important vehicles for feminist thought, particularly as bridges between theory and practice."[185] Feminist science fiction is sometimes taught at the university level to explore the role of social constructs in understanding gender.[186] Notable texts of this kind are Ursula K. Le Guin's The Left Hand of Darkness (1969), Joanna Russ' The Female Man (1970), Octavia Butler's Kindred (1979) and Margaret Atwood's The Handmaid's Tale (1985).
Feminist nonfiction has played an important role in voicing concerns about women's lived experiences. For example, Maya Angelou's I Know Why the Caged Bird Sings was extremely influential, as it represented the specific racism and sexism experienced by black women growing up in the United States.[187]
In addition, many feminist movements have embraced poetry as a vehicle through which to communicate feminist ideas to public audiences through anthologies, poetry collections, and public readings.[188]
Moreover, historical pieces of writing by women have been used by feminists to speak about what women's lives would have been like in the past, while demonstrating the power that they held and the impact they had in their communities even centuries ago.[189] An important figure in the history of women in relation to literature is Hrothsvitha, a canoness who lived from 935 to 973.[190] As the first female poet in the German lands and the first female historian, Hrothsvitha is one of the few people to speak about women's lives from a woman's perspective during the Middle Ages.[191]
Women's music (or womyn's music or wimmin's music) is music by women, for women, and about women.[192] The genre emerged as a musical expression of the second-wave feminist movement[193] as well as the labour, civil rights, and peace movements.[194] The movement was started by lesbians such as Cris Williamson, Meg Christian, and Margie Adam, African-American women activists such as Bernice Johnson Reagon and her group Sweet Honey in the Rock, and peace activist Holly Near.[194] Women's music also refers to the wider industry of women's music that goes beyond the performing artists to include studio musicians, producers, sound engineers, technicians, cover artists, distributors, promoters, and festival organizers who are also women.[192]
Riot grrrl is an underground feminist hardcore punk movement described in the cultural movements section of this article.
Feminism became a principal concern of musicologists in the 1980s[195] as part of the New Musicology. Prior to this, in the 1970s, musicologists were beginning to discover women composers and performers, and had begun to review concepts of canon, genius, genre and periodization from a feminist perspective. In other words, the question of how women musicians fit into traditional music history was now being asked.[195] Through the 1980s and 1990s, this trend continued as musicologists like Susan McClary, Marcia Citron and Ruth Solie began to consider the cultural reasons for the marginalizing of women from the received body of work. Concepts such as music as gendered discourse; professionalism; reception of women's music; examination of the sites of music production; relative wealth and education of women; popular music studies in relation to women's identity; patriarchal ideas in music analysis; and notions of gender and difference are among the themes examined during this time.[195]
While the music industry has long been open to having women in performance or entertainment roles, women are much less likely to have positions of authority, such as being the leader of an orchestra.[196] In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process.[197]
Feminist cinema, advocating or illustrating feminist perspectives, arose largely with the development of feminist film theory in the late '60s and early '70s. Women were radicalized during the 1960s by political debate and sexual liberation, but the failure of radicalism to produce substantive change for women galvanized them to form consciousness-raising groups and set about analysing, from different perspectives, dominant cinema's construction of women.[198] Differences were particularly marked between feminists on either side of the Atlantic. 1972 saw the first feminist film festivals in the U.S. and U.K. as well as the first feminist film journal, Women and Film. Trailblazers from this period included Claire Johnston and Laura Mulvey, who also organized the Women's Event at the Edinburgh Film Festival.[199] Other theorists making a powerful impact on feminist film include Teresa de Lauretis, Anneke Smelik and Kaja Silverman. Approaches in philosophy and psychoanalysis fuelled feminist film criticism, feminist independent film and feminist distribution.
It has been argued that there are two distinct approaches to independent, theoretically inspired feminist filmmaking. 'Deconstruction' concerns itself with analysing and breaking down codes of mainstream cinema, aiming to create a different relationship between the spectator and dominant cinema. The second approach, a feminist counterculture, embodies feminine writing to investigate a specifically feminine cinematic language.[200] Some recent criticism[201] of "feminist film" approaches has centred on a rating system used by Swedish cinemas based on the Bechdel test.
During the 1930s–1950s heyday of the big Hollywood studios, the status of women in the industry was abysmal.[202] Since then female directors such as Sally Potter, Catherine Breillat, Claire Denis and Jane Campion have made art movies, and directors like Kathryn Bigelow and Patty Jenkins have had mainstream success. This progress stagnated in the '90s, and men outnumber women five to one in behind-the-camera roles.[203][204]
Feminism had complex interactions with the major political movements of the twentieth century.
Since the late nineteenth century, some feminists have allied with socialism, whereas others have criticized socialist ideology for being insufficiently concerned about women's rights. August Bebel, an early activist of the German Social Democratic Party (SPD), published his work Die Frau und der Sozialismus, juxtaposing the struggle for equal rights between sexes with social equality in general. In 1907 there was an International Conference of Socialist Women in Stuttgart where suffrage was described as a tool of class struggle. Clara Zetkin of the SPD called for women's suffrage to build a "socialist order, the only one that allows for a radical solution to the women's question".[205][206]
In Britain, the women's movement was allied with the Labour party. In the U.S., Betty Friedan emerged from a radical background to take leadership. Radical Women is the oldest socialist feminist organization in the U.S. and is still active.[207] During the Spanish Civil War, Dolores Ibárruri (La Pasionaria) led the Communist Party of Spain. Although she supported equal rights for women, she opposed women fighting on the front and clashed with the anarcha-feminist Mujeres Libres.[208]
Feminists in Ireland in the early 20th century included the revolutionary Irish Republican, suffragette and socialist Constance Markievicz, who in 1918 was the first woman elected to the British House of Commons. However, in line with Sinn Féin abstentionist policy, she would not take her seat in the House of Commons.[209] She was re-elected to the Second Dáil in the elections of 1921.[210] She was also a commander of the Irish Citizen Army, which was led by the socialist and self-described feminist Irish leader James Connolly during the 1916 Easter Rising.[211]
Fascist movements have taken ambiguous stances on feminism, as judged both by their practitioners and by women's groups. Amongst other demands concerning social reform presented in the Fascist manifesto in 1919 was expanding suffrage to all Italian citizens of age 18 and above, including women (accomplished only in 1946, after the defeat of fascism), and eligibility for all to stand for office from age 25. This demand was particularly championed by special Fascist women's auxiliary groups such as the fasci femminili and was only partly realized in 1925, under pressure from dictator Benito Mussolini's more conservative coalition partners.[212][213]
Cyprian Blamires states that although feminists were among those who opposed the rise of Adolf Hitler, feminism has a complicated relationship with the Nazi movement as well. While Nazis glorified traditional notions of patriarchal society and its role for women, they claimed to recognize women's equality in employment.[214] However, Hitler and Mussolini declared themselves as opposed to feminism,[214] and after the rise of Nazism in Germany in 1933, there was a rapid dissolution of the political rights and economic opportunities that feminists had fought for during the pre-war period and to some extent during the 1920s.[206] Georges Duby et al. note that in practice fascist society was hierarchical and emphasized male virility, with women maintaining a largely subordinate position.[206] Blamires also notes that Neofascism has since the 1960s been hostile towards feminism and advocates that women accept "their traditional roles".[214]
The civil rights movement has influenced and informed the feminist movement and vice versa. Many Western feminists adapted the language and theories of black equality activism and drew parallels between women's rights and the rights of non-white people.[215] Despite the connections between the women's and civil rights movements, some tensions arose during the late 1960s and the 1970s as non-white women argued that feminism was predominantly white, straight, and middle class, and did not understand and was not concerned with issues of race and sexuality.[216] Similarly, some women argued that the civil rights movement had sexist and homophobic elements and did not adequately address minority women's concerns.[215][217][218] These criticisms created new feminist social theories about identity politics and the intersections of racism, classism, and sexism; they also generated new feminisms such as black feminism and Chicana feminism in addition to making large contributions to lesbian feminism and other integrations of queer of colour identity.[219][220][221]
Neoliberalism has been criticized by feminist theory for having a negative effect on the female workforce population across the globe, especially in the global south. Masculinist assumptions and objectives continue to dominate economic and geopolitical thinking.[222]:177 Women's experiences in non-industrialized countries reveal often deleterious effects of modernization policies and undercut orthodox claims that development benefits everyone.[222]:175
Proponents of neoliberalism have theorized that by increasing women's participation in the workforce, there will be heightened economic progress, but feminist critics have noted that this participation alone does not further equality in gender relations.[223]:186–98 Neoliberalism has failed to address significant problems such as the devaluation of feminized labour, the structural privileging of men and masculinity, and the politicization of women's subordination in the family and the workplace.[222]:176 The "feminization of employment" refers to a conceptual characterization of deteriorated and devalorized labour conditions that are less desirable, meaningful, safe and secure.[222]:179 Employers in the global south have perceptions about feminine labour and seek workers who are perceived to be undemanding, docile and willing to accept low wages.[222]:180 Social constructs about feminized labour have played a big part in this; for instance, employers often perpetuate ideas about women as "secondary income earners" to justify their lower rates of pay and to deny them training or promotion.[223]:189
The feminist movement has effected change in Western society, including women's suffrage; greater access to education; more equitable pay with men; the right to initiate divorce proceedings; the right of women to make individual decisions regarding pregnancy (including access to contraceptives and abortion); and the right to own property.[9]
From the 1960s on, the campaign for women's rights[224] was met with mixed results[225] in the U.S. and the U.K. Other countries of the EEC agreed to ensure that discriminatory laws would be phased out across the European Community.
Some feminist campaigning also helped reform attitudes to child sexual abuse. The view that young girls cause men to have sexual intercourse with them was replaced by that of men's responsibility for their own conduct, the men being adults.[226]
In the U.S., the National Organization for Women (NOW) began in 1966 to seek women's equality, including through the Equal Rights Amendment (ERA),[227] which did not pass, although some states enacted their own. Reproductive rights in the U.S. centred on the court decision in Roe v. Wade enunciating a woman's right to choose whether to carry a pregnancy to term. Western women gained more reliable birth control, allowing family planning and careers. The movement started in the 1910s in the U.S. under Margaret Sanger and elsewhere under Marie Stopes. In the final three decades of the 20th century, Western women knew a new freedom through birth control, which enabled women to plan their adult lives, often making way for both career and family.[228]
The division of labour within households was affected by the increased entry of women into workplaces in the 20th century. Sociologist Arlie Russell Hochschild found that, in two-career couples, men and women, on average, spend about equal amounts of time working, but women still spend more time on housework,[229][230] although Cathy Young responded by arguing that women may prevent equal participation by men in housework and parenting.[231] Judith K. Brown writes, "Women are most likely to make a substantial contribution when subsistence activities have the following characteristics: the participant is not obliged to be far from home; the tasks are relatively monotonous and do not require rapt concentration and the work is not dangerous, can be performed in spite of interruptions, and is easily resumed once interrupted."[232]
In international law, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) is an international convention adopted by the United Nations General Assembly and described as an international bill of rights for women. It came into force in those nations ratifying it.[233]
Feminist jurisprudence is a branch of jurisprudence that examines the relationship between women and law. It addresses questions about the history of legal and social biases against women and about the enhancement of their legal rights.[234]
Feminist jurisprudence signifies a reaction to the philosophical approach of modern legal scholars, who typically see the law as a process for interpreting and perpetuating a society's universal, gender-neutral ideals. Feminist legal scholars claim that this fails to acknowledge women's values or legal interests or the harms that they may anticipate or experience.[235]
Proponents of gender-neutral language argue that the use of gender-specific language often implies male superiority or reflects an unequal state of society.[236] According to The Handbook of English Linguistics, generic masculine pronouns and gender-specific job titles are instances "where English linguistic convention has historically treated men as prototypical of the human species."[237]
Merriam-Webster chose "feminism" as its 2017 Word of the Year, noting that "Word of the Year is a quantitative measure of interest in a particular word."[238]
Feminist theology is a movement that reconsiders the traditions, practices, scriptures, and theologies of religions from a feminist perspective. Some of the goals of feminist theology include increasing the role of women among the clergy and religious authorities, reinterpreting male-dominated imagery and language about God, determining women's place in relation to career and motherhood, and studying images of women in the religion's sacred texts.[239]
Christian feminism is a branch of feminist theology which seeks to interpret and understand Christianity in light of the equality of women and men, and which holds that this interpretation is necessary for a complete understanding of Christianity. While there is no standard set of beliefs among Christian feminists, most agree that God does not discriminate on the basis of sex, and are involved in issues such as the ordination of women, male dominance and the balance of parenting in Christian marriage, claims of moral deficiency and inferiority of women compared to men, and the overall treatment of women in the church.[240][241]
Islamic feminists advocate women's rights, gender equality, and social justice grounded within an Islamic framework. Advocates seek to highlight the deeply rooted teachings of equality in the Quran and encourage a questioning of the patriarchal interpretation of Islamic teaching through the Quran, hadith (sayings of Muhammad), and sharia (law) towards the creation of a more equal and just society.[242] Although rooted in Islam, the movement's pioneers have also utilized secular and Western feminist discourses and recognize the role of Islamic feminism as part of an integrated global feminist movement.[243]
Buddhist feminism is a movement that seeks to improve the religious, legal, and social status of women within Buddhism. It is an aspect of feminist theology which seeks to advance and understand the equality of men and women morally, socially, spiritually, and in leadership from a Buddhist perspective. The Buddhist feminist Rita Gross describes Buddhist feminism as "the radical practice of the co-humanity of women and men."[244]
Jewish feminism is a movement that seeks to improve the religious, legal, and social status of women within Judaism and to open up new opportunities for religious experience and leadership for Jewish women. The main issues for early Jewish feminists in these movements were the exclusion from the all-male prayer group or minyan, the exemption from positive time-bound mitzvot, and women's inability to function as witnesses and to initiate divorce.[245] Many Jewish women have become leaders of feminist movements throughout their history.[246]
Dianic Wicca is a feminist-centred thealogy.[247]
Secular or atheist feminists have engaged in feminist criticism of religion, arguing that many religions have oppressive rules towards women and misogynistic themes and elements in religious texts.[248][249][250]
Patriarchy is a social system in which society is organized around male authority figures. In this system, fathers have authority over women, children, and property. It implies the institutions of male rule and privilege and is dependent on female subordination.[251] Most forms of feminism characterize patriarchy as an unjust social system that is oppressive to women. Carole Pateman argues that the patriarchal distinction "between masculinity and femininity is the political difference between freedom and subjection."[252] In feminist theory the concept of patriarchy often includes all the social mechanisms that reproduce and exert male dominance over women. Feminist theory typically characterizes patriarchy as a social construction, which can be overcome by revealing and critically analyzing its manifestations.[253] Some radical feminists have proposed that because patriarchy is too deeply rooted in society, separatism is the only viable solution.[254] Other feminists have criticized these views as being anti-men.[255][256][257]
Feminist theory has explored the social construction of masculinity and its implications for the goal of gender equality. The social construct of masculinity is seen by feminism as problematic because it associates males with aggression and competition, and reinforces patriarchal and unequal gender relations.[78][258] Patriarchal cultures are criticized for "limiting forms of masculinity" available to men and thus narrowing their life choices.[259] Some feminists are engaged with men's issues activism, such as bringing attention to male rape and spousal battery and addressing negative social expectations for men.[260][261][262]
Male participation in feminism is generally encouraged by feminists and is seen as an important strategy for achieving full societal commitment to gender equality.[10][263][264] Many male feminists and pro-feminists are active in women's rights activism, feminist theory, and masculinity studies. However, some argue that while male engagement with feminism is necessary, it is problematic because of the ingrained social influences of patriarchy in gender relations.[265] The consensus today in feminist and masculinity theories is that men and women should cooperate to achieve the larger goals of feminism.[259] It has been proposed that, in large part, this can be achieved through considerations of women's agency.[266]
Different groups of people have responded to feminism, and both men and women have been among its supporters and critics. Among American university students, for both men and women, support for feminist ideas is more common than self-identification as a feminist.[267][268][269] The US media tends to portray feminism negatively and feminists "are less often associated with day-to-day work/leisure activities of regular women."[270][271] However, as recent research has demonstrated, as people are exposed to self-identified feminists and to discussions relating to various forms of feminism, their own self-identification with feminism increases.[272]
Pro-feminism is the support of feminism without implying that the supporter is a member of the feminist movement. The term is most often used in reference to men who are actively supportive of feminism. The activities of pro-feminist men's groups include anti-violence work with boys and young men in schools, offering sexual harassment workshops in workplaces, running community education campaigns, and counselling male perpetrators of violence. Pro-feminist men also may be involved in men's health, activism against pornography including anti-pornography legislation, men's studies, and the development of gender equity curricula in schools. This work is sometimes in collaboration with feminists and women's services, such as domestic violence and rape crisis centres.[273][274]
Anti-feminism is opposition to feminism in some or all of its forms.[275]
In the nineteenth century, anti-feminism was mainly focused on opposition to women's suffrage. Later, opponents of women's entry into institutions of higher learning argued that education was too great a physical burden on women. Other anti-feminists opposed women's entry into the labour force, or their right to join unions, to sit on juries, or to obtain birth control and control of their sexuality.[276]
Some people have opposed feminism on the grounds that they believe it is contrary to traditional values or religious beliefs. These anti-feminists argue, for example, that social acceptance of divorce and non-married women is wrong and harmful, and that men and women are fundamentally different and thus their different traditional roles in society should be maintained.[277][278][279] Other anti-feminists oppose women's entry into the workforce, political office, and the voting process, as well as the lessening of male authority in families.[280][281]
Writers such as Camille Paglia, Christina Hoff Sommers, Jean Bethke Elshtain, Elizabeth Fox-Genovese, Lisa Lucile Owens[282] and Daphne Patai oppose some forms of feminism, though they identify as feminists. They argue, for example, that feminism often promotes misandry and the elevation of women's interests above men's, and criticize radical feminist positions as harmful to both men and women.[283] Daphne Patai and Noretta Koertge argue that the term "anti-feminist" is used to silence academic debate about feminism.[284][285] Lisa Lucile Owens argues that certain rights extended exclusively to women are patriarchal because they relieve women from exercising a crucial aspect of their moral agency.[266]
Secular humanism is an ethical framework that attempts to dispense with any unreasoned dogma, pseudoscience, and superstition. Critics of feminism sometimes ask "Why feminism and not humanism?". Some humanists argue, however, that the goals of feminists and humanists largely overlap, and the distinction is only in motivation. For example, a humanist may consider abortion in terms of a utilitarian ethical framework, rather than considering the motivation of any particular woman in getting an abortion. In this respect, it is possible to be a humanist without being a feminist, but this does not preclude the existence of feminist humanism.[286][287] Humanism played a significant role in protofeminism during the Renaissance, in that humanists made the educated woman a popular figure despite the challenge this posed to the patriarchal organization of society.[288]
For Isla Vista killings, see Bennett, Jessica (10 September 2014). "Behold the Power of #Hashtag Feminism". Time.
en/196.html.txt
ADDED
@@ -0,0 +1,186 @@
North America is a continent entirely within the Northern Hemisphere and almost all within the Western Hemisphere. It can also be described as a northern subcontinent of the Americas, or America,[5][6] in models that use fewer than seven continents. It is bordered to the north by the Arctic Ocean, to the east by the Atlantic Ocean, to the west and south by the Pacific Ocean, and to the southeast by South America and the Caribbean Sea.
North America covers an area of about 24,709,000 square kilometers (9,540,000 square miles), about 16.5% of the earth's land area and about 4.8% of its total surface.
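As a rough sanity check of the percentages quoted above (a back-of-the-envelope calculation, assuming the commonly cited reference figures of roughly 149 million km² for Earth's land area and 510 million km² for its total surface, which are not stated in the text):

$$\frac{24{,}709{,}000}{149{,}000{,}000} \approx 0.166 \approx 16.5\%, \qquad \frac{24{,}709{,}000}{510{,}000{,}000} \approx 0.048 = 4.8\%$$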
North America is the third-largest continent by area, following Asia and Africa,[7] and the fourth by population after Asia, Africa, and Europe.[8] In 2013, its population was estimated at nearly 579 million people in 23 independent states, or about 7.5% of the world's population, if nearby islands (most notably around the Caribbean) are included.
North America was reached by its first human populations during the last glacial period, via crossing the Bering land bridge approximately 40,000 to 17,000 years ago. The so-called Paleo-Indian period is taken to have lasted until about 10,000 years ago (the beginning of the Archaic or Meso-Indian period). The Classic stage spans roughly the 6th to 13th centuries. The Pre-Columbian era ended in 1492, with the beginning of the transatlantic migrations—the arrival of European settlers during the Age of Discovery and the Early Modern period. Present-day cultural and ethnic patterns reflect interactions between European colonists, indigenous peoples, African slaves and all of their descendants.
Owing to Europe's colonization of the Americas, most North Americans speak European languages such as English, Spanish or French, and their states' cultures commonly reflect Western traditions.
The Americas are usually accepted as having been named after the Italian explorer Amerigo Vespucci by the German cartographers Martin Waldseemüller and Matthias Ringmann.[9] Vespucci, who explored South America between 1497 and 1502, was the first European to suggest that the Americas were not the East Indies, but a different landmass previously unknown by Europeans. In 1507, Waldseemüller produced a world map, in which he placed the word "America" on the continent of South America, in the middle of what is today Brazil. He explained the rationale for the name in the accompanying book Cosmographiae Introductio:[10]
... ab Americo inventore ... quasi Americi terram sive Americam (from Americus the discoverer ... as if it were the land of Americus, thus America).
For Waldseemüller, no one should object to the naming of the land after its discoverer. He used the Latinized version of Vespucci's name (Americus Vespucius), but in its feminine form "America", following the examples of "Europa", "Asia" and "Africa". Later, other mapmakers extended the name America to the northern continent. In 1538, Gerard Mercator used America on his map of the world for all the Western Hemisphere.[11]
Some argue that because the convention is to use the surname for naming discoveries (except in the case of royalty), the derivation from "Amerigo Vespucci" could be put in question.[12] In 1874, Thomas Belt proposed a derivation from the Amerrique mountains of Central America; the next year, Jules Marcou suggested that the name of the mountain range stemmed from indigenous American languages.[13] Marcou corresponded with Augustus Le Plongeon, who wrote: "The name AMERICA or AMERRIQUE in the Mayan language means, a country of perpetually strong wind, or the Land of the Wind, and ... the [suffixes] can mean ... a spirit that breathes, life itself."[11]
The United Nations formally recognizes "North America" as comprising three areas: Northern America, Central America, and The Caribbean. This has been formally defined by the UN Statistics Division.[14][15][16]
"Northern America", as a term distinct from "North America", excludes Central America, which itself may or may not include Mexico (see Central America § Different definitions). In the limited context of the North American Free Trade Agreement, the term covers Canada, the United States, and Mexico, which are the three signatories of that treaty.
France, Italy, Portugal, Spain, Romania, Greece, and the countries of Latin America use a six-continent model, with the Americas viewed as a single continent and North America designating a subcontinent comprising Canada, the United States, and Mexico, and often Greenland, Saint Pierre et Miquelon, and Bermuda.[17][18][19][20][21]
North America has been historically referred to by other names. Spanish North America (New Spain) was often referred to as Northern America, and this was the first official name given to Mexico.[22]
Geographically the North American continent has many regions and subregions. These include cultural, economic, and geographic regions. Economic regions include those formed by trade blocs, such as the North American Free Trade Agreement bloc and the Central American Free Trade Agreement. Linguistically and culturally, the continent could be divided into Anglo-America and Latin America. Anglo-America includes most of Northern America, Belize, and Caribbean islands with English-speaking populations (though sub-national entities, such as Louisiana and Quebec, have large Francophone populations; in Quebec, French is the sole official language[23]).
The southern North American continent is composed of two regions. These are Central America and the Caribbean.[24][25] The north of the continent maintains recognized regions as well. In contrast to the common definition of "North America", which encompasses the whole continent, the term "North America" is sometimes used to refer only to Mexico, Canada, the United States, and Greenland.[26][27][28][29][30]
The term Northern America refers to the northern-most countries and territories of North America: the United States, Bermuda, St. Pierre and Miquelon, Canada and Greenland.[31][32] Although the term does not refer to a unified region,[33] Middle America—not to be confused with the Midwestern United States—groups the regions of Mexico, Central America, and the Caribbean.[34]
The largest countries of the continent, Canada and the United States, also contain well-defined and recognized regions. In the case of Canada these are (from east to west) Atlantic Canada, Central Canada, Canadian Prairies, the British Columbia Coast, and Northern Canada. These regions also contain many subregions. In the case of the United States – and in accordance with the US Census Bureau definitions – these regions are: New England, Mid-Atlantic, South Atlantic States, East North Central States, West North Central States, East South Central States, West South Central States, Mountain States, and Pacific States. Regions shared between both nations included the Great Lakes Region. Megalopolises have formed between both nations in the case of the Pacific Northwest and the Great Lakes Megaregion.
Laurentia is an ancient craton which forms the geologic core of North America; it formed between 1.5 and 1.0 billion years ago during the Proterozoic eon.[44] The Canadian Shield is the largest exposure of this craton. From the Late Paleozoic to Early Mesozoic eras, North America was joined with the other modern-day continents as part of the supercontinent Pangaea, with Eurasia to its east. One of the results of the formation of Pangaea was the Appalachian Mountains, which formed some 480 million years ago, making it among the oldest mountain ranges in the world. When Pangaea began to rift around 200 million years ago, North America became part of Laurasia, before it separated from Eurasia as its own continent during the mid-Cretaceous period.[45] The Rockies and other western mountain ranges began forming around this time from a period of mountain building called the Laramide orogeny, between 80 and 55 million years ago. The formation of the Isthmus of Panama that connected the continent to South America arguably occurred approximately 12 to 15 million years ago,[46] and the Great Lakes (as well as many other northern freshwater lakes and rivers) were carved by receding glaciers about 10,000 years ago.
North America is the source of much of what humanity knows about geologic time periods.[47] The geographic area that would later become the United States has been the source of more varieties of dinosaurs than any other modern country.[47] According to paleontologist Peter Dodson, this is primarily due to stratigraphy, climate and geography, human resources, and history.[47] Much of the Mesozoic Era is represented by exposed outcrops in the many arid regions of the continent.[47] The most significant Late Jurassic dinosaur-bearing fossil deposit in North America is the Morrison Formation of the western United States.[48]
The indigenous peoples of the Americas have many creation myths by which they assert that they have been present on the land since its creation,[49] but there is no evidence that humans evolved there.[50] The specifics of the initial settlement of the Americas by ancient Asians are subject to ongoing research and discussion.[51] The traditional theory has been that hunters entered the Beringia land bridge between eastern Siberia and present-day Alaska from 27,000 to 14,000 years ago.[52][53][h] A growing viewpoint is that the first American inhabitants sailed from Beringia some 13,000 years ago,[55] with widespread habitation of the Americas during the end of the Last Glacial Period, in what is known as the Late Glacial Maximum, around 12,500 years ago.[56] The oldest petroglyphs in North America date from 15,000 to 10,000 years before present.[57][i] Genetic research and anthropology indicate additional waves of migration from Asia via the Bering Strait during the Early-Middle Holocene.[59][60][61]
Before contact with Europeans, the natives of North America were divided into many different polities, from small bands of a few families to large empires. They lived in several "culture areas", which roughly correspond to geographic and biological zones and give a good indication of the main way of life of the people who lived there (e.g., the bison hunters of the Great Plains, or the farmers of Mesoamerica). Native groups can also be classified by their language family (e.g., Athapascan or Uto-Aztecan). Peoples with similar languages did not always share the same material culture, nor were they always allies. Anthropologists think that the Inuit people of the high Arctic came to North America much later than other native groups, as evidenced by the disappearance of Dorset culture artifacts from the archaeological record, and their replacement by the Thule people.
During the thousands of years of native habitation on the continent, cultures changed and shifted. One of the oldest yet discovered is the Clovis culture (c. 9550–9050 BCE) in modern New Mexico.[58] Later groups include the Mississippian culture and related Mound building cultures, found in the Mississippi river valley and the Pueblo culture of what is now the Four Corners. The more southern cultural groups of North America were responsible for the domestication of many common crops now used around the world, such as tomatoes, squash, and maize. As a result of the development of agriculture in the south, many other cultural advances were made there. The Mayans developed a writing system, built huge pyramids and temples, had a complex calendar, and developed the concept of zero around 400 CE.[62]
The first recorded European references to North America are in Norse sagas where it is referred to as Vinland.[63] The earliest verifiable instance of pre-Columbian trans-oceanic contact by any European culture with the North America mainland has been dated to around 1000 CE.[64] The site, situated at the northernmost extent of the island named Newfoundland, has provided unmistakable evidence of Norse settlement.[65] Norse explorer Leif Erikson (c. 970–1020 CE) is thought to have visited the area.[j] Erikson was the first European to make landfall on the continent (excluding Greenland).[67][68]
The Mayan culture was still present in southern Mexico and Guatemala when the Spanish conquistadors arrived, but political dominance in the area had shifted to the Aztec Empire, whose capital city Tenochtitlan was located further north in the Valley of Mexico. The Aztecs were conquered in 1521 by Hernán Cortés.[69]
During the Age of Discovery, Europeans explored and staked claims to various parts of North America. Upon their arrival in the "New World", the Native American population declined substantially, because of violent conflicts with the invaders and the introduction of European diseases to which the Native Americans lacked immunity.[70] Native culture changed drastically and their affiliation with political and cultural groups also changed. Several linguistic groups died out, and others changed quite quickly. The names and cultures that Europeans recorded were not necessarily the same as the names they had used a few generations before, or the ones in use today.
Britain, Spain, and France took over extensive territories in North America. In the late 18th and early 19th century, independence movements sprang up across the continent, leading to the founding of the modern countries in the area. The 13 British Colonies on the North Atlantic coast declared independence in 1776, becoming the United States of America. Canada was formed from the unification of northern territories controlled by Britain and France. New Spain, a territory that stretched from the modern-day southern US to Central America, declared independence in 1810, becoming the First Mexican Empire. In 1823 the former Captaincy General of Guatemala, then part of the Mexican Empire, became the first independent state in Central America, officially changing its name to the United Provinces of Central America.
Over three decades of work on the Panama Canal led to the connection of Atlantic and Pacific waters in 1913, physically making North America a separate continent.
North America occupies the northern portion of the landmass generally referred to as the New World, the Western Hemisphere, the Americas, or simply America (which, less commonly, is considered by some as a single continent[71][72][73] with North America a subcontinent).[74] North America's only land connection to South America is at the Isthmus of Darién (Isthmus of Panama). Most geographers delimit the continent on the southeast at the Darién watershed along the Colombia–Panama border, placing almost all of Panama within North America.[75][76][77] Alternatively, some geologists physiographically locate its southern limit at the Isthmus of Tehuantepec, Mexico, with Central America extending southeastward to South America from this point.[78] The Caribbean islands, or West Indies, are considered part of North America.[5] The continental coastline is long and irregular. The Gulf of Mexico is the largest body of water indenting the continent, followed by Hudson Bay. Others include the Gulf of Saint Lawrence and the Gulf of California.
Before the Central American isthmus formed, the region had been underwater. The islands of the West Indies delineate a submerged former land bridge, which had connected North and South America via what are now Florida and Venezuela.
There are numerous islands off the continent's coasts; principally, the Arctic Archipelago, the Bahamas, Turks & Caicos, the Greater and Lesser Antilles, the Aleutian Islands (some of which are in the Eastern Hemisphere proper), the Alexander Archipelago, the many thousand islands of the British Columbia Coast, and Newfoundland. Greenland, a self-governing Danish island, and the world's largest, is on the same tectonic plate (the North American Plate) and is part of North America geographically. In a geologic sense, Bermuda is not part of the Americas, but an oceanic island which was formed on the fissure of the Mid-Atlantic Ridge over 100 million years ago. The nearest landmass to it is Cape Hatteras, North Carolina. However, Bermuda is often thought of as part of North America, especially given its historical, political and cultural ties to Virginia and other parts of the continent.
The vast majority of North America is on the North American Plate. Parts of western Mexico, including Baja California, and of California, including the cities of San Diego, Los Angeles, and Santa Cruz, lie on the eastern edge of the Pacific Plate, with the two plates meeting along the San Andreas fault. The southernmost portion of the continent and much of the West Indies lie on the Caribbean Plate, whereas the Juan de Fuca and Cocos plates border the North American Plate on its western frontier.
The continent can be divided into four great regions (each of which contains many subregions): the Great Plains stretching from the Gulf of Mexico to the Canadian Arctic; the geologically young, mountainous west, including the Rocky Mountains, the Great Basin, California and Alaska; the raised but relatively flat plateau of the Canadian Shield in the northeast; and the varied eastern region, which includes the Appalachian Mountains, the coastal plain along the Atlantic seaboard, and the Florida peninsula. Mexico, with its long plateaus and cordilleras, falls largely in the western region, although the eastern coastal plain does extend south along the Gulf.
The western mountains are split in the middle into the main range of the Rockies and the coast ranges in California, Oregon, Washington, and British Columbia, with the Great Basin—a lower area containing smaller ranges and low-lying deserts—in between. The highest peak is Denali in Alaska.
The United States Geological Survey (USGS) states that the geographic center of North America is "6 miles [10 km] west of Balta, Pierce County, North Dakota" at about 48°10′N 100°10′W, about 24 kilometres (15 mi) from Rugby, North Dakota. The USGS further states that "No marked or monumented point has been established by any government agency as the geographic center of either the 50 States, the conterminous United States, or the North American continent." Nonetheless, there is a 4.6-metre (15 ft) field stone obelisk in Rugby claiming to mark the center. The North American continental pole of inaccessibility is located 1,650 km (1,030 mi) from the nearest coastline, between Allen and Kyle, South Dakota, at 43°22′N 101°58′W.[79]
Geologically, Canada is one of the oldest regions in the world, with more than half of the region consisting of Precambrian rocks that have been above sea level since the beginning of the Palaeozoic era.[80] Canada's mineral resources are diverse and extensive.[80] Across the Canadian Shield and in the north there are large iron, nickel, zinc, copper, gold, lead, molybdenum, and uranium reserves. Large diamond concentrations have been recently developed in the Arctic,[81] making Canada one of the world's largest producers. Throughout the Shield there are many mining towns extracting these minerals. The largest, and best known, is Sudbury, Ontario. Sudbury is an exception to the normal process of forming minerals in the Shield, since there is significant evidence that the Sudbury Basin is an ancient meteorite impact crater. The nearby but lesser-known Temagami Magnetic Anomaly has striking similarities to the Sudbury Basin. Its magnetic anomalies are very similar to those of the Sudbury Basin, and so it could be a second metal-rich impact crater.[82] The Shield is also covered by vast boreal forests that support an important logging industry.
The lower 48 US states can be divided into roughly five physiographic provinces.
The geology of Alaska is typical of that of the cordillera, while the major islands of Hawaii consist of Neogene volcanics erupted over a hot spot.
Central America is geologically active with volcanic eruptions and earthquakes occurring from time to time. In 1976 Guatemala was hit by a major earthquake, killing 23,000 people; Managua, the capital of Nicaragua, was devastated by earthquakes in 1931 and 1972, the last one killing about 5,000 people; three earthquakes devastated El Salvador, one in 1986 and two in 2001; one earthquake devastated northern and central Costa Rica in 2009, killing at least 34 people; in Honduras a powerful earthquake killed seven people in 2009.
Volcanic eruptions are common in the region. In 1968 the Arenal Volcano, in Costa Rica, erupted and killed 87 people. Fertile soils from weathered volcanic lavas have made it possible to sustain dense populations in the agriculturally productive highland areas.
Central America has many mountain ranges; the longest are the Sierra Madre de Chiapas, the Cordillera Isabelia, and the Cordillera de Talamanca. Between the mountain ranges lie fertile valleys that are suitable for the people; in fact, most of the population of Honduras, Costa Rica, and Guatemala live in valleys. Valleys are also suitable for the production of coffee, beans, and other crops.
North America is a very large continent, extending north of the Arctic Circle and south of the Tropic of Cancer. Greenland, along with the Canadian Shield, is tundra with average temperatures ranging from 10 to 20 °C (50 to 68 °F), but central Greenland is composed of a very large ice sheet. This tundra radiates throughout Canada, but its border ends near the Rocky Mountains (though it still includes Alaska) and at the end of the Canadian Shield, near the Great Lakes.
The climate west of the Cascades is described as temperate, with average annual precipitation of 20 inches (510 mm).[83]
The climate in coastal California is described as Mediterranean, with average temperatures in cities like San Francisco ranging from 57 to 70 °F (14 to 21 °C) over the course of the year.[84]
Stretching from the East Coast to eastern North Dakota, and stretching down to Kansas, is the continental-humid climate featuring intense seasons, with a large amount of annual precipitation, with places like New York City averaging 50 inches (1,300 mm).[85]
Starting at the southern border of the continental-humid climate and stretching to the Gulf of Mexico (whilst encompassing the eastern half of Texas) is the subtropical climate. This area has the wettest cities in the contiguous U.S. with annual precipitation reaching 67 inches (1,700 mm) in Mobile, Alabama.[86]
Stretching from the borders of the continental-humid and subtropical climates, going west to the Cascades and the Sierra Nevada, south to the southern tip of Durango, and north to the border with the tundra climate, the steppe/desert climate is the driest in the U.S.[87] Highland climates cut from north to south of the continent, where subtropical or temperate climates occur just below the tropics, as in central Mexico and Guatemala. Tropical climates appear in the island regions and in the subcontinent's bottleneck; usually of the savannah type, they have rain and high temperatures constant the whole year, and are found in countries and states bathed by the Caribbean Sea or lying south of the Gulf of Mexico and the Pacific Ocean.[88]
Notable North American fauna include the bison, black bear, prairie dog, turkey, pronghorn, raccoon, coyote and monarch butterfly.
Notable plants that were domesticated in North America include tobacco, maize, squash, tomato, sunflower, blueberry, avocado, cotton, chile pepper and vanilla.
Economically, Canada and the United States are the wealthiest and most developed nations in the continent, followed by Mexico, a newly industrialized country.[89] The countries of Central America and the Caribbean are at various levels of economic and human development. For example, small Caribbean island-nations, such as Barbados, Trinidad and Tobago, and Antigua and Barbuda, have a higher GDP (PPP) per capita than Mexico due to their smaller populations. Panama and Costa Rica have a significantly higher Human Development Index and GDP than the rest of the Central American nations.[90] Additionally, despite Greenland's vast resources in oil and minerals, much of them remain untapped, and the island is economically dependent on fishing, tourism, and subsidies from Denmark. Nevertheless, the island is highly developed.[91]
Demographically, North America is ethnically diverse. Its three main groups are Caucasians, Mestizos and Blacks. There is a significant minority of Indigenous Americans and Asians among other less numerous groups.
The dominant languages in North America are English, Spanish, and French. Danish is prevalent in Greenland alongside Greenlandic, and Dutch is spoken alongside local languages in the Dutch Caribbean. The term Anglo-America is used to refer to the anglophone countries of the Americas: namely Canada (where English and French are co-official) and the United States, but also sometimes Belize and parts of the tropics, especially the Commonwealth Caribbean. Latin America refers to the other areas of the Americas (generally south of the United States) where the Romance languages Spanish and Portuguese, derived from Latin, predominate (French-speaking countries are not usually included): the other republics of Central America (but not always Belize), part of the Caribbean (not the Dutch-, English-, or French-speaking areas), Mexico, and most of South America (except Guyana, Suriname, French Guiana (France), and the Falkland Islands (UK)).
The French language has historically played a significant role in North America and now retains a distinctive presence in some regions. Canada is officially bilingual. French is the official language of the Province of Quebec, where 95% of the people speak it as either their first or second language, and it is co-official with English in the Province of New Brunswick. Other French-speaking locales include the Province of Ontario (the official language is English, but there are an estimated 600,000 Franco-Ontarians), the Province of Manitoba (co-official as de jure with English), the French West Indies and Saint-Pierre et Miquelon, as well as the US state of Louisiana, where French is also an official language. Haiti is included with this group based on historical association, but Haitians speak both Creole and French. Similarly, French and French Antillean Creole are spoken in Saint Lucia and the Commonwealth of Dominica alongside English.
Christianity is the largest religion in the United States, Canada and Mexico. According to a 2012 Pew Research Center survey, 77% of the population considered themselves Christians.[92] Christianity also is the predominant religion in the 23 dependent territories in North America.[93] The United States has the largest Christian population in the world, with nearly 247 million Christians (70%), although other countries have higher percentages of Christians among their populations.[94] Mexico has the world's second largest number of Catholics, surpassed only by Brazil.[95] A 2015 study estimates about 493,000 Christian believers from a Muslim background in North America, most of them belonging to some form of Protestantism.[96]
According to the same study, the religiously unaffiliated (including agnostics and atheists) make up about 17% of the populations of Canada and the United States.[97] Those reporting no religion make up about 24% of the United States population and 24% of Canada's total population.[98]
Canada, the United States and Mexico host communities of Jews (6 million or about 1.8%),[99] Buddhists (3.8 million or 1.1%)[100] and Muslims (3.4 million or 1.0%).[101] The biggest numbers of Jewish individuals are found in the United States (5.4 million),[102] Canada (375,000)[103] and Mexico (67,476).[104] The United States hosts the largest Muslim population in North America, with 2.7 million or 0.9%,[105][106] while Canada hosts about one million Muslims, or 3.2% of its population.[107] In Mexico there were 3,700 Muslims.[108] In 2012, U-T San Diego estimated U.S. practitioners of Buddhism at 1.2 million people, of whom 40% live in Southern California.[109]
The predominant religion in Central America is Christianity (96%).[110] Beginning with the Spanish colonization of Central America in the 16th century, Roman Catholicism became the most popular religion in the region until the first half of the 20th century. Since the 1960s, there has been an increase in other Christian groups, particularly Protestantism, as well as other religious organizations, and individuals identifying themselves as having no religion. Christianity is also the predominant religion in the Caribbean (85%).[110] Other religious groups in the region are Hinduism, Islam, Rastafari (in Jamaica), and Afro-American religions such as Santería and Vodou.
The most populous country in North America is the United States with 329.7 million persons. The second largest country is Mexico with a population of 112.3 million.[111] Canada is the third most populous country with 37.0 million.[112] The majority of Caribbean island-nations have national populations under a million, though Cuba, Dominican Republic, Haiti, Puerto Rico (a territory of the United States), Jamaica, and Trinidad and Tobago each have populations higher than a million.[113][114][115][116][117] Greenland has a small population of 55,984 for its massive size (2,166,000 km² or 836,300 mi²), and therefore, it has the world's lowest population density at 0.026 pop./km² (0.067 pop./mi²).[118]
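The quoted density figure for Greenland follows directly from the population and area given in the same sentence (a back-of-the-envelope verification using only the numbers stated above, not an additional source):

$$\frac{55{,}984}{2{,}166{,}000\ \text{km}^2} \approx 0.026\ \text{pop./km}^2, \qquad \frac{55{,}984}{836{,}300\ \text{mi}^2} \approx 0.067\ \text{pop./mi}^2$$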
While the United States, Canada, and Mexico maintain the largest populations, large city populations are not restricted to those nations. There are also large cities in the Caribbean. The largest cities in North America, by far, are Mexico City and New York. These are the only two cities on the continent to exceed eight million, and two of the three such cities in the Americas. Next in size are Los Angeles, Toronto,[119] Chicago, Havana, Santo Domingo, and Montreal. Cities in the sun belt regions of the United States, such as those in Southern California and Houston, Phoenix, Miami, Atlanta, and Las Vegas, are experiencing rapid growth. The causes of this growth include warm temperatures, the retirement of Baby Boomers, large industries, and the influx of immigrants. Cities near the United States border, particularly in Mexico, are also experiencing large amounts of growth. Most notable is Tijuana, a city bordering San Diego that receives immigrants from all over Latin America and parts of Europe and Asia. Yet as cities grow in these warmer regions of North America, they are increasingly forced to deal with the major issue of water shortages.[120]
Eight of the top ten metropolitan areas are located in the United States. These metropolitan areas all have a population above 5.5 million and include the New York City metropolitan area, Los Angeles metropolitan area, Chicago metropolitan area, and the Dallas–Fort Worth metroplex.[121] Whilst the majority of the largest metropolitan areas are within the United States, Mexico is host to the largest metropolitan area by population in North America: Greater Mexico City.[122] Canada also breaks into the top ten largest metropolitan areas with the Toronto metropolitan area having six million people.[123] The proximity of cities to each other on the Canada–United States border and Mexico–United States border has led to the rise of international metropolitan areas. These urban agglomerations are observed at their largest and most productive in Detroit–Windsor and San Diego–Tijuana and experience large commercial, economic, and cultural activity. The metropolitan areas are responsible for millions of dollars of trade dependent on international freight. In Detroit–Windsor, the Border Transportation Partnership study in 2004 concluded that US$13 billion in trade was dependent on the Detroit–Windsor international border crossing, while in San Diego–Tijuana freight at the Otay Mesa Port of Entry was valued at US$20 billion.[124][125]
North America has also been witness to the growth of megapolitan areas. Eleven megaregions exist in the United States, some of which transcend international borders and comprise Canadian and Mexican metropolitan regions. These are the Arizona Sun Corridor, Cascadia, Florida, Front Range, Great Lakes Megaregion, Gulf Coast Megaregion, Northeast, Northern California, Piedmont Atlantic, Southern California, and the Texas Triangle.[126] Canada and Mexico are also home to megaregions. These include the Quebec City – Windsor Corridor, Golden Horseshoe – both of which are considered part of the Great Lakes Megaregion – and the megalopolis of Central Mexico. Traditionally the largest megaregion has been considered the Boston–Washington, DC Corridor, or the Northeast, as the region is one massive contiguous area. Yet megaregion criteria have allowed the Great Lakes Megalopolis to maintain status as the most populated region, being home to 53,768,125 people in 2000.[127]
The top ten largest North American metropolitan areas by population as of 2013, based on national census numbers from the United States and census estimates from Canada and Mexico.
†2011 Census figures.
North America's GDP per capita was evaluated in October 2016 by the International Monetary Fund (IMF) to be $41,830, making it the richest continent in the world,[129] followed by Oceania.[130]
Canada, Mexico, and the United States have significant and multifaceted economic systems. The United States has the largest economy of all three countries and in the world.[130] In 2016, the U.S. had an estimated per capita gross domestic product (PPP) of $57,466 according to the World Bank, and is the most technologically developed economy of the three.[131] The United States' services sector comprises 77% of the country's GDP (estimated in 2010), industry comprises 22% and agriculture comprises 1.2%.[130] The U.S. economy is also the fastest growing economy in North America and the Americas as a whole,[132][129] with the highest GDP per capita in the Americas as well.[129]
Canada shows significant growth in the sectors of services, mining and manufacturing.[133] Canada's per capita GDP (PPP) was estimated at $44,656 and it had the 11th largest GDP (nominal) in 2014.[133] Canada's services sector comprises 78% of the country's GDP (estimated in 2010), industry comprises 20% and agriculture comprises 2%.[133] Mexico has a per capita GDP (PPP) of $16,111 and as of 2014 is the 15th largest GDP (nominal) in the world.[134] Being a newly industrialized country,[89] Mexico maintains both modern and outdated industrial and agricultural facilities and operations.[135] Its main sources of income are oil, industrial exports, manufactured goods, electronics, heavy industry, automobiles, construction, food, banking and financial services.[136]
The North American economy is well defined and structured in three main economic areas.[137] These areas are the North American Free Trade Agreement (NAFTA), Caribbean Community and Common Market (CARICOM), and the Central American Common Market (CACM).[137] Of these trade blocs, the United States takes part in two. In addition to the larger trade blocs there is the Canada-Costa Rica Free Trade Agreement among numerous other free trade relations, often between the larger, more developed countries and Central American and Caribbean countries.
The North American Free Trade Agreement (NAFTA) forms one of the four largest trade blocs in the world.[138] Its implementation in 1994 was designed for economic homogenization with hopes of eliminating barriers of trade and foreign investment between Canada, the United States and Mexico.[139] While Canada and the United States already conducted the world's largest bilateral trade relationship – and to the present day still do – and Canada–United States trade relations already allowed trade without national taxes and tariffs,[140] NAFTA allowed Mexico to experience similar duty-free trade. The free trade agreement allowed for the elimination of tariffs that had previously been in place on United States–Mexico trade. Trade volume has steadily increased annually, and in 2010 surface trade between the three NAFTA nations reached a record US$791 billion, an increase of 24.3%.[141] The NAFTA trade bloc GDP (PPP) is the world's largest with US$17.617 trillion.[142] This is in part attributed to the fact that the economy of the United States is the world's largest national economy; the country had a nominal GDP of approximately $14.7 trillion in 2010.[143] The countries of NAFTA are also some of each other's largest trade partners. The United States is the largest trade partner of Canada and Mexico,[144] while Canada and Mexico are each other's third largest trade partners.[145][146]
The Caribbean trade bloc, CARICOM, came into effect in 1973 when its agreement was signed by 15 Caribbean nations. As of 2000, CARICOM trade volume was US$96 billion. CARICOM also allowed for the creation of a common passport for associated nations. In the past decade the trade bloc has focused largely on free trade agreements, and under the CARICOM Office of Trade Negotiations (OTN) free trade agreements have been signed into effect.
Integration of Central American economies occurred under the signing of the Central American Common Market agreement in 1961; this was the first attempt to engage the nations of this area in stronger financial cooperation. Recent implementation of the Central American Free Trade Agreement (CAFTA) has left the future of the CACM unclear.[147] The Central American Free Trade Agreement was signed by five Central American countries, the Dominican Republic, and the United States. The focal point of CAFTA is to create a free trade area similar to that of NAFTA. In addition to the United States, Canada also has relations in Central American trade blocs. Currently under proposal, the Canada – Central American Free Trade Agreement (CA4) would operate much the same as CAFTA does with the United States.
These nations also take part in inter-continental trade blocs. Mexico takes part in the G3 Free Trade Agreement with Colombia and Venezuela and has a trade agreement with the EU. The United States has proposed and maintained trade agreements under the Transatlantic Free Trade Area between itself and the European Union; the US–Middle East Free Trade Area between numerous Middle Eastern nations and itself; and the Trans-Pacific Strategic Economic Partnership between Southeast Asian nations, Australia, and New Zealand.
The Pan-American Highway route in the Americas is the portion of a network of roads nearly 48,000 km (30,000 mi) in length which travels through the mainland nations. No definitive length of the Pan-American Highway exists because the US and Canadian governments have never officially defined any specific routes as being part of the Pan-American Highway, and Mexico officially has many branches connecting to the US border. However, the total length of the portion from Mexico to the northern extremity of the highway is roughly 26,000 km (16,000 mi).
The First Transcontinental Railroad in the United States was built in the 1860s, linking the railroad network of the eastern US with California on the Pacific coast. Finished on 10 May 1869 at the famous golden spike event at Promontory Summit, Utah, it created a nationwide mechanized transportation network that revolutionized the population and economy of the American West, catalyzing the transition from the wagon trains of previous decades to a modern transportation system.[148] Although an accomplishment, it achieved the status of first transcontinental railroad by connecting myriad eastern US railroads to the Pacific and was not the largest single railroad system in the world. The Canadian Grand Trunk Railway (GTR) had, by 1867, already accumulated more than 2,055 km (1,277 mi) of track by connecting Ontario with the Canadian Atlantic provinces west as far as Port Huron, Michigan, through Sarnia, Ontario.
A shared telephone system known as the North American Numbering Plan (NANP) is an integrated telephone numbering plan of 24 countries and territories: the United States and its territories, Canada, Bermuda, and 17 Caribbean nations.
Canada and the United States are both former British colonies, and there is frequent cultural interplay between the United States and English-speaking Canada.
Greenland has experienced many immigration waves from Northern Canada, e.g. the Thule People. Therefore, Greenland shares some cultural ties with the indigenous people of Canada. Greenland is also considered Nordic and has strong Danish ties due to centuries of colonization by Denmark.[149]
Spanish-speaking North America shares a common past as former Spanish colonies. In Mexico and the Central American countries where civilizations like the Maya developed, indigenous people preserve traditions across modern boundaries. Central American and Spanish-speaking Caribbean nations have historically had more in common due to geographical proximity.
Northern Mexico, particularly in the cities of Monterrey, Tijuana, Ciudad Juárez, and Mexicali, is strongly influenced by the culture and way of life of the United States. Of the aforementioned cities, Monterrey has been regarded as the most Americanized city in Mexico.[150] Immigration to the United States and Canada remains a significant attribute of many nations close to the southern border of the US. The Anglophone Caribbean states have witnessed the decline of the British Empire and its influence on the region, and its replacement by the economic influence of Northern America. This is partly due to the relatively small populations of the English-speaking Caribbean countries, and also because many of them now have more people living abroad than those remaining at home. Northern Mexico, the Western United States and Alberta, Canada share a cowboy culture.
Canada, Mexico and the US submitted a joint bid to host the 2026 FIFA World Cup.
The following table shows the most prominent sports leagues in North America, in order of average revenue.[151][152]
en/1960.html.txt
ADDED
@@ -0,0 +1,224 @@
Feminism is a range of social movements, political movements, and ideologies that aim to define, establish, and achieve the political, economic, personal, and social equality of the sexes.[a][2][3][4][5] Feminism incorporates the position that societies prioritize the male point of view, and that women are treated unjustly within those societies.[6] Efforts to change that include fighting against gender stereotypes and establishing educational, professional, and interpersonal opportunities and outcomes for women that are equal to those for men.
Feminist movements have campaigned and continue to campaign for women's rights, including the right to vote, hold public office, work, earn equal pay, own property, receive education, enter contracts, have equal rights within marriage, and take maternity leave. Feminists have also worked to ensure access to legal abortions and social integration and to protect women and girls from rape, sexual harassment, and domestic violence.[7] Changes in dress and acceptable physical activity have often been part of feminist movements.[8]
Some scholars consider feminist campaigns to be a main force behind major historical societal changes for women's rights, particularly in the West, where they are near-universally credited with achieving women's suffrage, gender-neutral language, reproductive rights for women (including access to contraceptives and abortion), and the right to enter into contracts and own property.[9] Although feminist advocacy is, and has been, mainly focused on women's rights, some feminists argue for the inclusion of men's liberation within its aims, because they believe that men are also harmed by traditional gender roles.[10]
Feminist theory, which emerged from feminist movements, aims to understand the nature of gender inequality by examining women's social roles and lived experience; it has developed theories in a variety of disciplines in order to respond to issues concerning gender.[11][12]
Numerous feminist movements and ideologies have developed over the years and represent different viewpoints and aims. Some forms of feminism have been criticized for taking into account only white, middle class, or college-educated perspectives. This criticism led to the creation of ethnically specific or multicultural forms of feminism, such as black feminism and intersectional feminism.[13]
Charles Fourier, a utopian socialist and French philosopher, is credited with having coined the word "féminisme" in 1837.[14] The words "féminisme" ("feminism") and "féministe" ("feminist") first appeared in France and the Netherlands in 1872,[15] Great Britain in the 1890s, and the United States in 1910.[16][17] The Oxford English Dictionary lists 1852 as the year of the first appearance of "feminist"[18] and 1895 for "feminism".[19] Depending on the historical moment, culture and country, feminists around the world have had different causes and goals. Most western feminist historians contend that all movements working to obtain women's rights should be considered feminist movements, even when they did not (or do not) apply the term to themselves.[20][21][22][23][24][25] Other historians assert that the term should be limited to the modern feminist movement and its descendants. Those historians use the label "protofeminist" to describe earlier movements.[26]
The history of the modern western feminist movement is divided into four "waves".[27][28][29] The first comprised women's suffrage movements of the 19th and early-20th centuries, promoting women's right to vote. The second wave, the women's liberation movement, began in the 1960s and campaigned for legal and social equality for women. In or around 1992, a third wave was identified, characterized by a focus on individuality and diversity.[30] The fourth wave, from around 2012, used social media to combat sexual harassment, violence against women and rape culture; it is best known for the Me Too movement.[31]
First-wave feminism was a period of activity during the 19th and early-20th centuries. In the UK and US, it focused on the promotion of equal contract, marriage, parenting, and property rights for women. New legislation included the Custody of Infants Act 1839 in the UK, which introduced the tender years doctrine for child custody and gave women the right of custody of their children for the first time.[32][33][34] Other legislation, such as the Married Women's Property Act 1870 in the UK and extended in the 1882 Act,[35] became models for similar legislation in other British territories. Victoria passed legislation in 1884 and New South Wales in 1889; the remaining Australian colonies passed similar legislation between 1890 and 1897. With the turn of the 19th century, activism focused primarily on gaining political power, particularly the right of women's suffrage, though some feminists were active in campaigning for women's sexual, reproductive, and economic rights too.[36]
Women's suffrage (the right to vote and stand for parliamentary office) began in Britain's Australasian colonies at the close of the 19th century, with the self-governing colony of New Zealand granting women the right to vote in 1893; South Australia followed suit in 1895. This was followed by Australia granting female suffrage in 1902.[37][38]
In Britain, the suffragettes and suffragists campaigned for the women's vote, and in 1918 the Representation of the People Act was passed granting the vote to women over the age of 30 who owned property. In 1928 this was extended to all women over 21.[39] Emmeline Pankhurst was the most notable activist in England. Time named her one of the 100 Most Important People of the 20th Century, stating: "she shaped an idea of women for our time; she shook society into a new pattern from which there could be no going back."[40] In the US, notable leaders of this movement included Lucretia Mott, Elizabeth Cady Stanton, and Susan B. Anthony, who each campaigned for the abolition of slavery before championing women's right to vote. These women were influenced by the Quaker theology of spiritual equality, which asserts that men and women are equal under God.[41] In the US, first-wave feminism is considered to have ended with the passage of the Nineteenth Amendment to the United States Constitution (1919), granting women the right to vote in all states. The term first wave was coined retroactively when the term second-wave feminism came into use.[36][42][43][44][45]
During the late Qing period and reform movements such as the Hundred Days' Reform, Chinese feminists called for women's liberation from traditional roles and Neo-Confucian gender segregation.[46][47][48] Later, the Chinese Communist Party created projects aimed at integrating women into the workforce, and claimed that the revolution had successfully achieved women's liberation.[49]
According to Nawar al-Hassan Golley, Arab feminism was closely connected with Arab nationalism. In 1899, Qasim Amin, considered the "father" of Arab feminism, wrote The Liberation of Women, which argued for legal and social reforms for women.[50] He drew links between women's position in Egyptian society and nationalism, leading to the development of Cairo University and the National Movement.[51] In 1923 Hoda Shaarawi founded the Egyptian Feminist Union, became its president and a symbol of the Arab women's rights movement.[51]
The Iranian Constitutional Revolution in 1905 triggered the Iranian women's movement, which aimed to achieve women's equality in education, marriage, careers, and legal rights.[52] However, during the Iranian revolution of 1979, many of the rights that women had gained from the women's movement were systematically abolished, such as the Family Protection Law.[53]
In France, women obtained the right to vote only with the Provisional Government of the French Republic of 21 April 1944. The Consultative Assembly of Algiers proposed on 24 March 1944 to grant eligibility to women, but following an amendment by Fernand Grenier, they were given full citizenship, including the right to vote. Grenier's proposition was adopted 51 to 16. In May 1947, following the November 1946 elections, the sociologist Robert Verdier minimized the "gender gap", stating in Le Populaire that women had not voted in a consistent way, dividing themselves, as men did, according to social class. During the baby boom period, feminism waned in importance. Wars (both World War I and World War II) had seen the provisional emancipation of some women, but post-war periods signalled the return to conservative roles.[54]
By the mid-20th century, women still lacked significant rights. In Switzerland, women gained the right to vote in federal elections in 1971;[55] but in the canton of Appenzell Innerrhoden women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland.[56] In Liechtenstein, women were given the right to vote by the women's suffrage referendum of 1984. Three prior referendums held in 1968, 1971 and 1973 had failed to secure women's right to vote.
Feminists continued to campaign for the reform of family laws which gave husbands control over their wives. Although by the 20th century coverture had been abolished in the UK and US, in many continental European countries married women still had very few rights. For instance, in France, married women did not receive the right to work without their husband's permission until 1965.[57][58] Feminists have also worked to abolish the "marital exemption" in rape laws which precluded the prosecution of husbands for the rape of their wives.[59] Earlier efforts by first-wave feminists such as Voltairine de Cleyre, Victoria Woodhull and Elizabeth Clarke Wolstenholme Elmy to criminalize marital rape in the late 19th century had failed;[60][61] this was only achieved a century later in most Western countries, but is still not achieved in many other parts of the world.[62]
French philosopher Simone de Beauvoir provided a Marxist solution and an existentialist view on many of the questions of feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949.[63] The book expressed feminists' sense of injustice. Second-wave feminism is a feminist movement beginning in the early 1960s[64] and continuing to the present; as such, it coexists with third-wave feminism. Second-wave feminism is largely concerned with issues of equality beyond suffrage, such as ending gender discrimination.[36]
Second-wave feminists see women's cultural and political inequalities as inextricably linked and encourage women to understand aspects of their personal lives as deeply politicized and as reflecting sexist power structures. The feminist activist and author Carol Hanisch coined the slogan "The Personal is Political", which became synonymous with the second wave.[7][65]
Second- and third-wave feminism in China has been characterized by a reexamination of women's roles during the communist revolution and other reform movements, and new discussions about whether women's equality has actually been fully achieved.[49]
In 1956, President Gamal Abdel Nasser of Egypt initiated "state feminism", which outlawed discrimination based on gender and granted women's suffrage, but also blocked political activism by feminist leaders.[66] During Sadat's presidency, his wife, Jehan Sadat, publicly advocated further women's rights, though Egyptian policy and society began to move away from women's equality with the new Islamist movement and growing conservatism.[67] However, some activists proposed a new feminist movement, Islamic feminism, which argues for women's equality within an Islamic framework.[68]
In Latin America, revolutions brought changes in women's status in countries such as Nicaragua, where feminist ideology during the Sandinista Revolution aided women's quality of life but fell short of achieving a social and ideological change.[69]
In 1963, Betty Friedan's book The Feminine Mystique helped voice the discontent that American women felt. The book is widely credited with sparking the beginning of second-wave feminism in the United States.[70] Within ten years, women made up over half the First World workforce.[71]
Third-wave feminism is traced to the emergence of the Riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s,[72][73] and to Anita Hill's televised testimony in 1991—to an all-male, all-white Senate Judiciary Committee—that Clarence Thomas, nominated for the Supreme Court of the United States, had sexually harassed her. The term third wave is credited to Rebecca Walker, who responded to Thomas's appointment to the Supreme Court with an article in Ms. magazine, "Becoming the Third Wave" (1992).[74][75] She wrote:
So I write this as a plea to all women, especially women of my generation: Let Thomas’ confirmation serve to remind you, as it did me, that the fight is far from over. Let this dismissal of a woman's experience move you to anger. Turn that outrage into political power. Do not vote for them unless they work for us. Do not have sex with them, do not break bread with them, do not nurture them if they don't prioritize our freedom to control our bodies and our lives. I am not a post-feminism feminist. I am the Third Wave.[74]
Third-wave feminism also sought to challenge or avoid what it deemed the second wave's essentialist definitions of femininity, which, third-wave feminists argued, over-emphasized the experiences of upper middle-class white women. Third-wave feminists often focused on "micro-politics" and challenged the second wave's paradigm as to what was, or was not, good for women, and tended to use a post-structuralist interpretation of gender and sexuality.[36][76][77][78] Feminist leaders rooted in the second wave, such as Gloria Anzaldúa, bell hooks, Chela Sandoval, Cherríe Moraga, Audre Lorde, Maxine Hong Kingston, and many other non-white feminists, sought to negotiate a space within feminist thought for consideration of race-related subjectivities.[77][79][80] Third-wave feminism also contained internal debates between difference feminists, who believe that there are important psychological differences between the sexes, and those who believe that there are no inherent psychological differences between the sexes and contend that gender roles are due to social conditioning.[81]
Standpoint theory is a feminist theoretical point of view stating that a person's social position influences their knowledge. This perspective argues that research and theory treat women and the feminist movement as insignificant and refuses to see traditional science as unbiased.[82] Since the 1980s, standpoint feminists have argued that the feminist movement should address global issues (such as rape, incest, and prostitution) and culturally specific issues (such as female genital mutilation in some parts of Africa and Arab societies, as well as glass ceiling practices that impede women's advancement in developed economies) in order to understand how gender inequality interacts with racism, homophobia, classism and colonization in a "matrix of domination".[83][84]
Fourth-wave feminism refers to a resurgence of interest in feminism that began around 2012 and is associated with the use of social media.[85] According to feminist scholar Prudence Chamberlain, the focus of the fourth wave is justice for women and opposition to sexual harassment and violence against women. Its essence, she writes, is "incredulity that certain attitudes can still exist".[86]
Fourth-wave feminism is "defined by technology", according to Kira Cochrane, and is characterized particularly by the use of Facebook, Twitter, Instagram, YouTube, Tumblr, and blogs such as Feministing to challenge misogyny and further gender equality.[85][87][88]
Issues that fourth-wave feminists focus on include street and workplace harassment, campus sexual assault and rape culture. Scandals involving the harassment, abuse, and murder of women and girls have galvanized the movement. These have included the 2012 Delhi gang rape, 2012 Jimmy Savile allegations, the Bill Cosby allegations, 2014 Isla Vista killings, 2016 trial of Jian Ghomeshi, 2017 Harvey Weinstein allegations and subsequent Weinstein effect, and the 2017 Westminster sexual scandals.[89]
Examples of fourth-wave feminist campaigns include the Everyday Sexism Project, No More Page 3, Stop Bild Sexism, Mattress Performance, 10 Hours of Walking in NYC as a Woman, #YesAllWomen, Free the Nipple, One Billion Rising, the 2017 Women's March, the 2018 Women's March, and the #MeToo movement. In December 2017, Time magazine chose several prominent female activists involved in the #MeToo movement, dubbed "the silence breakers", as Person of the Year.[90][91]
The term postfeminism is used to describe a range of viewpoints reacting to feminism since the 1980s. While not being "anti-feminist", postfeminists believe that women have achieved second wave goals while being critical of third- and fourth-wave feminist goals. The term was first used to describe a backlash against second-wave feminism, but it is now a label for a wide range of theories that take critical approaches to previous feminist discourses and includes challenges to the second wave's ideas.[92] Other postfeminists say that feminism is no longer relevant to today's society.[93] Amelia Jones has written that the postfeminist texts which emerged in the 1980s and 1990s portrayed second-wave feminism as a monolithic entity.[94] Dorothy Chunn notes a "blaming narrative" under the postfeminist moniker, where feminists are undermined for continuing to make demands for gender equality in a "post-feminist" society, where "gender equality has (already) been achieved." According to Chunn, "many feminists have voiced disquiet about the ways in which rights and equality discourses are now used against them."[95]
Feminist theory is the extension of feminism into theoretical or philosophical fields. It encompasses work in a variety of disciplines, including anthropology, sociology, economics, women's studies, literary criticism,[96][97] art history,[98] psychoanalysis[99] and philosophy.[100][101] Feminist theory aims to understand gender inequality and focuses on gender politics, power relations, and sexuality. While providing a critique of these social and political relations, much of feminist theory also focuses on the promotion of women's rights and interests. Themes explored in feminist theory include discrimination, stereotyping, objectification (especially sexual objectification), oppression, and patriarchy.[11][12]
In the field of literary criticism, Elaine Showalter describes the development of feminist theory as having three phases. The first she calls "feminist critique", in which the feminist reader examines the ideologies behind literary phenomena. The second Showalter calls "gynocriticism", in which the "woman is producer of textual meaning". The last phase she calls "gender theory", in which the "ideological inscription and the literary effects of the sex/gender system are explored".[102]
This was paralleled in the 1970s by French feminists, who developed the concept of écriture féminine (which translates as 'female or feminine writing').[92] Helene Cixous argues that writing and philosophy are phallocentric and along with other French feminists such as Luce Irigaray emphasize "writing from the body" as a subversive exercise.[92] The work of Julia Kristeva, a feminist psychoanalyst and philosopher, and Bracha Ettinger,[103] artist and psychoanalyst, has influenced feminist theory in general and feminist literary criticism in particular. However, as the scholar Elizabeth Wright points out, "none of these French feminists align themselves with the feminist movement as it appeared in the Anglophone world".[92][104] More recent feminist theory, such as that of Lisa Lucile Owens,[105] has concentrated on characterizing feminism as a universal emancipatory movement.
Many overlapping feminist movements and ideologies have developed over the years.
Some branches of feminism closely track the political leanings of the larger society, such as liberalism and conservatism, or focus on the environment. Liberal feminism seeks individualistic equality of men and women through political and legal reform without altering the structure of society. Catherine Rottenberg has argued that the neoliberal shift in liberal feminism has led to that form of feminism being individualized rather than collectivized and becoming detached from social inequality.[106] Due to this, she argues that liberal feminism cannot offer any sustained analysis of the structures of male dominance, power, or privilege.[106]
Radical feminism considers the male-controlled capitalist hierarchy as the defining feature of women's oppression and the total uprooting and reconstruction of society as necessary.[7] Conservative feminism is conservative relative to the society in which it resides. Libertarian feminism conceives of people as self-owners and therefore as entitled to freedom from coercive interference.[107] Separatist feminism does not support heterosexual relationships. Lesbian feminism is thus closely related. Other feminists criticize separatist feminism as sexist.[10] Ecofeminists see men's control of land as responsible for the oppression of women and destruction of the natural environment; ecofeminism has been criticized for focusing too much on a mystical connection between women and nature.[108]
Rosemary Hennessy and Chrys Ingraham say that materialist forms of feminism grew out of Western Marxist thought and have inspired a number of different (but overlapping) movements, all of which are involved in a critique of capitalism and are focused on ideology's relationship to women.[109] Marxist feminism argues that capitalism is the root cause of women's oppression, and that discrimination against women in domestic life and employment is an effect of capitalist ideologies.[110] Socialist feminism distinguishes itself from Marxist feminism by arguing that women's liberation can only be achieved by working to end both the economic and cultural sources of women's oppression.[111] Anarcha-feminists believe that class struggle and anarchy against the state[112] require struggling against patriarchy, which comes from involuntary hierarchy.
Sara Ahmed argues that Black and Postcolonial feminisms pose a challenge "to some of the organizing premises of Western feminist thought."[113] During much of its history, feminist movements and theoretical developments were led predominantly by middle-class white women from Western Europe and North America.[79][83][114] However, women of other races have proposed alternative feminisms.[83] This trend accelerated in the 1960s with the civil rights movement in the United States and the collapse of European colonialism in Africa, the Caribbean, parts of Latin America, and Southeast Asia. Since that time, women in developing nations and former colonies and who are of colour or various ethnicities or living in poverty have proposed additional feminisms.[114] Womanism[115][116] emerged after early feminist movements were largely white and middle-class.[79] Postcolonial feminists argue that colonial oppression and Western feminism marginalized postcolonial women but did not turn them passive or voiceless.[13] Third-world feminism and Indigenous feminism are closely related to postcolonial feminism.[114] These ideas also correspond with ideas in African feminism, motherism,[117] Stiwanism,[118] negofeminism,[119] femalism, transnational feminism, and Africana womanism.[120]
In the late twentieth century various feminists began to argue that gender roles are socially constructed,[121][122] and that it is impossible to generalize women's experiences across cultures and histories.[123] Post-structural feminism draws on the philosophies of post-structuralism and deconstruction in order to argue that the concept of gender is created socially and culturally through discourse.[124] Postmodern feminists also emphasize the social construction of gender and the discursive nature of reality;[121] however, as Pamela Abbott et al. note, a postmodern approach to feminism highlights "the existence of multiple truths (rather than simply men and women's standpoints)".[125]
Feminist views on transgender people differ. Some feminists do not view trans women as women,[126][127] believing that they have male privilege due to their sex assignment at birth.[128] Additionally, some feminists reject the concept of transgender identity due to views that all behavioral differences between genders are a result of socialization.[129] In contrast, other feminists and transfeminists believe that the liberation of trans women is a necessary part of feminist goals.[130] Third-wave feminists are overall more supportive of trans rights.[131][132] A key concept in transfeminism is of transmisogyny,[133] which is the irrational fear of, aversion to, or discrimination against transgender women or feminine gender-nonconforming people.[134][135]
Riot grrrls took an anti-corporate stance of self-sufficiency and self-reliance.[136] Riot grrrl's emphasis on universal female identity and separatism often appears more closely allied with second-wave feminism than with the third wave.[137] The movement encouraged and made "adolescent girls' standpoints central", allowing them to express themselves fully.[138] Lipstick feminism is a cultural feminist movement that attempts to respond to the backlash of second-wave radical feminism of the 1960s and 1970s by reclaiming symbols of "feminine" identity such as make-up, suggestive clothing and having a sexual allure as valid and empowering personal choices.[139][140]
According to a 2014 Ipsos poll covering 15 developed countries, 53 percent of respondents identified as feminists, and 87% agreed that "women should be treated equally to men in all areas based on their competency, not their gender". However, only 55% of women agreed that they have "full equality with men and the freedom to reach their full dreams and aspirations".[141] Taken together, these studies reflect the importance of differentiating between claiming a "feminist identity" and holding "feminist attitudes or beliefs".[142]
According to a 2015 poll, 18 percent of Americans consider themselves feminists, while 85 percent reported they believe in "equality for women". Despite the popular belief in equal rights, 52 percent did not identify as feminist, 26 percent were unsure, and four percent provided no response.[143]
Sociological research shows that, in the US, increased educational attainment is associated with greater support for feminist issues. In addition, politically liberal people are more likely to support feminist ideals compared to those who are conservative.[144][145]
According to numerous polls, 7% of Britons consider themselves feminists, with 83% saying they support equality of opportunity for women – this included even higher support from men (86%) than women (81%).[146][147]
Feminist views on sexuality vary, and have differed by historical period and by cultural context. Feminist attitudes to female sexuality have taken a few different directions. Matters such as the sex industry, sexual representation in the media, and issues regarding consent to sex under conditions of male dominance have been particularly controversial among feminists. This debate culminated in the late 1970s and the 1980s, in what came to be known as the feminist sex wars, which pitted anti-pornography feminism against sex-positive feminism; parts of the feminist movement were deeply divided by these debates.[148][149][150][151][152] Feminists have taken a variety of positions on different aspects of the sexual revolution from the 1960s and 70s. Over the course of the 1970s, a large number of influential women accepted lesbian and bisexual women as part of feminism.[153]
Opinions on the sex industry are diverse. Feminists critical of the sex industry generally see it as the exploitative result of patriarchal social structures which reinforce sexual and cultural attitudes complicit in rape and sexual harassment. Alternately, feminists who support at least part of the sex industry argue that it can be a medium of feminist expression and a means for women to take control of their sexuality. For the views of feminism on male prostitutes see the article on male prostitution.
Feminist views of pornography range from condemnation of pornography as a form of violence against women, to an embracing of some forms of pornography as a medium of feminist expression.[148][149][150][151][152] Similarly, feminists' views on prostitution vary, ranging from critical to supportive.[154]
For feminists, a woman's right to control her own sexuality is a key issue. Feminists such as Catharine MacKinnon argue that women have very little control over their own bodies, with female sexuality being largely controlled and defined by men in patriarchal societies. Feminists argue that sexual violence committed by men is often rooted in ideologies of male sexual entitlement and that these systems grant women very few legitimate options to refuse sexual advances.[155][156] Feminists argue that all cultures are, in one way or another, dominated by ideologies that largely deny women the right to decide how to express their sexuality, because men under patriarchy feel entitled to define sex on their own terms. This entitlement can take different forms, depending on the culture. In conservative and religious cultures marriage is regarded as an institution which requires a wife to be sexually available at all times, virtually without limit; thus, forcing or coercing sex on a wife is not considered a crime or even an abusive behaviour.[157][158] In more liberal cultures, this entitlement takes the form of a general sexualization of the whole culture. This is played out in the sexual objectification of women, with pornography and other forms of sexual entertainment creating the fantasy that all women exist solely for men's sexual pleasure and that women are readily available and desiring to engage in sex at any time, with any man, on a man's terms.[159]
Sandra Harding says that the "moral and political insights of the women's movement have inspired social scientists and biologists to raise critical questions about the ways traditional researchers have explained gender, sex and relations within and between the social and natural worlds."[160] Some feminists, such as Ruth Hubbard and Evelyn Fox Keller, criticize traditional scientific discourse as being historically biased towards a male perspective.[161] A part of the feminist research agenda is the examination of the ways in which power inequities are created or reinforced in scientific and academic institutions.[162] Physicist Lisa Randall, appointed to a task force at Harvard by then-president Lawrence Summers after his controversial discussion of why women may be underrepresented in science and engineering, said, "I just want to see a whole bunch more women enter the field so these issues don't have to come up anymore."[163]
Lynn Hankinson Nelson notes that feminist empiricists find fundamental differences between the experiences of men and women. Thus, they seek to obtain knowledge through the examination of the experiences of women and to "uncover the consequences of omitting, misdescribing, or devaluing them" to account for a range of human experience.[164] Another part of the feminist research agenda is the uncovering of ways in which power inequities are created or reinforced in society and in scientific and academic institutions.[162] Furthermore, despite calls for greater attention to be paid to structures of gender inequity in the academic literature, structural analyses of gender bias rarely appear in highly cited psychological journals, especially in the commonly studied areas of psychology and personality.[165]
One criticism of feminist epistemology is that it allows social and political values to influence its findings.[166] Susan Haack also points out that feminist epistemology reinforces traditional stereotypes about women's thinking (as intuitive and emotional, etc.); Meera Nanda further cautions that this may in fact trap women within "traditional gender roles and help justify patriarchy".[167]
Modern feminism challenges the essentialist view of gender as biologically intrinsic.[168][169] For example, Anne Fausto-Sterling's book, Myths of Gender, explores the assumptions embodied in scientific research that support a biologically essentialist view of gender.[170] In Delusions of Gender, Cordelia Fine disputes scientific evidence that suggests that there is an innate biological difference between men's and women's minds, asserting instead that cultural and societal beliefs are the reason for differences between individuals that are commonly perceived as sex differences.[171]
Feminism in psychology emerged as a critique of the dominant male outlook on psychological research where only male perspectives were studied with all male subjects. As women earned doctorates in psychology, females and their issues were introduced as legitimate topics of study. Feminist psychology emphasizes social context, lived experience, and qualitative analysis.[172] Projects such as Psychology's Feminist Voices have emerged to catalogue the influence of feminist psychologists on the discipline.[173]
Gender-based inquiries into and conceptualization of architecture have also come about, leading to feminism in modern architecture. Piyush Mathur coined the term "archigenderic". Claiming that "architectural planning has an inextricable link with the defining and regulation of gender roles, responsibilities, rights, and limitations", Mathur came up with that term "to explore ... the meaning of 'architecture' in terms of gender" and "to explore the meaning of 'gender' in terms of architecture".[174]
Feminist activists have established a range of feminist businesses, including women's bookstores, feminist credit unions, feminist presses, feminist mail-order catalogs, and feminist restaurants. These businesses flourished as part of the second and third waves of feminism in the 1970s, 1980s, and 1990s.[175][176]
Corresponding with general developments within feminism, and often including such self-organizing tactics as the consciousness-raising group, the movement began in the 1960s and flourished throughout the 1970s.[177] Jeremy Strick, director of the Museum of Contemporary Art in Los Angeles, described the feminist art movement as "the most influential international movement of any during the postwar period", and Peggy Phelan says that it "brought about the most far-reaching transformations in both artmaking and art writing over the past four decades".[177] Feminist artist Judy Chicago, who created The Dinner Party, a set of vulva-themed ceramic plates in the 1970s, said in 2009 to ARTnews, "There is still an institutional lag and an insistence on a male Eurocentric narrative. We are trying to change the future: to get girls and boys to realize that women's art is not an exception—it's a normal part of art history."[178] A feminist approach to the visual arts has most recently developed through Cyberfeminism and the posthuman turn, giving voice to the ways "contemporary female artists are dealing with gender, social media and the notion of embodiment".[179]
The feminist movement produced feminist fiction, feminist non-fiction, and feminist poetry, which created new interest in women's writing. It also prompted a general reevaluation of women's historical and academic contributions in response to the belief that women's lives and contributions have been underrepresented as areas of scholarly interest.[180] There has also been a close link between feminist literature and activism, with feminist writing typically voicing key concerns or ideas of feminism in a particular era.
Much of the early period of feminist literary scholarship was given over to the rediscovery and reclamation of texts written by women. In Western feminist literary scholarship, studies like Dale Spender's Mothers of the Novel (1986) and Jane Spencer's The Rise of the Woman Novelist (1986) were ground-breaking in their insistence that women have always been writing.
Commensurate with this growth in scholarly interest, various presses began the task of reissuing long-out-of-print texts. Virago Press began to publish its large list of 19th and early-20th-century novels in 1975 and became one of the first commercial presses to join in the project of reclamation. In the 1980s Pandora Press, responsible for publishing Spender's study, issued a companion line of 18th-century novels written by women.[181] More recently, Broadview Press continues to issue 18th- and 19th-century novels, many hitherto out of print, and the University of Kentucky has a series of republications of early women's novels.
Particular works of literature have come to be known as key feminist texts. A Vindication of the Rights of Woman (1792) by Mary Wollstonecraft is one of the earliest works of feminist philosophy. A Room of One's Own (1929) by Virginia Woolf is noted for its argument for both a literal and figurative space for women writers within a literary tradition dominated by patriarchy.
The widespread interest in women's writing is related to a general reassessment and expansion of the literary canon. Interest in post-colonial literatures, gay and lesbian literature, writing by people of colour, working people's writing, and the cultural productions of other historically marginalized groups has resulted in a wholesale expansion of what is considered "literature", and genres hitherto not regarded as "literary", such as children's writing, journals, letters, travel writing, and many others are now the subjects of scholarly interest.[180][182][183] Most genres and subgenres have undergone a similar analysis, so literary studies have entered new territories such as the "female gothic"[184] or women's science fiction.
According to Elyce Rae Helford, "Science fiction and fantasy serve as important vehicles for feminist thought, particularly as bridges between theory and practice."[185] Feminist science fiction is sometimes taught at the university level to explore the role of social constructs in understanding gender.[186] Notable texts of this kind are Ursula K. Le Guin's The Left Hand of Darkness (1969), Joanna Russ' The Female Man (1970), Octavia Butler's Kindred (1979) and Margaret Atwood's The Handmaid's Tale (1985).
Feminist nonfiction has played an important role in voicing concerns about women's lived experiences. For example, Maya Angelou's I Know Why the Caged Bird Sings was extremely influential, as it represented the specific racism and sexism experienced by black women growing up in the United States.[187]
In addition, many feminist movements have embraced poetry as a vehicle through which to communicate feminist ideas to public audiences through anthologies, poetry collections, and public readings.[188]
Moreover, historical pieces of writing by women have been used by feminists to speak about what women's lives would have been like in the past, while demonstrating the power that they held and the impact they had in their communities even centuries ago.[189] An important figure in the history of women in relation to literature is Hrothsvitha, a canoness from 935 to 973.[190] As the first female poet in the German lands and the first female historian, Hrothsvitha is one of the few people to speak about women's lives from a woman's perspective during the Middle Ages.[191]
Women's music (or womyn's music or wimmin's music) is the music by women, for women, and about women.[192] The genre emerged as a musical expression of the second-wave feminist movement[193] as well as the labour, civil rights, and peace movements.[194] The movement was started by lesbians such as Cris Williamson, Meg Christian, and Margie Adam, African-American women activists such as Bernice Johnson Reagon and her group Sweet Honey in the Rock, and peace activist Holly Near.[194] Women's music also refers to the wider industry of women's music that goes beyond the performing artists to include studio musicians, producers, sound engineers, technicians, cover artists, distributors, promoters, and festival organizers who are also women.[192]
Riot grrrl is an underground feminist hardcore punk movement described in the cultural movements section of this article.
Feminism became a principal concern of musicologists in the 1980s[195] as part of the New Musicology. Prior to this, in the 1970s, musicologists were beginning to discover women composers and performers, and had begun to review concepts of canon, genius, genre and periodization from a feminist perspective. In other words, the question of how women musicians fit into traditional music history was now being asked.[195] Through the 1980s and 1990s, this trend continued as musicologists like Susan McClary, Marcia Citron and Ruth Solie began to consider the cultural reasons for the marginalizing of women from the received body of work. Concepts such as music as gendered discourse; professionalism; reception of women's music; examination of the sites of music production; relative wealth and education of women; popular music studies in relation to women's identity; patriarchal ideas in music analysis; and notions of gender and difference are among the themes examined during this time.[195]
While the music industry has long been open to having women in performance or entertainment roles, women are much less likely to have positions of authority, such as being the leader of an orchestra.[196] In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process.[197]
Feminist cinema, advocating or illustrating feminist perspectives, arose largely with the development of feminist film theory in the late 1960s and early 1970s. Women were radicalized during the 1960s by political debate and sexual liberation, but the failure of radicalism to produce substantive change for women galvanized them to form consciousness-raising groups and to set about analysing, from different perspectives, dominant cinema's construction of women.[198] Differences were particularly marked between feminists on either side of the Atlantic. The first feminist film festivals were held in the U.S. and U.K. in 1972, the same year the first feminist film journal, Women and Film, appeared. Trailblazers from this period included Claire Johnston and Laura Mulvey, who also organized the Women's Event at the Edinburgh Film Festival.[199] Other theorists making a powerful impact on feminist film include Teresa de Lauretis, Anneke Smelik and Kaja Silverman. Approaches in philosophy and psychoanalysis fuelled feminist film criticism, feminist independent film and feminist distribution.
It has been argued that there are two distinct approaches to independent, theoretically inspired feminist filmmaking. 'Deconstruction' concerns itself with analysing and breaking down codes of mainstream cinema, aiming to create a different relationship between the spectator and dominant cinema. The second approach, a feminist counterculture, embodies feminine writing to investigate a specifically feminine cinematic language.[200] Some recent criticism[201] of "feminist film" approaches has centred around a Swedish film rating system based on the Bechdel test.
During the 1930s–1950s heyday of the big Hollywood studios, the status of women in the industry was abysmal.[202] Since then, female directors such as Sally Potter, Catherine Breillat, Claire Denis and Jane Campion have made art movies, and directors like Kathryn Bigelow and Patty Jenkins have had mainstream success. This progress stagnated in the 1990s, and men outnumber women five to one in behind-the-camera roles.[203][204]
Feminism had complex interactions with the major political movements of the twentieth century.
Since the late nineteenth century, some feminists have allied with socialism, whereas others have criticized socialist ideology for being insufficiently concerned about women's rights. August Bebel, an early activist of the German Social Democratic Party (SPD), published his work Die Frau und der Sozialismus, juxtaposing the struggle for equal rights between the sexes with social equality in general. In 1907 there was an International Conference of Socialist Women in Stuttgart where suffrage was described as a tool of class struggle. Clara Zetkin of the SPD called for women's suffrage to build a "socialist order, the only one that allows for a radical solution to the women's question".[205][206]
In Britain, the women's movement was allied with the Labour party. In the U.S., Betty Friedan emerged from a radical background to take leadership. Radical Women is the oldest socialist feminist organization in the U.S. and is still active.[207] During the Spanish Civil War, Dolores Ibárruri (La Pasionaria) led the Communist Party of Spain. Although she supported equal rights for women, she opposed women fighting on the front and clashed with the anarcha-feminist Mujeres Libres.[208]
Feminists in Ireland in the early 20th century included the revolutionary Irish Republican, suffragette and socialist Constance Markievicz, who in 1918 was the first woman elected to the British House of Commons. However, in line with Sinn Féin abstentionist policy, she would not take her seat in the House of Commons.[209] She was re-elected to the Second Dáil in the elections of 1921.[210] She was also a commander of the Irish Citizen Army, which was led by the socialist and self-described feminist Irish leader James Connolly during the 1916 Easter Rising.[211]
Fascism has been ascribed dubious stances on feminism by its practitioners and by women's groups. Amongst other demands concerning social reform presented in the Fascist manifesto in 1919 was expanding the suffrage to all Italian citizens aged 18 and above, including women (accomplished only in 1946, after the defeat of fascism), and eligibility for all to stand for office from age 25. This demand was particularly championed by special Fascist women's auxiliary groups such as the fasci femminili and was only partly realized in 1925, under pressure from dictator Benito Mussolini's more conservative coalition partners.[212][213]
Cyprian Blamires states that although feminists were among those who opposed the rise of Adolf Hitler, feminism has a complicated relationship with the Nazi movement as well. While Nazis glorified traditional notions of patriarchal society and its role for women, they claimed to recognize women's equality in employment.[214] However, Hitler and Mussolini declared themselves as opposed to feminism,[214] and after the rise of Nazism in Germany in 1933, there was a rapid dissolution of the political rights and economic opportunities that feminists had fought for during the pre-war period and to some extent during the 1920s.[206] Georges Duby et al. note that in practice fascist society was hierarchical and emphasized male virility, with women maintaining a largely subordinate position.[206] Blamires also notes that Neofascism has since the 1960s been hostile towards feminism and advocates that women accept "their traditional roles".[214]
The civil rights movement has influenced and informed the feminist movement and vice versa. Many Western feminists adapted the language and theories of black equality activism and drew parallels between women's rights and the rights of non-white people.[215] Despite the connections between the women's and civil rights movements, some tensions arose during the late 1960s and the 1970s as non-white women argued that feminism was predominantly white, straight, and middle class, and did not understand and was not concerned with issues of race and sexuality.[216] Similarly, some women argued that the civil rights movement had sexist and homophobic elements and did not adequately address minority women's concerns.[215][217][218] These criticisms created new feminist social theories about identity politics and the intersections of racism, classism, and sexism; they also generated new feminisms such as black feminism and Chicana feminism in addition to making large contributions to lesbian feminism and other integrations of queer of colour identity.[219][220][221]
Neoliberalism has been criticized by feminist theory for having a negative effect on the female workforce population across the globe, especially in the global south. Masculinist assumptions and objectives continue to dominate economic and geopolitical thinking.[222]:177 Women's experiences in non-industrialized countries reveal often deleterious effects of modernization policies and undercut orthodox claims that development benefits everyone.[222]:175
Proponents of neoliberalism have theorized that by increasing women's participation in the workforce, there will be heightened economic progress, but feminist critics have noted that this participation alone does not further equality in gender relations.[223]:186–98 Neoliberalism has failed to address significant problems such as the devaluation of feminized labour, the structural privileging of men and masculinity, and the politicization of women's subordination in the family and the workplace.[222]:176 The "feminization of employment" refers to a conceptual characterization of deteriorated and devalorized labour conditions that are less desirable, meaningful, safe and secure.[222]:179 Employers in the global south have perceptions about feminine labour and seek workers who are perceived to be undemanding, docile and willing to accept low wages.[222]:180 Social constructs about feminized labour have played a big part in this; for instance, employers often perpetuate ideas of women as "secondary income earners" to justify paying them lower wages and denying them training or promotion.[223]:189
The feminist movement has effected change in Western society, including women's suffrage; greater access to education; more equitable pay with men; the right to initiate divorce proceedings; the right of women to make individual decisions regarding pregnancy (including access to contraceptives and abortion); and the right to own property.[9]
From the 1960s on, the campaign for women's rights[224] was met with mixed results[225] in the U.S. and the U.K. Other countries of the EEC agreed to ensure that discriminatory laws would be phased out across the European Community.
Some feminist campaigning also helped reform attitudes to child sexual abuse. The view that young girls cause men to have sexual intercourse with them was replaced by the view that men, as adults, are responsible for their own conduct.[226]
In the U.S., the National Organization for Women (NOW) began in 1966 to seek women's equality, including through the Equal Rights Amendment (ERA),[227] which did not pass, although some states enacted their own. Reproductive rights in the U.S. centred on the court decision in Roe v. Wade enunciating a woman's right to choose whether to carry a pregnancy to term. Western women gained more reliable birth control, allowing family planning and careers. The movement started in the 1910s in the U.S. under Margaret Sanger and elsewhere under Marie Stopes. In the final three decades of the 20th century, Western women knew a new freedom through birth control, which enabled women to plan their adult lives, often making way for both career and family.[228]
The division of labour within households was affected by the increased entry of women into workplaces in the 20th century. Sociologist Arlie Russell Hochschild found that, in two-career couples, men and women, on average, spend about equal amounts of time working, but women still spend more time on housework,[229][230] although Cathy Young responded by arguing that women may prevent equal participation by men in housework and parenting.[231] Judith K. Brown writes, "Women are most likely to make a substantial contribution when subsistence activities have the following characteristics: the participant is not obliged to be far from home; the tasks are relatively monotonous and do not require rapt concentration and the work is not dangerous, can be performed in spite of interruptions, and is easily resumed once interrupted."[232]
In international law, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) is an international convention adopted by the United Nations General Assembly and described as an international bill of rights for women. It came into force in those nations ratifying it.[233]
Feminist jurisprudence is a branch of jurisprudence that examines the relationship between women and law. It addresses questions about the history of legal and social biases against women and about the enhancement of their legal rights.[234]
Feminist jurisprudence signifies a reaction to the philosophical approach of modern legal scholars, who typically see the law as a process for interpreting and perpetuating a society's universal, gender-neutral ideals. Feminist legal scholars claim that this fails to acknowledge women's values or legal interests or the harms that they may anticipate or experience.[235]
Proponents of gender-neutral language argue that the use of gender-specific language often implies male superiority or reflects an unequal state of society.[236] According to The Handbook of English Linguistics, generic masculine pronouns and gender-specific job titles are instances "where English linguistic convention has historically treated men as prototypical of the human species."[237]
Merriam-Webster chose "feminism" as its 2017 Word of the Year, noting that "Word of the Year is a quantitative measure of interest in a particular word."[238]
Feminist theology is a movement that reconsiders the traditions, practices, scriptures, and theologies of religions from a feminist perspective. Some of the goals of feminist theology include increasing the role of women among the clergy and religious authorities, reinterpreting male-dominated imagery and language about God, determining women's place in relation to career and motherhood, and studying images of women in the religion's sacred texts.[239]
Christian feminism is a branch of feminist theology which seeks to interpret and understand Christianity in light of the equality of women and men, holding that this interpretation is necessary for a complete understanding of Christianity. While there is no standard set of beliefs among Christian feminists, most agree that God does not discriminate on the basis of sex, and are involved in issues such as the ordination of women, male dominance and the balance of parenting in Christian marriage, claims of moral deficiency and inferiority of women compared to men, and the overall treatment of women in the church.[240][241]
Islamic feminists advocate women's rights, gender equality, and social justice grounded within an Islamic framework. Advocates seek to highlight the deeply rooted teachings of equality in the Quran and encourage a questioning of the patriarchal interpretation of Islamic teaching through the Quran, hadith (sayings of Muhammad), and sharia (law) towards the creation of a more equal and just society.[242] Although rooted in Islam, the movement's pioneers have also utilized secular and Western feminist discourses and recognize the role of Islamic feminism as part of an integrated global feminist movement.[243]
Buddhist feminism is a movement that seeks to improve the religious, legal, and social status of women within Buddhism. It is an aspect of feminist theology which seeks to advance and understand the equality of men and women morally, socially, spiritually, and in leadership from a Buddhist perspective. The Buddhist feminist Rita Gross describes Buddhist feminism as "the radical practice of the co-humanity of women and men."[244]
Jewish feminism is a movement that seeks to improve the religious, legal, and social status of women within Judaism and to open up new opportunities for religious experience and leadership for Jewish women. The main issues for early Jewish feminists were the exclusion from the all-male prayer group or minyan, the exemption from positive time-bound mitzvot, and women's inability to function as witnesses and to initiate divorce.[245] Many Jewish women have become leaders of feminist movements throughout their history.[246]
Dianic Wicca is a feminist-centred thealogy.[247]
Secular or atheist feminists have engaged in feminist criticism of religion, arguing that many religions have oppressive rules towards women and misogynistic themes and elements in religious texts.[248][249][250]
Patriarchy is a social system in which society is organized around male authority figures. In this system, fathers have authority over women, children, and property. It implies the institutions of male rule and privilege and is dependent on female subordination.[251] Most forms of feminism characterize patriarchy as an unjust social system that is oppressive to women. Carole Pateman argues that the patriarchal distinction "between masculinity and femininity is the political difference between freedom and subjection."[252] In feminist theory the concept of patriarchy often includes all the social mechanisms that reproduce and exert male dominance over women. Feminist theory typically characterizes patriarchy as a social construction, which can be overcome by revealing and critically analyzing its manifestations.[253] Some radical feminists have proposed that because patriarchy is too deeply rooted in society, separatism is the only viable solution.[254] Other feminists have criticized these views as being anti-men.[255][256][257]
Feminist theory has explored the social construction of masculinity and its implications for the goal of gender equality. The social construct of masculinity is seen by feminism as problematic because it associates males with aggression and competition, and reinforces patriarchal and unequal gender relations.[78][258] Patriarchal cultures are criticized for "limiting forms of masculinity" available to men and thus narrowing their life choices.[259] Some feminists are engaged with men's issues activism, such as bringing attention to male rape and spousal battery and addressing negative social expectations for men.[260][261][262]
Male participation in feminism is generally encouraged by feminists and is seen as an important strategy for achieving full societal commitment to gender equality.[10][263][264] Many male feminists and pro-feminists are active in women's rights activism, feminist theory, and masculinity studies. However, some argue that while male engagement with feminism is necessary, it is problematic because of the ingrained social influences of patriarchy in gender relations.[265] The consensus today in feminist and masculinity theories is that men and women should cooperate to achieve the larger goals of feminism.[259] It has been proposed that, in large part, this can be achieved through considerations of women's agency.[266]
Different groups of people have responded to feminism, and both men and women have been among its supporters and critics. Among American university students, for both men and women, support for feminist ideas is more common than self-identification as a feminist.[267][268][269] The US media tends to portray feminism negatively and feminists "are less often associated with day-to-day work/leisure activities of regular women."[270][271] However, as recent research has demonstrated, as people are exposed to self-identified feminists and to discussions relating to various forms of feminism, their own self-identification with feminism increases.[272]
Pro-feminism is the support of feminism without implying that the supporter is a member of the feminist movement. The term is most often used in reference to men who are actively supportive of feminism. The activities of pro-feminist men's groups include anti-violence work with boys and young men in schools, offering sexual harassment workshops in workplaces, running community education campaigns, and counselling male perpetrators of violence. Pro-feminist men also may be involved in men's health, activism against pornography including anti-pornography legislation, men's studies, and the development of gender equity curricula in schools. This work is sometimes in collaboration with feminists and women's services, such as domestic violence and rape crisis centres.[273][274]
Anti-feminism is opposition to feminism in some or all of its forms.[275]
In the nineteenth century, anti-feminism was mainly focused on opposition to women's suffrage. Later, opponents of women's entry into institutions of higher learning argued that education was too great a physical burden on women. Other anti-feminists opposed women's entry into the labour force, or their right to join unions, to sit on juries, or to obtain birth control and control of their sexuality.[276]
Some people have opposed feminism on the grounds that they believe it is contrary to traditional values or religious beliefs. These anti-feminists argue, for example, that social acceptance of divorce and non-married women is wrong and harmful, and that men and women are fundamentally different and thus their different traditional roles in society should be maintained.[277][278][279] Other anti-feminists oppose women's entry into the workforce, political office, and the voting process, as well as the lessening of male authority in families.[280][281]
Writers such as Camille Paglia, Christina Hoff Sommers, Jean Bethke Elshtain, Elizabeth Fox-Genovese, Lisa Lucile Owens[282] and Daphne Patai oppose some forms of feminism, though they identify as feminists. They argue, for example, that feminism often promotes misandry and the elevation of women's interests above men's, and criticize radical feminist positions as harmful to both men and women.[283] Daphne Patai and Noretta Koertge argue that the term "anti-feminist" is used to silence academic debate about feminism.[284][285] Lisa Lucile Owens argues that certain rights extended exclusively to women are patriarchal because they relieve women from exercising a crucial aspect of their moral agency.[266]
Secular humanism is an ethical framework that attempts to dispense with any unreasoned dogma, pseudoscience, and superstition. Critics of feminism sometimes ask "Why feminism and not humanism?". Some humanists argue, however, that the goals of feminists and humanists largely overlap, and the distinction is only in motivation. For example, a humanist may consider abortion in terms of a utilitarian ethical framework, rather than considering the motivation of any particular woman in getting an abortion. In this respect, it is possible to be a humanist without being a feminist, but this does not preclude the existence of feminist humanism.[286][287] Humanism played a significant role in protofeminism during the Renaissance, in that humanists made the educated woman a popular figure despite the challenge this posed to the patriarchal organization of society.[288]
For Isla Vista killings, see Bennett, Jessica (10 September 2014). "Behold the Power of #Hashtag Feminism". Time.
en/1961.html.txt
ADDED
@@ -0,0 +1,224 @@
Feminism is a range of social movements, political movements, and ideologies that aim to define, establish, and achieve the political, economic, personal, and social equality of the sexes.[a][2][3][4][5] Feminism incorporates the position that societies prioritize the male point of view, and that women are treated unjustly within those societies.[6] Efforts to change that include fighting against gender stereotypes and establishing educational, professional, and interpersonal opportunities and outcomes for women that are equal to those for men.
Feminist movements have campaigned and continue to campaign for women's rights, including the right to: vote, hold public office, work, earn equal pay, own property, receive education, enter contracts, have equal rights within marriage, and maternity leave. Feminists have also worked to ensure access to legal abortions and social integration and to protect women and girls from rape, sexual harassment, and domestic violence.[7] Changes in dress and acceptable physical activity have often been part of feminist movements.[8]
Some scholars consider feminist campaigns to be a main force behind major historical societal changes for women's rights, particularly in the West, where they are near-universally credited with achieving women's suffrage, gender-neutral language, reproductive rights for women (including access to contraceptives and abortion), and the right to enter into contracts and own property.[9] Although feminist advocacy is, and has been, mainly focused on women's rights, some feminists argue for the inclusion of men's liberation within its aims, because they believe that men are also harmed by traditional gender roles.[10]
Feminist theory, which emerged from feminist movements, aims to understand the nature of gender inequality by examining women's social roles and lived experience; it has developed theories in a variety of disciplines in order to respond to issues concerning gender.[11][12]
Numerous feminist movements and ideologies have developed over the years and represent different viewpoints and aims. Some forms of feminism have been criticized for taking into account only white, middle class, or college-educated perspectives. This criticism led to the creation of ethnically specific or multicultural forms of feminism, such as black feminism and intersectional feminism.[13]
Charles Fourier, a utopian socialist and French philosopher, is credited with having coined the word "féminisme" in 1837.[14] The words "féminisme" ("feminism") and "féministe" ("feminist") first appeared in France and the Netherlands in 1872,[15] Great Britain in the 1890s, and the United States in 1910.[16][17] The Oxford English Dictionary lists 1852 as the year of the first appearance of "feminist"[18] and 1895 for "feminism".[19] Depending on the historical moment, culture and country, feminists around the world have had different causes and goals. Most western feminist historians contend that all movements working to obtain women's rights should be considered feminist movements, even when they did not (or do not) apply the term to themselves.[20][21][22][23][24][25] Other historians assert that the term should be limited to the modern feminist movement and its descendants. Those historians use the label "protofeminist" to describe earlier movements.[26]
The history of the modern western feminist movement is divided into four "waves".[27][28][29] The first comprised women's suffrage movements of the 19th and early-20th centuries, promoting women's right to vote. The second wave, the women's liberation movement, began in the 1960s and campaigned for legal and social equality for women. In or around 1992, a third wave was identified, characterized by a focus on individuality and diversity.[30] The fourth wave, from around 2012, used social media to combat sexual harassment, violence against women and rape culture; it is best known for the Me Too movement.[31]
First-wave feminism was a period of activity during the 19th and early-20th centuries. In the UK and US, it focused on the promotion of equal contract, marriage, parenting, and property rights for women. New legislation included the Custody of Infants Act 1839 in the UK, which introduced the tender years doctrine for child custody and gave women the right of custody of their children for the first time.[32][33][34] Other legislation, such as the Married Women's Property Act 1870 in the UK, extended in the 1882 Act,[35] became models for similar legislation in other British territories. Victoria passed legislation in 1884 and New South Wales in 1889; the remaining Australian colonies passed similar legislation between 1890 and 1897. By the end of the 19th century, activism focused primarily on gaining political power, particularly the right of women's suffrage, though some feminists were active in campaigning for women's sexual, reproductive, and economic rights as well.[36]
Women's suffrage (the right to vote and stand for parliamentary office) began in Britain's Australasian colonies at the close of the 19th century, with the self-governing colony of New Zealand granting women the right to vote in 1893; South Australia followed suit in 1895. Australia granted female suffrage in 1902.[37][38]
In Britain, the suffragettes and suffragists campaigned for the women's vote, and in 1918 the Representation of the People Act was passed granting the vote to women over the age of 30 who owned property. In 1928 this was extended to all women over 21.[39] Emmeline Pankhurst was the most notable activist in England. Time named her one of the 100 Most Important People of the 20th Century, stating: "she shaped an idea of women for our time; she shook society into a new pattern from which there could be no going back."[40] In the US, notable leaders of this movement included Lucretia Mott, Elizabeth Cady Stanton, and Susan B. Anthony, who each campaigned for the abolition of slavery before championing women's right to vote. These women were influenced by the Quaker theology of spiritual equality, which asserts that men and women are equal under God.[41] In the US, first-wave feminism is considered to have ended with the passage of the Nineteenth Amendment to the United States Constitution (1919), granting women the right to vote in all states. The term first wave was coined retroactively when the term second-wave feminism came into use.[36][42][43][44][45]
During the late Qing period and reform movements such as the Hundred Days' Reform, Chinese feminists called for women's liberation from traditional roles and Neo-Confucian gender segregation.[46][47][48] Later, the Chinese Communist Party created projects aimed at integrating women into the workforce, and claimed that the revolution had successfully achieved women's liberation.[49]
According to Nawar al-Hassan Golley, Arab feminism was closely connected with Arab nationalism. In 1899, Qasim Amin, considered the "father" of Arab feminism, wrote The Liberation of Women, which argued for legal and social reforms for women.[50] He drew links between women's position in Egyptian society and nationalism, leading to the development of Cairo University and the National Movement.[51] In 1923 Hoda Shaarawi founded the Egyptian Feminist Union, became its president and a symbol of the Arab women's rights movement.[51]
The Iranian Constitutional Revolution in 1905 triggered the Iranian women's movement, which aimed to achieve women's equality in education, marriage, careers, and legal rights.[52] However, during the Iranian revolution of 1979, many of the rights that women had gained from the women's movement were systematically abolished, such as the Family Protection Law.[53]
In France, women obtained the right to vote only with the Provisional Government of the French Republic of 21 April 1944. The Consultative Assembly of Algiers of 1944 proposed on 24 March 1944 to grant eligibility to women, but following an amendment by Fernand Grenier, they were given full citizenship, including the right to vote. Grenier's proposition was adopted 51 to 16. In May 1947, following the November 1946 elections, the sociologist Robert Verdier minimized the "gender gap", stating in Le Populaire that women had not voted in a consistent way, dividing themselves, like men, according to social classes. During the baby boom period, feminism waned in importance. Wars (both World War I and World War II) had seen the provisional emancipation of some women, but post-war periods signalled the return to conservative roles.[54]
By the mid-20th century, women still lacked significant rights. In Switzerland, women gained the right to vote in federal elections in 1971;[55] but in the canton of Appenzell Innerrhoden women obtained the right to vote on local issues only in 1991, when the canton was forced to do so by the Federal Supreme Court of Switzerland.[56] In Liechtenstein, women were given the right to vote by the women's suffrage referendum of 1984. Three prior referendums held in 1968, 1971 and 1973 had failed to secure women's right to vote.
Feminists continued to campaign for the reform of family laws which gave husbands control over their wives. Although by the 20th century coverture had been abolished in the UK and US, in many continental European countries married women still had very few rights. For instance, in France, married women did not receive the right to work without their husband's permission until 1965.[57][58] Feminists have also worked to abolish the "marital exemption" in rape laws which precluded the prosecution of husbands for the rape of their wives.[59] Earlier efforts by first-wave feminists such as Voltairine de Cleyre, Victoria Woodhull and Elizabeth Clarke Wolstenholme Elmy to criminalize marital rape in the late 19th century had failed;[60][61] this was only achieved a century later in most Western countries, but is still not achieved in many other parts of the world.[62]
French philosopher Simone de Beauvoir provided a Marxist solution and an existentialist view on many of the questions of feminism with the publication of Le Deuxième Sexe (The Second Sex) in 1949.[63] The book expressed feminists' sense of injustice. Second-wave feminism is a feminist movement beginning in the early 1960s[64] and continuing to the present; as such, it coexists with third-wave feminism. Second-wave feminism is largely concerned with issues of equality beyond suffrage, such as ending gender discrimination.[36]
Second-wave feminists see women's cultural and political inequalities as inextricably linked and encourage women to understand aspects of their personal lives as deeply politicized and as reflecting sexist power structures. The feminist activist and author Carol Hanisch coined the slogan "The Personal is Political", which became synonymous with the second wave.[7][65]
Second- and third-wave feminism in China has been characterized by a reexamination of women's roles during the communist revolution and other reform movements, and new discussions about whether women's equality has actually been fully achieved.[49]
In 1956, President Gamal Abdel Nasser of Egypt initiated "state feminism", which outlawed discrimination based on gender and granted women's suffrage, but also blocked political activism by feminist leaders.[66] During Sadat's presidency, his wife, Jehan Sadat, publicly advocated further women's rights, though Egyptian policy and society began to move away from women's equality with the new Islamist movement and growing conservatism.[67] However, some activists proposed a new feminist movement, Islamic feminism, which argues for women's equality within an Islamic framework.[68]
In Latin America, revolutions brought changes in women's status in countries such as Nicaragua, where feminist ideology during the Sandinista Revolution aided women's quality of life but fell short of achieving a social and ideological change.[69]
In 1963, Betty Friedan's book The Feminine Mystique helped voice the discontent that American women felt. The book is widely credited with sparking the beginning of second-wave feminism in the United States.[70] Within ten years, women made up over half the First World workforce.[71]
Third-wave feminism is traced to the emergence of the Riot grrrl feminist punk subculture in Olympia, Washington, in the early 1990s,[72][73] and to Anita Hill's televised testimony in 1991—to an all-male, all-white Senate Judiciary Committee—that Clarence Thomas, nominated for the Supreme Court of the United States, had sexually harassed her. The term third wave is credited to Rebecca Walker, who responded to Thomas's appointment to the Supreme Court with an article in Ms. magazine, "Becoming the Third Wave" (1992).[74][75] She wrote:
So I write this as a plea to all women, especially women of my generation: Let Thomas’ confirmation serve to remind you, as it did me, that the fight is far from over. Let this dismissal of a woman's experience move you to anger. Turn that outrage into political power. Do not vote for them unless they work for us. Do not have sex with them, do not break bread with them, do not nurture them if they don't prioritize our freedom to control our bodies and our lives. I am not a post-feminism feminist. I am the Third Wave.[74]
Third-wave feminism also sought to challenge or avoid what it deemed the second wave's essentialist definitions of femininity, which, third-wave feminists argued, over-emphasized the experiences of upper middle-class white women. Third-wave feminists often focused on "micro-politics" and challenged the second wave's paradigm as to what was, or was not, good for women, and tended to use a post-structuralist interpretation of gender and sexuality.[36][76][77][78] Feminist leaders rooted in the second wave, such as Gloria Anzaldúa, bell hooks, Chela Sandoval, Cherríe Moraga, Audre Lorde, Maxine Hong Kingston, and many other non-white feminists, sought to negotiate a space within feminist thought for consideration of race-related subjectivities.[77][79][80] Third-wave feminism also contained internal debates between difference feminists, who believe that there are important psychological differences between the sexes, and those who believe that there are no inherent psychological differences between the sexes and contend that gender roles are due to social conditioning.[81]
Standpoint theory is a feminist theoretical point of view stating that a person's social position influences their knowledge. This perspective argues that research and theory treat women and the feminist movement as insignificant and refuses to see traditional science as unbiased.[82] Since the 1980s, standpoint feminists have argued that the feminist movement should address global issues (such as rape, incest, and prostitution) and culturally specific issues (such as female genital mutilation in some parts of Africa and Arab societies, as well as glass ceiling practices that impede women's advancement in developed economies) in order to understand how gender inequality interacts with racism, homophobia, classism and colonization in a "matrix of domination".[83][84]
Fourth-wave feminism refers to a resurgence of interest in feminism that began around 2012 and is associated with the use of social media.[85] According to feminist scholar Prudence Chamberlain, the focus of the fourth wave is justice for women and opposition to sexual harassment and violence against women. Its essence, she writes, is "incredulity that certain attitudes can still exist".[86]
Fourth-wave feminism is "defined by technology", according to Kira Cochrane, and is characterized particularly by the use of Facebook, Twitter, Instagram, YouTube, Tumblr, and blogs such as Feministing to challenge misogyny and further gender equality.[85][87][88]
Issues that fourth-wave feminists focus on include street and workplace harassment, campus sexual assault and rape culture. Scandals involving the harassment, abuse, and murder of women and girls have galvanized the movement. These have included the 2012 Delhi gang rape, 2012 Jimmy Savile allegations, the Bill Cosby allegations, 2014 Isla Vista killings, 2016 trial of Jian Ghomeshi, 2017 Harvey Weinstein allegations and subsequent Weinstein effect, and the 2017 Westminster sexual scandals.[89]
Examples of fourth-wave feminist campaigns include the Everyday Sexism Project, No More Page 3, Stop Bild Sexism, Mattress Performance, 10 Hours of Walking in NYC as a Woman, #YesAllWomen, Free the Nipple, One Billion Rising, the 2017 Women's March, the 2018 Women's March, and the #MeToo movement. In December 2017, Time magazine chose several prominent female activists involved in the #MeToo movement, dubbed "the silence breakers", as Person of the Year.[90][91]
The term postfeminism is used to describe a range of viewpoints reacting to feminism since the 1980s. While not "anti-feminist", postfeminists believe that women have achieved second-wave goals, while being critical of third- and fourth-wave feminist goals. The term was first used to describe a backlash against second-wave feminism, but it is now a label for a wide range of theories that take critical approaches to previous feminist discourses and includes challenges to the second wave's ideas.[92] Other postfeminists say that feminism is no longer relevant to today's society.[93] Amelia Jones has written that the postfeminist texts which emerged in the 1980s and 1990s portrayed second-wave feminism as a monolithic entity.[94] Dorothy Chunn notes a "blaming narrative" under the postfeminist moniker, where feminists are undermined for continuing to make demands for gender equality in a "post-feminist" society, where "gender equality has (already) been achieved." According to Chunn, "many feminists have voiced disquiet about the ways in which rights and equality discourses are now used against them."[95]
Feminist theory is the extension of feminism into theoretical or philosophical fields. It encompasses work in a variety of disciplines, including anthropology, sociology, economics, women's studies, literary criticism,[96][97] art history,[98] psychoanalysis[99] and philosophy.[100][101] Feminist theory aims to understand gender inequality and focuses on gender politics, power relations, and sexuality. While providing a critique of these social and political relations, much of feminist theory also focuses on the promotion of women's rights and interests. Themes explored in feminist theory include discrimination, stereotyping, objectification (especially sexual objectification), oppression, and patriarchy.[11][12]
In the field of literary criticism, Elaine Showalter describes the development of feminist theory as having three phases. The first she calls "feminist critique", in which the feminist reader examines the ideologies behind literary phenomena. The second Showalter calls "gynocriticism", in which the "woman is producer of textual meaning". The last phase she calls "gender theory", in which the "ideological inscription and the literary effects of the sex/gender system are explored".[102]
This was paralleled in the 1970s by French feminists, who developed the concept of écriture féminine (which translates as 'female or feminine writing').[92] Helene Cixous argues that writing and philosophy are phallocentric and, along with other French feminists such as Luce Irigaray, emphasizes "writing from the body" as a subversive exercise.[92] The work of Julia Kristeva, a feminist psychoanalyst and philosopher, and Bracha Ettinger,[103] artist and psychoanalyst, has influenced feminist theory in general and feminist literary criticism in particular. However, as the scholar Elizabeth Wright points out, "none of these French feminists align themselves with the feminist movement as it appeared in the Anglophone world".[92][104] More recent feminist theory, such as that of Lisa Lucile Owens,[105] has concentrated on characterizing feminism as a universal emancipatory movement.
Many overlapping feminist movements and ideologies have developed over the years.
Some branches of feminism closely track the political leanings of the larger society, such as liberalism and conservatism, or focus on the environment. Liberal feminism seeks individualistic equality of men and women through political and legal reform without altering the structure of society. Catherine Rottenberg has argued that the neoliberal shift in liberal feminism has led to that form of feminism being individualized rather than collectivized, and becoming detached from social inequality.[106] Because of this, she argues, liberal feminism cannot offer any sustained analysis of the structures of male dominance, power, or privilege.[106]
Radical feminism considers the male-controlled capitalist hierarchy as the defining feature of women's oppression and the total uprooting and reconstruction of society as necessary.[7] Conservative feminism is conservative relative to the society in which it resides. Libertarian feminism conceives of people as self-owners and therefore as entitled to freedom from coercive interference.[107] Separatist feminism does not support heterosexual relationships. Lesbian feminism is thus closely related. Other feminists criticize separatist feminism as sexist.[10] Ecofeminists see men's control of land as responsible for the oppression of women and destruction of the natural environment; ecofeminism has been criticized for focusing too much on a mystical connection between women and nature.[108]
Rosemary Hennessy and Chrys Ingraham say that materialist forms of feminism grew out of Western Marxist thought and have inspired a number of different (but overlapping) movements, all of which are involved in a critique of capitalism and are focused on ideology's relationship to women.[109] Marxist feminism argues that capitalism is the root cause of women's oppression, and that discrimination against women in domestic life and employment is an effect of capitalist ideologies.[110] Socialist feminism distinguishes itself from Marxist feminism by arguing that women's liberation can only be achieved by working to end both the economic and cultural sources of women's oppression.[111] Anarcha-feminists believe that class struggle and anarchy against the state[112] require struggling against patriarchy, which comes from involuntary hierarchy.
Sara Ahmed argues that Black and Postcolonial feminisms pose a challenge "to some of the organizing premises of Western feminist thought."[113] During much of its history, feminist movements and theoretical developments were led predominantly by middle-class white women from Western Europe and North America.[79][83][114] However, women of other races have proposed alternative feminisms.[83] This trend accelerated in the 1960s with the civil rights movement in the United States and the collapse of European colonialism in Africa, the Caribbean, parts of Latin America, and Southeast Asia. Since that time, women in developing nations and former colonies and who are of colour or various ethnicities or living in poverty have proposed additional feminisms.[114] Womanism[115][116] emerged after early feminist movements were largely white and middle-class.[79] Postcolonial feminists argue that colonial oppression and Western feminism marginalized postcolonial women but did not turn them passive or voiceless.[13] Third-world feminism and Indigenous feminism are closely related to postcolonial feminism.[114] These ideas also correspond with ideas in African feminism, motherism,[117] Stiwanism,[118] negofeminism,[119] femalism, transnational feminism, and Africana womanism.[120]
In the late twentieth century various feminists began to argue that gender roles are socially constructed,[121][122] and that it is impossible to generalize women's experiences across cultures and histories.[123] Post-structural feminism draws on the philosophies of post-structuralism and deconstruction in order to argue that the concept of gender is created socially and culturally through discourse.[124] Postmodern feminists also emphasize the social construction of gender and the discursive nature of reality;[121] however, as Pamela Abbott et al. note, a postmodern approach to feminism highlights "the existence of multiple truths (rather than simply men and women's standpoints)".[125]
Feminist views on transgender people differ. Some feminists do not view trans women as women,[126][127] believing that they have male privilege due to their sex assignment at birth.[128] Additionally, some feminists reject the concept of transgender identity due to views that all behavioral differences between genders are a result of socialization.[129] In contrast, other feminists and transfeminists believe that the liberation of trans women is a necessary part of feminist goals.[130] Third-wave feminists are overall more supportive of trans rights.[131][132] A key concept in transfeminism is transmisogyny,[133] which is the irrational fear of, aversion to, or discrimination against transgender women or feminine gender-nonconforming people.[134][135]
Riot grrrls took an anti-corporate stance of self-sufficiency and self-reliance.[136] Riot grrrl's emphasis on universal female identity and separatism often appears more closely allied with second-wave feminism than with the third wave.[137] The movement encouraged and made "adolescent girls' standpoints central", allowing them to express themselves fully.[138] Lipstick feminism is a cultural feminist movement that attempts to respond to the backlash of second-wave radical feminism of the 1960s and 1970s by reclaiming symbols of "feminine" identity such as make-up, suggestive clothing and having a sexual allure as valid and empowering personal choices.[139][140]
According to a 2014 Ipsos poll covering 15 developed countries, 53 percent of respondents identified as feminists, and 87 percent agreed that "women should be treated equally to men in all areas based on their competency, not their gender". However, only 55 percent of women agreed that they have "full equality with men and the freedom to reach their full dreams and aspirations".[141] Taken together, these studies reflect the importance of differentiating between claiming a "feminist identity" and holding "feminist attitudes or beliefs".[142]
According to a 2015 poll, 18 percent of Americans consider themselves feminists, while 85 percent reported they believe in "equality for women". Despite the popular belief in equal rights, 52 percent did not identify as feminist, 26 percent were unsure, and four percent provided no response.[143]
Sociological research shows that, in the US, increased educational attainment is associated with greater support for feminist issues. In addition, politically liberal people are more likely to support feminist ideals compared to those who are conservative.[144][145]
According to numerous polls, 7% of Britons consider themselves feminists, with 83% saying they support equality of opportunity for women – this included even higher support from men (86%) than women (81%).[146][147]
Feminist views on sexuality vary and have differed by historical period and by cultural context. Feminist attitudes to female sexuality have taken a few different directions. Matters such as the sex industry, sexual representation in the media, and issues regarding consent to sex under conditions of male dominance have been particularly controversial among feminists. This debate culminated in the late 1970s and the 1980s, in what came to be known as the feminist sex wars, which pitted anti-pornography feminism against sex-positive feminism; parts of the feminist movement were deeply divided by these debates.[148][149][150][151][152] Feminists have taken a variety of positions on different aspects of the sexual revolution of the 1960s and 70s. Over the course of the 1970s, a large number of influential women accepted lesbian and bisexual women as part of feminism.[153]
Opinions on the sex industry are diverse. Feminists critical of the sex industry generally see it as the exploitative result of patriarchal social structures which reinforce sexual and cultural attitudes complicit in rape and sexual harassment. Alternately, feminists who support at least part of the sex industry argue that it can be a medium of feminist expression and a means for women to take control of their sexuality. For the views of feminism on male prostitutes see the article on male prostitution.
Feminist views of pornography range from condemnation of pornography as a form of violence against women, to an embracing of some forms of pornography as a medium of feminist expression.[148][149][150][151][152] Similarly, feminists' views on prostitution vary, ranging from critical to supportive.[154]
For feminists, a woman's right to control her own sexuality is a key issue. Feminists such as Catharine MacKinnon argue that women have very little control over their own bodies, with female sexuality being largely controlled and defined by men in patriarchal societies. Feminists argue that sexual violence committed by men is often rooted in ideologies of male sexual entitlement and that these systems grant women very few legitimate options to refuse sexual advances.[155][156] Feminists argue that all cultures are, in one way or another, dominated by ideologies that largely deny women the right to decide how to express their sexuality, because men under patriarchy feel entitled to define sex on their own terms. This entitlement can take different forms, depending on the culture. In conservative and religious cultures marriage is regarded as an institution which requires a wife to be sexually available at all times, virtually without limit; thus, forcing or coercing sex on a wife is not considered a crime or even an abusive behaviour.[157][158] In more liberal cultures, this entitlement takes the form of a general sexualization of the whole culture. This is played out in the sexual objectification of women, with pornography and other forms of sexual entertainment creating the fantasy that all women exist solely for men's sexual pleasure and that women are readily available and desiring to engage in sex at any time, with any man, on a man's terms.[159]
Sandra Harding says that the "moral and political insights of the women's movement have inspired social scientists and biologists to raise critical questions about the ways traditional researchers have explained gender, sex and relations within and between the social and natural worlds."[160] Some feminists, such as Ruth Hubbard and Evelyn Fox Keller, criticize traditional scientific discourse as being historically biased towards a male perspective.[161] A part of the feminist research agenda is the examination of the ways in which power inequities are created or reinforced in scientific and academic institutions.[162] Physicist Lisa Randall, appointed to a task force at Harvard by then-president Lawrence Summers after his controversial discussion of why women may be underrepresented in science and engineering, said, "I just want to see a whole bunch more women enter the field so these issues don't have to come up anymore."[163]
Lynn Hankinson Nelson notes that feminist empiricists find fundamental differences between the experiences of men and women. Thus, they seek to obtain knowledge through the examination of the experiences of women and to "uncover the consequences of omitting, misdescribing, or devaluing them" to account for a range of human experience.[164] Another part of the feminist research agenda is the uncovering of ways in which power inequities are created or reinforced in society and in scientific and academic institutions.[162] Furthermore, despite calls for greater attention to be paid to structures of gender inequity in the academic literature, structural analyses of gender bias rarely appear in highly cited psychological journals, especially in the commonly studied areas of psychology and personality.[165]
One criticism of feminist epistemology is that it allows social and political values to influence its findings.[166] Susan Haack also points out that feminist epistemology reinforces traditional stereotypes about women's thinking (as intuitive and emotional, etc.); Meera Nanda further cautions that this may in fact trap women within "traditional gender roles and help justify patriarchy".[167]
Modern feminism challenges the essentialist view of gender as biologically intrinsic.[168][169] For example, Anne Fausto-Sterling's book, Myths of Gender, explores the assumptions embodied in scientific research that support a biologically essentialist view of gender.[170] In Delusions of Gender, Cordelia Fine disputes scientific evidence that suggests that there is an innate biological difference between men's and women's minds, asserting instead that cultural and societal beliefs are the reason for differences between individuals that are commonly perceived as sex differences.[171]
Feminism in psychology emerged as a critique of the dominant male outlook on psychological research, in which only male perspectives were studied using all-male subjects. As women earned doctorates in psychology, women and their concerns were introduced as legitimate topics of study. Feminist psychology emphasizes social context, lived experience, and qualitative analysis.[172] Projects such as Psychology's Feminist Voices have emerged to catalogue the influence of feminist psychologists on the discipline.[173]
Gender-based inquiries into and conceptualization of architecture have also come about, leading to feminism in modern architecture. Piyush Mathur coined the term "archigenderic". Claiming that "architectural planning has an inextricable link with the defining and regulation of gender roles, responsibilities, rights, and limitations", Mathur came up with that term "to explore ... the meaning of 'architecture' in terms of gender" and "to explore the meaning of 'gender' in terms of architecture".[174]
Feminist activists have established a range of feminist businesses, including women's bookstores, feminist credit unions, feminist presses, feminist mail-order catalogs, and feminist restaurants. These businesses flourished as part of the second and third waves of feminism in the 1970s, 1980s, and 1990s.[175][176]
Corresponding with general developments within feminism, and often including such self-organizing tactics as the consciousness-raising group, the feminist art movement began in the 1960s and flourished throughout the 1970s.[177] Jeremy Strick, director of the Museum of Contemporary Art in Los Angeles, described the feminist art movement as "the most influential international movement of any during the postwar period", and Peggy Phelan says that it "brought about the most far-reaching transformations in both artmaking and art writing over the past four decades".[177] Feminist artist Judy Chicago, who created The Dinner Party, a set of vulva-themed ceramic plates in the 1970s, said in 2009 to ARTnews, "There is still an institutional lag and an insistence on a male Eurocentric narrative. We are trying to change the future: to get girls and boys to realize that women's art is not an exception—it's a normal part of art history."[178] A feminist approach to the visual arts has most recently developed through Cyberfeminism and the posthuman turn, giving voice to the ways "contemporary female artists are dealing with gender, social media and the notion of embodiment".[179]
The feminist movement produced feminist fiction, feminist non-fiction, and feminist poetry, which created new interest in women's writing. It also prompted a general reevaluation of women's historical and academic contributions in response to the belief that women's lives and contributions have been underrepresented as areas of scholarly interest.[180] There has also been a close link between feminist literature and activism, with feminist writing typically voicing key concerns or ideas of feminism in a particular era.
Much of the early period of feminist literary scholarship was given over to the rediscovery and reclamation of texts written by women. In Western feminist literary scholarship, studies like Dale Spender's Mothers of the Novel (1986) and Jane Spencer's The Rise of the Woman Novelist (1986) were ground-breaking in their insistence that women have always been writing.
Commensurate with this growth in scholarly interest, various presses began the task of reissuing long-out-of-print texts. Virago Press began to publish its large list of 19th and early-20th-century novels in 1975 and became one of the first commercial presses to join in the project of reclamation. In the 1980s Pandora Press, responsible for publishing Spender's study, issued a companion line of 18th-century novels written by women.[181] More recently, Broadview Press continues to issue 18th- and 19th-century novels, many hitherto out of print, and the University of Kentucky has a series of republications of early women's novels.
Particular works of literature have come to be known as key feminist texts. A Vindication of the Rights of Woman (1792) by Mary Wollstonecraft is one of the earliest works of feminist philosophy. A Room of One's Own (1929) by Virginia Woolf is noted for its argument for both a literal and figurative space for women writers within a literary tradition dominated by patriarchy.
The widespread interest in women's writing is related to a general reassessment and expansion of the literary canon. Interest in post-colonial literatures, gay and lesbian literature, writing by people of colour, working people's writing, and the cultural productions of other historically marginalized groups has resulted in a wholesale expansion of what is considered "literature", and genres hitherto not regarded as "literary", such as children's writing, journals, letters, travel writing, and many others, are now the subjects of scholarly interest.[180][182][183] Most genres and subgenres have undergone a similar analysis, so literary studies have entered new territories such as the "female gothic"[184] or women's science fiction.
According to Elyce Rae Helford, "Science fiction and fantasy serve as important vehicles for feminist thought, particularly as bridges between theory and practice."[185] Feminist science fiction is sometimes taught at the university level to explore the role of social constructs in understanding gender.[186] Notable texts of this kind are Ursula K. Le Guin's The Left Hand of Darkness (1969), Joanna Russ' The Female Man (1970), Octavia Butler's Kindred (1979) and Margaret Atwood's The Handmaid's Tale (1985).
Feminist nonfiction has played an important role in voicing concerns about women's lived experiences. For example, Maya Angelou's I Know Why the Caged Bird Sings was extremely influential, as it represented the specific racism and sexism experienced by black women growing up in the United States.[187]
In addition, many feminist movements have embraced poetry as a vehicle through which to communicate feminist ideas to public audiences through anthologies, poetry collections, and public readings.[188]
Moreover, historical pieces of writing by women have been used by feminists to speak about what women's lives would have been like in the past, while demonstrating the power that they held and the impact they had in their communities even centuries ago.[189] An important figure in the history of women's writing is Hrothsvitha, a canoness who lived from 935 to 973.[190] As the first female poet in the German lands and the first female historian, Hrothsvitha is one of the few people to speak about women's lives from a woman's perspective during the Middle Ages.[191]
Women's music (or womyn's music or wimmin's music) is music by women, for women, and about women.[192] The genre emerged as a musical expression of the second-wave feminist movement[193] as well as the labour, civil rights, and peace movements.[194] The movement was started by lesbians such as Cris Williamson, Meg Christian, and Margie Adam, African-American women activists such as Bernice Johnson Reagon and her group Sweet Honey in the Rock, and peace activist Holly Near.[194] Women's music also refers to the wider industry of women's music that goes beyond the performing artists to include studio musicians, producers, sound engineers, technicians, cover artists, distributors, promoters, and festival organizers who are also women.[192]
Riot grrrl is an underground feminist hardcore punk movement described in the cultural movements section of this article.
Feminism became a principal concern of musicologists in the 1980s[195] as part of the New Musicology. Prior to this, in the 1970s, musicologists were beginning to discover women composers and performers, and had begun to review concepts of canon, genius, genre and periodization from a feminist perspective. In other words, the question of how women musicians fit into traditional music history was now being asked.[195] Through the 1980s and 1990s, this trend continued as musicologists like Susan McClary, Marcia Citron and Ruth Solie began to consider the cultural reasons for the marginalizing of women from the received body of work. Concepts such as music as gendered discourse; professionalism; reception of women's music; examination of the sites of music production; relative wealth and education of women; popular music studies in relation to women's identity; patriarchal ideas in music analysis; and notions of gender and difference are among the themes examined during this time.[195]
While the music industry has long been open to having women in performance or entertainment roles, women are much less likely to have positions of authority, such as being the leader of an orchestra.[196] In popular music, while there are many women singers recording songs, there are very few women behind the audio console acting as music producers, the individuals who direct and manage the recording process.[197]
Feminist cinema, advocating or illustrating feminist perspectives, arose largely with the development of feminist film theory in the late '60s and early '70s. Women were radicalized during the 1960s by political debate and sexual liberation, but the failure of radicalism to produce substantive change for women galvanized them to form consciousness-raising groups and set about analysing, from different perspectives, dominant cinema's construction of women.[198] Differences were particularly marked between feminists on either side of the Atlantic. 1972 saw the first feminist film festivals in the U.S. and U.K. as well as the first feminist film journal, Women and Film. Trailblazers from this period included Claire Johnston and Laura Mulvey, who also organized the Women's Event at the Edinburgh Film Festival.[199] Other theorists making a powerful impact on feminist film include Teresa de Lauretis, Anneke Smelik and Kaja Silverman. Approaches in philosophy and psychoanalysis fuelled feminist film criticism, feminist independent film and feminist distribution.
It has been argued that there are two distinct approaches to independent, theoretically inspired feminist filmmaking. 'Deconstruction' concerns itself with analysing and breaking down codes of mainstream cinema, aiming to create a different relationship between the spectator and dominant cinema. The second approach, a feminist counterculture, embodies feminine writing to investigate a specifically feminine cinematic language.[200] Some recent criticism[201] of "feminist film" approaches has centred on the Bechdel test, which some Swedish cinemas have adopted as a rating system.
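The Bechdel test reduces to three mechanical criteria (at least two named women, who talk to each other, about something besides a man), so it can be stated as a simple predicate. Below is a minimal illustrative sketch in Python; the data format and field names are invented for the example, not taken from any real rating system's tooling:

def passes_bechdel(conversations):
    """Bechdel test sketch. `conversations` is assumed to be a list of
    (named_women, about_a_man) pairs describing each scene -- an
    illustrative format, not a standard one."""
    named_women = set()
    for women, _ in conversations:
        named_women.update(women)
    if len(named_women) < 2:                        # criterion 1: two named women
        return False
    return any(len(women) >= 2 and not about_a_man  # criteria 2 and 3
               for women, about_a_man in conversations)

# Example: a single scene where two named women discuss the ship passes.
print(passes_bechdel([({"Ripley", "Lambert"}, False)]))  # True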
During the 1930s–1950s heyday of the big Hollywood studios, the status of women in the industry was abysmal.[202] Since then female directors such as Sally Potter, Catherine Breillat, Claire Denis and Jane Campion have made art movies, and directors like Kathryn Bigelow and Patty Jenkins have had mainstream success. This progress stagnated in the 1990s, and men outnumber women five to one in behind-the-camera roles.[203][204]
Feminism had complex interactions with the major political movements of the twentieth century.
Since the late nineteenth century, some feminists have allied with socialism, whereas others have criticized socialist ideology for being insufficiently concerned about women's rights. August Bebel, an early activist of the German Social Democratic Party (SPD), published his work Die Frau und der Sozialismus, juxtaposing the struggle for equal rights between sexes with social equality in general. In 1907 there was an International Conference of Socialist Women in Stuttgart where suffrage was described as a tool of class struggle. Clara Zetkin of the SPD called for women's suffrage to build a "socialist order, the only one that allows for a radical solution to the women's question".[205][206]
In Britain, the women's movement was allied with the Labour party. In the U.S., Betty Friedan emerged from a radical background to take leadership. Radical Women is the oldest socialist feminist organization in the U.S. and is still active.[207] During the Spanish Civil War, Dolores Ibárruri (La Pasionaria) led the Communist Party of Spain. Although she supported equal rights for women, she opposed women fighting on the front and clashed with the anarcha-feminist Mujeres Libres.[208]
Feminists in Ireland in the early 20th century included the revolutionary Irish Republican, suffragette and socialist Constance Markievicz, who in 1918 was the first woman elected to the British House of Commons. However, in line with Sinn Féin abstentionist policy, she would not take her seat in the House of Commons.[209] She was re-elected to the Second Dáil in the elections of 1921.[210] She was also a commander of the Irish Citizen Army, which was led by the socialist and self-described feminist Irish leader James Connolly during the 1916 Easter Rising.[211]
Fascism has been ascribed dubious stances on feminism by its practitioners and by women's groups. Amongst other demands concerning social reform presented in the Fascist manifesto in 1919 was expanding the suffrage to all Italian citizens of age 18 and above, including women (accomplished only in 1946, after the defeat of fascism) and eligibility for all to stand for office from age 25. This demand was particularly championed by special Fascist women's auxiliary groups such as the fasci femminili and was only partly realized in 1925, under pressure from dictator Benito Mussolini's more conservative coalition partners.[212][213]
Cyprian Blamires states that although feminists were among those who opposed the rise of Adolf Hitler, feminism has a complicated relationship with the Nazi movement as well. While Nazis glorified traditional notions of patriarchal society and its role for women, they claimed to recognize women's equality in employment.[214] However, Hitler and Mussolini declared themselves as opposed to feminism,[214] and after the rise of Nazism in Germany in 1933, there was a rapid dissolution of the political rights and economic opportunities that feminists had fought for during the pre-war period and to some extent during the 1920s.[206] Georges Duby et al. note that in practice fascist society was hierarchical and emphasized male virility, with women maintaining a largely subordinate position.[206] Blamires also notes that Neofascism has since the 1960s been hostile towards feminism and advocates that women accept "their traditional roles".[214]
The civil rights movement has influenced and informed the feminist movement and vice versa. Many Western feminists adapted the language and theories of black equality activism and drew parallels between women's rights and the rights of non-white people.[215] Despite the connections between the women's and civil rights movements, some tensions arose during the late 1960s and the 1970s as non-white women argued that feminism was predominantly white, straight, and middle class, and did not understand and was not concerned with issues of race and sexuality.[216] Similarly, some women argued that the civil rights movement had sexist and homophobic elements and did not adequately address minority women's concerns.[215][217][218] These criticisms created new feminist social theories about identity politics and the intersections of racism, classism, and sexism; they also generated new feminisms such as black feminism and Chicana feminism in addition to making large contributions to lesbian feminism and other integrations of queer of colour identity.[219][220][221]
Neoliberalism has been criticized by feminist theory for having a negative effect on the female workforce population across the globe, especially in the global south. Masculinist assumptions and objectives continue to dominate economic and geopolitical thinking.[222]:177 Women's experiences in non-industrialized countries reveal often deleterious effects of modernization policies and undercut orthodox claims that development benefits everyone.[222]:175
Proponents of neoliberalism have theorized that by increasing women's participation in the workforce, there will be heightened economic progress, but feminist critics have noted that this participation alone does not further equality in gender relations.[223]:186–98 Neoliberalism has failed to address significant problems such as the devaluation of feminized labour, the structural privileging of men and masculinity, and the politicization of women's subordination in the family and the workplace.[222]:176 The "feminization of employment" refers to a conceptual characterization of deteriorated and devalorized labour conditions that are less desirable, meaningful, safe and secure.[222]:179 Employers in the global south have perceptions about feminine labour and seek workers who are perceived to be undemanding, docile and willing to accept low wages.[222]:180 Social constructs about feminized labour have played a big part in this; for instance, employers often perpetuate ideas of women as "secondary income earners" to justify their lower rates of pay and their exclusion from training or promotion.[223]:189
The feminist movement has effected change in Western society, including women's suffrage; greater access to education; more nearly equitable pay with men; the right to initiate divorce proceedings; the right of women to make individual decisions regarding pregnancy (including access to contraceptives and abortion); and the right to own property.[9]
From the 1960s on, the campaign for women's rights[224] was met with mixed results[225] in the U.S. and the U.K. Other countries of the EEC agreed to ensure that discriminatory laws would be phased out across the European Community.
Some feminist campaigning also helped reform attitudes to child sexual abuse. The view that young girls cause men to have sexual intercourse with them was replaced by that of men's responsibility for their own conduct, the men being adults.[226]
In the U.S., the National Organization for Women (NOW) began in 1966 to seek women's equality, including through the Equal Rights Amendment (ERA),[227] which did not pass, although some states enacted their own. Reproductive rights in the U.S. centred on the court decision in Roe v. Wade enunciating a woman's right to choose whether to carry a pregnancy to term. Western women gained more reliable birth control, allowing family planning and careers. The movement started in the 1910s in the U.S. under Margaret Sanger and elsewhere under Marie Stopes. In the final three decades of the 20th century, Western women knew a new freedom through birth control, which enabled women to plan their adult lives, often making way for both career and family.[228]
The division of labour within households was affected by the increased entry of women into workplaces in the 20th century. Sociologist Arlie Russell Hochschild found that, in two-career couples, men and women, on average, spend about equal amounts of time working, but women still spend more time on housework,[229][230] although Cathy Young responded by arguing that women may prevent equal participation by men in housework and parenting.[231] Judith K. Brown writes, "Women are most likely to make a substantial contribution when subsistence activities have the following characteristics: the participant is not obliged to be far from home; the tasks are relatively monotonous and do not require rapt concentration and the work is not dangerous, can be performed in spite of interruptions, and is easily resumed once interrupted."[232]
In international law, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW) is an international convention adopted by the United Nations General Assembly and described as an international bill of rights for women. It came into force in those nations ratifying it.[233]
Feminist jurisprudence is a branch of jurisprudence that examines the relationship between women and law. It addresses questions about the history of legal and social biases against women and about the enhancement of their legal rights.[234]
Feminist jurisprudence signifies a reaction to the philosophical approach of modern legal scholars, who typically see the law as a process for interpreting and perpetuating a society's universal, gender-neutral ideals. Feminist legal scholars claim that this fails to acknowledge women's values or legal interests or the harms that they may anticipate or experience.[235]
Proponents of gender-neutral language argue that the use of gender-specific language often implies male superiority or reflects an unequal state of society.[236] According to The Handbook of English Linguistics, generic masculine pronouns and gender-specific job titles are instances "where English linguistic convention has historically treated men as prototypical of the human species."[237]
Merriam-Webster chose "feminism" as its 2017 Word of the Year, noting that "Word of the Year is a quantitative measure of interest in a particular word."[238]
Feminist theology is a movement that reconsiders the traditions, practices, scriptures, and theologies of religions from a feminist perspective. Some of the goals of feminist theology include increasing the role of women among the clergy and religious authorities, reinterpreting male-dominated imagery and language about God, determining women's place in relation to career and motherhood, and studying images of women in the religion's sacred texts.[239]
Christian feminism is a branch of feminist theology which seeks to interpret and understand Christianity in light of the equality of women and men, holding that this interpretation is necessary for a complete understanding of Christianity. While there is no standard set of beliefs among Christian feminists, most agree that God does not discriminate on the basis of sex, and are involved in issues such as the ordination of women, male dominance and the balance of parenting in Christian marriage, claims of moral deficiency and inferiority of women compared to men, and the overall treatment of women in the church.[240][241]
Islamic feminists advocate women's rights, gender equality, and social justice grounded within an Islamic framework. Advocates seek to highlight the deeply rooted teachings of equality in the Quran and encourage a questioning of the patriarchal interpretation of Islamic teaching through the Quran, hadith (sayings of Muhammad), and sharia (law) towards the creation of a more equal and just society.[242] Although rooted in Islam, the movement's pioneers have also utilized secular and Western feminist discourses and recognize the role of Islamic feminism as part of an integrated global feminist movement.[243]
Buddhist feminism is a movement that seeks to improve the religious, legal, and social status of women within Buddhism. It is an aspect of feminist theology which seeks to advance and understand the equality of men and women morally, socially, spiritually, and in leadership from a Buddhist perspective. The Buddhist feminist Rita Gross describes Buddhist feminism as "the radical practice of the co-humanity of women and men."[244]
Jewish feminism is a movement that seeks to improve the religious, legal, and social status of women within Judaism and to open up new opportunities for religious experience and leadership for Jewish women. The main issues for early Jewish feminists in these movements were the exclusion from the all-male prayer group or minyan, the exemption from positive time-bound mitzvot, and women's inability to function as witnesses and to initiate divorce.[245] Many Jewish women have become leaders of feminist movements throughout their history.[246]
Dianic Wicca is a feminist-centred thealogy.[247]
Secular or atheist feminists have engaged in feminist criticism of religion, arguing that many religions have oppressive rules towards women and misogynistic themes and elements in religious texts.[248][249][250]
Patriarchy is a social system in which society is organized around male authority figures. In this system, fathers have authority over women, children, and property. It implies the institutions of male rule and privilege and is dependent on female subordination.[251] Most forms of feminism characterize patriarchy as an unjust social system that is oppressive to women. Carole Pateman argues that the patriarchal distinction "between masculinity and femininity is the political difference between freedom and subjection."[252] In feminist theory the concept of patriarchy often includes all the social mechanisms that reproduce and exert male dominance over women. Feminist theory typically characterizes patriarchy as a social construction, which can be overcome by revealing and critically analyzing its manifestations.[253] Some radical feminists have proposed that because patriarchy is too deeply rooted in society, separatism is the only viable solution.[254] Other feminists have criticized these views as being anti-men.[255][256][257]
Feminist theory has explored the social construction of masculinity and its implications for the goal of gender equality. The social construct of masculinity is seen by feminism as problematic because it associates males with aggression and competition, and reinforces patriarchal and unequal gender relations.[78][258] Patriarchal cultures are criticized for "limiting forms of masculinity" available to men and thus narrowing their life choices.[259] Some feminists are engaged with men's issues activism, such as bringing attention to male rape and spousal battery and addressing negative social expectations for men.[260][261][262]
Male participation in feminism is generally encouraged by feminists and is seen as an important strategy for achieving full societal commitment to gender equality.[10][263][264] Many male feminists and pro-feminists are active in women's rights activism, feminist theory, and masculinity studies. However, some argue that while male engagement with feminism is necessary, it is problematic because of the ingrained social influences of patriarchy in gender relations.[265] The consensus today in feminist and masculinity theories is that men and women should cooperate to achieve the larger goals of feminism.[259] It has been proposed that, in large part, this can be achieved through considerations of women's agency.[266]
Different groups of people have responded to feminism, and both men and women have been among its supporters and critics. Among American university students, for both men and women, support for feminist ideas is more common than self-identification as a feminist.[267][268][269] The US media tends to portray feminism negatively and feminists "are less often associated with day-to-day work/leisure activities of regular women."[270][271] However, as recent research has demonstrated, as people are exposed to self-identified feminists and to discussions relating to various forms of feminism, their own self-identification with feminism increases.[272]
Pro-feminism is the support of feminism without implying that the supporter is a member of the feminist movement. The term is most often used in reference to men who are actively supportive of feminism. The activities of pro-feminist men's groups include anti-violence work with boys and young men in schools, offering sexual harassment workshops in workplaces, running community education campaigns, and counselling male perpetrators of violence. Pro-feminist men also may be involved in men's health, activism against pornography including anti-pornography legislation, men's studies, and the development of gender equity curricula in schools. This work is sometimes in collaboration with feminists and women's services, such as domestic violence and rape crisis centres.[273][274]
Anti-feminism is opposition to feminism in some or all of its forms.[275]
In the nineteenth century, anti-feminism was mainly focused on opposition to women's suffrage. Later, opponents of women's entry into institutions of higher learning argued that education was too great a physical burden on women. Other anti-feminists opposed women's entry into the labour force, or their right to join unions, to sit on juries, or to obtain birth control and control of their sexuality.[276]
Some people have opposed feminism on the grounds that they believe it is contrary to traditional values or religious beliefs. These anti-feminists argue, for example, that social acceptance of divorce and non-married women is wrong and harmful, and that men and women are fundamentally different and thus their different traditional roles in society should be maintained.[277][278][279] Other anti-feminists oppose women's entry into the workforce, political office, and the voting process, as well as the lessening of male authority in families.[280][281]
Writers such as Camille Paglia, Christina Hoff Sommers, Jean Bethke Elshtain, Elizabeth Fox-Genovese, Lisa Lucile Owens[282] and Daphne Patai oppose some forms of feminism, though they identify as feminists. They argue, for example, that feminism often promotes misandry and the elevation of women's interests above men's, and criticize radical feminist positions as harmful to both men and women.[283] Daphne Patai and Noretta Koertge argue that the term "anti-feminist" is used to silence academic debate about feminism.[284][285] Lisa Lucile Owens argues that certain rights extended exclusively to women are patriarchal because they relieve women from exercising a crucial aspect of their moral agency.[266]
Secular humanism is an ethical framework that attempts to dispense with any unreasoned dogma, pseudoscience, and superstition. Critics of feminism sometimes ask "Why feminism and not humanism?". Some humanists argue, however, that the goals of feminists and humanists largely overlap, and the distinction is only in motivation. For example, a humanist may consider abortion in terms of a utilitarian ethical framework, rather than considering the motivation of any particular woman in getting an abortion. In this respect, it is possible to be a humanist without being a feminist, but this does not preclude the existence of feminist humanism.[286][287] Humanism played a significant role in protofeminism during the Renaissance, in that humanists made the educated woman a popular figure despite the challenge she posed to the patriarchal organization of society.[288]
en/1962.html.txt
ADDED
@@ -0,0 +1,212 @@
Pregnancy, also known as gestation, is the time during which one or more offspring develops inside a woman.[4] A multiple pregnancy involves more than one offspring, such as with twins.[13] Pregnancy usually occurs by sexual intercourse, but can occur through assisted reproductive technology procedures.[6] A pregnancy may end in a live birth, a spontaneous miscarriage, an induced abortion, or a stillbirth. Childbirth typically occurs around 40 weeks from the start of the last menstrual period (LMP).[4][5] This is just over nine months (gestational age), where each month averages 31 days.[4][5] Measured by fertilization age, it is about 38 weeks.[5] An embryo is the developing offspring during the first eight weeks following fertilization (ten weeks' gestational age), after which the term fetus is used until birth.[5] Signs and symptoms of early pregnancy may include missed periods, tender breasts, nausea and vomiting, hunger, and frequent urination.[1] Pregnancy may be confirmed with a pregnancy test.[7]
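As a quick check of the arithmetic above: 40 weeks × 7 days = 280 days, and 280 ÷ 9 ≈ 31.1, which is where the figure of nine months averaging 31 days comes from; the 38-week figure follows from subtracting the roughly two weeks between the start of the LMP and fertilization.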
Pregnancy is divided into three trimesters, each lasting for approximately 3 months.[4] The first trimester includes conception, which is when the sperm fertilizes the egg.[4] The fertilized egg then travels down the fallopian tube and attaches to the inside of the uterus, where it begins to form the embryo and placenta.[4] During the first trimester, the possibility of miscarriage (natural death of embryo or fetus) is at its highest.[2] Around the middle of the second trimester, movement of the fetus may be felt.[4] At 28 weeks, more than 90% of babies can survive outside of the uterus if provided with high-quality medical care.[4]
Prenatal care improves pregnancy outcomes.[9] Prenatal care may include taking extra folic acid, avoiding drugs, tobacco smoking, and alcohol, taking regular exercise, having blood tests, and regular physical examinations.[9] Complications of pregnancy may include disorders of high blood pressure, gestational diabetes, iron-deficiency anemia, and severe nausea and vomiting.[3] In the ideal childbirth, labor begins on its own when a woman is "at term".[14] Babies born before 37 weeks are "preterm" and at higher risk of health problems such as cerebral palsy.[4] Babies born between weeks 37 and 39 are considered "early term" while those born between weeks 39 and 41 are considered "full term".[4] Babies born between weeks 41 and 42 are considered "late term", while after 42 weeks they are considered "post term".[4] Delivery before 39 weeks by labor induction or caesarean section is not recommended unless required for other medical reasons.[15]
About 213 million pregnancies occurred in 2012, of which 190 million (89%) were in the developing world and 23 million (11%) were in the developed world.[11] The number of pregnancies in women aged between 15 and 44 is 133 per 1,000 women.[11] About 10% to 15% of recognized pregnancies end in miscarriage.[2] In 2016, complications of pregnancy resulted in 230,600 maternal deaths, down from 377,000 deaths in 1990.[12] Common causes include bleeding, infections, hypertensive diseases of pregnancy, obstructed labor, miscarriage, abortion, or ectopic pregnancy.[12] Globally, 44% of pregnancies are unplanned.[16] Over half (56%) of unplanned pregnancies are aborted.[16] Among unintended pregnancies in the United States, 60% of the women used birth control to some extent during the month pregnancy occurred.[17]
Associated terms for pregnancy are gravid and parous. Gravidus and gravid come from the Latin word meaning "heavy" and a pregnant female is sometimes referred to as a gravida.[18] Gravidity refers to the number of times that a female has been pregnant. Similarly, the term parity is used for the number of times that a female carries a pregnancy to a viable stage.[19] Twins and other multiple births are counted as one pregnancy and birth. A woman who has never been pregnant is referred to as a nulligravida. A woman who is (or has been only) pregnant for the first time is referred to as a primigravida,[20] and a woman in subsequent pregnancies as a multigravida or as multiparous.[18][21] Therefore, during a second pregnancy a woman would be described as gravida 2, para 1 and upon live delivery as gravida 2, para 2. In-progress pregnancies, abortions, miscarriages and/or stillbirths account for parity values being less than the gravida number. In the case of a multiple birth the gravida number and parity value are increased by one only. Women who have never carried a pregnancy achieving more than 20 weeks of gestation age are referred to as nulliparous.[22]
A pregnancy is considered term at 37 weeks of gestation. It is preterm if less than 37 weeks and postterm at or beyond 42 weeks of gestation. The American College of Obstetricians and Gynecologists has recommended further division, with early term 37 weeks up to 39 weeks, full term 39 weeks up to 41 weeks, and late term 41 weeks up to 42 weeks.[23] The terms preterm and postterm have largely replaced earlier terms of premature and postmature. Preterm and postterm are defined above, whereas premature and postmature have historical meaning and relate more to the infant's size and state of development rather than to the stage of pregnancy.[24][25]
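Since these cut-offs partition gestational age into adjacent bands, the labelling can be written as a short lookup. A minimal sketch in Python of the subdivisions described above (the function name and the use of completed weeks are my own choices):

def term_label(gestational_weeks: float) -> str:
    """Label a pregnancy by gestational age, using the cut-offs above:
    preterm < 37, early term 37-39, full term 39-41, late term 41-42,
    postterm >= 42 weeks."""
    if gestational_weeks < 37:
        return "preterm"
    if gestational_weeks < 39:
        return "early term"
    if gestational_weeks < 41:
        return "full term"
    if gestational_weeks < 42:
        return "late term"
    return "postterm"

print(term_label(40))  # "full term"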
The usual symptoms and discomforts of pregnancy do not significantly interfere with activities of daily living or pose a health-threat to the mother or baby. However, pregnancy complications can cause other more severe symptoms, such as those associated with anemia.
Common symptoms and discomforts of pregnancy include:
The chronology of pregnancy is, unless otherwise specified, generally given as gestational age, where the starting point is the beginning of the woman's last menstrual period (LMP), or the corresponding age of the gestation as estimated by a more accurate method if available. Sometimes, timing may also use the fertilization age which is the age of the embryo.
The American Congress of Obstetricians and Gynecologists recommends the following methods to calculate gestational age:[29]
Pregnancy is divided into three trimesters, each lasting for approximately 3 months.[4] The exact length of each trimester can vary between sources.
Due date estimation basically follows two steps:
Naegele's rule is a standard way of calculating the due date for a pregnancy when assuming a gestational age of 280 days at childbirth. The rule estimates the expected date of delivery (EDD) by adding a year, subtracting three months, and adding seven days to the origin of gestational age. Alternatively there are mobile apps, which essentially always give consistent estimations compared to each other and correct for leap year, while pregnancy wheels made of paper can differ from each other by 7 days and generally do not correct for leap year.[34]
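As a worked example of Naegele's rule, the three date operations can be composed with a calendar-aware library, which yields the consistent, leap-year-correct behaviour of the mobile apps mentioned above. A minimal sketch in Python (python-dateutil is a third-party package; the example date is arbitrary):

from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

def naegele_due_date(lmp: date) -> date:
    """Naegele's rule: add one year, subtract three months, and add
    seven days to the first day of the last menstrual period (LMP)."""
    return lmp + relativedelta(years=1, months=-3, days=7)

# An LMP of 10 January 2023 gives an EDD of 17 October 2023,
# exactly 280 days (40 weeks) later.
print(naegele_due_date(date(2023, 1, 10)))  # 2023-10-17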
Furthermore, actual childbirth has only a certain probability of occurring within the limits of the estimated due date. A study of singleton live births came to the result that childbirth has a standard deviation of 14 days when gestational age is estimated by first trimester ultrasound, and 16 days when estimated directly by last menstrual period.[32]
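If the birth date is treated as roughly normally distributed around the estimate, those standard deviations imply a wide window. A small illustrative calculation (the normality assumption and the two-sigma window are mine, not the study's):

from datetime import date, timedelta

edd = date(2023, 10, 17)  # estimated due date, from the example above
sd = 14                   # days, for first-trimester ultrasound dating

# About 95% of singleton births would then fall within two SDs:
window = (edd - timedelta(days=2 * sd), edd + timedelta(days=2 * sd))
print(window)  # (2023-09-19, 2023-11-14) -- nearly a two-month span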
Through an interplay of hormones that includes follicle stimulating hormone, which stimulates folliculogenesis and oogenesis, a mature egg cell, the female gamete, is created. Fertilization is the event where the egg cell fuses with the male gamete, the spermatozoon. After the point of fertilization, the fused product of the female and male gamete is referred to as a zygote or fertilized egg. The fusion of female and male gametes usually occurs following the act of sexual intercourse. Pregnancy rates for sexual intercourse are highest during the menstrual cycle time from some 5 days before until 1 to 2 days after ovulation.[35] Fertilization can also occur by assisted reproductive technology such as artificial insemination and in vitro fertilisation.
Fertilization (conception) is sometimes used as the initiation of pregnancy, with the derived age being termed fertilization age. Fertilization usually occurs about two weeks before the next expected menstrual period.
A third point in time is also considered by some people to be the true beginning of a pregnancy: the time of implantation, when the future fetus attaches to the lining of the uterus. This is about a week to ten days after fertilization.[36]
The sperm and the egg cell, which has been released from one of the female's two ovaries, unite in one of the two fallopian tubes. The fertilized egg, known as a zygote, then moves toward the uterus, a journey that can take up to a week to complete. Cell division begins approximately 24 to 36 hours after the female and male cells unite. Cell division continues at a rapid rate and the cells then develop into what is known as a blastocyst. The blastocyst arrives at the uterus and attaches to the uterine wall, a process known as implantation.
The development of the mass of cells that will become the infant is called embryogenesis during the first approximately ten weeks of gestation. During this time, cells begin to differentiate into the various body systems. The basic outlines of the organ, body, and nervous systems are established. By the end of the embryonic stage, the beginnings of features such as fingers, eyes, mouth, and ears become visible. Also during this time, there is development of structures important to the support of the embryo, including the placenta and umbilical cord. The placenta connects the developing embryo to the uterine wall to allow nutrient uptake, waste elimination, and gas exchange via the mother's blood supply. The umbilical cord is the connecting cord from the embryo or fetus to the placenta.
After about ten weeks of gestational age – which is the same as eight weeks after conception – the embryo becomes known as a fetus.[37] At the beginning of the fetal stage, the risk of miscarriage decreases sharply.[38] At this stage, a fetus is about 30 mm (1.2 inches) in length, the heartbeat is seen via ultrasound, and the fetus makes involuntary motions.[39] During continued fetal development, the early body systems, and structures that were established in the embryonic stage continue to develop. Sex organs begin to appear during the third month of gestation. The fetus continues to grow in both weight and length, although the majority of the physical growth occurs in the last weeks of pregnancy.
Electrical brain activity is first detected between the fifth and sixth week of gestation. It is considered primitive neural activity rather than the beginning of conscious thought. Synapses begin forming at 17 weeks, and begin to multiply quickly at week 28 until 3 to 4 months after birth.[40]
Although the fetus begins to move during the first trimester, it is not until the second trimester that movement, known as quickening, can be felt. This typically happens in the fourth month, more specifically in the 20th to 21st week, or by the 19th week if the woman has been pregnant before. It is common for some women not to feel the fetus move until much later. During the second trimester, most women begin to wear maternity clothes.
Embryo at 4 weeks after fertilization. (Gestational age of 6 weeks.)
Fetus at 8 weeks after fertilization. (Gestational age of 10 weeks.)
Fetus at 18 weeks after fertilization. (Gestational age of 20 weeks.)
Fetus at 38 weeks after fertilization. (Gestational age of 40 weeks.)
Relative size in 1st month (simplified illustration)
Relative size in 3rd month (simplified illustration)
Relative size in 5th month (simplified illustration)
Relative size in 9th month (simplified illustration)
During pregnancy, a woman undergoes many physiological changes, which are entirely normal, including behavioral, cardiovascular, hematologic, metabolic, renal, and respiratory changes. Increases in blood sugar, breathing, and cardiac output are all required. Levels of progesterone and estrogens rise continually throughout pregnancy, suppressing the hypothalamic axis and therefore also the menstrual cycle. A full-term pregnancy at an early age reduces the risk of breast, ovarian and endometrial cancer and the risk declines further with each additional full-term pregnancy.[41][42]
The fetus is genetically different from its mother, and can be viewed as an unusually successful allograft.[43] The main reason for this success is increased immune tolerance during pregnancy.[44] Immune tolerance is the concept that the body is able to not mount an immune system response against certain triggers.[43]
During the first trimester, minute ventilation increases by 40%.[45] The womb will grow to the size of a lemon by eight weeks. Many symptoms and discomforts of pregnancy like nausea and tender breasts appear in the first trimester.[46]
During the second trimester, most women feel more energized, and begin to put on weight as the symptoms of morning sickness subside and eventually fade away. The uterus, the muscular organ that holds the developing fetus, can expand up to 20 times its normal size during pregnancy.
Final weight gain takes place during the third trimester, which accounts for the most weight gain of the pregnancy. The woman's abdomen will transform in shape as it drops due to the fetus turning in a downward position ready for birth. During the second trimester, the woman's abdomen would have been upright, whereas in the third trimester it will drop down low. The fetus moves regularly, and is felt by the woman. Fetal movement can become strong and be disruptive to the woman. The woman's navel will sometimes become convex, "popping" out, due to the expanding abdomen.
Head engagement, where the fetal head descends into cephalic presentation, relieves pressure on the upper abdomen with renewed ease in breathing. It also severely reduces bladder capacity, and increases pressure on the pelvic floor and the rectum.
It is also during the third trimester that maternal activity and sleep positions may affect fetal development due to restricted blood flow. For instance, the enlarged uterus may impede blood flow by compressing the vena cava when lying flat, which is relieved by lying on the left side.[47]
Childbirth, referred to as labor and delivery in the medical field, is the process whereby an infant is born.[48]
A woman is considered to be in labour when she begins experiencing regular uterine contractions, accompanied by changes of her cervix – primarily effacement and dilation. While childbirth is widely experienced as painful, some women do report painless labours, while others find that concentrating on the birth helps to quicken labour and lessen the sensations. Most births are successful vaginal births, but sometimes complications arise and a woman may undergo a cesarean section.
During the time immediately after birth, both the mother and the baby are hormonally cued to bond, the mother through the release of oxytocin, a hormone also released during breastfeeding. Studies show that skin-to-skin contact between a mother and her newborn immediately after birth is beneficial for both the mother and baby. A review done by the World Health Organization found that skin-to-skin contact between mothers and babies after birth reduces crying, improves mother–infant interaction, and helps mothers to breastfeed successfully. They recommend that neonates be allowed to bond with the mother during their first two hours after birth, the period that they tend to be more alert than in the following hours of early life.[49]
In the ideal childbirth, labor begins on its own when a woman is "at term".[14]
Events before completion of 37 weeks are considered preterm.[50] Preterm birth is associated with a range of complications and should be avoided if possible.[52]
Sometimes if a woman's water breaks or she has contractions before 39 weeks, birth is unavoidable.[51] However, spontaneous birth after 37 weeks is considered term and is not associated with the same risks as a preterm birth.[48] Planned birth before 39 weeks by caesarean section or labor induction, although "at term", results in an increased risk of complications.[53] This is from factors including underdeveloped lungs of newborns, infection due to underdeveloped immune system, feeding problems due to underdeveloped brain, and jaundice from underdeveloped liver.[54]
Babies born between 39 and 41 weeks gestation have better outcomes than babies born either before or after this range.[51] This special time period is called "full term".[51] Whenever possible, waiting for labor to begin on its own in this time period is best for the health of the mother and baby.[14] The decision to perform an induction must be made after weighing the risks and benefits, but is safer after 39 weeks.[14]
Events after 42 weeks are considered postterm.[51] When a pregnancy exceeds 42 weeks, the risk of complications for both the woman and the fetus increases significantly.[55][56] Therefore, in an otherwise uncomplicated pregnancy, obstetricians usually prefer to induce labour at some stage between 41 and 42 weeks.[57]
The postnatal period, also referred to as the puerperium, begins immediately after delivery and extends for about six weeks.[48] During this period, the mother's body begins the return to pre-pregnancy conditions that includes changes in hormone levels and uterus size.[48]
The beginning of pregnancy may be detected either based on symptoms by the woman herself, or by using pregnancy tests. However, denial of pregnancy by the pregnant woman is a quite common condition with serious health implications. About one in 475 denials will last until around the 20th week of pregnancy. The proportion of cases of denial persisting until delivery is about 1 in 2500.[58] Conversely, some non-pregnant women have a very strong belief that they are pregnant along with some of the physical changes. This condition is known as a false pregnancy.[59]
Most pregnant women experience a number of symptoms,[60] which can signify pregnancy. A number of early medical signs are associated with pregnancy.[61][62] These signs include:
Pregnancy detection can be accomplished using one or more various pregnancy tests,[64] which detect hormones generated by the newly formed placenta, serving as biomarkers of pregnancy.[65] Blood and urine tests can detect pregnancy 12 days after implantation.[66] Blood pregnancy tests are more sensitive than urine tests (giving fewer false negatives).[67] Home pregnancy tests are urine tests, and normally detect a pregnancy 12 to 15 days after fertilization.[68] A quantitative blood test can determine approximately the date the embryo was conceived because hCG doubles every 36 to 48 hours.[48] A single test of progesterone levels can also help determine how likely a fetus will survive in those with a threatened miscarriage (bleeding in early pregnancy).[69]
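The dating logic of a quantitative test rests on simple exponential arithmetic: each doubling of hCG corresponds to roughly 36 to 48 hours, so a measured level implies a number of elapsed doublings. A rough, purely illustrative sketch in Python (the baseline level of 5 mIU/mL and the function itself are my assumptions, not a clinical method):

import math

DOUBLING_HOURS = (36, 48)  # assumed hCG doubling time in early pregnancy

def days_since_baseline(level, baseline=5.0):
    """Rough elapsed-time range implied by an hCG level, assuming
    exponential growth from an illustrative baseline of 5 mIU/mL."""
    doublings = math.log2(level / baseline)
    return tuple(doublings * h / 24 for h in DOUBLING_HOURS)

# 320 mIU/mL is 6 doublings above 5 mIU/mL: roughly 9 to 12 days.
print(days_since_baseline(320.0))  # (9.0, 12.0)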
Obstetric ultrasonography can detect fetal abnormalities, detect multiple pregnancies, and improve gestational dating at 24 weeks.[70] The resultant estimated gestational age and due date of the fetus are slightly more accurate than methods based on last menstrual period.[71] Ultrasound is used to measure the nuchal fold in order to screen for Down syndrome.[72]
Pre-conception counseling is care that is provided to a woman and/or couple to discuss conception, pregnancy, current health issues, and recommendations for the period before pregnancy.[75]
Prenatal medical care is the medical and nursing care recommended for women during pregnancy; time intervals and the exact goals of each visit differ by country.[76] Women who are high risk have better outcomes if they are seen regularly and frequently by a medical professional than women who are low risk.[77] A woman can be labeled as high risk for different reasons, including previous complications in pregnancy, complications in the current pregnancy, current medical diseases, or social issues.[78][79]
The aim of good prenatal care is prevention, early identification, and treatment of any medical complications.[80] A basic prenatal visit consists of measurement of blood pressure, fundal height, weight and fetal heart rate, checking for symptoms of labor, and guidance for what to expect next.[75]
Nutrition during pregnancy is important to ensure healthy growth of the fetus.[81] Nutrition during pregnancy is different from the non-pregnant state.[81] There are increased energy requirements and specific micronutrient requirements.[81] Women benefit from education to encourage a balanced energy and protein intake during pregnancy.[82] Some women may need professional medical advice if their diet is affected by medical conditions, food allergies, or specific religious/ethical beliefs.[83] Further studies are needed to assess the effect of dietary advice to prevent gestational diabetes, although low quality evidence suggests some benefit.[84]
Adequate periconceptional (time before and right after conception) folic acid (also called folate or Vitamin B9) intake has been shown to decrease the risk of fetal neural tube defects, such as spina bifida.[85] The neural tube develops during the first 28 days of pregnancy, whereas a urine pregnancy test is not usually positive until 14 days post-conception, which explains the need to guarantee adequate folate intake before conception.[68][86] Folate is abundant in green leafy vegetables, legumes, and citrus.[87] In the United States and Canada, most wheat products (flour, noodles) are fortified with folic acid.[88]
DHA omega-3 is a major structural fatty acid in the brain and retina, and is naturally found in breast milk.[89] It is important for the woman to consume adequate amounts of DHA during pregnancy and while nursing to support her well-being and the health of her infant.[89] Developing infants cannot produce DHA efficiently, and must receive this vital nutrient from the woman through the placenta during pregnancy and in breast milk after birth.[90]
Several micronutrients are important for the health of the developing fetus, especially in areas of the world where insufficient nutrition is common.[10] Women living in low and middle income countries are suggested to take multiple micronutrient supplements containing iron and folic acid.[10] These supplements have been shown to improve birth outcomes in developing countries, but do not have an effect on perinatal mortality.[10][91] Adequate intake of folic acid, and iron is often recommended.[92][93] In developed areas, such as Western Europe and the United States, certain nutrients such as Vitamin D and calcium, required for bone development, may also require supplementation.[94][95][96] Vitamin E supplementation has not been shown to improve birth outcomes.[97] Zinc supplementation has been associated with a decrease in preterm birth, but it is unclear whether it is causative.[98] Daily iron supplementation reduces the risk of maternal anemia.[99] Studies of routine daily iron supplementation for pregnant women found improvement in blood iron levels, without a clear clinical benefit.[100] The nutritional needs for women carrying twins or triplets are higher than those of women carrying one baby.[101]
Women are counseled to avoid certain foods because of the possibility of contamination with bacteria or parasites that can cause illness.[102] Careful washing of fruits and raw vegetables may remove these pathogens, as may thoroughly cooking leftovers, meat, or processed meat.[103] Unpasteurized dairy and deli meats may contain Listeria, which can cause neonatal meningitis, stillbirth and miscarriage.[104] Pregnant women are also more prone to Salmonella infections, which can be found in eggs and poultry; these should be thoroughly cooked.[105] Cat feces and undercooked meats may contain the parasite Toxoplasma gondii and can cause toxoplasmosis.[103] Practicing good hygiene in the kitchen can reduce these risks.[106]
Women are also counseled to eat seafood in moderation and to eliminate seafood known to be high in mercury because of the risk of birth defects.[105] Pregnant women are counseled to consume caffeine in moderation, because large amounts of caffeine are associated with miscarriage.[48] However, the relationship between caffeine, birthweight, and preterm birth is unclear.[107]
The amount of healthy weight gain during a pregnancy varies.[108] Weight gain is related to the weight of the baby, the placenta, extra circulatory fluid, larger tissues, and fat and protein stores.[81] Most needed weight gain occurs later in pregnancy.[109]
The Institute of Medicine recommends, for women of normal weight (body mass index of 18.5–24.9) carrying a singleton pregnancy, an overall pregnancy weight gain of 11.3–15.9 kg (25–35 pounds).[110] Women who are underweight (BMI of less than 18.5) should gain between 12.7–18 kg (28–40 lbs), while those who are overweight (BMI of 25–29.9) are advised to gain between 6.8–11.3 kg (15–25 lbs) and those who are obese (BMI > 30) should gain between 5–9 kg (11–20 lbs).[111] These values reference the expectations for a term pregnancy.
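These ranges amount to a simple lookup keyed on pre-pregnancy BMI. The following Python sketch is purely illustrative, restating the Institute of Medicine singleton-pregnancy figures quoted above; the function name and interface are hypothetical.

def iom_recommended_gain_kg(weight_kg, height_m):
    """Return the recommended total pregnancy weight gain range (kg)
    for a singleton pregnancy, based on pre-pregnancy BMI.
    Illustrative sketch of the Institute of Medicine ranges above."""
    bmi = weight_kg / height_m ** 2  # body mass index in kg/m^2
    if bmi < 18.5:       # underweight
        return (12.7, 18.0)
    elif bmi < 25.0:     # normal weight (18.5-24.9)
        return (11.3, 15.9)
    elif bmi < 30.0:     # overweight (25-29.9)
        return (6.8, 11.3)
    else:                # obese (BMI of 30 or more)
        return (5.0, 9.0)

# Example: 60 kg at 1.65 m gives a BMI of about 22,
# so the recommended gain is 11.3-15.9 kg (25-35 lb).
print(iom_recommended_gain_kg(60.0, 1.65))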
During pregnancy, insufficient or excessive weight gain can compromise the health of the mother and fetus.[109] The most effective intervention for weight gain in underweight women is not clear.[109] Being or becoming overweight in pregnancy increases the risk of complications for mother and fetus, including cesarean section, gestational hypertension, pre-eclampsia, macrosomia and shoulder dystocia.[108] Excessive weight gain can make losing weight after the pregnancy difficult.[108][112]
Around 50% of women of childbearing age in developed countries like the United Kingdom are overweight or obese before pregnancy.[112] Diet modification is the most effective way to reduce weight gain and associated risks in pregnancy.[112]
Drugs used during pregnancy can have temporary or permanent effects on the fetus.[113] Anything (including drugs) that can cause permanent deformities in the fetus is labeled a teratogen.[114] In the U.S., drugs were classified into categories A, B, C, D and X based on the Food and Drug Administration (FDA) rating system to provide therapeutic guidance based on potential benefits and fetal risks.[115] Drugs, including some multivitamins, that have demonstrated no fetal risks after controlled studies in humans are classified as Category A.[113] On the other hand, drugs like thalidomide with proven fetal risks that outweigh all benefits are classified as Category X.[113]
The use of recreational drugs in pregnancy can cause various pregnancy complications.[48]
Intrauterine exposure to environmental toxins in pregnancy has the potential to cause adverse effects on prenatal development, and to cause pregnancy complications.[48] Air pollution has been associated with low birth weight infants.[122] Conditions of particular severity in pregnancy include mercury poisoning and lead poisoning.[48] To minimize exposure to environmental toxins, the American College of Nurse-Midwives recommends: checking whether the home has lead paint, washing all fresh fruits and vegetables thoroughly and buying organic produce, and avoiding cleaning products labeled "toxic" or any product with a warning on the label.[123]
Pregnant women can also be exposed to toxins in the workplace, including airborne particles. The effects of wearing N95 filtering facepiece respirators are similar for pregnant women as for non-pregnant women, and wearing a respirator for one hour does not affect the fetal heart rate.[124]
Most women can continue to engage in sexual activity throughout pregnancy.[125] Most research suggests that during pregnancy both sexual desire and frequency of sexual relations decrease.[126][127] In the context of this overall decrease in desire, some studies indicate a second-trimester increase, preceding a decrease during the third trimester.[128][129]
Sex during pregnancy is a low-risk behavior except when the healthcare provider advises that sexual intercourse be avoided for particular medical reasons.[125] For a healthy pregnant woman, there is no single safe or right way to have sex during pregnancy.[125] Pregnancy alters the vaginal flora with a reduction in microscopic species/genus diversity.[130]
Regular aerobic exercise during pregnancy appears to improve (or maintain) physical fitness.[131] Physical exercise during pregnancy does appear to decrease the need for C-section.[132] Bed rest, outside of research studies, is not recommended as there is no evidence of benefit and potential harm.[133]
The Clinical Practice Obstetrics Committee of Canada recommends that "All women without contraindications should be encouraged to participate in aerobic and strength-conditioning exercises as part of a healthy lifestyle during their pregnancy".[134] Although an upper level of safe exercise intensity has not been established, women who were regular exercisers before pregnancy and who have uncomplicated pregnancies should be able to engage in high intensity exercise programs.[134] In general, participation in a wide range of recreational activities appears to be safe, with the avoidance of those with a high risk of falling such as horseback riding or skiing or those that carry a risk of abdominal trauma, such as soccer or hockey.[135]
The American College of Obstetricians and Gynecologists reports that in the past, the main concerns of exercise in pregnancy were focused on the fetus and any potential maternal benefit was thought to be offset by potential risks to the fetus. However, they write that more recent information suggests that in the uncomplicated pregnancy, fetal injuries are highly unlikely.[135] They do, however, list several circumstances when a woman should contact her health care provider before continuing with an exercise program: vaginal bleeding, dyspnea before exertion, dizziness, headache, chest pain, muscle weakness, preterm labor, decreased fetal movement, amniotic fluid leakage, and calf pain or swelling (to rule out thrombophlebitis).[135]
It has been suggested that shift work and exposure to bright light at night should be avoided at least during the last trimester of pregnancy to decrease the risk of psychological and behavioral problems in the newborn.[136]
The increased levels of progesterone and estrogen during pregnancy make gingivitis more likely; the gums become edematous, red in colour, and tend to bleed.[137] A pyogenic granuloma, or “pregnancy tumor”, is also commonly seen on the labial surface of the papilla. Lesions can be treated by local debridement or deep incision depending on their size, and by following adequate oral hygiene measures.[138] It has been suggested that severe periodontitis may increase the risk of preterm birth and low birth weight; however, a Cochrane review found insufficient evidence to determine whether periodontitis can cause adverse birth outcomes.[139]
In low-risk pregnancies, most health care providers approve flying until about 36 weeks of gestational age.[140] Most airlines allow pregnant women to fly short distances at less than 36 weeks, and long distances at less than 32 weeks.[141] Many airlines require a doctor's note approving flying, especially after 28 weeks.[141] During flights, the risk of deep vein thrombosis is decreased by getting up and walking occasionally, as well as by avoiding dehydration.[141]
Full-body scanners do not use ionizing radiation and are safe in pregnancy.[142] Some airports may also use backscatter X-ray scanners, which deliver a very low radiation dose but whose safety in pregnancy is not fully established.
Each year, ill health as a result of pregnancy is experienced (sometimes permanently) by more than 20 million women around the world.[143] In 2016, complications of pregnancy resulted in 230,600 deaths, down from 377,000 deaths in 1990.[12] Common causes include bleeding (72,000), infections (20,000), hypertensive diseases of pregnancy (32,000), obstructed labor (10,000), and pregnancy with abortive outcome (20,000), which includes miscarriage, abortion, and ectopic pregnancy.[12]
The following are some examples of pregnancy complications:
There is also an increased susceptibility and severity of certain infections in pregnancy.
A pregnant woman may have a pre-existing disease that is not directly caused by the pregnancy but that may lead to complications posing a risk to the pregnancy; alternatively, a disease may develop during pregnancy.
Medical imaging may be indicated in pregnancy because of pregnancy complications, disease, or routine prenatal care. Medical ultrasonography including obstetric ultrasonography, and magnetic resonance imaging (MRI) without contrast agents are not associated with any risk for the mother or the fetus, and are the imaging techniques of choice for pregnant women.[151] Projectional radiography, CT scan and nuclear medicine imaging result in some degree of ionizing radiation exposure, but in most cases the absorbed doses are not associated with harm to the baby.[151] At higher dosages, effects can include miscarriage, birth defects and intellectual disability.[151]
About 213 million pregnancies occurred in 2012, of which 190 million were in the developing world and 23 million were in the developed world.[11] This is about 133 pregnancies per 1,000 women aged 15 to 44.[11] About 10% to 15% of recognized pregnancies end in miscarriage.[2] Globally, 44% of pregnancies are unplanned. Over half (56%) of unplanned pregnancies are aborted. In countries where abortion is prohibited, or only carried out in circumstances where the mother's life is at risk, 48% of unplanned pregnancies are aborted illegally, compared with 69% in countries where abortion is legal.[16]
Of pregnancies in 2012, 120 million occurred in Asia, 54 million in Africa, 19 million in Europe, 18 million in Latin America and the Caribbean, 7 million in North America, and 1 million in Oceania.[11] Pregnancy rates are 140 per 1000 women of childbearing age in the developing world and 94 per 1000 in the developed world.[11]
The rate of pregnancy, as well as the ages at which it occurs, differ by country and region. It is influenced by a number of factors, such as cultural, social and religious norms; access to contraception; and rates of education. The total fertility rate (TFR) in 2013 was estimated to be highest in Niger (7.03 children/woman) and lowest in Singapore (0.79 children/woman).[152]
In Europe, the average childbearing age has been rising continuously for some time. In Western, Northern, and Southern Europe, first-time mothers are on average 26 to 29 years old, up from 23 to 25 years at the start of the 1970s. In a number of European countries, such as Spain, the mean age of women at first childbirth has crossed the 30-year threshold.
This process is not restricted to Europe. Asia, Japan and the United States are all seeing average age at first birth on the rise, and increasingly the process is spreading to countries in the developing world like China, Turkey and Iran. In the US, the average age of first childbirth was 25.4 in 2010.[153]
In the United States and United Kingdom, 40% of pregnancies are unplanned, and between a quarter and half of those unplanned pregnancies were unwanted pregnancies.[154][155]
In most cultures, pregnant women have a special status in society and receive particularly gentle care.[156] At the same time, they are subject to expectations that may exert great psychological pressure, such as having to produce a son and heir. In many traditional societies, pregnancy must be preceded by marriage, on pain of ostracism of mother and (illegitimate) child.
Overall, pregnancy is accompanied by numerous customs that are often subject to ethnological research, often rooted in traditional medicine or religion. The baby shower is an example of a modern custom.
Pregnancy is an important topic in sociology of the family. The prospective child may preliminarily be placed into numerous social roles. The parents' relationship and the relation between parents and their surroundings are also affected.
A belly cast may be made during pregnancy as a keepsake.
Images of pregnant women, especially small figurines, were made in traditional cultures in many places and periods, though it is rarely one of the most common types of image. These include ceramic figures from some Pre-Columbian cultures, and a few figures from most of the ancient Mediterranean cultures. Many of these seem to be connected with fertility. Identifying whether such figures are actually meant to show pregnancy is often a problem, as well as understanding their role in the culture concerned.
Among the oldest surviving examples of the depiction of pregnancy are prehistoric figurines found across much of Eurasia and collectively known as Venus figurines. Some of these appear to be pregnant.
Due to the important role of the Mother of God in Christianity, the Western visual arts have a long tradition of depictions of pregnancy, especially in the biblical scene of the Visitation, and devotional images called a Madonna del Parto.[157]
The unhappy scene usually called Diana and Callisto, showing the moment of discovery of Callisto's forbidden pregnancy, is sometimes painted from the Renaissance onwards. Gradually, portraits of pregnant women began to appear, with a particular fashion for "pregnancy portraits" in elite portraiture of the years around 1600.
Pregnancy, and especially pregnancy of unmarried women, is also an important motif in literature. Notable examples include Hardy's Tess of the d'Urbervilles and Goethe's Faust.
Anatomical model of a pregnant woman; Stephan Zick (1639–1715); 1700; Germanisches Nationalmuseum
Statue of a pregnant woman, Macedonia
Bronze figure of a pregnant naked woman by Danny Osborne, Merrion Square
Marcus Gheeraerts the Younger Portrait of Susanna Temple, second wife of Sir Martin Lister, 1620
Octave Tassaert, The Waif aka L'abandonnée 1852, Musée Fabre, Montpellier
Modern reproductive medicine offers many forms of assisted reproductive technology for couples who stay childless against their will, such as fertility medication, artificial insemination, in vitro fertilization and surrogacy.
An abortion is the termination of an embryo or fetus, either naturally or via medical methods.[158] When carried out by choice, it is usually within the first trimester, sometimes in the second, and rarely in the third.[38] Not using contraception, contraceptive failure, poor family planning or rape can lead to undesired pregnancies. Legality of socially indicated abortions varies widely both internationally and through time. In most countries of Western Europe, abortions during the first trimester were a criminal offense a few decades ago[when?] but have since been legalized, sometimes subject to mandatory consultations. In Germany, for example, as of 2009 less than 3% of abortions had a medical indication.
Many countries have various legal regulations in place to protect pregnant women and their children. The Maternity Protection Convention ensures that pregnant women are exempt from activities such as night shifts or carrying heavy stocks. Maternity leave typically provides paid leave from work during roughly the last trimester of pregnancy and for some time after birth. Notable extreme cases include Norway (8 months with full pay) and the United States (no paid leave at all except in some states). Moreover, many countries have laws against pregnancy discrimination.
In the United States, some actions that result in miscarriage or stillbirth are considered crimes. One law that does so is the federal Unborn Victims of Violence Act. In 2014, the American state of Tennessee passed a law which allows prosecutors to charge a woman with criminal assault if she uses illegal drugs during her pregnancy and her fetus or newborn is considered harmed as a result.[159]
en/1963.html.txt
ADDED
@@ -0,0 +1,23 @@
A politician is a person active in party politics, or a person holding or seeking an office in government. Politicians propose, support and create laws or policies that govern the land and, by extension, its people. Broadly speaking, a "politician" can be anyone who seeks to achieve political power in any bureaucratic institution.
Politicians are people who are politically active, especially in party politics. Positions range from local offices to executive, legislative, and judicial offices of regional and national governments.[1][2] Some elected law enforcement officers, such as sheriffs, are considered politicians.[3][4]
Politicians are known for their rhetoric, as in speeches or campaign advertisements. They are especially known for using common themes that allow them to develop their political positions in terms familiar to the voters.[5] Politicians of necessity become expert users of the media.[6] Politicians in the 19th century made heavy use of newspapers, magazines, and pamphlets, as well as posters.[7] In the 20th century, they branched into radio and television, making television commercials the single most expensive part of an election campaign.[8] In the 21st century, they have become increasingly involved with the social media based on the Internet and smartphones.[9]
Rumor has always played a major role in politics, with negative rumors about an opponent typically more effective than positive rumors about one's own side.[10]
Once elected, the politician becomes a government official and has to deal with a permanent bureaucracy of non-politicians. Historically, there has been a subtle conflict between the long-term goals of each side.[11] In patronage-based systems, such as the United States and Canada in the 19th century, winning politicians replaced the bureaucracy with local politicians who formed their base of support, the "spoils system". Civil service reform was initiated to eliminate the corruption in government services that this involved.[12] However, in many less developed countries, the spoils system is in full-scale operation today.[13]
Mattozzi and Merlo argue that two main career paths are typically followed by politicians in modern democracies. First come career politicians, who work in the political sector until retirement. Second are the "political careerists", politicians who gain a reputation for expertise in controlling certain bureaucracies and then leave politics for a well-paid career in the private sector, making use of their political contacts.[14]
The personal histories of politicians have been frequently studied, as it is presumed that their experiences and characteristics shape their beliefs and behaviors. There are four pathways by which a politician's biography could influence their leadership style and abilities. The first is that biography may influence one's core beliefs, which are used to shape a worldview. The second is that politicians' skills and competence are influenced by personal experience. The areas of skill and competence can define where they devote resources and attention as a leader. The third pathway is that biographical attributes may define and shape political incentives. A leader's previous profession, for example, could be viewed as higher importance, causing a disproportionate investment of leadership resources to ensure the growth and health of that profession, including former colleagues. Other examples beside profession include the politician's innate characteristics, such as race or gender. The fourth pathway is how a politician's biography affects their public perception, which can, in turn, affect their leadership style. Female politicians, for example, may use different strategies to attract the same level of respect given to male politicians.[15]
Numerous scholars have studied the characteristics of politicians, comparing those at the local and national levels, and comparing the more liberal or the more conservative ones, and comparing the more successful and less successful in terms of elections.[16] In recent years, special attention has focused on the distinctive career path of women politicians.[17] For example, there are studies of the "Supermadre" model in Latin American politics.[18]
Many politicians have a knack for remembering thousands of names and faces and recalling personal anecdotes about their constituents; it is an advantage in the job, rather like being seven feet tall is for a basketball player. United States Presidents George W. Bush and Bill Clinton were renowned for their memories.[19][20]
Many critics attack politicians for being out of touch with the public. Areas of friction include the manner in which politicians speak, which has been described as being overly formal and filled with many euphemistic and metaphorical expressions and commonly perceived as an attempt to "obscure, mislead, and confuse".[21]
In the popular image, politicians are thought of as clueless, selfish, incompetent and corrupt, taking money in exchange for goods or services, rather than working for the general public good.[22] Politicians in many countries are regarded as the "most hated professionals".[23]
en/1964.html.txt
ADDED
@@ -0,0 +1,69 @@
The femur (/ˈfiːmər/, pl. femurs or femora /ˈfɛmərə/)[1][2], or thigh bone, is the proximal bone of the hindlimb in tetrapod vertebrates (for example, the largest bone of the human thigh). The head of the femur articulates with the acetabulum in the pelvic bone, forming the hip joint, while the distal part of the femur articulates with the tibia and kneecap, forming the knee joint. By most measures the two (left and right) femurs are the strongest bones of the body, and in humans,[vague] the longest.
The femur is the only bone in the upper leg. The two femurs converge medially toward the knees, where they articulate with the proximal ends of the tibiae. The angle of convergence of the femora is a major factor in determining the femoral-tibial angle. Human females have wider pelvic bones, causing their femora to converge more than in males.
In the condition genu valgum (knock knee) the femurs converge so much that the knees touch one another. The opposite extreme is genu varum (bow-leggedness). In the general population of people without either genu valgum or genu varum, the femoral-tibial angle is about 175 degrees.[3]
The femur is the longest and, by some measures, the strongest bone in the human body. This depends on the type of measurement taken to calculate strength. Some strength tests show the temporal bone in the skull to be the strongest bone. The femur length on average is 26.74% of a person's height,[4] a ratio found in both men and women and most ethnic groups with only restricted variation, and is useful in anthropology because it offers a basis for a reasonable estimate of a subject's height from an incomplete skeleton.
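Because the ratio is roughly constant, stature can be estimated by dividing a measured femur length by 0.2674. The short sketch below illustrates only this arithmetic; it is not a forensic standard, and the function name is invented for the example.

def estimate_stature_cm(femur_length_cm):
    """Estimate a person's height from femur length, assuming the
    femur averages 26.74% of total height (see text)."""
    return femur_length_cm / 0.2674

# Example: a 48 cm femur suggests a stature of roughly 180 cm.
print(round(estimate_stature_cm(48.0)))  # -> 180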
The femur is categorised as a long bone and comprises a diaphysis (shaft or body) and two epiphyses (extremities) that articulate with adjacent bones in the hip and knee.[3]
The upper or proximal extremity (close to the torso) contains the head, neck, the two trochanters and adjacent structures.[3]
The head of the femur, which articulates with the acetabulum of the pelvic bone, comprises two-thirds of a sphere. It has a small groove, or fovea, connected through the round ligament to the sides of the acetabular notch. The head of the femur is connected to the shaft through the neck, or collum. The neck is 4–5 cm long, and its diameter is smallest front to back and compressed at its middle. The collum forms an angle of about 130 degrees with the shaft. This angle is highly variable: in the infant it is about 150 degrees, and in old age it is reduced to 120 degrees on average. An abnormal increase in the angle is known as coxa valga and an abnormal reduction is called coxa vara. Both the head and the neck of the femur are deeply embedded in the hip musculature and cannot be directly palpated. In thin people with the thigh laterally rotated, the head of the femur can be felt as a deep resistance beneath the femoral artery.[3]
The transition area between the head and neck is quite rough due to attachment of muscles and the hip joint capsule. Here the two trochanters, the greater and lesser trochanter, are found. The greater trochanter is almost box-shaped and is the most lateral prominence of the femur. The highest point of the greater trochanter is located higher than the collum and reaches the midpoint of the hip joint. The greater trochanter can easily be felt. The trochanteric fossa is a deep depression on the medial surface of the greater trochanter, bounded posteriorly by the intertrochanteric crest. The lesser trochanter is a cone-shaped extension of the lowest part of the femur neck. The two trochanters are joined by the intertrochanteric crest on the back side and by the intertrochanteric line on the front.[3]
A slight ridge is sometimes seen commencing about the middle of the intertrochanteric crest, and reaching vertically downward for about 5 cm. along the back part of the body: it is called the linea quadrata (or quadrate line).
The quadrate tubercle is located at about the junction of the upper one-third and lower two-thirds of the intertrochanteric crest. The size of the tubercle varies, it is not always located on the intertrochanteric crest, and adjacent areas, such as the posterior surface of the greater trochanter or the neck of the femur, can also form part of the quadrate tubercle. In a small anatomical study, it was shown that the epiphyseal line passes directly through the quadrate tubercle.[5]
The body of the femur (or shaft) is long, slender and almost cylindrical in form. It is a little broader above than in the center, and broadest and somewhat flattened from front to back below. It is slightly arched, so as to be convex in front and concave behind, where it is strengthened by a prominent longitudinal ridge, the linea aspera, which diverges proximally and distally as the medial and lateral ridges. Proximally, the lateral ridge of the linea aspera becomes the gluteal tuberosity, while the medial ridge continues as the pectineal line. Besides the linea aspera, the shaft has two other borders: a lateral and a medial border. These three borders separate the shaft into three surfaces: one anterior, one medial and one lateral. Due to the vast musculature of the thigh, the shaft cannot be palpated.[3]
The third trochanter is a bony projection occasionally present on the proximal femur near the superior border of the gluteal tuberosity. When present, it is oblong, rounded, or conical in shape and sometimes continuous with the gluteal ridge.[6] A structure of minor importance in humans, the incidence of the third trochanter varies from 17–72% between ethnic groups and it is frequently reported as more common in females than in males.[7]
The lower extremity of the femur (or distal extremity) is larger than the upper extremity. It is somewhat cuboid in form, but its transverse diameter is greater than its antero-posterior (front to back). It consists of two oblong eminences known as the condyles.[3]
Anteriorly, the condyles are slightly prominent and are separated by a smooth shallow articular depression called the patellar surface. Posteriorly, they project considerably and a deep notch, the Intercondylar fossa of femur, is present between them. The lateral condyle is the more prominent and is the broader both in its antero-posterior and transverse diameters. The medial condyle is the longer and, when the femur is held with its body perpendicular, projects to a lower level. When, however, the femur is in its natural oblique position the lower surfaces of the two condyles lie practically in the same horizontal plane. The condyles are not quite parallel with one another; the long axis of the lateral is almost directly antero-posterior, but that of the medial runs backward and medialward. Their opposed surfaces are small, rough, and concave, and form the walls of the intercondyloid fossa. This fossa is limited above by a ridge, the intercondyloid line, and below by the central part of the posterior margin of the patellar surface. The posterior cruciate ligament of the knee joint is attached to the lower and front part of the medial wall of the fossa and the anterior cruciate ligament to an impression on the upper and back part of its lateral wall.[3]
The articular surface of the lower end of the femur occupies the anterior, inferior, and posterior surfaces of the condyles. Its front part is named the patellar surface and articulates with the patella; it presents a median groove which extends downward to the intercondyloid fossa and two convexities, the lateral of which is broader, more prominent, and extends farther upward than the medial.[3]
Each condyle is surmounted by an elevation, the epicondyle. The medial epicondyle is a large convex eminence to which the tibial collateral ligament of the knee-joint is attached. At its upper part is the adductor tubercle and behind it is a rough impression which gives origin to the medial head of the gastrocnemius. The lateral epicondyle which is smaller and less prominent than the medial, gives attachment to the fibular collateral ligament of the knee-joint.[3]
The femur develops from the limb buds as a result of interactions between the ectoderm and the underlying mesoderm; formation occurs roughly around the fourth week of development.[8]
By the sixth week of development, the first hyaline cartilage model of the femur is formed by chondrocytes. Endochondral ossification begins by the end of the embryonic period and primary ossification centers are present in all long bones of the limbs, including the femur, by the 12th week of development. The hindlimb development lags behind forelimb development by 1–2 days.
As the femur is the only bone in the thigh, it serves as an attachment point for all the muscles that exert their force over the hip and knee joints. Some biarticular muscles – which cross two joints, like the gastrocnemius and plantaris muscles – also originate from the femur. In all, 23 individual muscles either originate from or insert onto the femur.
In cross-section, the thigh is divided into three separate fascial compartments, each containing muscles. These compartments use the femur as an axis and are separated by tough connective tissue membranes (or septa). Each of these compartments has its own blood and nerve supply and contains a different group of muscles. These compartments are named the anterior, medial and posterior fascial compartments.
A femoral fracture that involves the femoral head, femoral neck or the shaft of the femur immediately below the lesser trochanter may be classified as a hip fracture, especially when associated with osteoporosis. Femur fractures can be managed in a pre-hospital setting with the use of a traction splint.
In primitive tetrapods, the main points of muscle attachment along the femur are the internal trochanter and third trochanter, and a ridge along the ventral surface of the femoral shaft referred to as the adductor crest. The neck of the femur is generally minimal or absent in the most primitive forms, reflecting a simple attachment to the acetabulum. The greater trochanter was present in the extinct archosaurs, as well as in modern birds and mammals, being associated with the loss of the primitive sprawling gait. The lesser trochanter is a unique development of mammals, which lack both the internal and fourth trochanters. The adductor crest is also often absent in mammals or alternatively reduced to a series of creases along the surface of the bone.[10]
Some species of whales,[11] snakes, and other non-walking vertebrates have vestigial femurs.
One of the earliest known vertebrates to have a femur is Eusthenopteron, a prehistoric lobe-finned fish from the Late Devonian period.
Structures analogous to the third trochanter are present in mammals, including some primates.[7]
In invertebrate zoology the name femur appears in arthropodology. The usage is not homologous with that of vertebrate anatomy; the term "femur" simply has been adopted by analogy and refers, where applicable, to the most proximal of (usually) the two longest jointed segments of the legs of the arthropoda. The two basal segments preceding the femur are the coxa and trochanter. This convention is not followed in carcinology but it applies in arachnology and entomology. In myriapodology another segment, the prefemur, connects the trochanter and femur.
Position of femur (shown in red). Pelvis and patella are shown as semi-transparent.
View from behind.
View from the front.
3D image
Long Bone (Femur)
Muscles of thigh. Lateral view.
Muscles of thigh. Cross section.
Distribution forces of the femur
en/1965.html.txt
ADDED
@@ -0,0 +1,230 @@
A tractor is an engineering vehicle specifically designed to deliver a high tractive effort (or torque) at slow speeds, for the purposes of hauling a trailer or machinery such as that used in agriculture or construction. Most commonly, the term is used to describe a farm vehicle that provides the power and traction to mechanize agricultural tasks, especially (and originally) tillage, but nowadays a great variety of tasks. Agricultural implements may be towed behind or mounted on the tractor, and the tractor may also provide a source of power if the implement is mechanised.
The word tractor was taken from Latin, being the agent noun of trahere "to pull".[2][3] The first recorded use of the word meaning "an engine or vehicle for pulling wagons or ploughs" occurred in 1896, from the earlier term "traction engine" (1859).[4]
In the UK, the Republic of Ireland, Australia, India, Spain, Argentina, Slovenia, Serbia, Croatia, the Netherlands, and Germany, the word "tractor" usually means "farm tractor", and the use of the word "tractor" to mean other types of vehicles is familiar to the vehicle trade, but unfamiliar to much of the general public. In Canada and the US, the word may also refer to the road tractor portion of a tractor trailer truck, but also usually refers to the piece of farm equipment.
The first powered farm implements in the early 19th century were portable engines – steam engines on wheels that could be used to drive mechanical farm machinery by way of a flexible belt. Richard Trevithick designed the first 'semi-portable' stationary steam engine for agricultural use, known as a "barn engine", in 1812, and it was used to drive a corn threshing machine.[5] The truly portable engine was invented in 1839 by William Tuxford of Boston, Lincolnshire, who started manufacture of an engine built around a locomotive-style boiler with horizontal smoke tubes. A large flywheel was mounted on the crankshaft, and a stout leather belt was used to transfer the drive to the equipment being driven. In the 1850s, John Fowler used a Clayton & Shuttleworth portable engine to drive apparatus in the first public demonstrations of the application of cable haulage to cultivation.
In parallel with the early portable engine development, many engineers attempted to make them self-propelled – the fore-runners of the traction engine. In most cases this was achieved by fitting a sprocket on the end of the crankshaft, and running a chain from this to a larger sprocket on the rear axle. These experiments met with mixed success.[6] The first proper traction engine, in the form recognisable today, was developed in 1859 when British engineer Thomas Aveling modified a Clayton & Shuttleworth portable engine, which had to be hauled from job to job by horses, into a self-propelled one. The alteration was made by fitting a long driving chain between the crankshaft and the rear axle.[7]
The first half of the 1860s was a period of great experimentation but by the end of the decade the standard form of the traction engine had evolved and would change little over the next sixty years. It was widely adopted for agricultural use. The first tractors were steam-powered plowing engines. They were used in pairs, placed on either side of a field to haul a plow back and forth between them using a wire cable. In Britain Mann's and Garrett developed steam tractors for direct ploughing, but the heavy, wet soil of England meant that these designs were less economical than a team of horses. In the United States, where soil conditions permitted, steam tractors were used to direct-haul plows. Steam-powered agricultural engines remained in use well into the 20th century until reliable internal combustion engines had been developed.[8]
In 1892, John Froelich invented and built the first gasoline/petrol-powered tractor in Clayton County, Iowa, US.[9][10][11] A Van Duzen single-cylinder gasoline engine was mounted on a Robinson engine chassis, which could be controlled and propelled by Froelich's gear box.[12] After receiving a patent, Froelich started up the Waterloo Gasoline Engine Company and invested all of his assets. However, the venture was very unsuccessful, and by 1895 all was lost and he went out of business.[13][14][15][16]
Richard Hornsby & Sons are credited with producing and selling the first oil-engined tractor in Britain invented by Herbert Akroyd Stuart. The Hornsby-Akroyd Patent Safety Oil Traction Engine was made in 1896 with a 20 hp engine. In 1897, it was bought by Mr. Locke-King, and this is the first recorded sale of a tractor in Britain. Also in that year, the tractor won a Silver Medal of the Royal Agricultural Society of England. That tractor would later be returned to the factory and fitted with a caterpillar track.
The first commercially successful light-weight petrol-powered general purpose tractor was built by Dan Albone, a British inventor in 1901.[17][18] He filed for a patent on 15 February 1902 for his tractor design and then formed Ivel Agricultural Motors Limited. The other directors were Selwyn Edge, Charles Jarrott, John Hewitt and Lord Willoughby. He called his machine the Ivel Agricultural Motor; the word "tractor" did not come into common use until later. The Ivel Agricultural Motor was light, powerful and compact. It had one front wheel, with a solid rubber tyre, and two large rear wheels like a modern tractor. The engine used water cooling, by evaporation. It had one forward and one reverse gear. A pulley wheel on the left hand side allowed it to be used as a stationary engine, driving a wide range of agricultural machinery. The 1903 sale price was £300. His tractor won a medal at the Royal Agricultural Show, in 1903 and 1904. About 500 were built, and many were exported all over the world.[19] The original engine was made by Payne & Co. of Coventry. After 1906, French Aster engines were used.
The first successful American tractor was built by Charles W. Hart and Charles H. Parr. They developed a two-cylinder gasoline engine and set up their business in Charles City, Iowa. In 1903, the firm built 15 tractors. Their 14,000-pound #3 is the oldest surviving internal combustion engine tractor in the United States, and is on display at the Smithsonian National Museum of American History in Washington, D.C. The two-cylinder engine has a unique hit-and-miss firing cycle that produced 30 horsepower at the belt and 18 at the drawbar.[20]
In 1908, the Saunderson Tractor and Implement Co. of Bedford introduced a four-wheel design, and went on to become the largest tractor manufacturer in Britain at the time. While the earlier, heavier tractors were initially very successful, it became increasingly apparent at this time that the weight of a large supporting frame was less efficient than lighter designs. Henry Ford introduced a light-weight, mass-produced design which largely displaced the heavier designs. Some companies halfheartedly followed suit with mediocre designs, as if to disprove the concept, but they were largely unsuccessful in that endeavor.[21]
While unpopular at first, these gasoline-powered machines began to catch on in the 1910s, when they became smaller and more affordable.[22] Henry Ford introduced the Fordson, a wildly popular mass-produced tractor, in 1917. They were built in the U.S., Ireland, England and Russia, and by 1923, Fordson had 77% of the U.S. market. The Fordson dispensed with a frame, using the strength of the engine block to hold the machine together. By the 1920s, tractors with gasoline-powered internal combustion engines had become the norm.
The first three-point hitches were experimented with in 1917; however, it was not until Harry Ferguson applied for a British patent for his three-point hitch in 1926 that they became popular. The three-point hitch is a three-point attachment of the implement to the tractor and is the simplest and the only statically determinate way of joining two bodies in engineering. The Ferguson-Brown Company produced the Model A Ferguson-Brown tractor with a Ferguson-designed hydraulic hitch. In 1938, Ferguson entered into a collaboration with Henry Ford to produce the Ford-Ferguson 9N tractor. The three-point hitch soon became the favorite hitch attachment system among farmers around the world. This tractor model also included a rear power take-off (PTO) shaft that could be used to power three-point hitch mounted implements such as sickle-bar mowers. This PTO location set the standard for future tractor developments.
Tractors can be generally classified by number of axles or wheels, with main categories of two-wheel tractors (single-axle tractors) and four-wheel tractors (two-axle tractors); more axles are possible but uncommon. Among four-wheel tractors (two-axle tractors), most are two-wheel drive (usually at the rear); but many are two-wheel drive with front wheel assist, four-wheel drive (often with articulated steering), or track tractors (with steel or rubber tracks).
The classic farm tractor is a simple open vehicle, with two very large driving wheels on an axle below and slightly behind a single seat (the seat and steering wheel consequently are in the center), and the engine in front of the driver, with two steerable wheels below the engine compartment. This basic design has remained unchanged for a number of years, but enclosed cabs are fitted on almost all modern models, for reasons of operator safety and comfort.
In some localities with heavy or wet soils, notably in the Central Valley of California, the "Caterpillar" or "crawler" type of tracked tractor became popular in the 1930s, due to superior traction and flotation. These were usually maneuvered through the use of turning brake pedals and separate track clutches operated by levers rather than a steering wheel.
Four-wheel drive tractors began to appear in the 1960s. Some four-wheel drive tractors have the standard "two large, two small" configuration typical of smaller tractors, while some have four large, powered wheels. The larger tractors are typically an articulated, center-hinged design steered by hydraulic cylinders that move the forward power unit while the trailing unit is not steered separately.
In the early 21st century, articulated or non-articulated, steerable multitrack tractors have largely supplanted the Caterpillar type for farm use. Larger types of modern farm tractors include articulated four-wheel or eight-wheel drive units with one or two power units which are hinged in the middle and steered by hydraulic clutches or pumps. A relatively recent development is the replacement of wheels or steel crawler-type tracks with flexible, steel-reinforced rubber tracks, usually powered by hydrostatic or completely hydraulic driving mechanisms. The configuration of these tractors bears little resemblance to the classic farm tractor design.
The predecessors of modern tractors, traction engines, used steam engines for power.
Since the turn of the 20th century, internal combustion engines have been the power source of choice. Between 1900 and 1960, gasoline was the predominant fuel, with kerosene (the Rumely Oil Pull was the most notable of this kind) and ethanol being common alternatives. Generally, one engine could burn any of those, although cold starting was easiest on gasoline. Often, a small auxiliary fuel tank was available to hold gasoline for cold starting and warm-up, while the main fuel tank held whatever fuel was most convenient or least expensive for the particular farmer. In the United Kingdom, a gasoline-kerosene engine is known as a petrol-paraffin engine.
Dieselisation gained momentum starting in the 1960s, and modern farm tractors usually employ diesel engines, which range in power output from 18 to 575 horsepower (15 to 480 kW). Size and output are dependent on application, with smaller tractors used for lawn mowing, landscaping, orchard work, and truck farming, and larger tractors for vast fields of wheat, maize, soy, and other bulk crops.
Liquefied petroleum gas (LPG) or propane also have been used as tractor fuels, but require special pressurized fuel tanks and filling equipment, so are less prevalent in most markets.
In some countries such as Germany, biodiesel is often used.[23][24] Some other biofuels such as straight vegetable oil are also being used by some farmers.[25][26]
Most older farm tractors use a manual transmission with several gear ratios, typically three to six, sometimes multiplied into two or three ranges. This arrangement provides a set of discrete ratios that, combined with the varying of the throttle, allow final-drive speeds from less than one up to about 25 miles per hour (40 km/h), with the lower speeds used for working the land and the highest speed used on the road.
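How a given gear ratio translates into final-drive speed follows directly from engine speed and wheel circumference. The sketch below works through that relationship with hypothetical numbers; the ratios and tire diameter are illustrative, not taken from any particular tractor.

import math

def ground_speed_kmh(engine_rpm, overall_ratio, tire_diameter_m):
    """Ground speed given engine speed, overall reduction ratio
    (engine revolutions per rear-wheel revolution), and rear tire
    diameter. Illustrative only; ignores tire slip and deflection."""
    wheel_rpm = engine_rpm / overall_ratio
    metres_per_minute = wheel_rpm * math.pi * tire_diameter_m
    return metres_per_minute * 60.0 / 1000.0

# At 2,000 rpm with a 1.6 m rear tire, a deep working gear (200:1)
# gives about 3 km/h, while a road gear (20:1) gives about 30 km/h.
print(ground_speed_kmh(2000, 200, 1.6))  # ~3.0 km/h
print(ground_speed_kmh(2000, 20, 1.6))   # ~30.2 km/h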
Slow, controllable speeds are necessary for most of the operations performed with a tractor. They help give the farmer a larger degree of control in certain situations, such as field work. However, when travelling on public roads, the slow operating speeds can cause problems, such as long queues or tailbacks, which can delay or annoy motorists in cars and trucks. These motorists are responsible for being duly careful around farm tractors and sharing the road with them, but many shirk this responsibility, so various ways to minimize the interaction or minimize the speed differential are employed where feasible. Some countries (for example the Netherlands) employ a road sign on some roads that means "no farm tractors". Some modern tractors, such as the JCB Fastrac, are now capable of much higher road speeds of around 50 mph (80 km/h).
Older tractors usually have unsynchronized transmission designs, which often require the operator stop the tractor to shift between gears. This mode of use is inherently unsuited to some of the work tractors do, and has been circumvented in various ways over the years. For existing unsynchronized tractors, the methods of circumvention are double clutching or power-shifting, both of which require the operator to rely on skill to speed-match the gears while shifting, and are undesirable from a risk-mitigation standpoint because of what can go wrong if the operator makes a mistake – transmission damage is possible, and loss of vehicle control can occur if the tractor is towing a heavy load either uphill or downhill – something that tractors often do. Therefore, operator's manuals for most of these tractors state one must always stop the tractor before shifting, and they do not even mention the alternatives. As already said, that mode of use is inherently unsuited to some of the work tractors do, so better options were pursued for newer tractor designs.
In these, unsynchronized transmission designs were replaced with synchronization or with continuously variable transmissions (CVTs). Either a synchronized manual transmission with enough available gear ratios (often achieved with dual ranges, high and low) or a CVT allow the engine speed to be matched to the desired final-drive speed, while keeping engine speed within the appropriate speed (as measured in rotations per minute or rpm) range for power generation (the working range) (whereas throttling back to achieve the desired final-drive speed is a trade-off that leaves the working range). The problems, solutions, and developments described here also describe the history of transmission evolution in semi-trailer trucks. The biggest difference is fleet turnover; whereas most of the old road tractors have long since been scrapped, many of the old farm tractors are still in use. Therefore, old transmission design and operation is primarily just of historical interest in trucking, whereas in farming it still often affects daily life.
The power produced by the engine must be transmitted to the implement or equipment to do the actual work intended for the equipment. This may be accomplished via a drawbar or hitch system if the implement is to be towed or otherwise pulled through the tractive power of the engine, or via a pulley or power takeoff system if the implement is stationary, or a combination of the two.
Until the 1940s, plows and other tillage equipment usually were connected to the tractor via a drawbar. The classic drawbar is simply a steel bar attached to the tractor (or in some cases, as in the early Fordsons, cast as part of the rear transmission housing) to which the hitch of the implement was attached with a pin or by a loop and clevis. The implement could be readily attached and removed, allowing the tractor to be used for other purposes on a daily basis. If the tractor was equipped with a swinging drawbar, then it could be set at the center or offset from center to allow the tractor to run outside the path of the implement.
The drawbar system necessitated the implement having its own running gear (usually wheels) and in the case of a plow, chisel cultivator or harrow, some sort of lift mechanism to raise it out of the ground at turns or for transport. Drawbars necessarily posed a rollover risk depending on how the tractive torque was applied.[27] The Fordson tractor (of which more units were produced and placed in service than any other farm tractor) was extremely prone to roll over backwards due to an excessively short wheelbase. The linkage between the implement and the tractor usually had some slack which could lead to jerky starts and greater wear and tear on the tractor and the equipment.
Drawbars were appropriate to the dawn of mechanization, because they were very simple in concept and because as the tractor replaced the horse, existing horse-drawn implements usually already had running gear. As the history of mechanization progressed, however, the advantages of other hitching systems became apparent, leading to new developments (see below). Depending on the function for which a tractor is used, though, the drawbar is still one of the usual means of attaching an implement to a tractor (see photo at left).
Some tractor manufacturers produced matching equipment that could be directly mounted on the tractor. Examples included front-end loaders, belly mowers, row crop cultivators, corn pickers and corn planters. In most cases, these fixed mounts were proprietary and unique to each make of tractor, so an implement produced by John Deere, for example, could not be attached to a Minneapolis Moline tractor. Another disadvantage was mounting usually required some time and labor, resulting in the implement being semi-permanently attached with bolts or other mounting hardware. Usually, it was impractical to remove the implement and reinstall it on a day-to-day basis. As a result, the tractor was unavailable for other uses and dedicated to a single use for an appreciable period of time. An implement generally would be mounted at the beginning of its season of use (such as tillage, planting or harvesting) and removed only when the likely use season had ended.
The drawbar system was virtually the exclusive method of attaching implements (other than direct attachment to the tractor) before Harry Ferguson developed the three-point hitch. Equipment attached to the three-point hitch can be raised or lowered hydraulically with a control lever. The equipment attached to the three-point hitch is usually completely supported by the tractor. Another way to attach an implement is via a quick hitch, which is attached to the three-point hitch. This enables a single person to attach an implement quicker and put the person in less danger when attaching the implement.
The three-point hitch revolutionized farm tractors and their implements. While the Ferguson system was still under patent, other manufacturers developed new hitching systems to try to fend off some of Ferguson's competitive advantage. For example, International Harvester's Farmall tractors gained a two-point "Fast Hitch", and John Deere had a power lift that was similar to, but not as flexible as, the Ferguson invention. Once the patent protection expired on the three-point hitch, it became an industry standard.
Almost every tractor today features Ferguson's three-point linkage or a derivative of it. This hitch allows for easy attachment and detachment of implements while allowing the implement to function as a part of the tractor, almost as if it were attached by a fixed mount. Previously, when the implement hit an obstacle, the towing link would break or the tractor could flip over. Ferguson's genius was to combine a connection via two lower and one upper lift arms that were connected to a hydraulic lifting ram. The ram was, in turn, connected to the upper of the three links so the increased drag (as when a plough hits a rock) caused the hydraulics to lift the implement until the obstacle was passed.
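To make the draft-sensing behaviour concrete, here is a minimal Python sketch of a draft-control feedback loop. All names, the gain, and the draft values are invented for illustration; Ferguson's actual mechanism was hydro-mechanical rather than programmed.

    # Hypothetical draft-control loop: raise the hitch when drag exceeds the
    # target, lower it when drag falls below; height is normalized 0 (down)..1 (up).
    def draft_control_step(measured_draft, target_draft, hitch_height,
                           gain=0.0001, min_height=0.0, max_height=1.0):
        error = measured_draft - target_draft      # positive when drag is too high
        hitch_height += gain * error               # hydraulics respond to link load
        return max(min_height, min(max_height, hitch_height))

    # A plough hits a rock: draft spikes, so the hitch lifts until it clears.
    height = 0.2
    for draft in (1000, 1000, 4000, 4000, 1000):   # draft force in newtons (invented)
        height = draft_control_step(draft, target_draft=1000, hitch_height=height)
        print(round(height, 3))                    # 0.2, 0.2, 0.5, 0.8, 0.8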
Recently, Bobcat's patent on its front loader connection (inspired by these earlier systems) has expired, and compact tractors are now being outfitted with quick-connect attachments for their front-end loaders.
In addition to towing an implement or supplying tractive power through the wheels, most tractors have a means to transfer power to another machine such as a baler, swather, or mower. Unless it functions solely by pulling it through or over the ground, a towed implement needs its own power source (such as a baler or combine with a separate engine) or else a means of transmitting power from the tractor to the mechanical operations of the equipment.
Early tractors used belts or cables wrapped around the flywheel or a separate belt pulley to power stationary equipment, such as a threshing machine, buzz saw, silage blower, or stationary baler. In most cases, it was not practical for the tractor and equipment to move with a flexible belt or cable between them, so this system required the tractor to remain in one location, with the work brought to the equipment, or the tractor to be relocated at each turn and the power set-up reapplied (as in cable-drawn plowing systems used in early steam tractor operations).
Modern tractors use a power take-off (PTO) shaft to provide rotary power to machinery that may be stationary or pulled. The PTO shaft generally is at the rear of the tractor, and can be connected to an implement that is either towed by a drawbar or mounted on the three-point hitch. This eliminates the need for a separate, implement-mounted power source, which is almost never seen in modern farm equipment. A front PTO is also available as an option on many new tractors.
Virtually all modern tractors can also provide external hydraulic fluid and electrical power to the equipment they are towing, either by hoses or wires.
Modern tractors have many electrical switches and levers in the cab for controlling the multitude of different functions available on the tractor.
Modern farm tractors usually have four or five foot-pedals for the operator on the floor of the tractor.
The pedal on the left is the clutch. The operator presses on this pedal to disengage the transmission, either for shifting gears or for stopping the tractor. Some modern tractors also have (standard or as optional equipment) a button on the gear stick for controlling the clutch, in addition to the standard pedal.
Two of the pedals on the right are the brakes. The left brake pedal stops the left rear wheel and the right brake pedal does the same with the right side. This independent left and right wheel-braking augments the steering of the tractor when only the two rear wheels are driven. This is usually done when it is necessary to make a sharp turn. The split brake pedal is also used in mud or soft soil to control a tire spinning due to loss of traction. The operator presses both pedals together to stop the tractor. Usually a swinging or sliding bolt is provided to lock the two together when desired.
The pedal furthest to the right is the foot throttle. Unlike in automobiles, engine speed can also be controlled from a hand-operated lever ("hand throttle"), which helps provide a constant speed in field work and continuous power for stationary tractors operating an implement by shaft or belt. The foot throttle gives the operator more automobile-like control over the speed of the tractor for road work; it is a feature of more recent tractors, which older tractors often lacked. In the UK, use of the foot pedal to control engine speed while travelling on the road is mandatory. Some tractors, especially those designed for row-crop work, have a "decelerator" pedal, which operates in the reverse fashion to an automobile throttle: the pedal is pushed down to slow the engine. This allows fine control over the speed of the tractor when maneuvering at the end of crop rows in fields – the operating speed of the engine is set using the hand throttle, and to slow the tractor for the turn, the operator simply presses the pedal, turns, and releases it once the turn is completed, rather than having to alter the setting of the hand throttle twice during the maneuver.
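As a rough illustration of that inverted mapping, the following Python sketch (all names and rpm figures are invented, not from any tractor maker) shows how pressing a decelerator pedal pulls engine speed down from the hand-throttle setting toward idle:

    # Hypothetical decelerator: pedal at 0.0 is released, 1.0 is fully pressed.
    def engine_rpm(hand_throttle_rpm, decel_pedal, idle_rpm=800):
        return hand_throttle_rpm - decel_pedal * (hand_throttle_rpm - idle_rpm)

    print(engine_rpm(2200, 0.0))   # 2200.0 rpm: working speed set by the hand throttle
    print(engine_rpm(2200, 1.0))   # 800.0 rpm: pedal fully pressed for the headland turn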
A fifth pedal is traditionally included just in front of the driver's seat (often pressed with the operator's heel) to operate the rear differential lock (diff-lock), which prevents wheel slip. The differential normally allows the outside wheel to travel faster than the inside wheel during a turn. However, in low-traction conditions on a soft surface, the same mechanism could allow one wheel to slip, further reducing traction. The diff-lock overrides this, forcing both wheels to turn at the same speed, reducing wheel slip and improving traction. Care must be taken to unlock the differential before turning, usually by hitting the pedal a second time, since a tractor with good traction cannot complete a turn with the diff-lock engaged. In modern tractors, this pedal is replaced with an electrical switch.
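The geometry behind this is easy to check numerically. The sketch below (speed, turn radius, and track width are invented example values) computes the inner and outer wheel speeds an open differential must allow; a locked differential forces both wheels to the same speed, which is why the tractor resists turning with the diff-lock engaged:

    # Wheel speeds for a turn of the given radius, measured at the vehicle
    # centerline; track_width is the distance between the rear wheels.
    def wheel_speeds(vehicle_speed, turn_radius, track_width):
        inner = vehicle_speed * (turn_radius - track_width / 2) / turn_radius
        outer = vehicle_speed * (turn_radius + track_width / 2) / turn_radius
        return inner, outer

    inner, outer = wheel_speeds(vehicle_speed=5.0, turn_radius=4.0, track_width=1.8)
    print(round(inner, 2), round(outer, 2))   # 3.88 6.12 m/s: locked wheels would scrub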
With the rise of indirect computer control of functions in modern tractors, many functions once controlled with levers have been replaced by some model of electrical switch.
Until the beginning of the 1960s, tractors had a single register of gears, hence one gear stick, often with three to five forward gears and one reverse. Then, group gears were introduced, and another gear stick was added. Later, control of the forward-reverse direction was moved to a special stick attached at the side of the steering wheel, which allowed forward or reverse travel in any gear. Nowadays, with CVTs or other clutch-free gear types, fewer sticks control the transmission, and some are replaced with electrical switches or are totally computer-controlled.
The three-point hitch was controlled with a lever for adjusting the position, or as with the earliest ones, just the function for raising or lowering the hitch. With modern electrical systems, it is often replaced with a potentiometer for the lower bound position and another one for the upper bound, and a switch allowing automatic adjustment of the hitch between these settings.
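A minimal Python sketch of that bounded control follows; the 0–1 position scale and all names are assumptions for illustration, not any vendor's API:

    # The two potentiometers set the lower and upper bound positions (0..1);
    # the controller clamps any requested hitch position to that band.
    def hitch_target(requested, lower_pot, upper_pot):
        return max(lower_pot, min(upper_pot, requested))

    print(hitch_target(0.9, lower_pot=0.2, upper_pot=0.7))   # 0.7: capped at the upper bound
    print(hitch_target(0.1, lower_pot=0.2, upper_pot=0.7))   # 0.2: capped at the lower bound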
The external hydraulics also originally had levers, but now are often replaced with some form of electrical switch; the same is true for the power take-off shaft.
Agriculture in the United States is one of the most hazardous industries, surpassed only by mining and construction. No other farm machine is so identified with the hazards of production agriculture as the tractor.[28] Tractor-related injuries account for approximately 32% of the fatalities and 6% of the nonfatal injuries in agriculture, and over 50% of these fatalities are attributed to tractor overturns.[29]
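Read together, the two rates compound. A rough Python illustration follows; the annual total is an invented placeholder, not a figure from the cited sources:

    # Hypothetical illustration of how the quoted rates compound.
    annual_farm_fatalities = 400                      # invented placeholder total
    tractor_related = annual_farm_fatalities * 0.32   # ~32% of fatalities involve tractors
    overturn_deaths = tractor_related * 0.50          # over half of those are overturns
    print(tractor_related, overturn_deaths)           # 128.0 64.0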
The roll-over protection structure (ROPS) and seat belt, when worn,[30] are the most important safety devices to protect operators from death during tractor overturns.[31][32]
Modern tractors have a ROPS to prevent an operator from being crushed if the tractor turns over. The ROPS does not prevent tractor overturns; rather, it prevents the operator from being crushed during an overturn.[33] This is especially important in open-air tractors, where the ROPS is a steel beam that extends above the operator's seat. For tractors with operator cabs, the ROPS is part of the frame of the cab. A ROPS with enclosed cab further reduces the likelihood of serious injury because the operator is protected by the sides and windows of the cab.
These structures were first required by legislation in Sweden in 1959. Before they were required, some farmers died when their tractors rolled on top of them. Row-crop tractors, before ROPS, were particularly dangerous because of their 'tricycle' design with the two front wheels spaced close together and angled inward toward the ground. Some farmers were killed by rollovers while operating tractors along steep slopes. Others have been killed while attempting to tow or pull an excessive load from above axle height, or when cold weather caused the tires to freeze to the ground, in both cases causing the tractor to pivot around the rear axle.[28] ROPS were first required in the United States in 1986, but this requirement did not retroactively apply to tractors produced before this year; therefore, adoption of ROPS has been incomplete in the farming community. To combat this problem, CROPS (cost-effective roll-over protection structures) have been developed to encourage farmers to retrofit older tractors.[32]
For the ROPS to work as designed, the operator must stay within its protective frame. This means the operator must wear the seat belt; not wearing it may defeat the primary purpose of the ROPS.
The most common use of the term "tractor" is for the vehicles used on farms. The farm tractor is used for pulling or pushing agricultural machinery or trailers, for plowing, tilling, disking, harrowing, planting, and similar tasks.
A variety of specialty farm tractors have been developed for particular uses. These include "row crop" tractors with adjustable tread width to allow the tractor to pass down rows of cereals, maize, tomatoes or other crops without crushing the plants, "wheatland" or "standard" tractors with fixed wheels and a lower center of gravity for plowing and other heavy field work for broadcast crops, and "high crop" tractors with adjustable tread and increased ground clearance, often used in the cultivation of cotton and other high-growing row crop plant operations, and "utility tractors", typically smaller tractors with a low center of gravity and short turning radius, used for general purposes around the farmstead. Many utility tractors are used for nonfarm grading, landscape maintenance and excavation purposes, particularly with loaders, backhoes, pallet forks and similar devices. Small garden or lawn tractors designed for suburban and semirural gardening and landscape maintenance also exist in a variety of configurations.
Some farm-type tractors are found elsewhere than on farms: with large universities' gardening departments, in public parks, or for highway workman use with blowtorch cylinders strapped to the sides and a pneumatic drill air compressor permanently fastened over the power take-off. These are often fitted with grass (turf) tires, which are less damaging to soft surfaces than agricultural tires.
Space technology has been incorporated into agriculture in the form of GPS devices and robust on-board computers installed as optional features on farm tractors. These technologies are used in modern, precision farming techniques. Spin-offs from the space race have facilitated automation in plowing and the use of autosteer systems: GPS-guided tractors that are manned but steered only at the end of a row, the idea being neither to overlap and use more fuel nor to leave streaks when performing jobs such as cultivating. Several tractor companies have also been working on producing a driverless tractor.
The durability and engine power of tractors made them very suitable for engineering tasks. Tractors can be fitted with engineering tools such as dozer blades, buckets, hoes, rippers, etc. The most common attachments for the front of a tractor are dozer blades or buckets. When attached to engineering tools, the tractor is called an engineering vehicle.
A bulldozer is a track-type tractor with a blade attached in the front and a rope-winch behind. Bulldozers are very powerful tractors and have excellent ground-hold, as their main tasks are to push or drag.
Bulldozers have been further modified over time to evolve into new machines which are capable of working in ways that the original bulldozer can not. One example is that loader tractors were created by removing the blade and substituting a large volume bucket and hydraulic arms which can raise and lower the bucket, thus making it useful for scooping up earth, rock and similar loose material to load it into trucks.
A front-loader or loader is a tractor with an engineering tool which consists of two hydraulic powered arms on either side of the front engine compartment and a tilting implement. This is usually a wide-open box called a bucket, but other common attachments are a pallet fork and a bale grappler.
Other modifications to the original bulldozer include making the machine smaller to let it operate in small work areas where movement is limited. Also, tiny wheeled loaders, officially called skid-steer loaders, but nicknamed "Bobcat" after the original manufacturer, are particularly suited for small excavation projects in confined areas.
The most common variation of the classic farm tractor is the backhoe, also called a backhoe-loader. As the name implies, it has a loader assembly on the front and a backhoe on the back. Backhoes attach to a three-point hitch on farm or industrial tractors. Industrial tractors are often heavier in construction, particularly with regards to the use of a steel grill for protection from rocks and the use of construction tires. When the backhoe is permanently attached, the machine usually has a seat that can swivel to the rear to face the hoe controls. Removable backhoe attachments almost always have a separate seat on the attachment.
Backhoe-loaders are very common and can be used for a wide variety of tasks: construction, small demolitions, light transportation of building materials, powering building equipment, digging holes, loading trucks, breaking asphalt and paving roads. Some buckets have retractable bottoms, enabling them to empty their loads more quickly and efficiently. Buckets with retractable bottoms are also often used for grading and scratching off sand. The front assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools.
Their relatively small frames and precise controls make backhoe-loaders very useful and common in urban engineering projects, such as construction and repairs in areas too small for larger equipment. Their versatility and compact size make them one of the most popular urban construction vehicles.
In the UK and Ireland, the word "JCB" is used colloquially as a genericized trademark for any such type of engineering vehicle. The term JCB now appears in the Oxford English Dictionary, although it is still legally a trademark of J. C. Bamford Ltd. The term "digger" is also commonly used.
A compact utility tractor (CUT) is a smaller version of an agricultural tractor, but designed primarily for landscaping and estate management tasks rather than for planting and harvesting on a commercial scale. Typical CUTs range from 20 to 50 horsepower (14.9 to 37.3 kW), with available power take-off (PTO) horsepower ranging from 15–45 horsepower (11.2–33.6 kW). CUTs are often equipped with both a mid-mounted and a standard rear PTO, especially those below 40 horsepower (29.8 kW). The mid-mount PTO shaft typically rotates at or near 2000 rpm and is typically used to power mid-mount finish mowers, front-mounted snow blowers or front-mounted rotary brooms. The rear PTO is standardized at 540 rpm for the North American markets, but in some parts of the world, a dual 540/1000 rpm PTO is standard, and implements are available for either standard in those markets.
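The horsepower/kilowatt pairs quoted above follow from the standard conversion 1 hp ≈ 0.7457 kW, as this small Python check shows:

    def hp_to_kw(hp):
        return hp * 0.7457    # 1 mechanical horsepower = 745.7 W

    print(round(hp_to_kw(20), 1))   # 14.9 kW
    print(round(hp_to_kw(50), 1))   # 37.3 kW
    print(round(hp_to_kw(45), 1))   # 33.6 kW, matching the upper PTO figure above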
One of the most common attachments for a CUT is the front-end loader or FEL. Like the larger agricultural tractors, a CUT will have an adjustable, hydraulically controlled three-point hitch. Typically, a CUT will have four-wheel drive, or more correctly four-wheel assist. Modern CUTs often feature hydrostatic transmissions, but many variants of gear-drive transmissions are also offered, from low-priced, simple gear transmissions to synchronized transmissions to advanced glide-shift transmissions. All modern CUTs feature government-mandated roll-over protection structures, just like agricultural tractors. The most well-known brands in North America include Kubota, John Deere Tractor, New Holland Ag, Case-Farmall and Massey-Ferguson. Although less common, compact backhoes are often attached to compact utility tractors.
Compact utility tractors require special, smaller implements than full-sized agricultural tractors. Very common implements include the box blade, the grader blade, the landscape rake, the post hole digger (or post hole auger), the rotary cutter (slasher or a brush hog), a mid- or rear-mount finish mower, a broadcast seeder, a subsoiler and the rototiller (rotary tiller). In northern climates, a rear-mounted snow blower is very common; some smaller CUT models are available with front-mounted snow blowers powered by mid-PTO shafts. Implement brands outnumber tractor brands, so CUT owners have a wide selection of implements.
For small-scale farming or large-scale gardening, some planting and harvesting implements are sized for CUTs. One- and two-row planting units are commonly available, as are cultivators, sprayers and different types of seeders (slit, rotary and drop). One of the first CUTs offered for small farms of three to 30 acres and for small jobs on larger farms was a three-wheeled unit, with the rear wheel being the drive wheel, offered by Sears & Roebuck in 1954 and priced at $598 for the basic model.[34]
The earliest tractors were called "standard" tractors, and were intended almost solely for plowing and harrowing before planting, which were difficult tasks for humans and draft animals. They were characterized by a low, rearward seating position, fixed-width tread, and low ground clearance. These early tractors were cumbersome, and were not well-suited to getting into a field of already-planted row crops to do weed control. The "standard" tractor definition is no longer in current use.
A general-purpose or row-crop tractor is tailored specifically to the growing of crops grown in rows, and most especially to cultivating these crops. These tractors are universal machines, capable of both primary tillage and cultivation of a crop. The "row-crop" or "general-purpose" designation is no longer in current use.
The row-crop tractor category evolved rather than appearing overnight, but the International Harvester (IH) Farmall is often considered the "first" tractor of the category. Some earlier tractors of the 1910s and 1920s approached the form factor from the heavier side, as did motorized cultivators from the lighter side, but the Farmall brought all of the salient features together into one package, with a capable distribution network to ensure its commercial success. In the new form factor that the Farmall popularized, the cultivator was mounted in the front so it was easily visible. Additionally, the tractor had a narrow front end; the front tires were spaced very closely and angled in towards the bottom. The back wheels straddled two rows, and the unit could cultivate four rows at once.
From 1924 until 1963, Farmalls were the largest selling row-crop tractors.
To compete, John Deere designed the Model C, which had a wide front and could cultivate three rows at once. Only 112 prototypes were made, as Deere realized sales would be lost to the Farmall if its model did less. In 1928, Deere released the Model C anyway, renamed the Model GP (General Purpose) to avoid confusion with the Model D when orders were placed over then-unclear telephone lines.[35]
Oliver refined its "Row Crop" model early in 1930.[36] Until 1935, the 18–27 was Oliver–Hart-Parr's only row-crop tractor.[37]
Many Oliver row-crop models are referred to as "Oliver Row Crop 77", "Oliver Row Crop 88", etc.
Many early row-crop tractors had a tricycle design with two closely spaced front tires, and some even had a single front tire. This made it dangerous to operate on the side of a steep hill; as a result, many farmers died from tractor rollovers. Also, early row-crop tractors had no rollover protection system (ROPS), meaning if the tractor flipped back, the operator could be crushed. Sweden was the first country which passed legislation requiring ROPS, in 1959.
Over 50% of tractor-related injuries and deaths are attributed to tractor rollover.[29]
Canadian agricultural equipment manufacturer Versatile makes row-crop tractors of 265 to 365 horsepower (198 to 272 kW), powered by an 8.9-liter Cummins diesel engine.[38][39]
Case IH and New Holland of CNH Industrial both produce high horsepower front-wheel-assist row crop tractors with available rear tracks.[40] Case IH also has a 500 hp (373 kW) four-wheel drive track system called Rowtrac.[41]
John Deere has an extensive line of available row crop tractors ranging from 140 to 400 horsepower (104 to 298 kW).[42]
Modern row crop tractors have rollover protection systems in the form of a reinforced cab or a roll bar.
Garden tractors (mini tractors) are small, light tractors designed for use in domestic gardens and small estates. Garden tractors are designed for cutting grass, snow removal, and small property cultivation. In the U.S., the term riding lawn mower today often is used to refer to mid- or rear-engined machines. Front-engined tractor layout machines designed primarily for cutting grass and light towing are called lawn tractors; heavier-duty tractors of similar size are garden tractors. Garden tractors are capable of mounting a wider array of attachments than lawn tractors. Unlike lawn tractors and rear-engined riding mowers, garden tractors are powered by horizontal-crankshaft engines with a belt-drive to transaxle-type transmissions (usually of four or five speeds, although some may also have two-speed reduction gearboxes, drive-shafts, or hydrostatic or hydraulic drives). Garden tractors from Wheel Horse, Cub Cadet, Economy (Power King), John Deere, Massey Ferguson and Case Ingersoll are built in this manner. The engines are generally one- or two-cylinder petrol (gasoline) engines, although diesel engine models are also available, especially in Europe. Typically, diesel-powered garden tractors are larger and heavier-duty than gasoline-powered units and compare more similarly to compact utility tractors.
Visually, the distinction between a garden tractor and a lawn tractor is often hard to make – generally, garden tractors are more sturdily built, with stronger frames, 12-inch or larger wheels mounted with multiple lugs (most lawn tractors have a single bolt or clip on the hub), heavier transaxles, and ability to accommodate a wide range of front, belly, and rear mounted attachments.
Although most people think first of four-wheel vehicles when they think of tractors, a tractor may have one or more axles. The key benefit is the power itself, which only takes one axle to provide. Single-axle tractors, more often called two-wheel tractors or walk-behind tractors, have had many users since the beginning of internal combustion engine tractors. They tend to be small and affordable. This was especially true before the 1960s, when a walk-behind tractor could often be more affordable than a two-axle tractor of comparable power. Today's compact utility tractors and advanced garden tractors may negate most of that market advantage, but two-wheel tractors still enjoy a loyal following, especially where an already-paid-for two-wheel tractor is financially superior to a compact or garden tractor that would have to be purchased. Countries where two-wheel tractors are especially prevalent today include Thailand, China, Bangladesh, India, and other Southeast Asia countries.
Tractors tailored to use in fruit orchards typically have features suited to passing under tree branches with impunity. These include a lower overall profile; reduced tree-branch-snagging risk (via underslung exhaust pipes rather than smoke-stack-style exhaust, and large sheetmetal cowlings and fairings that allow branches to deflect and slide off rather than catch); spark arrestors on the exhaust tips; and often wire cages to protect the operator from snags.
The ingenuity of farm mechanics, coupled in some cases with OEM or aftermarket assistance, has often resulted in the conversion of automobiles for use as farm tractors. In the United States, this trend was especially strong from the 1910s through 1950s. It began early in the development of vehicles powered by internal combustion engines, with blacksmiths and amateur mechanics tinkering in their shops. Especially during the interwar period, dozens of manufacturers (Montgomery Ward among them) marketed aftermarket kits for converting Ford Model Ts for use as tractors.[43] (These were sometimes called 'Hoover wagons' during the Great Depression, although this term was usually reserved for automobiles converted to horse-drawn buggy use when gasoline was unavailable or unaffordable. During the same period, another common name was "Doodlebug".) Ford even considered producing an "official" optional kit.[44] Many Model A Fords also were converted for this purpose. In later years, some farm mechanics have been known to convert more modern trucks or cars for use as tractors, more often as curiosities or for recreational purposes (rather than out of the earlier motives of pure necessity or frugality).
During World War II, a shortage of tractors in Sweden led to the development of the so-called "EPA" tractor (EPA was a chain of discount stores and it was often used to signify something lacking in quality). An EPA tractor was simply an automobile, truck or lorry, with the passenger space cut off behind the front seats, equipped with two gearboxes in a row. When done to an older car with a ladder frame, the result was not dissimilar to a tractor and could be used as one. After the war it remained popular, now not as a farm vehicle, but as a way for young people without a driver's license to own something similar to a car. Since it was legally seen as a tractor, it could be driven from 16 years of age and only required a tractor license. Eventually, the legal loophole was closed and no new EPA tractors were allowed to be made, but the remaining ones were still legal, which led to inflated prices and many protests from people who preferred EPA tractors to ordinary cars.
The German occupation of Italy during World War II resulted in a severe shortage of mechanized farm equipment. The destruction of tractors was a sort of scorched-earth strategy used to reduce the independence of the conquered. The shortage of tractors in that area of Europe was the origin of Lamborghini. The war was also the inspiration for dual-purpose vehicles such as the Land Rover. Based on the Jeep, the company made a vehicle that combined PTO, tillage, and transportation.
In March 1975, a similar type of vehicle was introduced in Sweden, the A tractor [from arbetstraktor (work tractor)]; the main difference is an A tractor has a top speed of 30 km/h. This is usually done by fitting two gearboxes in a row and not using one of them. The Volvo Duett was, for a long time, the primary choice for conversion to an EPA or A tractor, but since supplies have dried up, other cars have been used, in most cases another Volvo. The SFRO is a Swedish organization advocating homebuilt and modified vehicles.
Another type of homemade tractors are ones that are fabricated from scratch. The "from scratch" description is relative, as often individual components will be repurposed from earlier vehicles or machinery (e.g., engines, gearboxes, axle housings), but the tractor's overall chassis is essentially designed and built by the owner (e.g., a frame is welded from bar stock—channel stock, angle stock, flat stock, etc.). As with automobile conversions, the heyday of this type of tractor, at least in developed economies, lies in the past, when there were large populations of blue-collar workers for whom metalworking and farming were prevalent parts of their lives. (For example, many 19th- and 20th-century New England and Midwestern machinists and factory workers had grown up on farms.) Backyard fabrication was a natural activity to them (whereas it might seem daunting to most people today).
The term "tractor" (US and Canada) or "tractor unit" (UK) is also applied to:
Diesel-electric locomotive at work
A Trackmobile 4150
Aircraft pushback tractor
Road tractor pulling a flatbed trailer
Unimog 70200
Standard bucket loader - The standard bucket, often called a general-purpose (GP) bucket, is the most common loader bucket, used for loading and unloading material; such loaders are also colloquially called JCB tractors.[45]
Telescopic loader - A telescopic loader arm reaches higher than a normal loader; the maximum height can reach up to 18 meters on a standard 60 hp tractor. It is generally used for lightweight material, such as husks, that must be dumped at a high point.[45]
Grabbing fork bucket loader - The grabbing fork bucket is used where the bucket must hold the material so that it cannot spill out; it is most commonly used in paper mills, the cotton industry, and the scrap and waste collection industry.[45]
Radial loader - The radial loader offers maximum lift height and is used for multiple purposes with different kinds of buckets. Its bucket can rotate 360°, which helps operators pick up material such as wood and load trucks.[45]
Manure fork loader - The manure fork loader is used for loading loose, husk-like material; it is rarely used in industry.[45]
Some of the many tractor manufacturers and brands worldwide include:
In addition to commercial manufacturers, the Open Source Ecology group has developed several working prototypes of an open source hardware tractor called the LifeTrac as part of its Global Village Construction Set.
An unusual application – road roller powered by a tractor drive
The First Tractor, a painting by Vladimir Krikhatsky.
A single tractor in Brazil
Tractor that is used for self-sufficiency purposes in Germany
A scale model of a modern Mahindra tractor in Punjab, India
Farm tractor in Balnain, Scotland
Alvin O. Lombard of Waterville, Maine, invented a tractor in 1901 for hauling logs, as displayed at the Maine State Museum in the capital city of Augusta. Known as "Lombard Log Haulers," these vehicles revolutionized logging in Maine.
Mercedes-Benz tractor
en/1966.html.txt
ADDED
@@ -0,0 +1,84 @@
Feudalism was a combination of legal, economic, military and cultural customs that flourished in Medieval Europe between the 9th and 15th centuries. Broadly defined, it was a way of structuring society around relationships that were derived from the holding of land in exchange for service or labour.
Although it is derived from the Latin word feodum or feudum (fief),[1] which was used during the Medieval period, the term feudalism and the system which it describes were not conceived of as a formal political system by the people who lived during the Middle Ages.[2] The classic definition, by François-Louis Ganshof (1944),[3] describes a set of reciprocal legal and military obligations which existed among the warrior nobility and revolved around the three key concepts of lords, vassals and fiefs.[3]
A broader definition of feudalism, as described by Marc Bloch (1939), includes not only the obligations of the warrior nobility but the obligations of all three estates of the realm: the nobility, the clergy, and the peasantry, all of whom were bound by a system of manorialism; this is sometimes referred to as a "feudal society". Since the publication of Elizabeth A. R. Brown's "The Tyranny of a Construct" (1974) and Susan Reynolds's Fiefs and Vassals (1994), there has been ongoing inconclusive discussion among medieval historians as to whether feudalism is a useful construct for understanding medieval society.[4][5][6][7][8][9]
There is no commonly accepted modern definition of feudalism, at least among scholars.[4][7] The adjective feudal was coined in the 17th century, and the noun feudalism, often used in a political and propaganda context, was not coined until the 19th century,[4] from the French féodalité (feudality), itself an 18th-century creation.
According to a classic definition by François-Louis Ganshof (1944),[3] feudalism describes a set of reciprocal legal and military obligations which existed among the warrior nobility and revolved around the three key concepts of lords, vassals and fiefs,[3] though Ganshof himself noted that his treatment was only related to the "narrow, technical, legal sense of the word".
A broader definition, as described in Marc Bloch's Feudal Society (1939),[10] includes not only the obligations of the warrior nobility but the obligations of all three estates of the realm: the nobility, the clergy, and those who lived off their labor, most directly the peasantry which was bound by a system of manorialism; this order is often referred to as a "feudal society", echoing Bloch's usage.
Outside its European context,[4] the concept of feudalism is often used by analogy, most often in discussions of feudal Japan under the shoguns, and sometimes in discussions of the Zagwe dynasty in medieval Ethiopia,[11] which had some feudal characteristics (sometimes called "semifeudal").[12][13] Some have taken the feudalism analogy further, seeing feudalism (or traces of it) in places as diverse as China during the Spring and Autumn period, ancient Egypt, the Parthian empire, the Indian subcontinent and the Antebellum and Jim Crow American South.[11] Wu Ta-k'un argued that China's fengjian, being kinship-based and tied to land which was controlled by a king, was entirely distinct from feudalism, despite the fact that in translation each term is frequently rendered by the other.[14]
The term feudalism has also been applied—often inappropriately or pejoratively—to non-Western societies where institutions and attitudes which are similar to those which existed in medieval Europe are perceived to prevail.[15] Some historians and political theorists believe that the term feudalism has been deprived of specific meaning by the many ways it has been used, leading them to reject it as a useful concept for understanding society.[4][5]
The term "féodal" was used in 17th-century French legal treatises (1614)[16][17] and translated into English legal treatises as an adjective, such as "feodal government".
In the 18th century, Adam Smith, seeking to describe economic systems, effectively coined the forms "feudal government" and "feudal system" in his book Wealth of Nations (1776).[18] In the 19th century the adjective "feudal" evolved into a noun: "feudalism".[18] The term feudalism is recent, first appearing in French in 1823, Italian in 1827, English in 1839, and in German in the second half of the 19th century.[18]
The term "feudal" or "feodal" is derived from the medieval Latin word feodum. The etymology of feodum is complex with multiple theories, some suggesting a Germanic origin (the most widely held view) and others suggesting an Arabic origin. Initially in medieval Latin European documents, a land grant in exchange for service was called a beneficium (Latin).[19] Later, the term feudum, or feodum, began to replace beneficium in the documents.[19] The first attested instance of this is from 984, although more primitive forms were seen up to one-hundred years earlier.[19] The origin of the feudum and why it replaced beneficium has not been well established, but there are multiple theories, described below.[19]
The most widely held theory was proposed by Johan Hendrik Caspar Kern in 1870,[20][21] being supported by, amongst others, William Stubbs[19][22] and Marc Bloch.[19][23][24] Kern derived the word from a putative Frankish term *fehu-ôd, in which *fehu means "cattle" and -ôd means "goods", implying "a moveable object of value".[23][24] Bloch explains that by the beginning of the 10th century it was common to value land in monetary terms but to pay for it with moveable objects of equivalent value, such as arms, clothing, horses or food. This was known as feos, a term that took on the general meaning of paying for something in lieu of money. This meaning was then applied to land itself, in which land was used to pay for fealty, such as to a vassal. Thus the old word feos meaning movable property changed little by little to feus meaning the exact opposite: landed property.[23][24]
Another theory was put forward by Archibald R. Lewis.[19] Lewis said the origin of 'fief' is not feudum (or feodum), but rather foderum, the earliest attested use being in Astronomus's Vita Hludovici (840).[25] In that text is a passage about Louis the Pious that says annona militaris quas vulgo foderum vocant, which can be translated as "Louis forbade that military provender (which they popularly call 'fodder') be furnished".[19]
Another theory by Alauddin Samarrai suggests an Arabic origin, from fuyū (the plural of fay, which literally means "the returned", and was used especially for 'land that has been conquered from enemies that did not fight').[19][26] Samarrai's theory is that early forms of 'fief' include feo, feu, feuz, feuum and others, the plurality of forms strongly suggesting origins from a loanword. The first use of these terms is in Languedoc, one of the least Germanic areas of Europe and bordering Muslim Spain. Further, the earliest use of feuum (as a replacement for beneficium) can be dated to 899, the same year a Muslim base at Fraxinetum (La Garde-Freinet) in Provence was established. It is possible, Samarrai says, that French scribes, writing in Latin, attempted to transliterate the Arabic word fuyū (the plural of fay), which was being used by the Muslim invaders and occupiers at the time, resulting in a plurality of forms – feo, feu, feuz, feuum and others – from which eventually feudum derived. Samarrai, however, also advises to handle this theory with care, as Medieval and Early Modern Muslim scribes often used etymologically "fanciful roots" in order to claim the most outlandish things to be of Arabian or Muslim origin.[26]
Feudalism, in its various forms, usually emerged as a result of the decentralization of an empire: especially in the Carolingian Empire of the 8th century AD, which lacked the bureaucratic infrastructure necessary to support cavalry without allocating land to these mounted troops. Mounted soldiers began to secure a system of hereditary rule over their allocated land, and their power over the territory came to encompass the social, political, judicial, and economic spheres.[27]
These acquired powers significantly diminished unitary power in these empires. Only when the infrastructure existed to maintain unitary power—as with the European monarchies—did feudalism begin to yield to this new power structure and eventually disappear.[27]
The classic François-Louis Ganshof version of feudalism[4][3] describes a set of reciprocal legal and military obligations which existed among the warrior nobility, revolving around the three key concepts of lords, vassals and fiefs. In broad terms a lord was a noble who held land, a vassal was a person who was granted possession of the land by the lord, and the land was known as a fief. In exchange for the use of the fief and protection by the lord, the vassal would provide some sort of service to the lord. There were many varieties of feudal land tenure, consisting of military and non-military service. The obligations and corresponding rights between lord and vassal concerning the fief form the basis of the feudal relationship.[3]
Before a lord could grant land (a fief) to someone, he had to make that person a vassal. This was done at a formal and symbolic ceremony called a commendation ceremony, which was composed of the two-part act of homage and oath of fealty. During homage, the lord and vassal entered into a contract in which the vassal promised to fight for the lord at his command, whilst the lord agreed to protect the vassal from external forces. Fealty comes from the Latin fidelitas and denotes the fidelity owed by a vassal to his feudal lord. "Fealty" also refers to an oath that more explicitly reinforces the commitments of the vassal made during homage. Such an oath follows homage.[28]
Once the commendation ceremony was complete, the lord and vassal were in a feudal relationship with agreed obligations to one another. The vassal's principal obligation to the lord was to "aid", or military service. Using whatever equipment the vassal could obtain by virtue of the revenues from the fief, the vassal was responsible to answer calls to military service on behalf of the lord. This security of military help was the primary reason the lord entered into the feudal relationship. In addition, the vassal could have other obligations to his lord, such as attendance at his court, whether manorial, baronial, both termed court baron, or at the king's court.[29]
It could also involve the vassal providing "counsel", so that if the lord faced a major decision he would summon all his vassals and hold a council. At the level of the manor this might be a fairly mundane matter of agricultural policy, but also included sentencing by the lord for criminal offences, including capital punishment in some cases. Concerning the king's feudal court, such deliberation could include the question of declaring war. These are examples; depending on the period of time and location in Europe, feudal customs and practices varied; see examples of feudalism.
In its origin, the feudal grant of land had been seen in terms of a personal bond between lord and vassal, but with time and the transformation of fiefs into hereditary holdings, the nature of the system came to be seen as a form of "politics of land" (an expression used by the historian Marc Bloch). The 11th century in France saw what has been called by historians a "feudal revolution" or "mutation" and a "fragmentation of powers" (Bloch) that was unlike the development of feudalism in England or Italy or Germany in the same period or later:[30] Counties and duchies began to break down into smaller holdings as castellans and lesser seigneurs took control of local lands, and (as comital families had done before them) lesser lords usurped/privatized a wide range of prerogatives and rights of the state, most importantly the highly profitable rights of justice, but also travel dues, market dues, fees for using woodlands, obligations to use the lord's mill, etc.[31] (what Georges Duby called collectively the "seigneurie banale"[31]). Power in this period became more personal.[32]
This "fragmentation of powers" was not, however, systematic throughout France, and in certain counties (such as Flanders, Normandy, Anjou, Toulouse), counts were able to maintain control of their lands into the 12th century or later.[33] Thus, in some regions (like Normandy and Flanders), the vassal/feudal system was an effective tool for ducal and comital control, linking vassals to their lords; but in other regions, the system led to significant confusion, all the more so as vassals could and frequently did pledge themselves to two or more lords. In response to this, the idea of a "liege lord" was developed (where the obligations to one lord are regarded as superior) in the 12th century.[34]
Most of the military aspects of feudalism effectively ended by about 1500.[35] This was partly because the military shifted from armies consisting of the nobility to professional fighters, reducing the nobility's claim on power, but also because the Black Death reduced the nobility's hold over the lower classes. Vestiges of the feudal system hung on in France until the French Revolution of the 1790s, and the system lingered on in parts of Central and Eastern Europe as late as the 1850s. Slavery in Romania was abolished in 1856. Russia finally abolished serfdom in 1861.[36][37]
Even when the original feudal relationships had disappeared, there were many institutional remnants of feudalism left in place. Historian Georges Lefebvre explains how at an early stage of the French Revolution, on just one night of August 4, 1789, France abolished the long-lasting remnants of the feudal order. It announced, "The National Assembly abolishes the feudal system entirely." Lefebvre explains:
Without debate the Assembly enthusiastically adopted equality of taxation and redemption of all manorial rights except for those involving personal servitude—which were to be abolished without indemnification. Other proposals followed with the same success: the equality of legal punishment, admission of all to public office, abolition of venality in office, conversion of the tithe into payments subject to redemption, freedom of worship, prohibition of plural holding of benefices ... Privileges of provinces and towns were offered as a last sacrifice.[38]
Originally the peasants were supposed to pay for the release of seigneurial dues; these dues affected more than a quarter of the farmland in France and provided most of the income of the large landowners.[39] The majority refused to pay and in 1793 the obligation was cancelled. Thus the peasants got their land free, and also no longer paid the tithe to the church.[40]
The phrase "feudal society" as defined by Marc Bloch[10] offers a wider definition than Ganshof's and includes within the feudal structure not only the warrior aristocracy bound by vassalage, but also the peasantry bound by manorialism, and the estates of the Church. Thus the feudal order embraces society from top to bottom, though the "powerful and well-differentiated social group of the urban classes" came to occupy a distinct position to some extent outside the classic feudal hierarchy.
The idea of feudalism was unknown and the system it describes was not conceived of as a formal political system by the people living in the Medieval Period. This section describes the history of the idea of feudalism, how the concept originated among scholars and thinkers, how it changed over time, and modern debates about its use.
The concept of a feudal state or period, in the sense of either a regime or a period dominated by lords who possess financial or social power and prestige, became widely held in the middle of the 18th century, as a result of works such as Montesquieu's De L'Esprit des Lois (1748; published in English as The Spirit of the Laws), and Henri de Boulainvilliers’s Histoire des anciens Parlements de France (1737; published in English as An Historical Account of the Ancient Parliaments of France or States-General of the Kingdom, 1739).[18] In the 18th century, writers of the Enlightenment wrote about feudalism to denigrate the antiquated system of the Ancien Régime, or French monarchy. This was the Age of Enlightenment when writers valued reason and the Middle Ages were viewed as the "Dark Ages". Enlightenment authors generally mocked and ridiculed anything from the "Dark Ages" including feudalism, projecting its negative characteristics on the current French monarchy as a means of political gain.[41] For them "feudalism" meant seigneurial privileges and prerogatives. When the French Constituent Assembly abolished the "feudal regime" in August 1789 this is what was meant.
Adam Smith used the term "feudal system" to describe a social and economic system defined by inherited social ranks, each of which possessed inherent social and economic privileges and obligations. In such a system wealth derived from agriculture, which was arranged not according to market forces but on the basis of customary labour services owed by serfs to landowning nobles.[42]
Karl Marx also used the term in the 19th century in his analysis of society's economic and political development, describing feudalism (or more usually feudal society or the feudal mode of production) as the order coming before capitalism. For Marx, what defined feudalism was the power of the ruling class (the aristocracy) in their control of arable land, leading to a class society based upon the exploitation of the peasants who farm these lands, typically under serfdom and principally by means of labour, produce and money rents.[43] Marx thus defined feudalism primarily by its economic characteristics.
He also took it as a paradigm for understanding the power-relationships between capitalists and wage-labourers in his own time: "in pre-capitalist systems it was obvious that most people did not control their own destiny—under feudalism, for instance, serfs had to work for their lords. Capitalism seems different because people are in theory free to work for themselves or for others as they choose. Yet most workers have as little control over their lives as feudal serfs."[44] Some later Marxist theorists (e.g. Eric Wolf) have applied this label to include non-European societies, grouping feudalism together with Imperial Chinese and pre-Columbian Incan societies as 'tributary'.
In the late 19th and early 20th centuries, John Horace Round and Frederic William Maitland, both historians of medieval Britain, arrived at different conclusions as to the character of English society before the Norman Conquest in 1066. Round argued that the Normans had brought feudalism with them to England, while Maitland contended that its fundamentals were already in place in Britain before 1066. The debate continues today, but a consensus viewpoint is that England before the Conquest had commendation (which embodied some of the personal elements in feudalism) while William the Conqueror introduced a modified and stricter northern French feudalism to England incorporating (1086) oaths of loyalty to the king by all who held by feudal tenure, even the vassals of his principal vassals (holding by feudal tenure meant that vassals must provide the quota of knights required by the king or a money payment in substitution).
In the 20th century, two outstanding historians offered still more widely differing perspectives. The French historian Marc Bloch, arguably the most influential 20th-century medieval historian,[43] approached feudalism not so much from a legal and military point of view but from a sociological one, presenting in Feudal Society (1939; English 1961) a feudal order not limited solely to the nobility. It is his radical notion that peasants were part of the feudal relationship that sets Bloch apart from his peers: while the vassal performed military service in exchange for the fief, the peasant performed physical labour in return for protection – both are a form of feudal relationship. According to Bloch, other elements of society can be seen in feudal terms; all the aspects of life were centered on "lordship", and so we can speak usefully of a feudal church structure, a feudal courtly (and anti-courtly) literature, and a feudal economy.[43]
In contradistinction to Bloch, the Belgian historian François-Louis Ganshof defined feudalism from a narrow legal and military perspective, arguing that feudal relationships existed only within the medieval nobility itself. Ganshof articulated this concept in Qu'est-ce que la féodalité? ("What is feudalism?", 1944; translated in English as Feudalism). His classic definition of feudalism is widely accepted today among medieval scholars,[43] though questioned both by those who view the concept in wider terms and by those who find insufficient uniformity in noble exchanges to support such a model.
Although he was never formally a student in the circle of scholars around Marc Bloch and Lucien Febvre that came to be known as the Annales School, Georges Duby was an exponent of the Annaliste tradition. In a published version of his 1952 doctoral thesis entitled La société aux XIe et XIIe siècles dans la région mâconnaise (Society in the 11th and 12th centuries in the Mâconnais region), and working from the extensive documentary sources surviving from the Burgundian monastery of Cluny, as well as the dioceses of Mâcon and Dijon, Duby excavated the complex social and economic relationships among the individuals and institutions of the Mâconnais region and charted a profound shift in the social structures of medieval society around the year 1000. He argued that in early 11th century, governing institutions—particularly comital courts established under the Carolingian monarchy—that had represented public justice and order in Burgundy during the 9th and 10th centuries receded and gave way to a new feudal order wherein independent aristocratic knights wielded power over peasant communities through strong-arm tactics and threats of violence.
In 1939 the Austrian historian Theodor Mayer subordinated the feudal state as secondary to his concept of a persons association state (Personenverbandsstaat [de]), understanding it in contrast to the territorial state.[45] This form of statehood, identified with the Holy Roman Empire, is described as the most complete form of medieval rule, complementing the conventional feudal structure of lordship and vassalage with the personal association among the nobility.[46] But the applicability of this concept to cases outside of the Holy Roman Empire has been questioned, for instance by Susan Reynolds.[47] The concept has also been questioned and superseded in German historiography because of its bias and reductionism toward legitimating the Führerprinzip.
In 1974, the American historian Elizabeth A. R. Brown[5] rejected the label feudalism as an anachronism that imparts a false sense of uniformity to the concept. Having noted the current use of many, often contradictory, definitions of feudalism, she argued that the word is only a construct with no basis in medieval reality, an invention of modern historians read back "tyrannically" into the historical record. Supporters of Brown have suggested that the term should be expunged from history textbooks and lectures on medieval history entirely.[43] In Fiefs and Vassals: The Medieval Evidence Reinterpreted (1994),[6] Susan Reynolds expanded upon Brown's original thesis. Although some contemporaries questioned Reynolds's methodology, other historians have supported it and her argument.[43] Reynolds argues:
Too many models of feudalism used for comparisons, even by Marxists, are still either constructed on the 16th-century basis or incorporate what, in a Marxist view, must surely be superficial or irrelevant features from it. Even when one restricts oneself to Europe and to feudalism in its narrow sense it is extremely doubtful whether feudo-vassalic institutions formed a coherent bundle of institutions or concepts that were structurally separate from other institutions and concepts of the time.[48]
The term feudal has also been applied to non-Western societies in which institutions and attitudes similar to those of medieval Europe are perceived to have prevailed (See Examples of feudalism). Japan has been extensively studied in this regard.[49] Friday notes that in the 21st century historians of Japan rarely invoke feudalism; instead of looking at similarities, specialists attempting comparative analysis concentrate on fundamental differences.[50] Ultimately, critics say, the many ways the term feudalism has been used have deprived it of specific meaning, leading some historians and political theorists to reject it as a useful concept for understanding society.[43]
Richard Abels notes that "Western Civilization and World Civilization textbooks now shy away from the term 'feudalism'."[51]
en/1967.html.txt
Iron (/ˈaɪərn/) is a chemical element with symbol Fe (from Latin: ferrum) and atomic number 26. It is a metal that belongs to the first transition series and group 8 of the periodic table. It is by mass the most common element on Earth, forming much of Earth's outer and inner core. It is the fourth most common element in the Earth's crust.
In its metallic state, iron is rare in the Earth's crust, limited mainly to deposition by meteorites. Iron ores, by contrast, are among the most abundant in the Earth's crust, although extracting usable metal from them requires kilns or furnaces capable of reaching 1,500 °C (2,730 °F) or higher, about 500 °C (900 °F) more than is required to smelt copper. Humans began to master that process in Eurasia only about 2000 BCE, and the use of iron tools and weapons began to displace copper alloys, in some regions, only around 1200 BCE. That event is considered the transition from the Bronze Age to the Iron Age. In the modern world, iron alloys, such as steel, stainless steel, cast iron and special steels, are by far the most common industrial metals, because of their mechanical properties and low cost.
Pristine and smooth pure iron surfaces are mirror-like silvery-gray. However, iron reacts readily with oxygen and water to give brown to black hydrated iron oxides, commonly known as rust. Unlike the oxides of some other metals, which form passivating layers, rust occupies more volume than the metal and thus flakes off, exposing fresh surfaces for corrosion.
The body of an adult human contains about 4 grams (0.005% body weight) of iron, mostly in hemoglobin and myoglobin. These two proteins play essential roles in vertebrate metabolism, respectively oxygen transport by blood and oxygen storage in muscles. To maintain the necessary levels, human iron metabolism requires a minimum of iron in the diet. Iron is also the metal at the active site of many important redox enzymes dealing with cellular respiration and oxidation and reduction in plants and animals.[5]
Chemically, the most common oxidation states of iron are iron(II) and iron(III). Iron shares many properties of other transition metals, including the other group 8 elements, ruthenium and osmium. Iron forms compounds in a wide range of oxidation states, −2 to +7. Iron also forms many coordination compounds; some of them, such as ferrocene, ferrioxalate, and Prussian blue, have substantial industrial, medical, or research applications.
At least four allotropes of iron (differing atom arrangements in the solid) are known, conventionally denoted α, γ, δ, and ε.
The first three forms are observed at ordinary pressures. As molten iron cools past its freezing point of 1538 °C, it crystallizes into its δ allotrope, which has a body-centered cubic (bcc) crystal structure. As it cools further to 1394 °C, it changes to its γ-iron allotrope, a face-centered cubic (fcc) crystal structure, or austenite. At 912 °C and below, the crystal structure again becomes the bcc α-iron allotrope.[6]
The physical properties of iron at very high pressures and temperatures have also been studied extensively,[7][8] because of their relevance to theories about the cores of the Earth and other planets. Above approximately 10 GPa and temperatures of a few hundred kelvin or less, α-iron changes into another hexagonal close-packed (hcp) structure, which is also known as ε-iron. The higher-temperature γ-phase also changes into ε-iron, but does so at higher pressure.
Some controversial experimental evidence exists for a stable β phase at pressures above 50 GPa and temperatures of at least 1500 K. It is supposed to have an orthorhombic or a double hcp structure.[9] (Confusingly, the term "β-iron" is sometimes also used to refer to α-iron above its Curie point, when it changes from being ferromagnetic to paramagnetic, even though its crystal structure has not changed.[6])
The inner core of the Earth is generally presumed to consist of an iron-nickel alloy with ε (or β) structure.[10]
The melting and boiling points of iron, along with its enthalpy of atomization, are lower than those of the earlier 3d elements from scandium to chromium, showing the lessened contribution of the 3d electrons to metallic bonding as they are attracted more and more into the inert core by the nucleus;[11] however, they are higher than the values for the previous element manganese because that element has a half-filled 3d subshell and consequently its d-electrons are not easily delocalized. This same trend appears for ruthenium but not osmium.[12]
The melting point of iron is experimentally well defined for pressures less than 50 GPa. For greater pressures, published data (as of 2007) still varies by tens of gigapascals and over a thousand kelvin.[13]
Below its Curie point of 770 °C, α-iron changes from paramagnetic to ferromagnetic: the spins of the two unpaired electrons in each atom generally align with the spins of its neighbors, creating an overall magnetic field.[15] This happens because the orbitals of those two electrons (dz2 and dx2 − y2) do not point toward neighboring atoms in the lattice, and therefore are not involved in metallic bonding.[6]
In the absence of an external source of magnetic field, the atoms get spontaneously partitioned into magnetic domains, about 10 micrometers across,[16] such that the atoms in each domain have parallel spins, but some domains have other orientations. Thus a macroscopic piece of iron will have a nearly zero overall magnetic field.
Application of an external magnetic field causes the domains that are magnetized in the same general direction to grow at the expense of adjacent ones that point in other directions, reinforcing the external field. This effect is exploited in devices that need to channel magnetic fields, such as electrical transformers, magnetic recording heads, and electric motors. Impurities, lattice defects, or grain and particle boundaries can "pin" the domains in the new positions, so that the effect persists even after the external field is removed, thus turning the iron object into a (permanent) magnet.[15]
Similar behavior is exhibited by some iron compounds, such as the ferrites and the mineral magnetite, a crystalline form of the mixed iron(II,III) oxide Fe3O4 (although the atomic-scale mechanism, ferrimagnetism, is somewhat different). Pieces of magnetite with natural permanent magnetization (lodestones) provided the earliest compasses for navigation. Particles of magnetite were extensively used in magnetic recording media such as core memories, magnetic tapes, floppies, and disks, until they were replaced by cobalt-based materials.
Iron has four stable isotopes: 54Fe (5.845% of natural iron), 56Fe (91.754%), 57Fe (2.119%) and 58Fe (0.282%). Some 20–30 artificial isotopes have also been created. Of these stable isotopes, only 57Fe has a nuclear spin (−1⁄2). The nuclide 54Fe theoretically can undergo double electron capture to 54Cr, but the process has never been observed and only a lower limit on the half-life of 3.1×10^22 years has been established.[17]
60Fe is an extinct radionuclide of long half-life (2.6 million years).[18] It is not found on Earth, but its ultimate decay product is its granddaughter, the stable nuclide 60Ni.[17] Much of the past work on isotopic composition of iron has focused on the nucleosynthesis of 60Fe through studies of meteorites and ore formation. In the last decade, advances in mass spectrometry have allowed the detection and quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of this work is driven by the Earth and planetary science communities, although applications to biological and industrial systems are emerging.[19]
In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the granddaughter of 60Fe, and the abundance of the stable iron isotopes provided evidence for the existence of 60Fe at the time of formation of the Solar System. Possibly the energy released by the decay of 60Fe, along with that released by 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago. The abundance of 60Ni present in extraterrestrial material may bring further insight into the origin and early history of the Solar System.[20]
The most abundant iron isotope 56Fe is of particular interest to nuclear scientists because it represents the most common endpoint of nucleosynthesis.[21] Since 56Ni (14 alpha particles) is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), it is the endpoint of fusion chains inside extremely massive stars, since addition of another alpha particle, resulting in 60Zn, requires a great deal more energy. This 56Ni, which has a half-life of about 6 days, is created in quantity in these stars, but soon decays by two successive positron emissions within supernova decay products in the supernova remnant gas cloud, first to radioactive 56Co, and then to stable 56Fe. As such, iron is the most abundant element in the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of planets such as Earth.[22] It is also very common in the universe, relative to other stable metals of approximately the same atomic weight.[22][23] Iron is the sixth most abundant element in the Universe, and the most common refractory element.[24]
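Schematically, with approximate half-lives, the decay chain is:
56Ni → 56Co + e+ + νe (about 6 days)
56Co → 56Fe + e+ + νe (about 77 days)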
Although a further tiny energy gain could be extracted by synthesizing 62Ni, which has a marginally higher binding energy than 56Fe, conditions in stars are unsuitable for this process. Element production in supernovas and distribution on Earth greatly favor iron over nickel, and in any case, 56Fe still has a lower mass per nucleon than 62Ni due to its higher fraction of lighter protons.[25] Hence, elements heavier than iron require a supernova for their formation, involving rapid neutron capture by starting 56Fe nuclei.[22]
In the far future of the universe, assuming that proton decay does not occur, cold fusion occurring via quantum tunnelling would cause the light nuclei in ordinary matter to fuse into 56Fe nuclei. Fission and alpha-particle emission would then make heavy nuclei decay into iron, converting all stellar-mass objects to cold spheres of pure iron.[26]
Iron's abundance in rocky planets like Earth is due to its abundant production by fusion in high-mass stars, where it is the last element to be produced with release of energy before the violent collapse of a supernova, which scatters the iron into space.
Metallic or native iron is rarely found on the surface of the Earth because it tends to oxidize. However, both the Earth's inner and outer core, which account for 35% of the mass of the whole Earth, are believed to consist largely of an iron alloy, possibly with nickel. Electric currents in the liquid outer core are believed to be the origin of the Earth's magnetic field. The other terrestrial planets (Mercury, Venus, and Mars) as well as the Moon are believed to have a metallic core consisting mostly of iron. The M-type asteroids are also believed to be partly or mostly made of metallic iron alloy.
The rare iron meteorites are the main form of natural metallic iron on the Earth's surface. Items made of cold-worked meteoritic iron have been found in various archaeological sites dating from a time when iron smelting had not yet been developed; and the Inuit in Greenland have been reported to use iron from the Cape York meteorite for tools and hunting weapons.[27] About 1 in 20 meteorites consist of the unique iron-nickel minerals taenite (35–80% iron) and kamacite (90–95% iron).[28] Native iron is also rarely found in basalts that have formed from magmas that have come into contact with carbon-rich sedimentary rocks, which have reduced the oxygen fugacity sufficiently for iron to crystallize. This is known as Telluric iron and is described from a few localities, such as Disko Island in West Greenland, Yakutia in Russia and Bühl in Germany.[29]
Ferropericlase (Mg,Fe)O, a solid solution of periclase (MgO) and wüstite (FeO), makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite (Mg,Fe)SiO3; it also is the major host for iron in the lower mantle.[30] At the bottom of the transition zone of the mantle, the reaction γ-(Mg,Fe)2[SiO4] ↔ (Mg,Fe)[SiO3] + (Mg,Fe)O transforms γ-olivine into a mixture of silicate perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite.[31] Silicate perovskite may form up to 93% of the lower mantle,[32] and the magnesium iron form, (Mg,Fe)SiO3, is considered to be the most abundant mineral in the Earth, making up 38% of its volume.[33]
While iron is the most abundant element on Earth as a whole, it accounts for only about 5% of the Earth's crust and is thus only the fourth most abundant element there, after oxygen, silicon, and aluminium.[34]
Most of the iron in the crust is combined with various other elements to form many iron minerals. An important class is the iron oxide minerals such as hematite (Fe2O3), magnetite (Fe3O4), and siderite (FeCO3), which are the major ores of iron. Many igneous rocks also contain the sulfide minerals pyrrhotite and pentlandite.[35][36] During weathering, iron tends to leach from sulfide deposits as the sulfate and from silicate deposits as the bicarbonate. Both of these are oxidized in aqueous solution and precipitate in even mildly elevated pH as iron(III) oxide.[37]
Large deposits of iron are banded iron formations, a type of rock consisting of repeated thin layers of iron oxides alternating with bands of iron-poor shale and chert. The banded iron formations were laid down in the time between 3,700 million years ago and 1,800 million years ago.[38][39]
Materials containing finely ground iron(III) oxides or oxide-hydroxides, such as ochre, have been used as yellow, red, and brown pigments since prehistoric times. They contribute as well to the color of various rocks and clays, including entire geological formations like the Painted Hills in Oregon and the Buntsandstein ("colored sandstone", British Bunter).[40] Through Eisensandstein (a Jurassic 'iron sandstone', e.g. from Donzdorf in Germany)[41] and Bath stone in the UK, iron compounds are responsible for the yellowish color of many historical buildings and sculptures.[42] The proverbial red color of the surface of Mars is derived from an iron oxide-rich regolith.[43]
Significant amounts of iron occur in the iron sulfide mineral pyrite (FeS2), but it is difficult to extract iron from it and it is therefore not exploited. In fact, iron is so common that production generally focuses only on ores with very high quantities of it.
According to the International Resource Panel's Metal Stocks in Society report, the global stock of iron in use in society is 2200 kg per capita. More-developed countries differ in this respect from less-developed countries (7000–14000 vs 2000 kg per capita).[44]
Iron shows the characteristic chemical properties of the transition metals, namely the ability to form variable oxidation states differing by steps of one and a very large coordination and organometallic chemistry: indeed, it was the discovery of an iron compound, ferrocene, that revolutionized the latter field in the 1950s.[45] Iron is sometimes considered a prototype for the entire block of transition metals, due to its abundance and the immense role it has played in the technological progress of humanity.[46] Its 26 electrons are arranged in the configuration [Ar]3d64s2, of which the 3d and 4s electrons are relatively close in energy, and thus it can lose a variable number of electrons and there is no clear point where further ionization becomes unprofitable.[12]
Iron forms compounds mainly in the oxidation states +2 (iron(II), "ferrous") and +3 (iron(III), "ferric"). Iron also occurs in higher oxidation states, e.g. the purple potassium ferrate (K2FeO4), which contains iron in its +6 oxidation state. Although iron(VIII) oxide (FeO4) has been claimed, the report could not be reproduced and such a species (at least with iron in its +8 oxidation state) has been found to be improbable computationally.[47] However, one form of anionic [FeO4]– with iron in its +7 oxidation state, along with an iron(V)-peroxo isomer, has been detected by infrared spectroscopy at 4 K after cocondensation of laser-ablated Fe atoms with a mixture of O2/Ar.[48] Iron(IV) is a common intermediate in many biochemical oxidation reactions.[49][50] Numerous organoiron compounds contain formal oxidation states of +1, 0, −1, or even −2. The oxidation states and other bonding properties are often assessed using the technique of Mössbauer spectroscopy.[51]
Many mixed valence compounds contain both iron(II) and iron(III) centers, such as magnetite and Prussian blue (Fe4(Fe[CN]6)3).[50] The latter is used as the traditional "blue" in blueprints.[52]
Iron is the first of the transition metals that cannot reach its group oxidation state of +8, although its heavier congeners ruthenium and osmium can, with ruthenium having more difficulty than osmium.[6] Ruthenium exhibits an aqueous cationic chemistry in its low oxidation states similar to that of iron, but osmium does not, favoring high oxidation states in which it forms anionic complexes.[6] In the second half of the 3d transition series, vertical similarities down the groups compete with the horizontal similarities of iron with its neighbors cobalt and nickel in the periodic table, which are also ferromagnetic at room temperature and share similar chemistry. As such, iron, cobalt, and nickel are sometimes grouped together as the iron triad.[46]
Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in standardized 76 pound flasks (34 kg) made of iron.[53]
Iron is by far the most reactive element in its group; it is pyrophoric when finely divided and dissolves easily in dilute acids, giving Fe2+. However, it does not react with concentrated nitric acid and other oxidizing acids due to the formation of an impervious oxide layer, which can nevertheless react with hydrochloric acid.[6]
Iron forms various oxide and hydroxide compounds; the most common are iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at room temperature. Despite their names, they are actually all non-stoichiometric compounds whose compositions may vary.[54] These oxides are the principal ores for the production of iron (see bloomery and blast furnace). They are also used in the production of ferrites, useful magnetic storage media in computers, and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden luster.[50] It is not an iron(IV) compound, but is actually an iron(II) polysulfide containing Fe2+ and S22− ions in a distorted sodium chloride structure.[54]
The binary ferrous and ferric halides are well-known. The ferrous halides typically arise from treating iron metal with the corresponding hydrohalic acid to give the corresponding hydrated salts.[50]
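For example, with hydrochloric acid:
Fe + 2 HCl → FeCl2 + H2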
Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride being the most common.[55]
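For example:
2 Fe + 3 Cl2 → 2 FeCl3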
Ferric iodide is an exception, being thermodynamically unstable due to the oxidizing power of Fe3+ and the high reducing power of I−:[55]
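2 Fe3+ + 2 I− → 2 Fe2+ + I2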
Ferric iodide, a black solid, is not stable in ordinary conditions, but can be prepared through the reaction of iron pentacarbonyl with iodine and carbon monoxide in the presence of hexane and light at the temperature of −20 °C, with oxygen and water excluded.[55]
The standard reduction potentials in acidic aqueous solution for some common iron ions are given below:[6]
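Representative values (approximate, relative to the standard hydrogen electrode) are:
Fe2+ + 2 e− ⇌ Fe   E° = −0.44 V
Fe3+ + 3 e− ⇌ Fe   E° = −0.04 V
Fe3+ + e− ⇌ Fe2+   E° = +0.77 V
FeO42− + 8 H+ + 3 e− ⇌ Fe3+ + 4 H2O   E° = +2.20 V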
The red-purple tetrahedral ferrate(VI) anion is such a strong oxidizing agent that it oxidizes nitrogen and ammonia at room temperature, and even water itself in acidic or neutral solutions:[55]
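4 FeO42− + 10 H2O → 4 Fe3+ + 20 OH− + 3 O2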
The Fe3+ ion has a large simple cationic chemistry, although the pale-violet hexaquo ion [Fe(H2O)6]3+ is very readily hydrolyzed when pH increases above 0 as follows:[56]
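[Fe(H2O)6]3+ ⇌ [Fe(H2O)5(OH)]2+ + H+
[Fe(H2O)5(OH)]2+ ⇌ [Fe(H2O)4(OH)2]+ + H+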
As pH rises above 0, the above yellow hydrolyzed species form and, as it rises above 2–3, reddish-brown hydrous iron(III) oxide precipitates out of solution. Although Fe3+ has a d5 configuration, its absorption spectrum is not like that of Mn2+ with its weak, spin-forbidden d–d bands, because Fe3+ has higher positive charge and is more polarizing, lowering the energy of its ligand-to-metal charge transfer absorptions. Thus, all the above complexes are rather strongly colored, with the single exception of the hexaquo ion – and even that has a spectrum dominated by charge transfer in the near ultraviolet region.[56] On the other hand, the pale green iron(II) hexaquo ion [Fe(H2O)6]2+ does not undergo appreciable hydrolysis. Carbon dioxide is not evolved when carbonate anions are added, which instead results in white iron(II) carbonate being precipitated out. In excess carbon dioxide this forms the slightly soluble bicarbonate, which occurs commonly in groundwater, but it oxidises quickly in air to form iron(III) oxide that accounts for the brown deposits present in a sizeable number of streams.[57]
Due to its electronic structure, iron has a very large coordination and organometallic chemistry.
Many coordination compounds of iron are known. A typical six-coordinate anion is hexachloroferrate(III), [FeCl6]3−, found in the mixed salt tetrakis(methylammonium) hexachloroferrate(III) chloride.[58][59] Complexes with multiple bidentate ligands have geometric isomers. For example, the trans-chlorohydridobis(bis-1,2-(diphenylphosphino)ethane)iron(II) complex is used as a starting material for compounds with the Fe(dppe)2 moiety.[60][61] The ferrioxalate ion with three oxalate ligands (shown at right) displays helical chirality with its two non-superposable geometries labelled Λ (lambda) for the left-handed screw axis and Δ (delta) for the right-handed screw axis, in line with IUPAC conventions.[56] Potassium ferrioxalate is used in chemical actinometry and along with its sodium salt undergoes photoreduction applied in old-style photographic processes. The dihydrate of iron(II) oxalate has a polymeric structure with co-planar oxalate ions bridging between iron centres with the water of crystallisation located forming the caps of each octahedron, as illustrated below.[62]
Iron(III) complexes are quite similar to those of chromium(III) with the exception of iron(III)'s preference for O-donor instead of N-donor ligands. The latter tend to be rather more unstable than iron(II) complexes and often dissociate in water. Many Fe–O complexes show intense colors and are used as tests for phenols or enols. For example, in the ferric chloride test, used to determine the presence of phenols, iron(III) chloride reacts with a phenol to form a deep violet complex:[56]
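3 C6H5OH + FeCl3 → Fe(OC6H5)3 + 3 HCl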
Among the halide and pseudohalide complexes, fluoro complexes of iron(III) are the most stable, with the colorless [FeF5(H2O)]2− being the most stable in aqueous solution. Chloro complexes are less stable and favor tetrahedral coordination as in [FeCl4]−; [FeBr4]− and [FeI4]− are reduced easily to iron(II). Thiocyanate is a common test for the presence of iron(III) as it forms the blood-red [Fe(SCN)(H2O)5]2+. Like manganese(II), most iron(III) complexes are high-spin, the exceptions being those with ligands that are high in the spectrochemical series such as cyanide. An example of a low-spin iron(III) complex is [Fe(CN)6]3−. The cyanide ligands may easily be detached in [Fe(CN)6]3−, and hence this complex is poisonous, unlike the iron(II) complex [Fe(CN)6]4− found in Prussian blue,[56] which does not release hydrogen cyanide except when dilute acids are added.[57] Iron shows a great variety of electronic spin states, including every possible spin quantum number value for a d-block element from 0 (diamagnetic) to 5⁄2 (5 unpaired electrons). This value is always half the number of unpaired electrons. Complexes with zero to two unpaired electrons are considered low-spin and those with four or five are considered high-spin.[54]
Iron(II) complexes are less stable than iron(III) complexes but the preference for O-donor ligands is less marked, so that for example [Fe(NH3)6]2+ is known while [Fe(NH3)6]3+ is not. They have a tendency to be oxidized to iron(III) but this can be moderated by low pH and the specific ligands used.[57]
Organoiron chemistry is the study of organometallic compounds of iron, where carbon atoms are covalently bound to the metal atom. They are many and varied, including cyanide complexes, carbonyl complexes, sandwich and half-sandwich compounds.
Prussian blue or "ferric ferrocyanide", Fe4[Fe(CN)6]3, is an old and well-known iron-cyanide complex, extensively used as pigment and in several other applications. Its formation can be used as a simple wet chemistry test to distinguish between aqueous solutions of Fe2+ and Fe3+ as they react (respectively) with potassium ferricyanide and potassium ferrocyanide to form Prussian blue.[50]
Another old example of an organoiron compound is iron pentacarbonyl, Fe(CO)5, in which a neutral iron atom is bound to the carbon atoms of five carbon monoxide molecules. The compound can be used to make carbonyl iron powder, a highly reactive form of metallic iron. Thermolysis of iron pentacarbonyl gives triiron dodecacarbonyl, Fe3(CO)12, a complex with a cluster of three iron atoms at its core. Collman's reagent, disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2 oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state.[63]
A landmark in this field was the discovery in 1951 of the remarkably stable sandwich compound ferrocene Fe(C5H5)2, by Pauson and Kealy[64] and independently by Miller and others,[65] whose surprising molecular structure was determined only a year later by Woodward and Wilkinson[66] and Fischer.[67]
Ferrocene is still one of the most important tools and models in this class.[68]
Iron-centered organometallic species are used as catalysts. The Knölker complex, for example, is a transfer hydrogenation catalyst for ketones.[69]
The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O). Iron(II) compounds tend to be oxidized to iron(III) compounds in the air.[50]
As iron has been in use for such a long time, it has many names. The source of its chemical symbol Fe is the Latin word ferrum, and its descendants are the names of the element in the Romance languages (for example, French fer, Spanish hierro, and Italian and Portuguese ferro).[70] The word ferrum itself possibly comes from the Semitic languages, via Etruscan, from a root that also gave rise to Old English bræs "brass".[71] The English word iron derives ultimately from Proto-Germanic *isarnan, which is also the source of the German name Eisen. It was most likely borrowed from Celtic *isarnon, which ultimately comes from Proto-Indo-European *is-(e)ro- "powerful, holy" and finally *eis "strong", referencing iron's strength as a metal.[72] Kluge relates *isarnon to Illyrian and Latin ira, 'wrath'.[citation needed] The Balto-Slavic names for iron (e.g. Russian железо [zhelezo], Polish żelazo, Lithuanian geležis) are the only ones to come directly from the Proto-Indo-European *ghelgh- "iron".[73] In many of these languages, the word for iron may also be used to denote other objects made of iron or steel, or figuratively because of the hardness and strength of the metal.[74] The Chinese tiě (traditional 鐵; simplified 铁) derives from Proto-Sino-Tibetan *hliek,[75] and was borrowed into Japanese as 鉄 tetsu, which also has the native reading kurogane "black metal" (similar to how iron is referenced in the English word blacksmith).[76]
Iron is one of the elements undoubtedly known to the ancient world.[77] It has been worked, or wrought, for millennia. However, iron objects of great age are much rarer than objects made of gold or silver due to the ease with which iron corrodes.[78] The technology developed slowly, and even after the discovery of smelting it took many centuries for iron to replace bronze as the metal of choice for tools and weapons.
Beads made from meteoric iron in 3500 BC or earlier were found in Gerzah, Egypt by G.A. Wainwright.[79] The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in the Earth's crust generally has only minuscule nickel impurities.
Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools.[79] For example, a dagger made of meteoric iron was found in the tomb of Tutankhamun, containing similar proportions of iron, cobalt, and nickel to a meteorite discovered in the area, deposited by an ancient meteor shower.[80][81][82] Items that were likely made of iron by Egyptians date from 3000 to 2500 BC.[78]
Meteoritic iron is comparatively soft and ductile and easily cold forged, but may become brittle when heated, because of the nickel content.[83]
The first iron production started in the Middle Bronze Age, but it took several centuries before iron displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern Syria were made sometime between 3000 and 2700 BC.[84] The Hittites established an empire in north-central Anatolia around 1600 BC. They appear to be the first to understand the production of iron from its ores and regard it highly in their society.[85] The Hittites began to smelt iron between 1500 and 1200 BC and the practice spread to the rest of the Near East after their empire fell in 1180 BC.[84] The subsequent period is called the Iron Age.
Artifacts of smelted iron are found in India dating from 1800 to 1200 BC,[86] and in the Levant from about 1500 BC (suggesting smelting in Anatolia or the Caucasus).[87][88] Alleged references to iron in the Indian Vedas (compare the history of metallurgy in South Asia) have been used to claim very early use of iron in India, or to date the texts accordingly. The Rigvedic term ayas (metal) probably refers to copper and bronze, while iron, or śyāma ayas, literally "black metal", is first mentioned in the post-Rigvedic Atharvaveda.[89]
Some archaeological evidence suggests iron was smelted in Zimbabwe and southeast Africa as early as the eighth century BC.[90] Iron working was introduced to Greece in the late 11th century BC, from which it spread quickly throughout Europe.[91]
The spread of ironworking in Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use was common in the Roman era.[79] The annual iron output of the Roman Empire is estimated at 84750 t,[92] while the similarly populous and contemporary Han China produced around 5000 t.[93] In China, iron only appears circa 700–500 BC.[94] Iron smelting may have been introduced into China through Central Asia.[95] The earliest evidence of the use of a blast furnace in China dates to the 1st century AD,[96] and cupola furnaces were used as early as the Warring States period (403–221 BC).[97] Usage of the blast and cupola furnace remained widespread during the Song and Tang Dynasties.[98]
During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron (or bar iron) using innovative production systems. In 1783 he patented the puddling process for refining iron ore. It was later improved by others, including Joseph Hall.[99]
Cast iron was first produced in China during the 5th century BC,[100] but was hardly used in Europe until the medieval period.[101][102] The earliest cast iron artifacts were discovered by archaeologists in what is now modern Luhe County, Jiangsu in China. Cast iron was used in ancient China for warfare, agriculture, and architecture.[103] During the medieval period, means were found in Europe of producing wrought iron from cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was required as fuel.[104]
Medieval blast furnaces were about 10 feet (3.0 m) tall and made of fireproof brick; forced air was usually provided by hand-operated bellows.[102] Modern blast furnaces have grown much bigger, with hearths fourteen meters in diameter that allow them to produce thousands of tons of iron each day, but essentially operate in much the same way as they did during medieval times.[104]
In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron, replacing charcoal with coke while continuing to use blast furnaces. The ensuing availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was cheaper. Carbon content in iron was not implicated as the reason for the differences in properties of wrought iron, cast iron, and steel until the 18th century.[84]
Since iron was becoming cheaper and more plentiful, it also became a major structural material following the building of the innovative first iron bridge in 1778. This bridge still stands today as a monument to the role iron played in the Industrial Revolution. Following this, iron was used in rails, boats, ships, aqueducts, and buildings, as well as in iron cylinders in steam engines.[104] Railways have been central to the formation of modernity and ideas of progress[105] and various languages (e.g. French, Spanish, Italian and German) refer to railways as iron road.
Steel (with a smaller carbon content than pig iron but more than wrought iron) was first produced in antiquity by using a bloomery. Blacksmiths in Luristan in western Persia were making good steel by 1000 BC.[84] Improved versions, Wootz steel in India and Damascus steel, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become a major commodity until the 1850s.[106]
New methods of producing it by carburizing bars of iron in the cementation process were devised in the 17th century. In the Industrial Revolution, new methods of producing bar iron without charcoal were devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made steel much more economical, thereby leading to wrought iron no longer being produced in large quantities.[107]
In 1774, Antoine Lavoisier used the reaction of water steam with metallic iron inside an incandescent iron tube to produce hydrogen in his experiments leading to the demonstration of the conservation of mass, which was instrumental in changing chemistry from a qualitative science to a quantitative one.[108]
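The reaction involved is essentially the classic steam-iron reaction:
3 Fe + 4 H2O → Fe3O4 + 4 H2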
Iron plays a role in mythology and has found various uses as a metaphor and in folklore. The Greek poet Hesiod's Works and Days (lines 109–201) lists different ages of man named after metals like gold, silver, bronze and iron to account for successive ages of humanity.[109] The Iron Age was closely associated with Rome; in Ovid's Metamorphoses:
The Virtues, in despair, quit the earth; and the depravity of man becomes universal and complete. Hard steel succeeded then.
An example of the importance of iron's symbolic role may be found in the German Campaign of 1813. Frederick William III commissioned then the first Iron Cross as military decoration. Berlin iron jewellery reached its peak production between 1813 and 1815, when the Prussian royal family urged citizens to donate gold and silver jewellery for military funding. The inscription Gold gab ich für Eisen (I gave gold for iron) was used as well in later war efforts.[110]
For a few limited purposes when it is needed, pure iron is produced in the laboratory in small quantities by reducing the pure oxide or hydroxide with hydrogen, or forming iron pentacarbonyl and heating it to 250 °C so that it decomposes to form pure iron powder.[37] Another method is electrolysis of ferrous chloride onto an iron cathode.[111]
Nowadays, the industrial production of iron or steel consists of two main stages. In the first stage, iron ore is reduced with coke in a blast furnace, and the molten metal is separated from gross impurities such as silicate minerals. This stage yields an alloy, pig iron, that contains relatively large amounts of carbon. In the second stage, the amount of carbon in the pig iron is lowered by oxidation to yield wrought iron, steel, or cast iron.[113] Other metals can be added at this stage to form alloy steels.
The blast furnace is loaded with iron ores, usually hematite Fe2O3 or magnetite Fe3O4, together with coke (coal that has been separately baked to remove volatile components). Air pre-heated to 900 °C is blown through the mixture, in sufficient amount to turn the carbon into carbon monoxide:[113]
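2 C + O2 → 2 CO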
This reaction raises the temperature to about 2000 °C. The carbon monoxide then reduces the iron ore to metallic iron:[113]
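Fe2O3 + 3 CO → 2 Fe + 3 CO2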
Some iron in the high-temperature lower region of the furnace reacts directly with the coke:[113]
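For example:
2 Fe2O3 + 3 C → 4 Fe + 3 CO2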
A flux such as limestone (calcium carbonate) or dolomite (calcium-magnesium carbonate) is also added to the furnace's load. Its purpose is to remove silicaceous minerals in the ore, which would otherwise clog the furnace. The heat of the furnace decomposes the carbonates to calcium oxide, which reacts with any excess silica to form a slag composed of calcium silicate CaSiO3 or other products. At the furnace's temperature, the metal and the slag are both molten. They collect at the bottom as two immiscible liquid layers (with the slag on top), that are then easily separated.[113]
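The flux chemistry, in outline:
CaCO3 → CaO + CO2
CaO + SiO2 → CaSiO3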
The slag can be used as a material in road construction or to improve mineral-poor soils for agriculture.[102]
In general, the pig iron produced by the blast furnace process contains up to 4–5% carbon, with small amounts of other impurities like sulfur, magnesium, phosphorus, and manganese. The high level of carbon makes it relatively weak and brittle. Reducing the amount of carbon to 0.002–2.1% by mass produces steel, which may be up to 1000 times harder than pure iron. A great variety of steel articles can then be made by cold working, hot rolling, forging, machining, etc. Removing the other impurities, instead, results in cast iron, which is used to cast articles in foundries; for example stoves, pipes, radiators, lamp-posts, and rails.[113]
Steel products often undergo various heat treatments after they are forged to shape. Annealing consists of heating them to 700–800 °C for several hours and then cooling them gradually. It makes the steel softer and more workable.[115]
Owing to environmental concerns, alternative methods of processing iron have been developed. "Direct iron reduction" reduces iron ore to a ferrous lump called "sponge" iron or "direct" iron that is suitable for steelmaking.[102] Two main reactions comprise the direct reduction process:
Natural gas is partially oxidized (with heat and a catalyst):[102]
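2 CH4 + O2 → 2 CO + 4 H2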
Iron ore is then treated with these gases in a furnace, producing solid sponge iron:[102]
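Fe2O3 + 3 CO → 2 Fe + 3 CO2
Fe2O3 + 3 H2 → 2 Fe + 3 H2O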
Silica is removed by adding a limestone flux as described above.[102]
Ignition of a mixture of aluminium powder and iron oxide yields metallic iron via the thermite reaction:
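Fe2O3 + 2 Al → 2 Fe + Al2O3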
Alternatively pig iron may be made into steel (with up to about 2% carbon) or wrought iron (commercially pure iron). Various processes have been used for this, including finery forges, puddling furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other hand, other metals may be added to make alloy steels.[104]
Iron is the most widely used of all the metals, accounting for over 90% of worldwide metal production. Its low cost and high strength often make it the material of choice to withstand stress or transmit forces, as in the construction of machinery and machine tools, rails, automobiles, ship hulls, concrete reinforcing bars, and the load-carrying framework of buildings. Since pure iron is quite soft, it is most commonly combined with alloying elements to make steel.[116]
The mechanical properties of iron and its alloys are extremely relevant to their structural applications. Those properties can be evaluated in various ways, including the Brinell test, the Rockwell test and the Vickers hardness test.
The properties of pure iron are often used to calibrate measurements or to compare tests.[118][119] However, the mechanical properties of iron are significantly affected by the sample's purity: pure, single crystals of iron are actually softer than aluminium,[117] and the purest industrially produced iron (99.99%) has a hardness of 20–30 Brinell.[120]
An increase in the carbon content will cause a significant increase in the hardness and tensile strength of iron. Maximum hardness of 65 Rc is achieved with a 0.6% carbon content, although the alloy has low tensile strength.[121] Because of the softness of iron, it is much easier to work with than its heavier congeners ruthenium and osmium.[12]
α-Iron is a fairly soft metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C).[122] Austenite (γ-iron) is similarly soft and metallic but can dissolve considerably more carbon (as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for making cutlery, and hospital and food-service equipment.[16]
Commercially available iron is classified based on purity and the abundance of additives. Pig iron has 3.5–4.5% carbon[123] and contains varying amounts of contaminants such as sulfur, silicon and phosphorus. Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel. The reduction of contaminants in pig iron that negatively affect material properties, such as sulfur and phosphorus, yields cast iron containing 2–4% carbon, 1–6% silicon, and small amounts of manganese.[113] Pig iron has a melting point in the range of 1420–1470 K, which is lower than either of its two main components, and makes it the first product to be melted when carbon and iron are heated together.[6] Its mechanical properties vary greatly and depend on the form the carbon takes in the alloy.[12]
"White" cast irons contain their carbon in the form of cementite, or iron carbide (Fe3C).[12] This hard, brittle compound dominates the mechanical properties of white cast irons, rendering them hard, but unresistant to shock. The broken surface of a white cast iron is full of fine facets of the broken iron carbide, a very pale, silvery, shiny material, hence the appellation. Cooling a mixture of iron with 0.8% carbon slowly below 723 °C to room temperature results in separate, alternating layers of cementite and α-iron, which is soft and malleable and is called pearlite for its appearance. Rapid cooling, on the other hand, does not allow time for this separation and creates hard and brittle martensite. The steel can then be tempered by reheating to a temperature in between, changing the proportions of pearlite and martensite. The end product below 0.8% carbon content is a pearlite-αFe mixture, and that above 0.8% carbon content is a pearlite-cementite mixture.[12]
In gray iron the carbon exists as separate, fine flakes of graphite, and also renders the material brittle due to the sharp edged flakes of graphite that produce stress concentration sites within the material.[124] A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of graphite to spheroids, or nodules, reducing the stress concentrations and vastly increasing the toughness and strength of the material.[124]
Wrought iron contains less than 0.25% carbon but large amounts of slag that give it a fibrous characteristic.[123] It is a tough, malleable product, but not as fusible as pig iron. If honed to an edge, it loses it quickly. Wrought iron is characterized by the presence of fine fibers of slag entrapped within the metal. Wrought iron is more corrosion resistant than steel. It has been almost completely replaced by mild steel for traditional "wrought iron" products and blacksmithing.
Mild steel corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon steel contains 2.0% carbon or less,[125] with small amounts of manganese, sulfur, phosphorus, and silicon. Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium, molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only employed for specialist uses. One common alloy steel, though, is stainless steel. Recent developments in ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular toughness at minimal cost.[125][126][127]
Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than another traditional protection material, lead, it is much stronger mechanically. The attenuation of radiation as a function of energy is shown in the graph.[128]
The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if not protected in some way, a cost amounting to over 1% of the world's economy.[129] Painting, galvanization, passivation, plastic coating and bluing are all used to protect iron from rust by excluding water and oxygen or by cathodic protection. The mechanism of the rusting of iron is as follows:[129]
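In outline, iron acts as the anode of a small electrochemical cell wherever a water film is present:
Anode: Fe → Fe2+ + 2 e−
Cathode: O2 + 2 H2O + 4 e− → 4 OH−
The dissolved Fe2+ is then further oxidized by atmospheric oxygen to the hydrated iron(III) oxides that constitute rust.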
The electrolyte is usually iron(II) sulfate in urban areas (formed when atmospheric sulfur dioxide attacks iron), and salt particles in the atmosphere in seaside areas.[129]
Although the dominant use of iron is in metallurgy, iron compounds are also pervasive in industry. Iron catalysts are traditionally used in the Haber-Bosch process for the production of ammonia and the Fischer-Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants.[130] Powdered iron in an acidic solvent was used in the Béchamp reduction, the reduction of nitrobenzene to aniline.[131]
Iron(III) oxide mixed with aluminium powder can be ignited to create a thermite reaction, used in welding large iron parts (like rails) and purifying ores. Iron(III) oxide and oxyhydroxide are used as reddish and ocher pigments.
Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of printed circuit boards.[132] It can also be dissolved in alcohol to form tincture of iron, which is used as a medicine to stop bleeding in canaries.[133]
Iron(II) sulfate is used as a precursor to other iron compounds. It is also used to reduce chromate in cement. It is used to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent in organic synthesis.[132]
Iron is required for life.[5][134][135] The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in the transport, storage, and use of oxygen.[5] Iron proteins are involved in electron transfer.[136]
Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase.[5][137] The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day,[136] because the human body recycles its hemoglobin for the iron content.[138]
Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron.[5] In particular, bacteria have evolved very high-affinity sequestering agents called siderophores.[139][140][141]
After uptake in human cells, iron storage is precisely regulated.[5][142] A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells.[5][143] Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin.[136]
The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450.[5] These compounds participate in transporting gases, building enzymes, and transferring electrons.[136] Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin.[136] Many enzymes vital to life contain iron, such as catalase,[144] lipoxygenases,[145] and IRE-BP.[146]
Hemoglobin is an oxygen carrier that occurs in red blood cells and contributes their color, transporting oxygen in the arteries from the lungs to the muscles where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy.[5] Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled.[136] In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to.[136] When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue.[136]
When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar.[136] (Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin.) This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa.[136] Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue.[136] As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide.[136]
Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but with much greater strength, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a.[136] The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen – with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide.[136] Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows:[136]

4 Cyt c (Fe2+) + O2 + 8 H+ (inside) → 4 Cyt c (Fe3+) + 2 H2O + 4 H+ (outside)
Although the heme proteins are the most important class of iron-containing proteins, the iron-sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always have high-spin iron. The simplest of such compounds is rubredoxin, which has only one iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron-sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes.[136]
The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times.[148]
Iron is pervasive, but particularly rich sources of dietary iron include red meat, oysters, lentils, beans, poultry, fish, leaf vegetables, watercress, tofu, chickpeas, black-eyed peas, and blackstrap molasses.[5] Bread and breakfast cereals are sometimes specifically fortified with iron.[5][149]
Iron provided by dietary supplements is often found as iron(II) fumarate, although iron(II) sulfate is cheaper and is absorbed equally well.[132] Elemental iron, or reduced iron, despite being absorbed at only one-third to two-thirds the efficiency (relative to iron sulfate),[150] is often added to foods such as breakfast cereals or enriched wheat flour. Iron is most available to the body when chelated to amino acids[151] and is also available for use as a common iron supplement. Glycine, the least expensive amino acid, is most often used to produce iron glycinate supplements.[152]
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for iron in 2001.[5] The current EAR for iron for women ages 14–18 is 7.9 mg/day, 8.1 for ages 19–50 and 5.0 thereafter (post-menopause). For men the EAR is 6.0 mg/day for ages 19 and up. The RDA is 15.0 mg/day for women ages 15–18, 18.0 for 19–50 and 8.0 thereafter; for men, it is 8.0 mg/day for ages 19 and up. RDAs are higher than EARs so as to identify amounts that will cover people with higher-than-average requirements. The RDA for pregnancy is 27 mg/day and, for lactation, 9 mg/day.[5] For children, the RDA is 7 mg/day for ages 1–3 years, 10 for ages 4–8 and 8 for ages 9–13. As for safety, the IOM also sets Tolerable Upper Intake Levels (ULs) for vitamins and minerals when evidence is sufficient; in the case of iron the UL is set at 45 mg/day. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes.[153]
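The figures above are easier to scan as a lookup table; the following short Python sketch simply restates the U.S. values quoted in this paragraph, for reference only.

# U.S. iron RDAs (mg/day), restated from the paragraph above.
IRON_RDA_MG_PER_DAY = {
    ("female", "15-18"): 15.0,
    ("female", "19-50"): 18.0,
    ("female", "51+"): 8.0,
    ("male", "19+"): 8.0,
    ("pregnancy", "any"): 27.0,
    ("lactation", "any"): 9.0,
    ("child", "1-3"): 7.0,
    ("child", "4-8"): 10.0,
    ("child", "9-13"): 8.0,
}
IRON_UL_MG_PER_DAY_ADULT = 45.0  # Tolerable Upper Intake Level

print(IRON_RDA_MG_PER_DAY[("female", "19-50")])  # 18.0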
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR; AI and UL are defined the same as in the United States. For women the PRI is 13 mg/day for ages 15–17 years, 16 mg/day for women ages 18 and up who are premenopausal and 11 mg/day postmenopausal. For pregnancy and lactation, it is 16 mg/day. For men the PRI is 11 mg/day for ages 15 and older. For children ages 1 to 14 the PRI increases from 7 to 11 mg/day. The PRIs are higher than the U.S. RDAs, with the exception of pregnancy.[154] The EFSA reviewed the same safety question but did not establish a UL.[155]
Infants may require iron supplements if they are bottle-fed cow's milk.[156] Frequent blood donors are at risk of low iron levels and are often advised to supplement their iron intake.[157]
For U.S. food and dietary supplement labeling purposes the amount in a serving is expressed as a percent of Daily Value (%DV). For iron labeling purposes 100% of the Daily Value was 18 mg, and as of May 27, 2016, it remained unchanged at 18 mg.[158][159] Compliance with the updated labeling regulations was required by 1 January 2020 for manufacturers with $10 million or more in annual food sales, and by 1 January 2021 for manufacturers with less than $10 million in annual food sales.[160][161][162] During the first six months following the 1 January 2020 compliance date, the FDA plans to work cooperatively with manufacturers to meet the new Nutrition Facts label requirements and will not focus on enforcement actions regarding these requirements during that time.[160] A table of the old and new adult Daily Values is provided at Reference Daily Intake.
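Since the labeling reference for iron is 18 mg, the percent Daily Value on a label is simple arithmetic; a minimal Python sketch follows (the 8 mg serving is a hypothetical example, not a figure from the text).

DAILY_VALUE_IRON_MG = 18.0  # U.S. labeling reference for iron, per the text above

def percent_daily_value(iron_mg_per_serving):
    # %DV as shown on a U.S. Nutrition Facts label, rounded to a whole percent.
    return round(100.0 * iron_mg_per_serving / DAILY_VALUE_IRON_MG)

print(percent_daily_value(8.0))  # hypothetical 8 mg serving -> 44 (%DV)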
Iron deficiency is the most common nutritional deficiency in the world.[5][163][164][165] When loss of iron is not adequately compensated by dietary iron intake, a state of latent iron deficiency occurs, which over time leads, if untreated, to iron-deficiency anemia, characterised by an insufficient number of red blood cells and an insufficient amount of hemoglobin.[166] Children, pre-menopausal women (women of child-bearing age), and people with poor diets are most susceptible to the disease. Most cases of iron-deficiency anemia are mild, but if not treated it can cause problems like a fast or irregular heartbeat, complications during pregnancy, and delayed growth in infants and children.[167]
Iron uptake is tightly regulated by the human body, which has no regulated physiological means of excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing, so control of iron levels is primarily accomplished by regulating uptake.[168] Regulation of iron uptake is impaired in some people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6 and leads to abnormally low levels of hepcidin, a key regulator of the entry of iron into the circulatory system in mammals.[169] In these people, excessive iron intake can result in iron overload disorders, known medically as hemochromatosis.[5] Many people have an undiagnosed genetic susceptibility to iron overload, and are not aware of a family history of the problem. For this reason, people should not take iron supplements unless they suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to be the cause of 0.3 to 0.8% of all metabolic diseases of Caucasians.[170]
Overdoses of ingested iron can cause excessive levels of free iron in the blood. High blood levels of free ferrous iron react with peroxides to produce highly reactive free radicals that can damage DNA, proteins, lipids, and other cellular components. Iron toxicity occurs when the cell contains free iron, which generally occurs when iron levels exceed the availability of transferrin to bind the iron. Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption, leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere, causing adverse effects that include coma, metabolic acidosis, shock, liver failure, coagulopathy, adult respiratory distress syndrome, long-term organ damage, and even death.[171] Humans experience iron toxicity when the iron exceeds 20 milligrams for every kilogram of body mass; 60 milligrams per kilogram is considered a lethal dose.[172] Overconsumption of iron, often the result of children eating large quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common toxicological causes of death in children under six.[172] The Dietary Reference Intake (DRI) sets the Tolerable Upper Intake Level (UL) for adults at 45 mg/day. For children under fourteen years old the UL is 40 mg/day.[173]
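The per-kilogram thresholds quoted above translate directly into whole-body doses, which is why adult-strength tablets are so dangerous to small children; a short Python sketch (the 70 kg and 20 kg body masses are illustrative assumptions, not from the text).

TOXIC_MG_PER_KG = 20.0   # threshold for iron toxicity, from the text above
LETHAL_MG_PER_KG = 60.0  # considered a lethal dose

def iron_dose_thresholds_mg(body_mass_kg):
    # Returns (toxic, lethal) total ingested iron, in milligrams.
    return body_mass_kg * TOXIC_MG_PER_KG, body_mass_kg * LETHAL_MG_PER_KG

print(iron_dose_thresholds_mg(70.0))  # illustrative adult: (1400.0, 4200.0)
print(iron_dose_thresholds_mg(20.0))  # illustrative child: (400.0, 1200.0)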
The medical management of iron toxicity is complicated, and can include use of a specific chelating agent called deferoxamine to bind and expel excess iron from the body.[171][174][175]
The role of iron in cancer defense can be described as a "double-edged sword" because of its pervasive presence in non-pathological processes.[176] People having chemotherapy may develop iron deficiency and anemia, for which intravenous iron therapy is used to restore iron levels.[177] Iron overload, which may occur from high consumption of red meat,[5] may initiate tumor growth and increase susceptibility to cancer onset,[177] particularly for colorectal cancer.[5]
en/1968.html.txt
Agriculture is the science and art of cultivating plants and livestock.[1] Agriculture was the key development in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that enabled people to live in cities. The history of agriculture began thousands of years ago. After gathering wild grains beginning at least 105,000 years ago, nascent farmers began to plant them around 11,500 years ago. Pigs, sheep and cattle were domesticated over 10,000 years ago. Plants were independently cultivated in at least 11 regions of the world. Industrial agriculture based on large-scale monoculture in the twentieth century came to dominate agricultural output, though about 2 billion people still depended on subsistence agriculture into the twenty-first.
Modern agronomy, plant breeding, agrochemicals such as pesticides and fertilizers, and technological developments have sharply increased yields, while causing widespread ecological and environmental damage. Selective breeding and modern practices in animal husbandry have similarly increased the output of meat, but have raised concerns about animal welfare and environmental damage. Environmental issues include contributions to global warming, depletion of aquifers, deforestation, antibiotic resistance, and growth hormones in industrial meat production. Genetically modified organisms are widely used, although some are banned in certain countries.
The major agricultural products can be broadly grouped into foods, fibers, fuels and raw materials (such as rubber). Food classes include cereals (grains), vegetables, fruits, oils, meat, milk, fungi and eggs. Over one-third of the world's workers are employed in agriculture, second only to the service sector, although the number of agricultural workers in developed countries has decreased significantly over the centuries.
The word agriculture is a late Middle English adaptation of Latin agricultūra, from ager, "field", and cultūra, "cultivation" or "growing".[2] While agriculture usually refers to human activities, certain species of ant,[3][4] termite and beetle have been cultivating crops for up to 60 million years.[5] Agriculture is defined with varying scopes, in its broadest sense using natural resources to "produce commodities which maintain life, including food, fiber, forest products, horticultural crops, and their related services".[6] Thus defined, it includes arable farming, horticulture, animal husbandry and forestry, but horticulture and forestry are in practice often excluded.[6]
The development of agriculture enabled the human population to grow many times larger than could be sustained by hunting and gathering.[9] Agriculture began independently in different parts of the globe,[10] and included a diverse range of taxa, in at least 11 separate centres of origin.[7] Wild grains were collected and eaten from at least 105,000 years ago.[11] From around 11,500 years ago, the eight Neolithic founder crops, emmer and einkorn wheat, hulled barley, peas, lentils, bitter vetch, chick peas and flax were cultivated in the Levant. Rice was domesticated in China between 11,500 and 6,200 BC with the earliest known cultivation from 5,700 BC,[12] followed by mung, soy and azuki beans. Sheep were domesticated in Mesopotamia between 13,000 and 11,000 years ago.[13] Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan some 10,500 years ago.[14] Pig production emerged in Eurasia, including Europe, East Asia and Southwest Asia,[15] where wild boar were first domesticated about 10,500 years ago.[16] In the Andes of South America, the potato was domesticated between 10,000 and 7,000 years ago, along with beans, coca, llamas, alpacas, and guinea pigs. Sugarcane and some root vegetables were domesticated in New Guinea around 9,000 years ago. Sorghum was domesticated in the Sahel region of Africa by 7,000 years ago. Cotton was domesticated in Peru by 5,600 years ago,[17] and was independently domesticated in Eurasia. In Mesoamerica, wild teosinte was bred into maize by 6,000 years ago.[18]
Scholars have offered multiple hypotheses to explain the historical origins of agriculture. Studies of the transition from hunter-gatherer to agricultural societies indicate an initial period of intensification and increasing sedentism; examples are the Natufian culture in the Levant, and the Early Chinese Neolithic in China. Then, wild stands that had previously been harvested started to be planted, and gradually came to be domesticated.[19][20][21]
In Eurasia, the Sumerians started to live in villages from about 8,000 BC, relying on the Tigris and Euphrates rivers and a canal system for irrigation. Ploughs appear in pictographs around 3,000 BC; seed-ploughs around 2,300 BC. Farmers grew wheat, barley, vegetables such as lentils and onions, and fruits including dates, grapes, and figs.[22] Ancient Egyptian agriculture relied on the Nile River and its seasonal flooding. Farming started in the predynastic period at the end of the Paleolithic, after 10,000 BC. Staple food crops were grains such as wheat and barley, alongside industrial crops such as flax and papyrus.[23][24] In India, wheat, barley and jujube were domesticated by 9,000 BC, soon followed by sheep and goats.[25] Cattle, sheep and goats were domesticated in Mehrgarh culture by 8,000–6,000 BC.[26][27][27][28] Cotton was cultivated by the 5th–4th millennium BC.[29] Archeological evidence indicates an animal-drawn plough from 2,500 BC in the Indus Valley Civilisation.[30]
In China, from the 5th century BC there was a nationwide granary system and widespread silk farming.[31] Water-powered grain mills were in use by the 1st century BC,[32] followed by irrigation.[33] By the late 2nd century, heavy ploughs had been developed with iron ploughshares and mouldboards.[34][35] These spread westwards across Eurasia.[36] Asian rice was domesticated 8,200–13,500 years ago – depending on the molecular clock estimate that is used[37] – on the Pearl River in southern China with a single genetic origin from the wild rice Oryza rufipogon.[38] In Greece and Rome, the major cereals were wheat, emmer, and barley, alongside vegetables including peas, beans, and olives. Sheep and goats were kept mainly for dairy products.[39][40]
In the Americas, crops domesticated in Mesoamerica (apart from teosinte) include squash, beans, and cocoa.[41] Cocoa was being domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC.[42]
The turkey was probably domesticated in Mexico or the American Southwest.[43] The Aztecs developed irrigation systems, formed terraced hillsides, fertilized their soil, and developed chinampas or artificial islands. The Mayas used extensive canal and raised field systems to farm swampland from 400 BC.[44][45][46][47][48] Coca was domesticated in the Andes, as were the peanut, tomato, tobacco, and pineapple.[41] Cotton was domesticated in Peru by 3,600 BC.[49] Animals including llamas, alpacas, and guinea pigs were domesticated there.[50] In North America, the indigenous people of the East domesticated crops such as sunflower, tobacco,[51] squash and Chenopodium.[52][53] Wild foods including wild rice and maple sugar were harvested.[54] The domesticated strawberry is a hybrid of a Chilean and a North American species, developed by breeding in Europe and North America.[55] The indigenous people of the Southwest and the Pacific Northwest practiced forest gardening and fire-stick farming. The natives controlled fire on a regional scale to create a low-intensity fire ecology that sustained a low-density agriculture in loose rotation; a sort of "wild" permaculture.[56][57][58][59] A system of companion planting called the Three Sisters was developed on the Great Plains. The three crops were winter squash, maize, and climbing beans.[60][61]
Indigenous Australians, long supposed to have been nomadic hunter-gatherers, practised systematic burning to enhance natural productivity in fire-stick farming.[62] The Gunditjmara and other groups developed eel farming and fish trapping systems from some 5,000 years ago.[63] There is evidence of 'intensification' across the whole continent over that period.[64] In two regions of Australia, the central west coast and eastern central, early farmers cultivated yams, native millet, and bush onions, possibly in permanent settlements.[65][21]
In the Middle Ages, both in the Islamic world and in Europe, agriculture transformed with improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton and fruit trees (such as the orange) to Europe by way of Al-Andalus.[66][67] After 1492 the Columbian exchange brought New World crops such as maize, potatoes, tomatoes, sweet potatoes and manioc to Europe, and Old World crops such as wheat, barley, rice and turnips, and livestock (including horses, cattle, sheep and goats) to the Americas.[68]
Irrigation, crop rotation, and fertilizers advanced from the 17th century with the British Agricultural Revolution, allowing the global population to rise significantly. Since 1900, agriculture in developed nations, and to a lesser extent in the developing world, has seen large rises in productivity as mechanization has replaced human labor, assisted by synthetic fertilizers, pesticides, and selective breeding. The Haber-Bosch method allowed the synthesis of ammonium nitrate fertilizer on an industrial scale, greatly increasing crop yields and sustaining a further increase in global population.[69][70] Modern agriculture has raised or encountered ecological, political, and economic issues including water pollution, biofuels, genetically modified organisms, tariffs and farm subsidies, leading to alternative approaches such as the organic movement.[71][72]
Pastoralism involves managing domesticated animals. In nomadic pastoralism, herds of livestock are moved from place to place in search of pasture, fodder, and water. This type of farming is practised in arid and semi-arid regions of the Sahara, Central Asia and some parts of India.[73]
In shifting cultivation, a small area of forest is cleared by cutting and burning the trees. The cleared land is used for growing crops for a few years until the soil becomes too infertile, and the area is abandoned. Another patch of land is selected and the process is repeated. This type of farming is practiced mainly in areas with abundant rainfall where the forest regenerates quickly. This practice is used in Northeast India, Southeast Asia, and the Amazon Basin.[74]
Subsistence farming is practiced to satisfy family or local needs alone, with little left over for transport elsewhere. It is intensively practiced in Monsoon Asia and South-East Asia.[75] An estimated 2.5 billion subsistence farmers worked in 2018, cultivating about 60% of the earth's arable land.[76]
Intensive farming is cultivation to maximise productivity, with a low fallow ratio and a high use of inputs (water, fertilizer, pesticide and automation). It is practiced mainly in developed countries.[77][78]
From the twentieth century, intensive agriculture increased productivity. It substituted synthetic fertilizers and pesticides for labor, but caused increased water pollution, and often involved farm subsidies. In recent years there has been a backlash against the environmental effects of conventional agriculture, resulting in the organic, regenerative, and sustainable agriculture movements.[71][80] One of the major forces behind this movement has been the European Union, which first certified organic food in 1991 and began reform of its Common Agricultural Policy (CAP) in 2005 to phase out commodity-linked farm subsidies,[81] also known as decoupling. The growth of organic farming has renewed research in alternative technologies such as integrated pest management, selective breeding,[82] and controlled-environment agriculture.[83][84] Recent mainstream technological developments include genetically modified food.[85] Demand for non-food biofuel crops,[86] development of former farm lands, rising transportation costs, climate change, growing consumer demand in China and India, and population growth[87] are threatening food security in many parts of the world.[88][89][90][91][92] The International Fund for Agricultural Development posits that an increase in smallholder agriculture may be part of the solution to concerns about food prices and overall food security, given the favorable experience of Vietnam.[93] Soil degradation and diseases such as stem rust are major concerns globally;[94] approximately 40% of the world's agricultural land is seriously degraded.[95][96] By 2015, the agricultural output of China was the largest in the world, followed by the European Union, India and the United States.[79] Economists measure the total factor productivity of agriculture; by this measure, agriculture in the United States is roughly 1.7 times more productive than it was in 1948.[97]
Following the three-sector theory, the number of people employed in agriculture and other primary activities (such as fishing) can be more than 80% in the least developed countries, and less than 2% in the most highly developed countries.[98] Since the Industrial Revolution, many countries have made the transition to developed economies, and the proportion of people working in agriculture has steadily fallen. During the 16th century in Europe, for example, between 55 and 75% of the population was engaged in agriculture; by the 19th century, this had dropped to between 35 and 65%.[99] In the same countries today, the figure is less than 10%.[98]
At the start of the 21st century, some one billion people, or over 1/3 of the available work force, were employed in agriculture. It constitutes approximately 70% of the global employment of children, and in many countries employs the largest percentage of women of any industry.[100] The service sector overtook the agricultural sector as the largest global employer in 2007.[101]
Agriculture, specifically farming, remains a hazardous industry, and farmers worldwide remain at high risk of work-related injuries, lung disease, noise-induced hearing loss, skin diseases, as well as certain cancers related to chemical use and prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery, and a common cause of fatal agricultural injuries in developed countries is tractor rollovers.[102] Pesticides and other chemicals used in farming can also be hazardous to worker health, and workers exposed to pesticides may experience illness or have children with birth defects.[103] As an industry in which families commonly share in work and live on the farm itself, entire families can be at risk for injuries, illness, and death.[104] Ages 0–6 may be an especially vulnerable population in agriculture;[105] common causes of fatal injuries among young farm workers include drowning, machinery and motor accidents, including with all-terrain vehicles.[104][105][106]
The International Labour Organization considers agriculture "one of the most hazardous of all economic sectors".[100] It estimates that the annual work-related death toll among agricultural employees is at least 170,000, twice the average rate of other jobs. In addition, incidences of death, injury and illness related to agricultural activities often go unreported.[107] The organization has developed the Safety and Health in Agriculture Convention, 2001, which covers the range of risks in the agriculture occupation, the prevention of these risks and the role that individuals and organizations engaged in agriculture should play.[100]
In the United States, agriculture has been identified by the National Institute for Occupational Safety and Health as a priority industry sector in the National Occupational Research Agenda to identify and provide intervention strategies for occupational health and safety issues.[108][109]
In the European Union, the European Agency for Safety and Health at Work has issued guidelines on implementing health and safety directives in agriculture, livestock farming, horticulture, and forestry.[110] The Agricultural Safety and Health Council of America (ASHCA) also holds a yearly summit to discuss safety.[111]
Overall production varies by country as listed.
The twenty largest countries by agricultural output (in nominal terms) at peak level as of 2018, according to the IMF and CIA World Factbook.
Cropping systems vary among farms depending on the available resources and constraints; geography and climate of the farm; government policy; economic, social and political pressures; and the philosophy and culture of the farmer.[113][114]
Shifting cultivation (or slash and burn) is a system in which forests are burnt, releasing nutrients to support cultivation of annual and then perennial crops for a period of several years.[115] Then the plot is left fallow to regrow forest, and the farmer moves to a new plot, returning after many more years (10–20). This fallow period is shortened if population density grows, requiring the input of nutrients (fertilizer or manure) and some manual pest control. Annual cultivation is the next phase of intensity in which there is no fallow period. This requires even greater nutrient and pest control inputs.[115]
Further industrialization led to the use of monocultures, when one cultivar is planted on a large acreage. Because of the low biodiversity, nutrient use is uniform and pests tend to build up, necessitating the greater use of pesticides and fertilizers.[114] Multiple cropping, in which several crops are grown sequentially in one year, and intercropping, when several crops are grown at the same time, are other kinds of annual cropping systems known as polycultures.[115]
In subtropical and arid environments, the timing and extent of agriculture may be limited by rainfall, either not allowing multiple annual crops in a year, or requiring irrigation. In all of these environments perennial crops are grown (coffee, chocolate) and systems are practiced such as agroforestry. In temperate environments, where ecosystems were predominantly grassland or prairie, highly productive annual farming is the dominant agricultural system.[115]
Important categories of food crops include cereals, legumes, forage, fruits and vegetables.[116] Natural fibers include cotton, wool, hemp, silk and flax.[117] Specific crops are cultivated in distinct growing regions throughout the world. Production is listed in millions of metric tons, based on FAO estimates.[116]
Animal husbandry is the breeding and raising of animals for meat, milk, eggs, or wool, and for work and transport.[118] Working animals, including horses, mules, oxen, water buffalo, camels, llamas, alpacas, donkeys, and dogs, have for centuries been used to help cultivate fields, harvest crops, wrangle other animals, and transport farm products to buyers.[119]
Livestock production systems can be defined based on feed source, as grassland-based, mixed, and landless.[120] As of 2010, 30% of Earth's ice- and water-free area was used for producing livestock, with the sector employing approximately 1.3 billion people. Between the 1960s and the 2000s, there was a significant increase in livestock production, both by numbers and by carcass weight, especially among beef, pigs and chickens, the last of which saw production increase by almost a factor of 10. Non-meat animals, such as milk cows and egg-producing chickens, also showed significant production increases. Global cattle, sheep and goat populations are expected to continue to increase sharply through 2050.[121] Aquaculture or fish farming, the production of fish for human consumption in confined operations, is one of the fastest growing sectors of food production, growing at an average of 9% a year between 1975 and 2007.[122]
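A 9% annual rate compounds quickly: treating it as a constant rate over 1975–2007 (a simplifying assumption), total output would multiply roughly fifteen-fold, as this one-line Python check shows.

# Compound growth at an assumed constant 9% per year, 1975-2007.
print(round(1.09 ** (2007 - 1975), 1))  # ~15.8x over the period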
During the second half of the 20th century, producers using selective breeding focused on creating livestock breeds and crossbreeds that increased production, while mostly disregarding the need to preserve genetic diversity. This trend has led to a significant decrease in genetic diversity and resources among livestock breeds, leading to a corresponding decrease in disease resistance and local adaptations previously found among traditional breeds.[123]
Grassland-based livestock production relies upon plant material such as shrubland, rangeland, and pastures for feeding ruminant animals. Outside nutrient inputs may be used; however, manure is returned directly to the grassland as a major nutrient source. This system is particularly important in areas where crop production is not feasible because of climate or soil, representing 30–40 million pastoralists.[115] Mixed production systems use grassland, fodder crops and grain feed crops as feed for ruminant and monogastric (one stomach; mainly chickens and pigs) livestock. Manure is typically recycled in mixed systems as a fertilizer for crops.[120]
Landless systems rely upon feed from outside the farm, representing the de-linking of crop and livestock production found more prevalently in Organisation for Economic Co-operation and Development member countries. Synthetic fertilizers are more heavily relied upon for crop production and manure utilization becomes a challenge as well as a source for pollution.[120] Industrialized countries use these operations to produce much of the global supplies of poultry and pork. Scientists estimate that 75% of the growth in livestock production between 2003 and 2030 will be in confined animal feeding operations, sometimes called factory farming. Much of this growth is happening in developing countries in Asia, with much smaller amounts of growth in Africa.[121] Some of the practices used in commercial livestock production, including the usage of growth hormones, are controversial.[124]
Tillage is the practice of breaking up the soil with tools such as the plow or harrow to prepare for planting, for nutrient incorporation, or for pest control. Tillage varies in intensity from conventional to no-till. It may improve productivity by warming the soil, incorporating fertilizer and controlling weeds, but also renders soil more prone to erosion, triggers the decomposition of organic matter releasing CO2, and reduces the abundance and diversity of soil organisms.[125][126]
Pest control includes the management of weeds, insects, mites, and diseases. Chemical (pesticides), biological (biocontrol), mechanical (tillage), and cultural practices are used. Cultural practices include crop rotation, culling, cover crops, intercropping, composting, avoidance, and resistance. Integrated pest management attempts to use all of these methods to keep pest populations below the number which would cause economic loss, and recommends pesticides as a last resort.[127]
Nutrient management includes both the source of nutrient inputs for crop and livestock production, and the method of utilization of manure produced by livestock. Nutrient inputs can be chemical inorganic fertilizers, manure, green manure, compost and minerals.[128] Crop nutrient use may also be managed using cultural techniques such as crop rotation or a fallow period. Manure is used either by holding livestock where the feed crop is growing, such as in managed intensive rotational grazing, or by spreading either dry or liquid formulations of manure on cropland or pastures.[129][125]
Water management is needed where rainfall is insufficient or variable, which occurs to some degree in most regions of the world.[115] Some farmers use irrigation to supplement rainfall. In other areas such as the Great Plains in the U.S. and Canada, farmers use a fallow year to conserve soil moisture to use for growing a crop in the following year.[130] Agriculture represents 70% of freshwater use worldwide.[131]
According to a report by the International Food Policy Research Institute, agricultural technologies will have the greatest impact on food production if adopted in combination with each other; using a model that assessed how eleven technologies could impact agricultural productivity, food security and trade by 2050, the International Food Policy Research Institute found that the number of people at risk from hunger could be reduced by as much as 40% and food prices could be reduced by almost half.[132]
Payment for ecosystem services is a method of providing additional incentives to encourage farmers to conserve some aspects of the environment. Measures might include paying for reforestation upstream of a city, to improve the supply of fresh water.[133]
Crop alteration has been practiced by humankind for thousands of years, since the beginning of civilization. Altering crops through breeding practices changes the genetic make-up of a plant to develop crops with more beneficial characteristics for humans, for example, larger fruits or seeds, drought-tolerance, or resistance to pests. Significant advances in plant breeding ensued after the work of geneticist Gregor Mendel. His work on dominant and recessive alleles, although initially largely ignored for almost 50 years, gave plant breeders a better understanding of genetics and breeding techniques. Crop breeding includes techniques such as plant selection with desirable traits, self-pollination and cross-pollination, and molecular techniques that genetically modify the organism.[134]
Domestication of plants has, over the centuries, increased yield, improved disease resistance and drought tolerance, eased harvest and improved the taste and nutritional value of crop plants. Careful selection and breeding have had enormous effects on the characteristics of crop plants. Plant selection and breeding in the 1920s and 1930s improved pasture (grasses and clover) in New Zealand. Extensive X-ray and ultraviolet induced mutagenesis efforts (i.e. primitive genetic engineering) during the 1950s produced the modern commercial varieties of grains such as wheat, corn (maize) and barley.[135][136]
The Green Revolution popularized the use of conventional hybridization to sharply increase yield by creating "high-yielding varieties". For example, average yields of corn (maize) in the US have increased from around 2.5 tons per hectare (t/ha) (40 bushels per acre) in 1900 to about 9.4 t/ha (150 bushels per acre) in 2001. Similarly, worldwide average wheat yields have increased from less than 1 t/ha in 1900 to more than 2.5 t/ha in 1990. South American average wheat yields are around 2 t/ha, African under 1 t/ha, and Egypt and Arabia up to 3.5 to 4 t/ha with irrigation. In contrast, the average wheat yield in countries such as France is over 8 t/ha. Variations in yields are due mainly to variation in climate, genetics, and the level of intensive farming techniques (use of fertilizers, chemical pest control, growth control to avoid lodging).[137][138][139]
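The corn figures above mix bushels per acre with tonnes per hectare; the conversion is plain arithmetic. A Python sketch, assuming the standard U.S. weight of 56 lb per bushel of shelled corn (an assumption not stated in the text):

LB_PER_BUSHEL_CORN = 56.0   # standard U.S. test weight for shelled corn (assumed)
KG_PER_LB = 0.45359237
HA_PER_ACRE = 0.40468564

def corn_bu_per_acre_to_t_per_ha(bu_per_acre):
    # Bushels/acre -> kg/acre -> kg/ha -> tonnes/ha.
    kg_per_acre = bu_per_acre * LB_PER_BUSHEL_CORN * KG_PER_LB
    return kg_per_acre / HA_PER_ACRE / 1000.0

print(round(corn_bu_per_acre_to_t_per_ha(40.0), 1))   # ~2.5 t/ha (the 1900 figure)
print(round(corn_bu_per_acre_to_t_per_ha(150.0), 1))  # ~9.4 t/ha (the 2001 figure)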
Genetically modified organisms (GMO) are organisms whose genetic material has been altered by genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has expanded the genes available to breeders to utilize in creating desired germlines for new crops. Increased durability, nutritional content, insect and virus resistance and herbicide tolerance are a few of the attributes bred into crops through genetic engineering.[140] For some, GMO crops cause food safety and food labeling concerns. Numerous countries have placed restrictions on the production, import or use of GMO foods and crops.[141] Currently a global treaty, the Biosafety Protocol, regulates the trade of GMOs. There is ongoing discussion regarding the labeling of foods made from GMOs, and while the EU currently requires all GMO foods to be labeled, the US does not.[142]
Herbicide-resistant seed has a gene implanted into its genome that allows the plants to tolerate exposure to herbicides, including glyphosate. These seeds allow the farmer to grow a crop that can be sprayed with herbicides to control weeds without harming the resistant crop. Herbicide-tolerant crops are used by farmers worldwide.[143] With the increasing use of herbicide-tolerant crops, comes an increase in the use of glyphosate-based herbicide sprays. In some areas glyphosate resistant weeds have developed, causing farmers to switch to other herbicides.[144][145] Some studies also link widespread glyphosate usage to iron deficiencies in some crops, which is both a crop production and a nutritional quality concern, with potential economic and health implications.[146]
Other GMO crops used by growers include insect-resistant crops, which have a gene from the soil bacterium Bacillus thuringiensis (Bt), which produces a toxin specific to insects. These crops resist damage by insects.[147] Some believe that similar or better pest-resistance traits can be acquired through traditional breeding practices, and resistance to various pests can be gained through hybridization or cross-pollination with wild species. In some cases, wild species are the primary source of resistance traits; some tomato cultivars that have gained resistance to at least 19 diseases did so through crossing with wild populations of tomatoes.[148]
Agriculture imposes multiple external costs upon society through effects such as pesticide damage to nature (especially herbicides and insecticides), nutrient runoff, excessive water usage, and loss of natural environment. A 2000 assessment of agriculture in the UK determined total external costs for 1996 of £2,343 million, or £208 per hectare.[149] A 2005 analysis of these costs in the US concluded that cropland imposes approximately $5 to $16 billion ($30 to $96 per hectare), while livestock production imposes $714 million.[150] Both studies, which focused solely on the fiscal impacts, concluded that more should be done to internalize external costs. Neither included subsidies in their analysis, but they noted that subsidies also influence the cost of agriculture to society.[149][150]
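The two UK figures are mutually consistent, and dividing them recovers the area base of the assessment; a one-line Python check on the quoted numbers:

# UK external costs of agriculture, 1996: £2,343 million total at £208 per hectare.
print(round(2_343_000_000 / 208 / 1e6, 1))  # ~11.3 million hectares assessed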
Agriculture seeks to increase yield and to reduce costs. Yield increases with inputs such as fertilisers and removal of pathogens, predators, and competitors (such as weeds). Costs decrease with increasing scale of farm units, such as making fields larger; this means removing hedges, ditches and other areas of habitat. Pesticides kill insects, plants and fungi. These and other measures have cut biodiversity to very low levels on intensively farmed land.[151]
In 2010, the International Resource Panel of the United Nations Environment Programme assessed the environmental impacts of consumption and production. It found that agriculture and food consumption are two of the most important drivers of environmental pressures, particularly habitat change, climate change, water use and toxic emissions. Agriculture is the main source of toxins released into the environment, including insecticides, especially those used on cotton.[152] The 2011 UNEP Green Economy report states that "[a]gricultural operations, excluding land use changes, produce approximately 13 per cent of anthropogenic global GHG emissions. This includes GHGs emitted by the use of inorganic fertilizers, agro-chemical pesticides and herbicides (GHG emissions resulting from production of these inputs are included in industrial emissions), and fossil fuel-energy inputs."[153] "On average we find that the total amount of fresh residues from agricultural and forestry production for second-generation biofuel production amounts to 3.8 billion tonnes per year between 2011 and 2050 (with an average annual growth rate of 11 per cent throughout the period analysed, accounting for higher growth during early years, 48 per cent for 2011–2020 and an average 2 per cent annual expansion after 2020)."[153]
A senior UN official, Henning Steinfeld, said that "Livestock are one of the most significant contributors to today's most serious environmental problems".[154] Livestock production occupies 70% of all land used for agriculture, or 30% of the land surface of the planet. It is one of the largest sources of greenhouse gases, responsible for 18% of the world's greenhouse gas emissions as measured in CO2 equivalents; by comparison, all transportation emits 13.5% of the CO2. It produces 65% of human-related nitrous oxide (which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is 23 times as warming as CO2). It also generates 64% of the ammonia emissions. Livestock expansion is cited as a key factor driving deforestation; in the Amazon basin 70% of previously forested area is now occupied by pastures and the remainder is used for feedcrops.[155] Through deforestation and land degradation, livestock is also driving reductions in biodiversity. Furthermore, the UNEP states that "methane emissions from global livestock are projected to increase by 60 per cent by 2030 under current practices and consumption patterns."[153]
Land transformation, the use of land to yield goods and services, is the most substantial way humans alter the Earth's ecosystems, and is considered the driving force in the loss of biodiversity. Estimates of the amount of land transformed by humans vary from 39 to 50%.[156] Land degradation, the long-term decline in ecosystem function and productivity, is estimated to be occurring on 24% of land worldwide, with cropland overrepresented.[157] The UN-FAO report cites land management as the driving factor behind degradation and reports that 1.5 billion people rely upon the degrading land. Degradation can be deforestation, desertification, soil erosion, mineral depletion, or chemical degradation (acidification and salinization).[115]
Agriculture contributes to the rise of zoonotic diseases such as coronavirus disease 2019 by degrading natural buffers between humans and animals, reducing biodiversity, and creating large groups of genetically similar animals.[158][159]
Eutrophication, excessive nutrients in aquatic ecosystems resulting in algal bloom and anoxia, leads to fish kills, loss of biodiversity, and renders water unfit for drinking and other industrial uses. Excessive fertilization and manure application to cropland, as well as high livestock stocking densities cause nutrient (mainly nitrogen and phosphorus) runoff and leaching from agricultural land. These nutrients are major nonpoint pollutants contributing to eutrophication of aquatic ecosystems and pollution of groundwater, with harmful effects on human populations.[160] Fertilisers also reduce terrestrial biodiversity by increasing competition for light, favouring those species that are able to benefit from the added nutrients.[161]
Agriculture accounts for 70 percent of withdrawals of freshwater resources.[162] Agriculture is a major draw on water from aquifers, and currently draws from those underground water sources at an unsustainable rate. It is long known that aquifers in areas as diverse as northern China, the Upper Ganges and the western US are being depleted, and new research extends these problems to aquifers in Iran, Mexico and Saudi Arabia.[163] Increasing pressure is being placed on water resources by industry and urban areas, meaning that water scarcity is increasing and agriculture is facing the challenge of producing more food for the world's growing population with reduced water resources.[164] Agricultural water usage can also cause major environmental problems, including the destruction of natural wetlands, the spread of water-borne diseases, and land degradation through salinization and waterlogging, when irrigation is performed incorrectly.[165]
Pesticide use has increased since 1950 to 2.5 million short tons annually worldwide, yet crop loss from pests has remained relatively constant.[166] The World Health Organization estimated in 1992 that three million pesticide poisonings occur annually, causing 220,000 deaths.[167] Pesticides select for pesticide resistance in the pest population, leading to a condition termed the "pesticide treadmill" in which pest resistance warrants the development of a new pesticide.[168]
An alternative argument is that the way to "save the environment" and prevent famine is by using pesticides and intensive high yield farming, a view exemplified by a quote heading the Center for Global Food Issues website: 'Growing more per acre leaves more land for nature'.[169][170] However, critics argue that a trade-off between the environment and a need for food is not inevitable,[171] and that pesticides simply replace good agronomic practices such as crop rotation.[168] The Push–pull agricultural pest management technique involves intercropping, using plant aromas to repel pests from crops (push) and to lure them to a place from which they can then be removed (pull).[172]
Global warming and agriculture are interrelated on a global scale. Global warming affects agriculture through changes in average temperatures, rainfall, and weather extremes (like storms and heat waves); changes in pests and diseases; changes in atmospheric carbon dioxide and ground-level ozone concentrations; changes in the nutritional quality of some foods;[173] and changes in sea level.[174] Global warming is already affecting agriculture, with effects unevenly distributed across the world.[175] Future climate change will probably negatively affect crop production in low latitude countries, while effects in northern latitudes may be positive or negative.[175] Global warming will probably increase the risk of food insecurity for some vulnerable groups, such as the poor.[176]
Animal husbandry is also responsible for greenhouse gas production of CO2 and a large percentage of the world's methane, as well as for future land infertility and the displacement of wildlife. Agriculture contributes to climate change by anthropogenic emissions of greenhouse gases and by the conversion of non-agricultural land such as forest for agricultural use.[177] Agriculture, forestry and land-use change contributed around 20 to 25% of global annual emissions in 2010.[178] A range of policies can reduce the risk of negative climate change impacts on agriculture,[179][180] and reduce greenhouse gas emissions from the agriculture sector.[181][182][183]
Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. There is not enough water to continue farming using current practices; therefore how critical water, land, and ecosystem resources are used to boost crop yields must be reconsidered. A solution would be to give value to ecosystems, recognizing environmental and livelihood tradeoffs, and balancing the rights of a variety of users and interests.[184] Inequities that result when such measures are adopted would need to be addressed, such as the reallocation of water from poor to rich, the clearing of land to make way for more productive farmland, or the preservation of a wetland system that limits fishing rights.[185]
Technological advancements help provide farmers with tools and resources to make farming more sustainable.[186] Technology permits innovations like conservation tillage, a farming process which helps prevent land loss to erosion, reduces water pollution, and enhances carbon sequestration.[187] Other potential practices include conservation agriculture, agroforestry, improved grazing, avoided grassland conversion, and biochar.[188][189] Current mono-crop farming practices in the United States preclude widespread adoption of sustainable practices, such as 2-3 crop rotations that incorporate grass or hay with annual crops, unless negative emission goals such as soil carbon sequestration become policy.[190]
The caloric demand of Earth's projected population, with current climate change predictions, can be satisfied by additional improvement of agricultural methods, expansion of agricultural areas, and a sustainability-oriented consumer mindset.[191]
Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of energy-intensive mechanization, fertilizers and pesticides. The vast majority of this energy input comes from fossil fuel sources.[192] Between the 1960s and the 1980s, the Green Revolution transformed agriculture around the globe, with world grain production increasing significantly (between 70% and 390% for wheat and 60% to 150% for rice, depending on geographic area)[193] as world population doubled. Heavy reliance on petrochemicals has raised concerns that oil shortages could increase costs and reduce agricultural output.[194]
Industrialized agriculture depends on fossil fuels in two fundamental ways: direct consumption on the farm and manufacture of inputs used on the farm. Direct consumption includes the use of lubricants and fuels to operate farm vehicles and machinery.[194]
Indirect consumption includes the manufacture of fertilizers, pesticides, and farm machinery.[194] In particular, the production of nitrogen fertilizer can account for over half of agricultural energy usage.[198] Together, direct and indirect consumption by US farms accounts for about 2% of the nation's energy use. Direct and indirect energy consumption by U.S. farms peaked in 1979, and has since gradually declined.[194] Food systems encompass not just agriculture but off-farm processing, packaging, transporting, marketing, consumption, and disposal of food and food-related items. Agriculture accounts for less than one-fifth of food system energy use in the US.[199][196]
Agricultural economics is economics as it relates to the "production, distribution and consumption of [agricultural] goods and services".[200] Combining agricultural production with general theories of marketing and business as a discipline of study began in the late 1800s, and grew significantly through the 20th century.[201] Although the study of agricultural economics is relatively recent, major trends in agriculture have significantly affected national and international economies throughout history, ranging from tenant farmers and sharecropping in the post-American Civil War Southern United States[202] to the European feudal system of manorialism.[203] In the United States, and elsewhere, food costs attributed to food processing, distribution, and agricultural marketing, sometimes referred to as the value chain, have risen while the costs attributed to farming have declined. This is related to the greater efficiency of farming, combined with the increased level of value addition (e.g. more highly processed products) provided by the supply chain. Market concentration has increased in the sector as well, and although the total effect of the increased market concentration is likely increased efficiency, the changes redistribute economic surplus from producers (farmers) and consumers, and may have negative implications for rural communities.[204]
National government policies can significantly change the economic marketplace for agricultural products, in the form of taxation, subsidies, tariffs and other measures.[206] Since at least the 1960s, a combination of trade restrictions, exchange rate policies and subsidies have affected farmers in both the developing and the developed world. In the 1980s, non-subsidized farmers in developing countries experienced adverse effects from national policies that created artificially low global prices for farm products. Between the mid-1980s and the early 2000s, several international agreements limited agricultural tariffs, subsidies and other trade restrictions.[207]
However, as of 2009, there was still a significant amount of policy-driven distortion in global agricultural product prices. The three agricultural products with the greatest amount of trade distortion were sugar, milk and rice, mainly due to taxation. Among the oilseeds, sesame had the greatest amount of taxation, but overall, feed grains and oilseeds had much lower levels of taxation than livestock products. Since the 1980s, policy-driven distortions have seen a greater decrease among livestock products than crops during the worldwide reforms in agricultural policy.[206] Despite this progress, certain crops, such as cotton, still see subsidies in developed countries artificially deflating global prices, causing hardship in developing countries with non-subsidized farmers.[208] Unprocessed commodities such as corn, soybeans, and cattle are generally graded to indicate quality, affecting the price the producer receives. Commodities are generally reported by production quantities, such as volume, number or weight.[209]
Agricultural science is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences used in the practice and understanding of agriculture. It covers topics such as agronomy, plant breeding and genetics, plant pathology, crop modelling, soil science, entomology, production techniques and improvement, study of pests and their management, and study of adverse environmental effects such as soil degradation, waste management, and bioremediation.[210][211]
The scientific study of agriculture began in the 18th century, when Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulphate) as a fertilizer.[212] Research became more systematic when in 1843, John Lawes and Henry Gilbert began a set of long-term agronomy field experiments at Rothamsted Research Station in England; some of them, such as the Park Grass Experiment, are still running.[213][214] In America, the Hatch Act of 1887 provided funding for what it was the first to call "agricultural science", driven by farmers' interest in fertilizers.[215] In agricultural entomology, the USDA began to research biological control in 1881; it instituted its first large program in 1905, searching Europe and Japan for natural enemies of the gypsy moth and brown-tail moth, establishing parasitoids (such as solitary wasps) and predators of both pests in the USA.[216][217][218]
Agricultural policy is the set of government decisions and actions relating to domestic agriculture and imports of foreign agricultural products. Governments usually implement agricultural policies with the goal of achieving a specific outcome in the domestic agricultural product markets. Some overarching themes include risk management and adjustment (including policies related to climate change, food safety and natural disasters), economic stability (including policies related to taxes), natural resources and environmental sustainability (especially water policy), research and development, and market access for domestic commodities (including relations with global organizations and agreements with other countries).[220] Agricultural policy can also touch on food quality, ensuring that the food supply is of a consistent and known quality, food security, ensuring that the food supply meets the population's needs, and conservation. Policy programs can range from financial programs, such as subsidies, to encouraging producers to enroll in voluntary quality assurance programs.[221]
There are many influences on the creation of agricultural policy, including consumers, agribusiness, trade lobbies and other groups. Agribusiness interests hold a large amount of influence over policy making, in the form of lobbying and campaign contributions. Political action groups, including those interested in environmental issues and labor unions, also provide influence, as do lobbying organizations representing individual agricultural commodities.[222] The Food and Agriculture Organization of the United Nations (FAO) leads international efforts to defeat hunger and provides a forum for the negotiation of global agricultural regulations and agreements. Dr. Samuel Jutzi, director of FAO's animal production and health division, states that lobbying by large corporations has stopped reforms that would improve human health and the environment. For example, proposals in 2010 for a voluntary code of conduct for the livestock industry that would have provided incentives for improving health standards, and for environmental regulations such as limits on the number of animals an area of land can support without long-term damage, were defeated due to pressure from large food companies.[223]
en/1969.html.txt
Fermentation is a metabolic process that produces chemical changes in organic substrates through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In the context of food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage.[1] The science of fermentation is known as zymology.
In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically.[2] Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans.[3]
Below are some definitions of fermentation. They range from informal, general usages to more scientific definitions.[4]
Along with photosynthesis and aerobic respiration, fermentation is a way of extracting energy from molecules, but it is the only one common to all bacteria and eukaryotes. It is therefore considered the oldest metabolic pathway, suitable for an environment that did not yet have oxygen.[5]:389 Yeast, a form of fungus, occurs in almost any environment capable of supporting microbes, from the skins of fruits to the guts of insects and mammals and the deep ocean, and harvests sugar-rich materials to produce ethanol and carbon dioxide.[6][7]
The basic mechanism for fermentation remains present in all cells of higher organisms. Mammalian muscle carries out fermentation during periods of intense exercise where oxygen supply becomes limited, resulting in the creation of lactic acid.[8]:63 In invertebrates, fermentation also produces succinate and alanine.[9]:141
Fermentative bacteria play an essential role in the production of methane in habitats ranging from the rumens of cattle to sewage digesters and freshwater sediments. They produce hydrogen, carbon dioxide, formate, acetate and other carboxylic acids; consortia of microbes then convert the carbon dioxide and acetate to methane. Acetogenic bacteria oxidize the acids, obtaining more acetate and either hydrogen or formate. Finally, methanogens (in the domain Archaea) convert acetate to methane.[10]
Fermentation reacts NADH with an endogenous, organic electron acceptor.[2] Usually this is pyruvate formed from sugar through glycolysis. The reaction produces NAD+ and an organic product, typical examples being ethanol, lactic acid, and hydrogen gas (H2), and often also carbon dioxide. However, more exotic compounds can be produced by fermentation, such as butyric acid and acetone. Fermentation products are considered waste products, since they cannot be metabolized further without the use of oxygen.[12]
Fermentation normally occurs in an anaerobic environment. In the presence of O2, NADH and pyruvate are used to generate ATP in respiration. This is called oxidative phosphorylation, and it generates much more ATP than glycolysis alone, since it releases the chemical energy of O2.[12] For that reason, fermentation is rarely utilized when oxygen is available. However, even in the presence of abundant oxygen, some strains of yeast such as Saccharomyces cerevisiae prefer fermentation to aerobic respiration as long as there is an adequate supply of sugars (a phenomenon known as the Crabtree effect).[13] Some fermentation processes involve obligate anaerobes, which cannot tolerate oxygen.
Although yeast carries out the fermentation in the production of ethanol in beers, wines, and other alcoholic drinks, this is not the only possible agent: bacteria carry out the fermentation in the production of xanthan gum.
In ethanol fermentation, one glucose molecule is converted into two ethanol molecules and two carbon dioxide molecules.[14][15] It is used to make bread dough rise: the carbon dioxide forms bubbles, expanding the dough into a foam.[16][17] The ethanol is the intoxicating agent in alcoholic beverages such as wine, beer and liquor.[18] Fermentation of feedstocks, including sugarcane, corn, and sugar beets, produces ethanol that is added to gasoline.[19] In some species of fish, including goldfish and carp, it provides energy when oxygen is scarce (along with lactic acid fermentation).[20]
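As a minimal worked sketch (standard textbook stoichiometry, not quoted from this article), the overall reaction can be written as:

$$\mathrm{C_6H_{12}O_6 \rightarrow 2\,C_2H_5OH + 2\,CO_2}$$

one glucose molecule yielding two molecules of ethanol and two of carbon dioxide.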
The process proceeds as follows. Before fermentation, a glucose molecule breaks down into two pyruvate molecules (glycolysis). The energy from this exothermic reaction is used to bind inorganic phosphate to ADP, converting it to ATP, and to convert NAD+ to NADH. The pyruvates break down into two acetaldehyde molecules and give off two carbon dioxide molecules as waste products. The acetaldehyde is reduced into ethanol using the energy and hydrogen from NADH, and the NADH is oxidized into NAD+ so that the cycle may repeat. The reaction is catalyzed by the enzymes pyruvate decarboxylase and alcohol dehydrogenase.[14]
Homolactic fermentation (producing only lactic acid) is the simplest type of fermentation. Pyruvate from glycolysis[21] undergoes a simple redox reaction, forming lactic acid.[22][23] It is probably the only respiration process that does not produce a gas as a byproduct. Overall, one molecule of glucose (or any six-carbon sugar) is converted to two molecules of lactic acid:
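In standard textbook form (a sketch supplied here, not the article's own equation):

$$\mathrm{C_6H_{12}O_6 \rightarrow 2\,CH_3CH(OH)COOH}$$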
It occurs in the muscles of animals when they need energy faster than the blood can supply oxygen. It also occurs in some kinds of bacteria (such as lactobacilli) and some fungi. Bacteria of this type convert lactose into lactic acid in yogurt, giving it its sour taste. These lactic acid bacteria can carry out either homolactic fermentation, where the end-product is mostly lactic acid, or heterolactic fermentation, where some lactate is further metabolized to ethanol and carbon dioxide[22] (via the phosphoketolase pathway), acetate, or other metabolic products, e.g.:
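One common heterolactic stoichiometry via the phosphoketolase pathway, given as a standard textbook sketch rather than the article's own equation:

$$\mathrm{C_6H_{12}O_6 \rightarrow CH_3CH(OH)COOH + C_2H_5OH + CO_2}$$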
If lactose is fermented (as in yogurts and cheeses), it is first converted into glucose and galactose (both six-carbon sugars with the same atomic formula):
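The hydrolysis step in standard form (a sketch; as the text notes, glucose and galactose share the formula C6H12O6):

$$\mathrm{C_{12}H_{22}O_{11} + H_2O \rightarrow C_6H_{12}O_6 + C_6H_{12}O_6}$$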
Heterolactic fermentation is in a sense intermediate between lactic acid fermentation and other types, e.g. alcoholic fermentation. Reasons to go further and convert lactic acid into something else include:
Hydrogen gas is produced in many types of fermentation as a way to regenerate NAD+ from NADH. Electrons are transferred to ferredoxin, which in turn is oxidized by hydrogenase, producing H2.[14] Hydrogen gas is a substrate for methanogens and sulfate reducers, which keep the concentration of hydrogen low and favor the production of such an energy-rich compound,[24] but hydrogen gas at a fairly high concentration can nevertheless be formed, as in flatus.
For example, Clostridium pasteurianum ferments glucose to butyrate, acetate, carbon dioxide, and hydrogen gas.[25] The reaction leading to acetate is:
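A balanced form of the acetate-producing reaction, in standard textbook notation (a sketch consistent with the products listed above):

$$\mathrm{C_6H_{12}O_6 + 2\,H_2O \rightarrow 2\,CH_3COOH + 2\,CO_2 + 4\,H_2}$$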
Other types of fermentation include mixed acid fermentation, butanediol fermentation, butyrate fermentation, caproate fermentation, acetone–butanol–ethanol fermentation, and glyoxylate fermentation.
Most industrial fermentation uses batch or fed-batch procedures, although continuous fermentation can be more economical if various challenges, particularly the difficulty of maintaining sterility, can be met.[26]
In a batch process, all the ingredients are combined and the reactions proceed without any further input. Batch fermentation has been used for millennia to make bread and alcoholic beverages, and it is still a common method, especially when the process is not well understood.[27]:1 However, it can be expensive because the fermentor must be sterilized using high-pressure steam between batches.[26] Strictly speaking, small quantities of chemicals are often added during the run to control the pH or suppress foaming.[27]:25
Batch fermentation goes through a series of phases. There is a lag phase in which cells adjust to their environment; then a phase in which exponential growth occurs. Once many of the nutrients have been consumed, the growth slows and becomes non-exponential, but production of secondary metabolites (including commercially important antibiotics and enzymes) accelerates. This continues through a stationary phase after most of the nutrients have been consumed, and then the cells die.[27]:25
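As a minimal quantitative sketch of the exponential phase (standard microbial growth kinetics; the symbols $N$, $N_0$ and $\mu$ are conventional notation, not from the source), cell density obeys

$$\frac{dN}{dt} = \mu N, \qquad N(t) = N_0\, e^{\mu t},$$

where $\mu$ is the specific growth rate; as nutrients are consumed, $\mu$ falls toward zero and the culture enters the stationary phase.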
Fed-batch fermentation is a variation of batch fermentation where some of the ingredients are added during the fermentation. This allows greater control over the stages of the process. In particular, production of secondary metabolites can be increased by adding a limited quantity of nutrients during the non-exponential growth phase. Fed-batch operations are often sandwiched between batch operations.[27]:1[28]
The high cost of sterilizing the fermentor between batches can be avoided using various open fermentation approaches that are able to resist contamination. One is to use a naturally evolved mixed culture. This is particularly favored in wastewater treatment, since mixed populations can adapt to a wide variety of wastes. Thermophilic bacteria can produce lactic acid at temperatures of around 50 °C, sufficient to discourage microbial contamination; and ethanol has been produced at a temperature of 70 °C. This is just below its boiling point (78 °C), making it easy to extract. Halophilic bacteria can produce bioplastics in hypersaline conditions. Solid-state fermentation adds a small amount of water to a solid substrate; it is widely used in the food industry to produce flavors, enzymes and organic acids.[26]
In continuous fermentation, substrates are added and final products removed continuously.[26] There are three varieties: chemostats, which hold nutrient levels constant; turbidostats, which keep cell mass constant; and plug flow reactors in which the culture medium flows steadily through a tube while the cells are recycled from the outlet to the inlet.[28] If the process works well, there is a steady flow of feed and effluent and the costs of repeatedly setting up a batch are avoided. Also, it can prolong the exponential growth phase and avoid byproducts that inhibit the reactions by continuously removing them. However, it is difficult to maintain a steady state and avoid contamination, and the design tends to be complex.[26] Typically the fermentor must run for over 500 hours to be more economical than batch processors.[28]
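For the chemostat, a minimal steady-state sketch (standard bioreactor theory; $F$, $V$ and $D$ are assumed symbols for feed flow rate, culture volume and dilution rate): at steady state the specific growth rate equals the dilution rate,

$$\mu = D = \frac{F}{V},$$

which is why a steady nutrient feed can hold the culture in prolonged exponential growth at a chosen rate.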
The use of fermentation, particularly for beverages, has existed since the Neolithic and has been documented dating from 7000–6600 BCE in Jiahu, China,[29] 5000 BCE in India (Ayurveda mentions many medicated wines), 6000 BCE in Georgia,[30] 3150 BCE in ancient Egypt,[31] 3000 BCE in Babylon,[32] 2000 BCE in pre-Hispanic Mexico,[32] and 1500 BCE in Sudan.[33] Fermented foods have a religious significance in Judaism and Christianity. The Baltic god Rugutis was worshiped as the agent of fermentation.[34][35]
In 1837, Charles Cagniard de la Tour, Theodor Schwann and Friedrich Traugott Kützing independently published papers concluding, as a result of microscopic investigations, that yeast is a living organism that reproduces by budding.[36][37]:6 Schwann boiled grape juice to kill the yeast and found that no fermentation would occur until new yeast was added. However, many chemists, including Antoine Lavoisier, continued to view fermentation as a simple chemical reaction and rejected the notion that living organisms could be involved. This was seen as a reversion to vitalism and was lampooned in an anonymous publication by Justus von Liebig and Friedrich Wöhler.[5]:108–109
The turning point came when Louis Pasteur (1822–1895), during the 1850s and 1860s, repeated Schwann's experiments and showed in a series of investigations that fermentation is initiated by living organisms.[23][37]:6 In 1857, Pasteur showed that lactic acid fermentation is caused by living organisms.[38] In 1860, he demonstrated that bacteria cause souring in milk, a process formerly thought to be merely a chemical change, and his work in identifying the role of microorganisms in food spoilage led to the process of pasteurization.[39] In 1877, working to improve the French brewing industry, Pasteur published his famous paper on fermentation, "Études sur la Bière", which was translated into English in 1879 as "Studies on fermentation".[40] He defined fermentation (incorrectly) as "Life without air",[41] but correctly showed that specific types of microorganisms cause specific types of fermentations and specific end-products.
Although showing fermentation to be the result of the action of living microorganisms was a breakthrough, it did not explain the basic nature of the fermentation process, or prove that it is caused by the microorganisms that appear to be always present. Many scientists, including Pasteur, had unsuccessfully attempted to extract the fermentation enzyme from yeast.[41] Success came in 1897 when the German chemist Eduard Buchner ground up yeast, extracted a juice from the cells, then found to his amazement that this "dead" liquid would ferment a sugar solution, forming carbon dioxide and alcohol much like living yeasts.[42] Buchner's results are considered to mark the birth of biochemistry. The "unorganized ferments" behaved just like the organized ones. From that time on, the term enzyme came to be applied to all ferments. It was then understood that fermentation is caused by enzymes that are produced by microorganisms.[43] In 1907, Buchner won the Nobel Prize in Chemistry for his work.[44]
Advances in microbiology and fermentation technology have continued steadily up until the present. For example, in the 1930s, it was discovered that microorganisms could be mutated with physical and chemical treatments to be higher-yielding, faster-growing, tolerant of less oxygen, and able to use a more concentrated medium.[45][46] Strain selection and hybridization developed as well, affecting most modern food fermentations.
The word "ferment" is derived from the Latin verb fervere, which means to boil. It is thought to have been first used in the late 14th century in alchemy, but only in a broad sense. It was not used in the modern scientific sense until around 1600.
en/197.html.txt
South America is a continent in the Western Hemisphere, mostly in the Southern Hemisphere, with a relatively small portion in the Northern Hemisphere. It may also be considered a subcontinent of the Americas,[6][7] which is how it is viewed in Spanish and Portuguese-speaking regions of the Americas. The reference to South America instead of other regions (like Latin America or the Southern Cone) has increased in recent decades due to changing geopolitical dynamics (in particular, the rise of Brazil).[8]
It is bordered on the west by the Pacific Ocean and on the north and east by the Atlantic Ocean; North America and the Caribbean Sea lie to the northwest. It includes twelve sovereign states: Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Suriname, Uruguay and Venezuela, as well as a part of France: French Guiana. In addition, the ABC islands of the Kingdom of the Netherlands, the Falkland Islands (a British Overseas Territory), Trinidad and Tobago, and Panama may also be considered part of South America.
South America has an area of 17,840,000 square kilometers (6,890,000 sq mi). Its population as of 2018 has been estimated at more than 423 million.[1][2] South America ranks fourth in area (after Asia, Africa, and North America) and fifth in population (after Asia, Africa, Europe, and North America). Brazil is by far the most populous South American country, with more than half of the continent's population, followed by Colombia, Argentina, Venezuela and Peru. In recent decades Brazil has also concentrated half of the region's GDP and has become the leading regional power.[8]
Most of the population lives near the continent's western or eastern coasts while the interior and the far south are sparsely populated. The geography of western South America is dominated by the Andes mountains; in contrast, the eastern part contains both highland regions and vast lowlands where rivers such as the Amazon, Orinoco, and Paraná flow. Most of the continent lies in the tropics.
The continent's cultural and ethnic outlook has its origin with the interaction of indigenous peoples with European conquerors and immigrants and, more locally, with African slaves. Given a long history of colonialism, the overwhelming majority of South Americans speak Portuguese or Spanish, and societies and states reflect Western traditions.
South America occupies the southern portion of the Americas. The continent is generally delimited on the northwest by the Darién watershed along the Colombia–Panama border, although some may consider the border instead to be the Panama Canal. Geopolitically and geographically[9] all of Panama – including the segment east of the Panama Canal in the isthmus – is typically included in North America alone[10][11][12] and among the countries of Central America.[13][14] Almost all of mainland South America sits on the South American Plate.
South America is home to the world's highest uninterrupted waterfall, Angel Falls in Venezuela; the highest single drop waterfall Kaieteur Falls in Guyana; the largest river by volume, the Amazon River; the longest mountain range, the Andes (whose highest mountain is Aconcagua at 6,962 m or 22,841 ft); the driest non-polar place on earth, the Atacama Desert;[15][16][17] the largest rainforest, the Amazon Rainforest; the highest capital city, La Paz, Bolivia; the highest commercially navigable lake in the world, Lake Titicaca; and, excluding research stations in Antarctica, the world's southernmost permanently inhabited community, Puerto Toro, Chile.
South America's major mineral resources are gold, silver, copper, iron ore, tin, and petroleum. These resources have brought high income to its countries, especially in times of war or of rapid economic growth by industrialized countries elsewhere. However, the concentration on producing one major export commodity has often hindered the development of diversified economies. The fluctuation in the price of commodities in international markets has historically led to major highs and lows in the economies of South American states, often causing extreme political instability. This is leading to efforts to diversify production and move away from economies dedicated to one major export.
South America is one of the most biodiverse continents on earth. It is home to many interesting and unique species of animals including the llama, anaconda, piranha, jaguar, vicuña, and tapir. The Amazon rainforests possess high biodiversity, containing a major proportion of the Earth's species.
Brazil is the largest country in South America, encompassing around half of the continent's land area and population. The remaining countries and territories are divided among three regions: the Andean States, the Guianas and the Southern Cone.
Traditionally, South America also includes some of the nearby islands. Aruba, Bonaire, Curaçao, Trinidad, Tobago, and the federal dependencies of Venezuela sit on the northerly South American continental shelf and are often considered part of the continent. Geopolitically, the island states and overseas territories of the Caribbean are generally grouped as a part or subregion of North America, since they are more distant and sit on the Caribbean Plate, even though San Andres and Providencia are politically part of Colombia and Aves Island is controlled by Venezuela.[12][18][19]
Other islands that are included with South America are the Galápagos Islands (belonging to Ecuador), Easter Island (in Oceania but belonging to Chile), Robinson Crusoe Island and Chiloé (both Chilean) and Tierra del Fuego (split between Chile and Argentina). In the Atlantic, Brazil owns Fernando de Noronha, Trindade and Martim Vaz, and the Saint Peter and Saint Paul Archipelago, while the Falkland Islands are governed by the United Kingdom, whose sovereignty over the islands is disputed by Argentina. South Georgia and the South Sandwich Islands may be associated with either South America or Antarctica.[20]
South of latitude 30° S, average temperatures are distributed with notable regularity, with the isotherms increasingly coinciding with the parallels of latitude.[22]
In temperate latitudes, winters are milder and summers warmer than in North America. Because the most extensive part of the continent lies in the equatorial zone, the region has more areas of equatorial plains than any other region.[22]
The average annual temperatures in the Amazon basin oscillate around 27 °C (81 °F), with low thermal amplitudes and high rainfall. Between Lake Maracaibo and the mouth of the Orinoco, an equatorial climate of the Congolese type predominates, which also extends into parts of Brazilian territory.[22]
The east-central Brazilian plateau has a humid and warm tropical climate. The northern and eastern parts of the Argentine pampas have a humid subtropical climate with dry winters and humid summers of the Chinese type, while the western and eastern ranges have a subtropical climate of the Dinaric type. At the highest points of the Andean region, climates are colder than those at the heads of the Norwegian fjords. On the Andean plateaus, a warm climate prevails, although tempered by altitude, while the coastal strip has an equatorial climate of the Guinean type. From this point to the north of the Chilean coast there appear, successively, a Mediterranean oceanic climate, a temperate climate of the Breton type and, in Tierra del Fuego, a cold climate of the Siberian type.[22]
The distribution of rainfall is related to the regime of winds and air masses. In most of the tropical region east of the Andes, winds blowing from the northeast, east and southeast carry moisture from the Atlantic, causing abundant rainfall. However, due to consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare.[23] In the Orinoco Llanos and in the Guianas plateau, precipitation levels range from moderate to high. The Pacific coast of Colombia and northern Ecuador are rainy regions, with Chocó in Colombia being the rainiest place in the world along with the northern slopes of the Indian Himalayas.[24] The Atacama Desert, along this stretch of coast, is one of the driest regions in the world. The central and southern parts of Chile are subject to extratropical cyclones, and most of Argentine Patagonia is desert. In the pampas of Argentina, Uruguay and southern Brazil, rainfall is moderate, with rain well distributed over the year. The moderately dry conditions of the Chaco contrast with the intense rainfall of the eastern region of Paraguay. On the semiarid coast of the Brazilian Northeast, the rains are linked to a monsoon regime.[22]
Important factors in the determination of climates are sea currents, such as the Humboldt and Falklands currents. The equatorial current of the South Atlantic strikes the coast of northeastern Brazil and divides into two branches: the Brazil Current and a coastal current that flows northwest towards the Antilles, where it turns northeast to form the most important and famous ocean current in the world, the Gulf Stream.[22][25]
South America is believed to have been joined with Africa from the late Paleozoic Era to the early Mesozoic Era, until the supercontinent Pangaea began to rift and break apart about 225 million years ago. Therefore, South America and Africa share similar fossils and rock layers.
South America is thought to have been first inhabited by humans when people were crossing the Bering Land Bridge (now the Bering Strait) at least 15,000 years ago from the territory that is present-day Russia. They migrated south through North America, and eventually reached South America through the Isthmus of Panama.
The earliest evidence of human presence in South America dates back to about 9000 BC, when squashes, chili peppers and beans began to be cultivated for food in the highlands of the Amazon Basin. Pottery evidence further suggests that manioc, which remains a staple food today, was being cultivated as early as 2000 BC.[26]
By 2000 BC, many agrarian communities had been settled throughout the Andes and the surrounding regions. Fishing became a widespread practice along the coast, helping establish fish as a primary source of food. Irrigation systems were also developed at this time, which aided in the rise of an agrarian society.[26]
South American cultures began domesticating llamas, vicuñas, guanacos, and alpacas in the highlands of the Andes circa 3500 BC. Besides their use as sources of meat and wool, these animals were used for transportation of goods.[26]
The rise of plant growing and the subsequent appearance of permanent human settlements allowed for the multiple and overlapping beginnings of civilizations in South America.
One of the earliest known South American civilizations was at Norte Chico, on the central Peruvian coast. Though a pre-ceramic culture, the monumental architecture of Norte Chico is contemporaneous with the pyramids of Ancient Egypt. The Norte Chico governing class established a trade network and developed agriculture, and was followed by Chavín by 900 BC, according to some estimates and archaeological finds. Artifacts were found at a site called Chavín de Huantar in modern Peru at an elevation of 3,177 meters (10,423 ft). Chavín civilization spanned 900 BC to 300 BC.
On the central coast of Peru, around the beginning of the 1st millennium AD, the Moche (100 BC – 700 AD, on the northern coast of Peru), Paracas and Nazca (400 BC – 800 AD, Peru) cultures flourished as centralized states with permanent militias, improving agriculture through irrigation and developing new styles of ceramic art. On the Altiplano, Tiahuanaco or Tiwanaku (100 BC – 1200 AD, Bolivia) managed a large commercial network based on religion.
Around the 7th century, both Tiahuanaco and the Wari or Huari Empire (600–1200, central and northern Peru) expanded their influence across the whole Andean region, imposing Huari urbanism and Tiahuanaco religious iconography.
The Muisca were the main indigenous civilization in what is now Colombia. They established the Muisca Confederation of many clans, or cacicazgos, that had a free trade network among themselves. They were goldsmiths and farmers.
Other important Pre-Columbian cultures include: the Cañaris (in south central Ecuador), Chimú Empire (1300–1470, Peruvian northern coast), Chachapoyas, and the Aymaran kingdoms (1000–1450, Western Bolivia and southern Peru).
Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533. Known in Quechua as Tawantinsuyu, or "the land of the four regions", the Inca Empire was highly distinct and developed. Inca rule extended to nearly a hundred linguistic or ethnic communities, some nine to fourteen million people connected by a 25,000-kilometer road system. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture.
The Mapuche in Central and Southern Chile resisted the European and Chilean settlers, waging the Arauco War for more than 300 years.
In 1494, Portugal and Spain, the two great maritime European powers of that time, on the expectation of new lands being discovered in the west, signed the Treaty of Tordesillas, by which they agreed, with the support of the Pope, that all the land outside Europe should be an exclusive duopoly between the two countries.
The treaty established an imaginary line along a north–south meridian 370 leagues west of the Cape Verde Islands, roughly 46° 37' W. In terms of the treaty, all land to the west of the line (known to comprise most of the South American soil) would belong to Spain, and all land to the east, to Portugal. As accurate measurements of longitude were impossible at that time, the line was not strictly enforced, resulting in a Portuguese expansion of Brazil across the meridian.
Beginning in the 1530s, the people and natural resources of South America were repeatedly exploited by foreign conquistadors, first from Spain and later from Portugal. These competing colonial nations claimed the land and resources as their own and divided it into colonies.
European infectious diseases (smallpox, influenza, measles, and typhus) – to which the native populations had no immune resistance – caused large-scale depopulation of the native population under Spanish control. Systems of forced labor, such as the haciendas and mining industry's mit'a also contributed to the depopulation. After this, African slaves, who had developed immunities to these diseases, were quickly brought in to replace them.
The Spaniards were committed to converting their native subjects to Christianity and were quick to purge any native cultural practices that hindered this end; however, many initial attempts at this were only partially successful, as native groups simply blended Catholicism with their established beliefs and practices. Furthermore, the Spaniards spread their language as they did their religion, although the Roman Catholic Church's evangelization in Quechua, Aymara, and Guaraní actually contributed to the continued use of these native languages, albeit only in oral form.
Eventually, the natives and the Spaniards interbred, forming a mestizo class. At the beginning, many mestizos of the Andean region were offspring of Amerindian mothers and Spanish fathers. After independence, most mestizos had native fathers and European or mestizo mothers.
Many native artworks were considered pagan idols and destroyed by Spanish explorers; this included many gold and silver sculptures and other artifacts found in South America, which were melted down before their transport to Spain or Portugal. Spaniards and Portuguese brought the western European architectural style to the continent, and helped to improve infrastructure such as bridges, roads, and the sewer systems of the cities they discovered or conquered. They also significantly increased economic and trade relations, not just between the old and new world but between the different South American regions and peoples. Finally, with the expansion of the Portuguese and Spanish languages, many cultures that were previously separate became united through a shared Latin American culture.
Guyana was first a Dutch, and then a British colony, though there was a brief period during the Napoleonic Wars when it was colonized by the French. The country was once partitioned into three parts, each being controlled by one of the colonial powers until the country was finally taken over fully by the British.
The indigenous peoples of the Americas in various European colonies were forced to work in European plantations and mines, along with African slaves who were also introduced in the succeeding centuries. The colonists were heavily dependent on indigenous labor during the initial phases of European settlement to maintain the subsistence economy, and natives were often captured by expeditions. The importation of African slaves began midway through the 16th century, but the enslavement of indigenous peoples continued well into the 17th and 18th centuries. The Atlantic slave trade brought African slaves primarily to South American colonies, beginning with the Portuguese in 1502.[27] The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave-dependent colonies in the New World. Nearly 40% of all African slaves trafficked to the Americas went to Brazil. An estimated 4.9 million slaves from Africa came to Brazil during the period from 1501 to 1866.[28][29]
While the Portuguese, English, French and Dutch settlers enslaved mainly African blacks, the Spaniards relied heavily on the forced labor of the natives. In 1750 Portugal abolished native slavery in the colonies because it considered the natives unfit for labour, and began to import even more African slaves. Slaves were brought to the mainland on slave ships, under inhuman conditions and ill-treatment, and those who survived were sold into the slave markets.
After independence, all South American countries maintained slavery for some time. The first South American country to abolish slavery was Chile, in 1823, followed by Uruguay in 1830, Bolivia in 1831, Colombia and Ecuador in 1851, Argentina in 1853, Peru and Venezuela in 1854, Suriname in 1863 and Paraguay in 1869; in 1888 Brazil became the last South American nation, and the last country in the Western world, to abolish slavery.
The European Peninsular War (1807–1814), a theater of the Napoleonic Wars, changed the political situation of both the Spanish and Portuguese colonies. First, Napoleon invaded Portugal, but the House of Braganza avoided capture by escaping to Brazil. Napoleon also captured King Ferdinand VII of Spain, and appointed his own brother instead. This appointment provoked severe popular resistance, which created Juntas to rule in the name of the captured king.
Many cities in the Spanish colonies, however, considered themselves equally authorized to appoint local Juntas like those of Spain. This began the Spanish American wars of independence between the patriots, who promoted such autonomy, and the royalists, who supported Spanish authority over the Americas. The Juntas, in both Spain and the Americas, promoted the ideas of the Enlightenment. Five years after the beginning of the war, Ferdinand VII returned to the throne and began the Absolutist Restoration as the royalists got the upper hand in the conflict.
The independence of South America was secured by Simón Bolívar (Venezuela) and José de San Martín (Argentina), the two most important Libertadores. Bolívar led a great uprising in the north, then led his army southward towards Lima, the capital of the Viceroyalty of Peru. Meanwhile, San Martín led an army across the Andes Mountains, along with Chilean expatriates, and liberated Chile. He organized a fleet to reach Peru by sea, and sought the military support of various rebels from the Viceroyalty of Peru. The two armies finally met in Guayaquil, Ecuador, where they cornered the Royal Army of the Spanish Crown and forced its surrender.
In the Portuguese Kingdom of Brazil, Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese King Dom João VI, proclaimed the independent Kingdom of Brazil in 1822, which later became the Empire of Brazil. Despite the Portuguese loyalties of garrisons in Bahia, Cisplatina and Pará, independence was diplomatically accepted by the crown in Portugal in 1825, on condition of a high compensation paid by Brazil, mediated by the United Kingdom.
The newly independent nations began a process of fragmentation, with several civil and international wars, though it was not as pronounced as in Central America. Some countries created from provinces of larger countries remained as such up to modern times (such as Paraguay or Uruguay), while others were reconquered and reincorporated into their former countries (such as the Republic of Entre Ríos and the Riograndense Republic).
The first separatist attempt was in 1820 by the Argentine province of Entre Ríos, led by a caudillo.[30] In spite of the "Republic" in its title, General Ramírez, its caudillo, never really intended to declare an independent Entre Ríos. Rather, he was making a political statement in opposition to the monarchist and centralist ideas that then permeated Buenos Aires politics. The "country" was reincorporated into the United Provinces in 1821.
In 1825 the Cisplatine Province declared its independence from the Empire of Brazil, which led to the Cisplatine War between the imperial forces and the Argentines of the United Provinces of the Río de la Plata for control of the region. Three years later, the United Kingdom intervened, brokering a stalemate and creating from the former Cisplatina a new independent country: the Oriental Republic of Uruguay.
Later, in 1836, while Brazil was experiencing the chaos of the regency, Rio Grande do Sul proclaimed its independence, motivated by a tax crisis. With the anticipated coronation of Pedro II to the throne of Brazil, the country was able to stabilize and fight the separatists, whom the province of Santa Catarina had joined in 1839. The conflict came to an end through a compromise by which both the Riograndense Republic and the Juliana Republic were reincorporated as provinces in 1845.[31][32]
The Peru–Bolivian Confederation, a short-lived union of Peru and Bolivia, was blocked by Chile in the War of the Confederation (1836–1839) and again during the War of the Pacific (1879–1883). Paraguay was virtually destroyed by Argentina, Brazil and Uruguay in the Paraguayan War.
Despite the Spanish American wars of independence and the Brazilian War of Independence, the new nations quickly began to suffer with internal conflicts and wars among themselves.
In 1825 the proclamation of independence of Cisplatina led to the Cisplatine War between the historical rivals the Empire of Brazil and the United Provinces of the Río de la Plata, Argentina's predecessor. The result was a stalemate, ending with the British arranging for the independence of Uruguay. Soon after, another Brazilian province proclaimed its independence, leading to the Ragamuffin War, which Brazil won.
Between 1836 and 1839 the War of the Confederation broke out between the short-lived Peru–Bolivian Confederation and Chile, with the support of the Argentine Confederation. The war was fought mostly in present-day Peru and ended with the defeat of the Confederation, its dissolution, and the annexation of many territories by Argentina.
Meanwhile, the Argentine Civil Wars plagued Argentina since its independence. The conflict was mainly between those who defended the centralization of power in Buenos Aires and those who defended a confederation. During this period it can be said that "there were two Argentinas": the Argentine Confederation and the Argentine Republic. At the same time, political instability in Uruguay led to the Uruguayan Civil War among the main political factions of the country. All this instability in the Platine region interfered with the goals of other countries such as Brazil, which was soon forced to take sides. In 1851 the Brazilian Empire, supporting the centralizing unitarians, and the Uruguayan government invaded Argentina and deposed the caudillo Juan Manuel Rosas, who had ruled the confederation with an iron hand. Although the Platine War did not put an end to the political chaos and civil war in Argentina, it brought temporary peace to Uruguay, where the Colorados faction won, supported by the Brazilian Empire, the British Empire, the French Empire and the Unitarian Party of Argentina.[33]
Peace lasted only a short time: in 1864 the Uruguayan factions faced each other again in the Uruguayan War. The Blancos, supported by Paraguay, started to attack Brazilian and Argentine farmers near the borders. The Empire made an initial attempt to settle the dispute between Blancos and Colorados without success. In 1864, after a Brazilian ultimatum was refused, the imperial government declared that Brazil's military would begin reprisals. Brazil declined to acknowledge a formal state of war, and, for most of its duration, the Uruguayan–Brazilian armed conflict was an undeclared war which led to the deposition of the Blancos and the rise of the pro-Brazilian Colorados to power again. This angered the Paraguayan government, which even before the end of the war invaded Brazil, beginning the biggest and deadliest war in both South American and Latin American history: the Paraguayan War.
The Paraguayan War began when the Paraguayan dictator Francisco Solano López ordered the invasion of the Brazilian provinces of Mato Grosso and Rio Grande do Sul. His attempt to cross Argentine territory without Argentine approval led the pro-Brazilian Argentine government into the war. The pro-Brazilian Uruguayan government showed its support by sending troops. In 1865 the three countries signed the Treaty of the Triple Alliance against Paraguay. At the beginning of the war, the Paraguayans took the lead with several victories, until the Triple Alliance organized to repel the invaders and fight effectively. This was the second total war experience in the world, after the American Civil War. It was deemed the greatest war effort in the history of all participating countries, lasting almost six years and ending with the complete devastation of Paraguay. The country lost 40% of its territory to Brazil and Argentina and 60% of its population, including 90% of the men. The dictator López was killed in battle and a new government was instituted in alliance with Brazil, which maintained occupation forces in the country until 1876.[34]
The last South American war in the 19th century was the War of the Pacific, with Bolivia and Peru on one side and Chile on the other. In 1879 the war began with Chilean troops occupying Bolivian ports, followed by Bolivia declaring war on Chile, which activated an alliance treaty with Peru. The Bolivians were completely defeated in 1880 and Lima was occupied in 1881. Peace was signed with Peru in 1883, while a truce was signed with Bolivia in 1884. Chile annexed territories of both countries, leaving Bolivia with no path to the sea.[35]
In the new century, as wars became less violent and less frequent, Brazil entered into a small conflict with Bolivia for possession of the Acre, which was acquired by Brazil in 1902. In 1917 Brazil declared war on the Central Powers, joined the Allied side in World War I and sent a small fleet to the Mediterranean Sea and some troops to be integrated with the British and French forces. Brazil was the only South American country that fought in WWI.[36][37] Later, in 1932, Colombia and Peru entered a short armed conflict over territory in the Amazon. In the same year Paraguay declared war on Bolivia for possession of the Chaco, in a conflict that ended three years later with Paraguay's victory. Between 1941 and 1942 Peru and Ecuador fought decisively over territories claimed by both, which were annexed by Peru, depriving Ecuador of its frontier with Brazil.[38]
Also in this period the first naval battle of World War II was fought on the continent, in the River Plate, between British forces and the German cruiser Admiral Graf Spee.[39] The Germans nevertheless made numerous attacks on Brazilian shipping off the coast, causing Brazil to declare war on the Axis powers in 1942, making it the only South American country to fight in this war (and in both World Wars). Brazil sent naval and air forces to combat German and Italian submarines off the continent and throughout the South Atlantic, in addition to sending an expeditionary force to fight in the Italian Campaign.[40][41]
A brief war was fought between Argentina and the United Kingdom in 1982, following an Argentine invasion of the Falkland Islands, which ended with an Argentine defeat. The last international war to be fought on South American soil was the 1995 Cenepa War between Ecuador and Peru along their mutual border.
Wars became less frequent in the 20th century, with Bolivia–Paraguay and Peru–Ecuador fighting the last inter-state wars. Early in the 20th century, the three wealthiest South American countries engaged in a vastly expensive naval arms race, catalyzed by the introduction of a new warship type, the "dreadnought". At one point, the Argentine government was spending a fifth of its entire yearly budget on just two dreadnoughts, a price that did not include later in-service costs, which for the Brazilian dreadnoughts amounted to sixty percent of the initial purchase price.[42][43]
The continent became a battlefield of the Cold War in the late 20th century. Some democratically elected governments of Argentina, Brazil, Chile, Uruguay and Paraguay were overthrown or displaced by military dictatorships in the 1960s and 1970s. To curtail opposition, these governments detained tens of thousands of political prisoners, many of whom were tortured and/or killed through inter-state collaboration. Economically, they began a transition to neoliberal economic policies. They placed their own actions within the US Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict.
Argentina and Britain fought the Falklands War in 1982. The conflict lasted 74 days and ended with an Argentine surrender, returning the occupied Falkland Islands to British control.
Colombia has had an ongoing, though diminished, internal conflict, which started in 1964 with the creation of Marxist guerrillas (FARC-EP) and then involved several illegal armed groups of leftist-leaning ideology as well as the private armies of powerful drug lords. Many of these are now defunct, and only a small portion of the ELN remains, along with the stronger, though also greatly reduced, FARC.
Revolutionary movements and right-wing military dictatorships became common after World War II, but since the 1980s, a wave of democratization passed through the continent, and democratic rule is widespread now.[44] Nonetheless, allegations of corruption are still very common, and several countries have developed crises which have forced the resignation of their governments, although, on most occasions, regular civilian succession has continued.
International indebtedness turned into a severe problem in the late 1980s, and some countries, despite having strong democracies, have not yet developed political institutions capable of handling such crises without resorting to unorthodox economic policies, as most recently illustrated by Argentina's default in the early 21st century.[45] The last twenty years have seen an increased push towards regional integration, with the creation of uniquely South American institutions such as the Andean Community, Mercosur and Unasur. Notably, starting with the election of Hugo Chávez in Venezuela in 1998, the region experienced what has been termed a pink tide – the election of several leftist and center-left administrations to most countries of the area, except for the Guianas and Colombia.
Historically, the Hispanic countries were founded as republican dictatorships led by caudillos. Brazil was the only exception, being a constitutional monarchy for its first 67 years of independence, until a coup d'état proclaimed a republic. In the late 19th century, the most democratic countries were Brazil,[49] Chile, Argentina and Uruguay.[50]
In the interwar period, nationalism grew stronger on the continent, influenced by countries like Nazi Germany and Fascist Italy. A series of authoritarian governments arose in South American countries with views that brought them closer to the Axis Powers,[51] like Vargas's Brazil. In the late 20th century, during the Cold War, many countries became military dictatorships under American tutelage in attempts to avoid the influence of the Soviet Union. After the fall of these authoritarian regimes, the countries became democratic republics.
During the first decade of the 21st century, South American governments drifted to the political left, with leftist leaders elected in Chile, Uruguay, Brazil, Argentina, Ecuador, Bolivia, Paraguay, Peru and Venezuela. The gross domestic product of each of those countries, however, dropped over that timeframe. Consequently, most South American countries have made increasing use of protectionist policies in order to help local economic development.
All South American countries are presidential republics with the exception of Suriname, a parliamentary republic. French Guiana is a French overseas department, while the Falkland Islands and South Georgia and the South Sandwich Islands are British overseas territories. It is currently the only inhabited continent in the world without monarchies; the Empire of Brazil existed during the 19th century and there was an unsuccessful attempt to establish a Kingdom of Araucanía and Patagonia in southern Argentina and Chile. Also in the twentieth century, Suriname was established as a constituent country of the Kingdom of the Netherlands, and Guyana retained the British monarch as head of state for four years after its independence.
Recently, an intergovernmental entity has been formed which aims to merge the two existing customs unions: Mercosur and the Andean Community, thus forming the third-largest trade bloc in the world.[52]
This new political organization, known as the Union of South American Nations, seeks to establish free movement of people, economic development, a common defense policy and the elimination of tariffs.
South America has over 423 million[1][2] inhabitants and a population growth rate of about 0.6% per year. There are several sparsely populated areas, such as tropical forests, the Atacama Desert and the icy portions of Patagonia. On the other hand, the continent presents regions of high population density, such as the great urban centers. The population is formed by descendants of Europeans (mainly Spaniards, Portuguese and Italians), Africans and Indigenous peoples. There is a high percentage of mestizos that vary greatly in composition by place. There is also a minor population of Asians, especially in Brazil. The two main languages are by far Spanish and Portuguese, followed by French, English and Dutch in smaller numbers.
Spanish and Portuguese are the most spoken languages in South America, with approximately 200 million speakers each. Spanish is the official language of most countries, along with other native languages in some countries. Portuguese is the official language of Brazil. Dutch is the official language of Suriname; English is the official language of Guyana, although there are at least twelve other languages spoken in the country, including Portuguese, Chinese, Hindustani and several native languages.[53] English is also spoken in the Falkland Islands. French is the official language of French Guiana and the second language in Amapá, Brazil.
Indigenous languages of South America include Quechua in Peru, Bolivia, Ecuador, Chile and Colombia; Wayuunaiki in northern Colombia (La Guajira) and northwestern Venezuela (Zulia); Guaraní in Paraguay and, to a much lesser extent, in Bolivia; Aymara in Bolivia, Peru, and less often in Chile; and Mapudungun is spoken in certain pockets of southern Chile. At least three South American indigenous languages (Quechua, Aymara, and Guarani) are recognized along with Spanish as national languages.
Other languages found in South America include Hindustani and Javanese in Suriname; Italian in Argentina, Brazil, Uruguay and Venezuela; and German in certain pockets of Argentina and Brazil. German is also spoken in many regions of the southern states of Brazil, Riograndenser Hunsrückisch being the most widely spoken German dialect in the country; among other Germanic dialects, a Brazilian form of East Pomeranian is also well represented and is experiencing a revival. Welsh remains spoken and written in the historic towns of Trelew and Rawson in the Argentine Patagonia. There are also small clusters of Japanese-speakers in Brazil, Colombia and Peru. Arabic speakers, often of Lebanese, Syrian, or Palestinian descent, can be found in Arab communities in Argentina, Colombia, Brazil, Venezuela and in Paraguay.[54]
An estimated 90% of South Americans are Christians[55] (82% Roman Catholic, 8% other Christian denominations, mainly traditional Protestants and Evangelicals, but also Orthodox), accounting for c. 19% of Christians worldwide.
African-descendant religions and Indigenous religions are also common throughout all of South America; some examples are Santo Daime, Candomblé, Umbanda and Encantados.
Crypto-Jews or Marranos, conversos, and Anusim were an important part of colonial life in Latin America.
Both Buenos Aires, Argentina, and São Paulo, Brazil, figure among the urban areas with the largest Jewish populations.
East Asian religions such as Japanese Buddhism, Shintoism, and Shinto-derived Japanese New Religions are common in Brazil and Peru. Korean Confucianism is especially found in Brazil while Chinese Buddhism and Chinese Confucianism have spread throughout the continent.
Kardecist Spiritism can be found in several countries.
Religions in South America (2013).[56]
Genetic admixture occurs at very high levels in South America. In Argentina, the European influence accounts for 65–79% of the genetic background, Amerindian for 17–31% and sub-Saharan African for 2–4%. In Colombia, the sub-Saharan African genetic background varied from 1% to 89%, while the European genetic background varied from 20% to 79%, depending on the region.
In Peru, European ancestries ranged from 1% to 31%, while the African contribution was only 1% to 3%.[57] The Genographic Project determined the average Peruvian from Lima had about 28% European ancestry, 68% Native American, 2% Asian ancestry and 2% sub-Saharan African.[58]
Descendants of indigenous peoples, such as the Quechua and Aymara, or the Urarina[59] of Amazonia, make up the majority of the population in Bolivia (56%) and, per some sources, in Peru (44%).[60][61] In Ecuador, Amerindians are a large minority that comprises two-fifths of the population. The population of European descent is also a significant element in most of the other former colonies.
People who identify as of primarily or totally European descent, or identify their phenotype as corresponding to such group, are a majority in Argentina[62] and Uruguay,[63] account for more than half of the population of Chile (64.7%),[64] and make up 48.4% of the population of Brazil.[65][66][67] In Venezuela, according to the national census, 42% of the population is primarily of Spanish, Italian and Portuguese descent.[68] In Colombia, people who identify as being of European descent are about 37%.[69][70] In Peru, European descendants are the third-largest group in number (15%).[71]
Mestizos (mixed European and Amerindian) are the largest ethnic group in Bolivia, Paraguay, Venezuela, Colombia[69] and Ecuador, and the second-largest group in Peru and Chile.
South America is also home to one of the world's largest populations of African descent. This group is significantly present in Brazil, Colombia, Guyana, Suriname, French Guiana, Venezuela and Ecuador.
Brazil, followed by Peru, has the largest Japanese, Korean and Chinese communities in South America; Lima has the largest ethnic Chinese community in Latin America.[72] Guyana and Suriname have the largest ethnic East Indian communities.
In many places indigenous people still practice a traditional lifestyle based on subsistence agriculture or live as hunter-gatherers. There are still some uncontacted tribes residing in the Amazon Rainforest.[75]
The most populous country in South America is Brazil, with 209.5 million people. The second most populous is Colombia, with 49,661,048, and Argentina is third, with 44,361,150.
While Brazil, Argentina, and Colombia maintain the largest populations, large city populations are not restricted to those nations. The largest cities in South America, by far, are São Paulo, Rio de Janeiro, Buenos Aires, Santiago, Lima, and Bogotá. These are the only cities on the continent to exceed eight million inhabitants, and three of them rank among the five largest in the Americas. Next in size are Caracas, Belo Horizonte, Medellín and Salvador.
Five of the top ten metropolitan areas are in Brazil. These metropolitan areas all have populations above 4 million and include the São Paulo, Rio de Janeiro and Belo Horizonte metropolitan areas. While the majority of the largest metropolitan areas are within Brazil, Argentina hosts the second-largest metropolitan area by population in South America: the Buenos Aires metropolitan region has more than 13 million inhabitants.
South America has also witnessed the growth of megapolitan areas. In Brazil four megaregions exist, including the Expanded Metropolitan Complex of São Paulo with more than 32 million inhabitants; the others are Greater Rio, Greater Belo Horizonte and Greater Porto Alegre. Colombia also has four megaregions, which comprise 72% of its population, followed by Venezuela, Argentina and Peru, which are also home to megaregions.
The ten largest South American metropolitan areas by population can be ranked from the 2015 national census figures of each country.
South America relies less on the export of both manufactured goods and natural resources than the world average; merchandise exports from the continent were 16% of GDP on an exchange rate basis, compared to 25% for the world as a whole.[76] Brazil (the seventh largest economy in the world and the largest in South America) leads in terms of merchandise exports at $251 billion, followed by Venezuela at $93 billion, Chile at $86 billion, and Argentina at $84 billion.[76]
Since 1930, the continent has experienced remarkable growth and diversification in most economic sectors. Most agricultural and livestock products are destined for the domestic market and local consumption. However, the export of agricultural products is essential for the balance of trade in most countries.[77]
The main agrarian crops are export crops, such as soy and wheat. The production of staple foods such as vegetables, corn or beans is large, but focused on domestic consumption. Livestock raising for meat exports is important in Argentina, Paraguay, Uruguay and Colombia. In tropical regions the most important crops are coffee, cocoa and bananas, mainly in Brazil, Colombia and Ecuador. Traditionally, the countries producing sugar for export are Peru, Guyana and Suriname, and in Brazil, sugar cane is also used to make ethanol. Cotton is grown on the coast of Peru and in the northeast and south of Brazil. Fifty percent of the South American surface is covered by forests, but timber industries are small and directed to domestic markets. In recent years, however, transnational companies have been settling in the Amazon to exploit high-value timber destined for export. The Pacific coastal waters of South America are the most important for commercial fishing. The anchovy catch reaches thousands of tons, and tuna is also abundant (Peru is a major exporter). The capture of crustaceans is remarkable, particularly in northeastern Brazil and Chile.[77]
Only Brazil and Argentina are part of the G20 (industrial countries), while only Brazil is part of the G8+5 (the most powerful and influential nations in the world). In the tourism sector, a series of negotiations began in 2005 to promote tourism and increase air connections within the region. Punta del Este, Florianópolis and Mar del Plata are among the most important resorts in South America.[77]
The most industrialized countries in South America are Brazil, Argentina, Chile, Colombia, Venezuela and Uruguay, respectively. These countries alone account for more than 75 percent of the region's economy and add up to a GDP of more than US$3.0 trillion. Industry began to dominate the economies of the region from the 1930s, when the Great Depression in the United States and other countries boosted industrial production on the continent. From that period the region moved away from its agricultural base and began to achieve high rates of economic growth that were maintained until the early 1990s, when they slowed due to political instability, economic crises and neoliberal policies.[77]
Since the end of the economic crisis in Brazil and Argentina between 1998 and 2002, which led to economic recession, rising unemployment and falling incomes, the industrial and service sectors have been recovering rapidly. Chile, Argentina and Brazil have recovered fastest, growing at an average of 5% per year. All of South America has been recovering since this period and showing good signs of economic stability, with controlled inflation and exchange rates, continuous growth, and a decrease in social inequality and unemployment – factors that favor industry.[77]
The main industries include electronics, textiles, food and beverages, automotive, metallurgy, aviation, shipbuilding, clothing, steel, tobacco, timber and chemicals. Exports reach almost US$400 billion annually, with Brazil accounting for half of this.[77]
The economic gap between the rich and poor in most South American nations is larger than on most other continents. The richest 10% receive over 40% of the nation's income in Bolivia, Brazil, Chile, Colombia, and Paraguay,[78] while the poorest 20% receive 4% or less in Bolivia, Brazil, and Colombia.[79] This wide gap can be seen in many large South American cities where makeshift shacks and slums lie in the vicinity of skyscrapers and upper-class luxury apartments; nearly one in nine South Americans live on less than $2 per day (on a purchasing power parity basis).[80]
Tourism has increasingly become a significant source of income for many South American countries.[86][87]
Historical relics, architectural and natural wonders, a diverse range of foods and culture, vibrant and colorful cities, and stunning landscapes attract millions of tourists every year to South America. Some of the most visited places in the region are Iguazu Falls, Recife, Olinda, Machu Picchu, Bariloche, the Amazon rainforest, Rio de Janeiro, São Luís, Salvador, Fortaleza, Maceió, Buenos Aires, Florianópolis, San Ignacio Miní, Isla Margarita, Natal, Lima, São Paulo, Angel Falls, Brasília, Nazca Lines, Cuzco, Belo Horizonte, Lake Titicaca, Salar de Uyuni, La Paz, Jesuit Missions of Chiquitos, Los Roques archipelago, Gran Sabana, Patagonia, Tayrona National Natural Park, Santa Marta, Bogotá, Cali, Medellín, Cartagena, Perito Moreno Glacier and the Galápagos Islands.[88][89] In 2016, Brazil hosted the Summer Olympics in Rio de Janeiro.
South Americans are culturally influenced by their indigenous peoples, the historic connection with the Iberian Peninsula and Africa, and waves of immigrants from around the globe.
South American nations have a rich variety of music. Some of the most famous genres include vallenato and cumbia from Colombia, pasillo from Colombia and Ecuador, samba, bossa nova and música sertaneja from Brazil, and tango from Argentina and Uruguay. Also well known is the non-commercial folk genre of the Nueva Canción movement, which was founded in Argentina and Chile and quickly spread to the rest of Latin America.
People on the Peruvian coast created the fine guitar and cajon duos or trios in the most mestizo (mixed) of South American rhythms, such as the Marinera (from Lima), the Tondero (from Piura), the 19th-century popular Creole Valse or Peruvian Valse, the soulful Arequipan Yaravi, and the early 20th-century Paraguayan Guarania. In the late 20th century, Spanish-language rock emerged among young musicians influenced by British pop and American rock. Brazil has a Portuguese-language pop rock industry as well as a great variety of other music genres. In the central and western regions of Bolivia, Andean and folkloric music such as Diablada, Caporales and Morenada, which originated from European, Aymara and Quechua influences, are the most representative of the country.
The literature of South America has attracted considerable critical and popular acclaim, especially with the Latin American Boom of the 1960s and 1970s, and the rise of authors such as Mario Vargas Llosa, Gabriel García Márquez in novels and Jorge Luis Borges and Pablo Neruda in other genres. The Brazilians Machado de Assis and João Guimarães Rosa are widely regarded as the greatest Brazilian writers.
Because of South America's broad ethnic mix, South American cuisine has African, South American Indian, South Asian, East Asian, and European influences. Bahia, Brazil, is especially well known for its West African–influenced cuisine. Argentines, Chileans, Uruguayans, Brazilians, Bolivians, and Venezuelans regularly consume wine. People in Argentina, Paraguay, Uruguay, southern Chile, Bolivia and Brazil drink mate, a beverage brewed from an herb. The Paraguayan version, terere, differs from other forms of mate in that it is served cold. Pisco is a liquor distilled from grapes in Peru and Chile. Peruvian cuisine mixes elements from Chinese, Japanese, Spanish, Italian, African, Arab, Andean, and Amazonic food.
The Ecuadorian artist Oswaldo Guayasamín (1919–1999) captured in his painting style the feeling of the peoples of Latin America,[90] highlighting social injustices in various parts of the world. The Colombian Fernando Botero (1932) is one of the greatest exponents of painting and sculpture; he is still active and has developed a recognizable style of his own.[91] For his part, the Venezuelan Carlos Cruz-Diez has contributed significantly to contemporary art,[92] with works present around the world.
Currently several emerging South American artists are recognized by international art critics: Guillermo Lorca, a Chilean painter;[93][94] Teddy Cobeña, an Ecuadorian sculptor and recipient of an international sculpture award in France;[95][96][97] and the Argentine artist Adrián Villar Rojas,[98][99] winner of the Zurich Museum Art Award, among many others.
A wide range of sports are played on the continent of South America, with football being the most popular overall, while baseball is the most popular in Venezuela.
Other sports include basketball, cycling, polo, volleyball, futsal, motorsports, rugby (mostly in Argentina and Uruguay), handball, tennis, golf, field hockey, boxing and cricket.
South America hosted its first Olympic Games in Rio de Janeiro, Brazil in 2016 and will host the Youth Olympic Games in Buenos Aires, Argentina in 2018.
South America shares with Europe supremacy over the sport of football as all winners in FIFA World Cup history and all winning teams in the FIFA Club World Cup have come from these two continents. Brazil holds the record at the FIFA World Cup with five titles in total. Argentina and Uruguay have two titles each. So far four South American nations have hosted the tournament including the first edition in Uruguay (1930). The other three were Brazil (1950, 2014), Chile (1962), and Argentina (1978).
South America is home to the longest-running international football tournament, the Copa América, which has been contested regularly since 1916. Uruguay has won the Copa América a record 15 times, surpassing hosts Argentina in 2011; the two sides had previously been level on 14 titles each.
Also, in South America, a multi-sport event, the South American Games, is held every four years. The first edition was held in La Paz in 1978 and the most recent took place in Santiago in 2014.
The South American Cricket Championship is an international limited-overs cricket tournament played since 1995, featuring national teams from South America and certain other invited sides, including teams from North America. It is currently played annually, but until 2013 it was usually held every two seasons.
Due to the diversity of topography and rainfall conditions, the region's water resources vary enormously in different areas. In the Andes, navigation possibilities are limited, except for the Magdalena River, Lake Titicaca and the lakes of the southern regions of Chile and Argentina. Irrigation is an important factor for agriculture from northwestern Peru to Patagonia. Less than 10% of the known electrical potential of the Andes had been used until the mid-1960s.
The Brazilian Highlands have much higher hydroelectric potential than the Andean region, and the possibilities of exploiting it are greater owing to several large rivers with steep banks and great drops in elevation that form huge cataracts, such as those of Paulo Afonso, Iguaçu and others. The Amazon River system has about 13,000 km of waterways, but its possibilities for hydroelectric use are still unknown.
Most of the continent's energy is generated through hydroelectric power plants, but there is also an important share of thermoelectric and wind energy. Brazil and Argentina are the only South American countries that generate nuclear power, each with two nuclear power plants. In 1991 these countries signed a peaceful nuclear cooperation agreement.
South American transportation systems are still deficient, with low densities of infrastructure per kilometre. The region has about 1,700,000 km of highways and 100,000 km of railways, which are concentrated in the coastal strip, while the interior still largely lacks transport links.
Only two railroads are continental: the Transandina, which connects Buenos Aires, in Argentina, to Valparaíso, in Chile; and the Brazil–Bolivia Railroad, which provides the connection between the port of Santos in Brazil and the city of Santa Cruz de la Sierra, in Bolivia. In addition, there is the Pan-American Highway, which crosses the Andean countries from north to south, although some stretches are unfinished.[100]
Two areas of greater density occur in the railway sector: the Platine network, which developed around the Río de la Plata region, largely in Argentina, with more than 45,000 km of track; and the Southeast Brazil network, which mainly serves the states of São Paulo, Rio de Janeiro and Minas Gerais. Brazil and Argentina also stand out in the road sector. In addition to the modern roads that extend through northern Argentina and the south-east and south of Brazil, a vast road complex aims to link Brasília, the federal capital, to the South, Southeast, Northeast and Northern regions of Brazil.
The Port of Callao is the main port of Peru.
South America has one of the largest networks of navigable inland waterways in the world, represented mainly by the Amazon, Platine, São Francisco and Orinoco basins; Brazil has about 54,000 km of navigable waterways, while Argentina has 6,500 km and Venezuela 1,200 km.
The two main merchant fleets also belong to Brazil and Argentina, followed by those of Chile, Venezuela, Peru and Colombia. The largest ports in commercial traffic are those of Buenos Aires, Santos, Rio de Janeiro, Bahía Blanca, Rosario, Valparaíso, Recife, Salvador, Montevideo, Paranaguá, Rio Grande, Fortaleza, Belém and Maracaibo.
In South America, commercial aviation has great scope for expansion: the Rio de Janeiro–São Paulo corridor is one of the busiest air routes in the world, and the continent has large airports, such as Congonhas, São Paulo–Guarulhos International and Viracopos (São Paulo), Rio de Janeiro International and Santos Dumont (Rio de Janeiro), El Dorado (Bogotá), Ezeiza (Buenos Aires), Tancredo Neves International Airport (Belo Horizonte), Curitiba International Airport (Curitiba), Brasília, Caracas, Montevideo, Lima, Viru Viru International Airport (Santa Cruz de la Sierra), Recife, Salvador, Salgado Filho International Airport (Porto Alegre), Fortaleza, Manaus and Belém.
The main form of public transport in major cities is the bus. Many cities also have metro and suburban rail systems; the first was the Buenos Aires subte, opened in 1913.[101] The Santiago subway[102] is the largest network in South America at 103 km, while the São Paulo subway carries the most passengers, with more than 4.6 million per day,[103] and was voted the best in the Americas. The continent's first railroad was installed in Rio de Janeiro in 1854; today the city has a vast and diversified system of metropolitan trains, integrated with buses and the subway. The city also recently inaugurated a light rail system called the VLT, with small electric trams running at low speed, while São Paulo inaugurated its monorail, the first in South America.[citation needed] In Brazil, an express bus system called Bus Rapid Transit (BRT), which operates in several cities, has also been developed. Mi Teleférico, also known as Teleférico La Paz–El Alto (La Paz–El Alto Cable Car), is an aerial cable car urban transit system serving the La Paz–El Alto metropolitan area in Bolivia.
^ Continent model: In some parts of the world South America is viewed as a subcontinent of the Americas[104] (a single continent in these areas), for example Latin America, Latin Europe, and Iran. In most of the countries with English as an official language, however, it is considered a continent; see Americas (terminology).[clarification needed]
Africa
Antarctica
Asia
Australia
Europe
North America
South America
Afro-Eurasia
America
Eurasia
Oceania
en/1970.html.txt
ADDED
Fermentation is a metabolic process that produces chemical changes in organic substrates through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In the context of food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage.[1] The science of fermentation is known as zymology.
In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically.[2] Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans.[3]
Below are some definitions of fermentation. They range from informal, general usages to more scientific definitions.[4]
Along with photosynthesis and aerobic respiration, fermentation is a way of extracting energy from molecules, but it is the only one common to all bacteria and eukaryotes. It is therefore considered the oldest metabolic pathway, suitable for an environment that did not yet have oxygen.[5]:389 Yeast, a form of fungus, occurs in almost any environment capable of supporting microbes, from the skins of fruits to the guts of insects and mammals and the deep ocean, and harvests sugar-rich materials to produce ethanol and carbon dioxide.[6][7]
The basic mechanism for fermentation remains present in all cells of higher organisms. Mammalian muscle carries out fermentation during periods of intense exercise where oxygen supply becomes limited, resulting in the creation of lactic acid.[8]:63 In invertebrates, fermentation also produces succinate and alanine.[9]:141
Fermentative bacteria play an essential role in the production of methane in habitats ranging from the rumens of cattle to sewage digesters and freshwater sediments. They produce hydrogen, carbon dioxide, formate, acetate and other carboxylic acids; consortia of microbes then convert the carbon dioxide and acetate to methane. Acetogenic bacteria oxidize the acids, obtaining more acetate and either hydrogen or formate. Finally, methanogens (in the domain Archaea) convert acetate to methane.[10]
Fermentation reacts NADH with an endogenous, organic electron acceptor.[2] Usually this is pyruvate formed from sugar through glycolysis. The reaction produces NAD+ and an organic product, typical examples being ethanol, lactic acid, and hydrogen gas (H2), and often also carbon dioxide. However, more exotic compounds can be produced by fermentation, such as butyric acid and acetone. Fermentation products are considered waste products, since they cannot be metabolized further without the use of oxygen.[12]
Fermentation normally occurs in an anaerobic environment. In the presence of O2, NADH and pyruvate are used to generate ATP in respiration. This is called oxidative phosphorylation, and it generates much more ATP than glycolysis alone (on the order of 30 ATP per glucose, against a net gain of 2 from glycolysis) since it releases the chemical energy of O2.[12] For that reason, fermentation is rarely utilized when oxygen is available. However, even in the presence of abundant oxygen, some strains of yeast such as Saccharomyces cerevisiae prefer fermentation to aerobic respiration as long as there is an adequate supply of sugars (a phenomenon known as the Crabtree effect).[13] Some fermentation processes involve obligate anaerobes, which cannot tolerate oxygen.
Although yeast carries out the fermentation in the production of ethanol in beers, wines, and other alcoholic drinks, this is not the only possible agent: bacteria carry out the fermentation in the production of xanthan gum.
In ethanol fermentation, one glucose molecule is converted into two ethanol molecules and two carbon dioxide molecules.[14][15] It is used to make bread dough rise: the carbon dioxide forms bubbles, expanding the dough into a foam.[16][17] The ethanol is the intoxicating agent in alcoholic beverages such as wine, beer and liquor.[18] Fermentation of feedstocks, including sugarcane, corn, and sugar beets, produces ethanol that is added to gasoline.[19] In some species of fish, including goldfish and carp, it provides energy when oxygen is scarce (along with lactic acid fermentation).[20]
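Written as an overall chemical equation, the conversion is:

C6H12O6 → 2 C2H5OH + 2 CO2

(glucose → 2 ethanol + 2 carbon dioxide)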
The process proceeds as follows. Before fermentation, a glucose molecule breaks down into two pyruvate molecules (glycolysis). The energy from this exothermic reaction is used to bind inorganic phosphates to ADP, converting it to ATP, and to convert NAD+ to NADH. The pyruvates break down into two acetaldehyde molecules and give off two carbon dioxide molecules as waste products. The acetaldehyde is reduced into ethanol using the energy and hydrogen from NADH, and the NADH is oxidized into NAD+ so that the cycle may repeat. The reaction is catalyzed by the enzymes pyruvate decarboxylase and alcohol dehydrogenase.[14]
Homolactic fermentation (producing only lactic acid) is the simplest type of fermentation. Pyruvate from glycolysis[21] undergoes a simple redox reaction, forming lactic acid.[22][23] It is probably the only respiration process that does not produce a gas as a byproduct. Overall, one molecule of glucose (or any six-carbon sugar) is converted to two molecules of lactic acid:
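C6H12O6 → 2 CH3CHOHCOOH

(glucose → 2 lactic acid)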
It occurs in the muscles of animals when they need energy faster than the blood can supply oxygen. It also occurs in some kinds of bacteria (such as lactobacilli) and some fungi. Lactic acid bacteria of this kind convert lactose into lactic acid in yogurt, giving it its sour taste. These lactic acid bacteria can carry out either homolactic fermentation, where the end-product is mostly lactic acid, or heterolactic fermentation, where some lactate is further metabolized to ethanol and carbon dioxide[22] (via the phosphoketolase pathway), acetate, or other metabolic products, e.g.:
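C6H12O6 → CH3CHOHCOOH + C2H5OH + CO2

(glucose → lactic acid + ethanol + carbon dioxide)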
If lactose is fermented (as in yogurts and cheeses), it is first converted into glucose and galactose (both six-carbon sugars with the same atomic formula):
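C12H22O11 + H2O → 2 C6H12O6

(lactose + water → glucose + galactose)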
Heterolactic fermentation is in a sense intermediate between lactic acid fermentation and other types, e.g. alcoholic fermentation. Reasons to go further and convert lactic acid into something else include:
Hydrogen gas is produced in many types of fermentation as a way to regenerate NAD+ from NADH. Electrons are transferred to ferredoxin, which in turn is oxidized by hydrogenase, producing H2.[14] Hydrogen gas is a substrate for methanogens and sulfate reducers, which keep the concentration of hydrogen low and favor the production of such an energy-rich compound,[24] but hydrogen gas at a fairly high concentration can nevertheless be formed, as in flatus.
For example, Clostridium pasteurianum ferments glucose to butyrate, acetate, carbon dioxide, and hydrogen gas.[25] The reaction leading to acetate is:
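C6H12O6 + 4 H2O → 2 CH3COO− + 2 HCO3− + 4 H+ + 4 H2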
Other types of fermentation include mixed acid fermentation, butanediol fermentation, butyrate fermentation, caproate fermentation, acetone–butanol–ethanol fermentation, and glyoxylate fermentation.
Most industrial fermentation uses batch or fed-batch procedures, although continuous fermentation can be more economical if various challenges, particularly the difficulty of maintaining sterility, can be met.[26]
In a batch process, all the ingredients are combined and the reactions proceed without any further input. Batch fermentation has been used for millennia to make bread and alcoholic beverages, and it is still a common method, especially when the process is not well understood.[27]:1 However, it can be expensive because the fermentor must be sterilized using high-pressure steam between batches.[26] Strictly speaking, small quantities of chemicals are often added during the batch to control the pH or suppress foaming.[27]:25
Batch fermentation goes through a series of phases. There is a lag phase in which cells adjust to their environment; then a phase in which exponential growth occurs. Once many of the nutrients have been consumed, the growth slows and becomes non-exponential, but production of secondary metabolites (including commercially important antibiotics and enzymes) accelerates. This continues through a stationary phase after most of the nutrients have been consumed, and then the cells die.[27]:25
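As a rough, purely illustrative sketch of these phases (a minimal model with arbitrary, assumed parameter values, not figures taken from the text or from any particular organism), a logistic growth curve with a lag period reproduces the qualitative shape of a batch run:

import math

def batch_biomass(t, lag=2.0, mu_max=0.8, x0=0.05, x_max=5.0):
    # No growth during the lag phase, then logistic growth that
    # flattens into a stationary phase as nutrients run out.
    # All parameter values here are illustrative assumptions.
    if t < lag:
        return x0
    g = math.exp(mu_max * (t - lag))
    return x_max * x0 * g / (x_max + x0 * (g - 1.0))

for t in range(0, 25, 4):
    print(f"t = {t:2d} h  biomass = {batch_biomass(t):.2f} g/L")

Running this prints a biomass that holds at its starting value during the lag, rises steeply during the exponential phase, and levels off in the stationary phase; the death phase is not modeled.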
Fed-batch fermentation is a variation of batch fermentation where some of the ingredients are added during the fermentation. This allows greater control over the stages of the process. In particular, production of secondary metabolites can be increased by adding a limited quantity of nutrients during the non-exponential growth phase. Fed-batch operations are often sandwiched between batch operations.[27]:1[28]
The high cost of sterilizing the fermentor between batches can be avoided using various open fermentation approaches that are able to resist contamination. One is to use a naturally evolved mixed culture. This is particularly favored in wastewater treatment, since mixed populations can adapt to a wide variety of wastes. Thermophilic bacteria can produce lactic acid at temperatures of around 50 °C, sufficient to discourage microbial contamination; and ethanol has been produced at a temperature of 70 °C. This is just below its boiling point (78 °C), making it easy to extract. Halophilic bacteria can produce bioplastics in hypersaline conditions. Solid-state fermentation adds a small amount of water to a solid substrate; it is widely used in the food industry to produce flavors, enzymes and organic acids.[26]
In continuous fermentation, substrates are added and final products removed continuously.[26] There are three varieties: chemostats, which hold nutrient levels constant; turbidostats, which keep cell mass constant; and plug flow reactors in which the culture medium flows steadily through a tube while the cells are recycled from the outlet to the inlet.[28] If the process works well, there is a steady flow of feed and effluent and the costs of repeatedly setting up a batch are avoided. Also, it can prolong the exponential growth phase and avoid byproducts that inhibit the reactions by continuously removing them. However, it is difficult to maintain a steady state and avoid contamination, and the design tends to be complex.[26] Typically the fermentor must run for over 500 hours to be more economical than batch processors.[28]
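As a minimal quantitative sketch of the chemostat case (assuming textbook Monod growth kinetics, with made-up parameter values rather than figures from the text): at steady state the dilution rate D equals the specific growth rate, so the residual substrate and biomass concentrations follow directly:

def chemostat_steady_state(D, mu_max=0.9, Ks=0.2, S_in=10.0, Y=0.5):
    # Steady state of an ideal chemostat with Monod kinetics:
    # mu(S) = mu_max * S / (Ks + S) and mu = D at steady state, hence
    # S* = D * Ks / (mu_max - D); washout occurs if D >= mu_max.
    # All parameter values are illustrative assumptions.
    if D >= mu_max:
        return S_in, 0.0                # cells wash out of the vessel
    S = D * Ks / (mu_max - D)           # residual substrate (g/L)
    X = Y * (S_in - S)                  # biomass from consumed substrate
    return S, X

for D in (0.2, 0.5, 0.8, 1.0):
    S, X = chemostat_steady_state(D)
    print(f"D = {D:.1f}/h  S* = {S:.2f} g/L  X* = {X:.2f} g/L")

The sketch also shows why maintaining a steady state is delicate: as D approaches the maximum growth rate, residual substrate rises sharply and the culture eventually washes out.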
The use of fermentation, particularly for beverages, has existed since the Neolithic and has been documented dating from 7000–6600 BCE in Jiahu, China,[29] 6000 BCE in Georgia,[30] 5000 BCE in India (the Ayurveda mentions many medicated wines), 3150 BCE in ancient Egypt,[31] 3000 BCE in Babylon,[32] 2000 BCE in pre-Hispanic Mexico,[32] and 1500 BCE in Sudan.[33] Fermented foods have a religious significance in Judaism and Christianity. The Baltic god Rugutis was worshiped as the agent of fermentation.[34][35]
In 1837, Charles Cagniard de la Tour, Theodor Schwann and Friedrich Traugott Kützing independently published papers concluding, as a result of microscopic investigations, that yeast is a living organism that reproduces by budding.[36][37]:6 Schwann boiled grape juice to kill the yeast and found that no fermentation would occur until new yeast was added. However, many chemists, including Antoine Lavoisier, continued to view fermentation as a simple chemical reaction and rejected the notion that living organisms could be involved. This was seen as a reversion to vitalism and was lampooned in an anonymous publication by Justus von Liebig and Friedrich Wöhler.[5]:108–109
The turning point came when Louis Pasteur (1822–1895), during the 1850s and 1860s, repeated Schwann's experiments and showed that fermentation is initiated by living organisms in a series of investigations.[23][37]:6 In 1857, Pasteur showed that lactic acid fermentation is caused by living organisms.[38] In 1860, he demonstrated that bacteria cause souring in milk, a process formerly thought to be merely a chemical change, and his work in identifying the role of microorganisms in food spoilage led to the process of pasteurization.[39] In 1877, working to improve the French brewing industry, Pasteur published his famous paper on fermentation, "Etudes sur la Bière", which was translated into English in 1879 as "Studies on fermentation".[40] He defined fermentation (incorrectly) as "Life without air",[41] but correctly showed that specific types of microorganisms cause specific types of fermentations and specific end-products.
Although showing fermentation to be the result of the action of living microorganisms was a breakthrough, it did not explain the basic nature of the fermentation process, or prove that it is caused by the microorganisms that appear to be always present. Many scientists, including Pasteur, had unsuccessfully attempted to extract the fermentation enzyme from yeast.[41] Success came in 1897 when the German chemist Eduard Buchner ground up yeast, extracted a juice from it, and then found to his amazement that this "dead" liquid would ferment a sugar solution, forming carbon dioxide and alcohol much like living yeasts.[42] Buchner's results are considered to mark the birth of biochemistry. The "unorganized ferments" behaved just like the organized ones. From that time on, the term enzyme came to be applied to all ferments. It was then understood that fermentation is caused by enzymes that are produced by microorganisms.[43] In 1907, Buchner won the Nobel Prize in Chemistry for his work.[44]
Advances in microbiology and fermentation technology have continued steadily up until the present. For example, in the 1930s, it was discovered that microorganisms could be mutated with physical and chemical treatments to be higher-yielding, faster-growing, tolerant of less oxygen, and able to use a more concentrated medium.[45][46] Strain selection and hybridization developed as well, affecting most modern food fermentations.
The word "ferment" is derived from the Latin verb fervere, which means to boil. It is thought to have been first used in the late 14th century in alchemy, but only in a broad sense. It was not used in the modern scientific sense until around 1600.
en/1971.html.txt
ADDED
Rail transport (also known as train transport) is a means of transferring passengers and goods on wheeled vehicles running on rails, which are located on tracks. In contrast to road transport, where vehicles run on a prepared flat surface, rail vehicles (rolling stock) are directionally guided by the tracks on which they run. Tracks usually consist of steel rails, installed on ties (sleepers) set in ballast, on which the rolling stock, usually fitted with metal wheels, moves. Other variations are also possible, such as slab track. This is where the rails are fastened to a concrete foundation resting on a prepared subsurface.
Rolling stock in a rail transport system generally encounters lower frictional resistance than rubber-tired road vehicles, so passenger and freight cars (carriages and wagons) can be coupled into longer trains. The operation is carried out by a railway company, providing transport between train stations or freight customer facilities. Power is provided by locomotives which either draw electric power from a railway electrification system or produce their own power, usually by diesel engines or, historically, steam engines. Most tracks are accompanied by a signalling system. Railways are a safe land transport system when compared to other forms of transport.[Nb 1] Railway transport is capable of high levels of passenger and cargo utilization and energy efficiency, but is often less flexible and more capital-intensive than road transport, when lower traffic levels are considered.
The oldest known man- or animal-hauled railways date back to the 6th century BC in Corinth, Greece. Rail transport re-emerged in the mid-16th century in Germany in the form of horse-powered funiculars and wagonways. Modern rail transport began with the British development of steam locomotives in the early 19th century, making the railway system in Great Britain the oldest in the world. Built by George Stephenson and his son Robert's company, Robert Stephenson and Company, Locomotion No. 1 was the first steam locomotive to carry passengers on a public rail line, the Stockton and Darlington Railway, in 1825. George Stephenson also built the first public inter-city railway line in the world to rely exclusively on steam locomotives, the Liverpool and Manchester Railway, which opened in 1830. With steam engines, one could construct mainline railways, which were a key component of the Industrial Revolution. Railways also reduced the costs of shipping and allowed for fewer lost goods compared with water transport, which faced the occasional sinking of ships. The change from canals to railways allowed for "national markets" in which prices varied very little from city to city. The spread of the railway network and the use of railway timetables led to the standardisation of time (railway time) in Britain, based on Greenwich Mean Time. Prior to this, major towns and cities varied their local time relative to GMT. The invention and development of the railway in the United Kingdom was one of the most important technological inventions of the 19th century. The world's first underground railway, the Metropolitan Railway (part of the London Underground), opened in 1863.
In the 1880s, electrified trains were introduced, leading to electrification of tramways and rapid transit systems. Starting during the 1940s, the non-electrified railways in most countries had their steam locomotives replaced by diesel-electric locomotives, with the process being almost complete by the 2000s. During the 1960s, electrified high-speed railway systems were introduced in Japan and later in some other countries. Many countries are in the process of replacing diesel locomotives with electric locomotives, mainly due to environmental concerns, a notable example being Switzerland, which has completely electrified its network. Other forms of guided ground transport outside the traditional railway definitions, such as monorail or maglev, have been tried but have seen limited use.
Following a decline after World War II due to competition from cars and airplanes, rail transport has had a revival in recent decades due to road congestion and rising fuel prices, as well as governments investing in rail as a means of reducing CO2 emissions in the context of concerns about global warming.
The history of rail transport began in the 6th century BC in Ancient Greece. It can be divided up into several discrete periods defined by the principal means of track material and motive power used.
Evidence indicates that there was a 6-to-8.5 km long Diolkos paved trackway, which transported boats across the Isthmus of Corinth in Greece from around 600 BC.[1][2][3][4][5] Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. The Diolkos was in use for over 650 years, until at least the 1st century AD.[5] Paved trackways were also later built in Roman Egypt.[6]
In 1515, Cardinal Matthäus Lang wrote a description of the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria. The line originally used wooden rails and a hemp haulage rope and was operated by human or animal power, through a treadwheel.[7] The line still exists and is operational, although in updated form and is possibly the oldest operational railway.[8]
Wagonways (or tramways) using wooden rails, hauled by horses, started appearing in the 1550s to facilitate the transport of ore tubs to and from mines, and soon became popular in Europe. Such an operation was illustrated in Germany in 1556 by Georgius Agricola in his work De re metallica.[9] This line used "Hund" carts with unflanged wheels running on wooden planks and a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons Hunde ("dogs") from the noise they made on the tracks.[10]
There are many references to their use in central Europe in the 16th century.[11] Such a transport system was later used by German miners at Caldbeck, Cumbria, England, perhaps from the 1560s.[12] A wagonway was built at Prescot, near Liverpool, sometime around 1600, possibly as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about half a mile away.[13] A funicular railway was also made at Broseley in Shropshire some time before 1604. This carried coal for James Clifford from his mines down to the river Severn to be loaded onto barges and carried to riverside towns.[14] The Wollaton Wagonway, completed in 1604 by Huntingdon Beaumont, has sometimes erroneously been cited as the earliest British railway. It ran from Strelley to Wollaton near Nottingham.[15]
The Middleton Railway in Leeds, which was built in 1758, later became the world's oldest operational railway (other than funiculars), albeit now in an upgraded form. In 1764, the first railway in the Americas was built in Lewiston, New York.[16]
In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of the wooden rails. This allowed a variation of gauge to be used. At first only balloon loops could be used for turning, but later movable points came into use, allowing for switching.[17]
A system was introduced in which unflanged wheels ran on L-shaped metal plates – these became known as plateways. John Curr, a Sheffield colliery manager, invented this flanged rail in 1787, though the exact date is disputed. The plate rail was taken up by Benjamin Outram for wagonways serving his canals, manufacturing them at his Butterley ironworks. In 1803, William Jessop opened the Surrey Iron Railway, a double-track plateway sometimes erroneously cited as the world's first public railway, in south London.[18]
Meanwhile, William Jessop had earlier used a form of all-iron edge rail and flanged wheels successfully for an extension to the Charnwood Forest Canal at Nanpantan, Loughborough, Leicestershire in 1789. In 1790, Jessop and his partner Outram began to manufacture edge-rails. Jessop became a partner in the Butterley Company in 1790. The first public edgeway (and thus also the first public railway) built was the Lake Lock Rail Road in 1796. Although the primary purpose of the line was to carry coal, it also carried passengers.
These two systems of constructing iron railways, the "L" plate-rail and the smooth edge-rail, continued to exist side by side until well into the early 19th century. The flanged wheel and edge-rail eventually proved its superiority and became the standard for railways.
Cast iron used in rails proved unsatisfactory because it was brittle and broke under heavy loads. The wrought iron invented by John Birkinshaw in 1820 replaced cast iron. Wrought iron (usually simply referred to as "iron") was a ductile material that could undergo considerable deformation before breaking, making it more suitable for iron rails. But iron was expensive to produce until Henry Cort patented the puddling process in 1784. In 1783 Cort also patented the rolling process, which was 15 times faster at consolidating and shaping iron than hammering.[19] These processes greatly lowered the cost of producing iron and rails. The next important development in iron production was hot blast developed by James Beaumont Neilson (patented 1828), which considerably reduced the amount of coke (fuel) or charcoal needed to produce pig iron.[20] Wrought iron was a soft material that contained slag or dross. The softness and dross tended to make iron rails distort and delaminate and they lasted less than 10 years. Sometimes they lasted as little as one year under high traffic. All these developments in the production of iron eventually led to replacement of composite wood/iron rails with superior all iron rails.
The introduction of the Bessemer process, enabling steel to be made inexpensively, led to the era of great expansion of railways that began in the late 1860s. Steel rails lasted several times longer than iron.[21][22][23] Steel rails made heavier locomotives possible, allowing for longer trains and improving the productivity of railroads.[24] The Bessemer process introduced nitrogen into the steel, which caused the steel to become brittle with age. The open hearth furnace began to replace the Bessemer process near the end of the 19th century, improving the quality of steel and further reducing costs. Thus steel completely replaced the use of iron in rails, becoming standard for all railways.
The first passenger horsecar or tram, the Swansea and Mumbles Railway, was opened between Swansea and Mumbles in Wales in 1807.[25] Horses remained the preferred mode of tram transport even after the arrival of steam engines, until the end of the 19th century, because they were cleaner than steam-driven trams, which caused smoke in city streets.
In 1784 James Watt, a Scottish inventor and mechanical engineer, patented a design for a steam locomotive. Watt had improved the steam engine of Thomas Newcomen, hitherto used to pump water out of mines, and developed a reciprocating engine in 1769 capable of powering a wheel. This was a large stationary engine, powering cotton mills and a variety of machinery; the state of boiler technology necessitated the use of low pressure steam acting upon a vacuum in the cylinder, which required a separate condenser and an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston, raising the possibility of a smaller engine that might be used to power a vehicle. Following his patent, Watt's employee William Murdoch produced a working model of a self-propelled steam carriage in that year.[26]
The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. This used high-pressure steam to drive the engine by one power stroke. The transmission system employed a large flywheel to even out the action of the piston rod. On 21 February 1804, the world's first steam-powered railway journey took place when Trevithick's unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales.[27][28] Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London, the Catch Me Who Can, but never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track then in use.[29]
The first commercially successful steam locomotive was Matthew Murray's rack locomotive Salamanca built for the Middleton Railway in Leeds in 1812. This twin-cylinder locomotive was light enough to not break the edge-rails track and solved the problem of adhesion by a cog-wheel using teeth cast on the side of one of the rails. Thus it was also the first rack railway.
This was followed in 1813 by the locomotive Puffing Billy built by Christopher Blackett and William Hedley for the Wylam Colliery Railway, the first successful locomotive running by adhesion only. This was accomplished by the distribution of weight between a number of wheels. Puffing Billy is now on display in the Science Museum in London, making it the oldest locomotive in existence.[30]
In 1814 George Stephenson, inspired by the early locomotives of Trevithick, Murray and Hedley, persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive. His designs considerably improved on the work of the earlier pioneers. He built the locomotive Blücher, also a successful flanged-wheel adhesion locomotive. In 1825 he built the locomotive Locomotion for the Stockton and Darlington Railway in the north east of England, which became the first public steam railway in the world, although it used both horse power and steam power on different runs. In 1829, he built the locomotive Rocket, which entered in and won the Rainhill Trials. This success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe.[31]:24–30 The first public railway to use only steam locomotives, all the time, was the Liverpool and Manchester Railway, opened in 1830.
Steam power continued to be the dominant power system in railways around the world for more than a century.
The first known electric locomotive was built in 1837 by chemist Robert Davidson of Aberdeen in Scotland, and it was powered by galvanic cells (batteries). Thus it was also the earliest battery electric locomotive. Davidson later built a larger locomotive named Galvani, exhibited at the Royal Scottish Society of Arts Exhibition in 1841. The seven-ton vehicle had two direct-drive reluctance motors, with fixed electromagnets acting on iron bars attached to a wooden cylinder on each axle, and simple commutators. It hauled a load of six tons at four miles per hour (6 kilometers per hour) for a distance of one and a half miles (2.4 kilometres). It was tested on the Edinburgh and Glasgow Railway in September of the following year, but the limited power from batteries prevented its general use. It was destroyed by railway workers, who saw it as a threat to their job security.[32][33][34]
Werner von Siemens demonstrated an electric railway in 1879 in Berlin. The world's first electric tram line, the Gross-Lichterfelde Tramway, opened in Lichterfelde near Berlin, Germany, in 1881. It was built by Siemens. The tram ran on 180 volts DC, which was supplied by the running rails. In 1891 the track was equipped with an overhead wire and the line was extended to Berlin-Lichterfelde West station. The Volk's Electric Railway opened in 1883 in Brighton, England; it is still operational, making it the oldest operational electric railway in the world. Also in 1883, the Mödling and Hinterbrühl Tram opened near Vienna in Austria. It was the first tram line in the world in regular service powered from an overhead line. Five years later, in the U.S., electric trolleys were pioneered in 1888 on the Richmond Union Passenger Railway, using equipment designed by Frank J. Sprague.[35]
The first use of electrification on a main line was on a four-mile section of the Baltimore Belt Line of the Baltimore and Ohio Railroad (B&O) in 1895, connecting the main portion of the B&O to the new line to New York through a series of tunnels around the edges of Baltimore's downtown. Electricity quickly became the power supply of choice for subways, abetted by Sprague's invention of multiple-unit train control in 1897. By the early 1900s most street railways were electrified.
The London Underground, the world's oldest underground railway, opened in 1863, and it began operating electric services using a fourth rail system in 1890 on the City and South London Railway, now part of the London Underground Northern line. This was the first major railway to use electric traction. The world's first deep-level electric railway, it runs from the City of London, under the River Thames, to Stockwell in south London.[36]
The first practical AC electric locomotive was designed by Charles Brown, then working for Oerlikon, Zürich. In 1891, Brown had demonstrated long-distance power transmission, using three-phase AC, between a hydro-electric plant at Lauffen am Neckar and Frankfurt am Main West, a distance of 280 km. Using experience he had gained while working for Jean Heilmann on steam-electric locomotive designs, Brown observed that three-phase motors had a higher power-to-weight ratio than DC motors and, because of the absence of a commutator, were simpler to manufacture and maintain.[37] However, they were much larger than the DC motors of the time and could not be mounted in underfloor bogies: they could only be carried within locomotive bodies.[38]
In 1894, Hungarian engineer Kálmán Kandó developed new types of 3-phase asynchronous electric drive motors and generators for electric locomotives. Kandó's early 1894 designs were first applied in a short three-phase AC tramway in Evian-les-Bains (France), which was constructed between 1896 and 1898.[39][40][41][42][43]
In 1896, Oerlikon installed the first commercial example of the system on the Lugano Tramway. Each 30-tonne locomotive had two 110 kW (150 hp) motors run by three-phase 750 V 40 Hz fed from double overhead lines. Three-phase motors run at constant speed, provide regenerative braking and are well suited to steeply graded routes; the first main-line three-phase locomotives were supplied by Brown (by then in partnership with Walter Boveri) in 1899 on the 40 km Burgdorf–Thun line in Switzerland.
Italian railways were the first in the world to introduce electric traction for the entire length of a main line rather than a short section. The 106 km Valtellina line was opened on 4 September 1902, designed by Kandó and a team from the Ganz works.[44][45] The electrical system was three-phase at 3 kV 15 Hz. In 1918,[46] Kandó invented and developed the rotary phase converter, enabling electric locomotives to use three-phase motors whilst supplied via a single overhead wire, carrying the simple industrial frequency (50 Hz) single phase AC of the high voltage national networks.[45]
An important contribution to the wider adoption of AC traction came from SNCF of France after World War II. The company conducted trials at AC 50 Hz and established it as a standard. Following SNCF's successful trials, 50 Hz, now also called industrial frequency, was adopted as the standard for main lines across the world.[47]
The earliest recorded examples of an internal combustion engine for railway use included a prototype designed by William Dent Priestman, which was examined by Sir William Thomson in 1888, who described it as a "[Priestman oil engine] mounted upon a truck which is worked on a temporary line of rails to show the adaptation of a petroleum engine for locomotive purposes."[48][49] In 1894, a 20 hp (15 kW) two-axle machine built by Priestman Brothers was used on the Hull Docks.[50]
In 1906, Rudolf Diesel, Adolf Klose and the steam and diesel engine manufacturer Gebrüder Sulzer founded Diesel-Sulzer-Klose GmbH to manufacture diesel-powered locomotives. Sulzer had been manufacturing diesel engines since 1898. The Prussian State Railways ordered a diesel locomotive from the company in 1909. The world's first diesel-powered locomotive was operated in the summer of 1912 on the Winterthur–Romanshorn railway in Switzerland, but was not a commercial success.[51] The locomotive weighed 95 tonnes and produced 883 kW, with a maximum speed of 100 km/h.[52] Small numbers of prototype diesel locomotives were produced in a number of countries through the mid-1920s.
A significant breakthrough occurred in 1914, when Hermann Lemp, a General Electric electrical engineer, developed and patented a reliable direct current electrical control system (subsequent improvements were also patented by Lemp).[53] Lemp's design used a single lever to control both engine and generator in a coordinated fashion, and was the prototype for all diesel–electric locomotive control systems. Also in 1914, the world's first functional diesel–electric railcars were produced for the Königlich-Sächsische Staatseisenbahnen (Royal Saxon State Railways) by Waggonfabrik Rastatt, with electric equipment from Brown, Boveri & Cie and diesel engines from the Swiss Sulzer AG. They were classified as DET 1 and DET 2. The first regular use of diesel–electric locomotives was in switching (shunter) applications. General Electric produced several small switching locomotives in the 1930s (the famous "44-tonner" switcher was introduced in 1940), while Westinghouse Electric and Baldwin had collaborated to build switching locomotives starting in 1929.
In 1929, the Canadian National Railways became the first North American railway to use diesels in mainline service with two units, 9000 and 9001, from Westinghouse.[54]
Although steam and diesel services running at speeds up to 200 km/h were introduced in Europe before the 1960s, they were not very successful.[citation needed]
The first electrified high-speed rail Tōkaidō Shinkansen was introduced in 1964 between Tokyo and Osaka in Japan. Since then high-speed rail transport, functioning at speeds up to and above 300 km/h, has been built in Japan, Spain, France, Germany, Italy, the People's Republic of China, Taiwan (Republic of China), the United Kingdom, South Korea, Scandinavia, Belgium and the Netherlands. The construction of many of these lines has resulted in the dramatic decline of short haul flights and automotive traffic between connected cities, such as the London–Paris–Brussels corridor, Madrid–Barcelona, Milan–Rome–Naples, as well as many other major lines.[citation needed]
High-speed trains normally operate on standard gauge tracks of continuously welded rail on grade-separated right-of-way that incorporates a large turning radius in its design. While high-speed rail is most often designed for passenger travel, some high-speed systems also offer freight service.
A train is a connected series of rail vehicles that move along the track. Propulsion for the train is provided by a separate locomotive or by individual motors in a self-propelled multiple unit. Most trains carry a revenue load, although non-revenue cars exist for the railway's own use, such as for maintenance-of-way purposes. The engine driver (engineer in North America) controls the locomotive or other power cars, although people movers and some rapid transit systems are under automatic control.
Traditionally, trains are pulled by a locomotive. This involves one or more powered vehicles being located at the front of the train, providing sufficient tractive force to haul the weight of the full train. This arrangement remains dominant for freight trains and is often used for passenger trains. A push–pull train has the end passenger car equipped with a driver's cab, so that the engine driver can remotely control the locomotive. This removes one of the drawbacks of locomotive haulage, since the locomotive need not be moved to the front of the train each time the train changes direction. A railroad car is a vehicle used for the haulage of either passengers or freight.
A multiple unit has powered wheels throughout the whole train. These are used for rapid transit and tram systems, as well as many short- and long-haul passenger trains. A railcar is a single, self-powered car, and may be electrically propelled or powered by a diesel engine. Multiple units have a driver's cab at each end of the unit, and were developed once it became possible to build electric motors and engines small enough to fit under the coach. There are only a few freight multiple units, most of which are high-speed post trains.
Steam locomotives are locomotives with a steam engine that provides their motive power. Coal, petroleum, or wood is burned in a firebox, boiling water in the boiler to create pressurized steam. The steam travels through the smokebox before leaving via the chimney or smoke stack. In the process, it powers a piston that transmits power directly through a connecting rod (US: main rod) and a crankpin (US: wristpin) on the driving wheel (US: main driver) or to a crank on a driving axle. Steam locomotives have been phased out in most parts of the world for economic and safety reasons, although many are preserved in working order by heritage railways.
Electric locomotives draw power from a stationary source via an overhead wire or third rail. Some also or instead use a battery. In locomotives that are powered by high-voltage alternating current, a transformer in the locomotive converts the high-voltage, low-current supply into the low-voltage, high-current power used by the traction motors that drive the wheels. Modern locomotives may use three-phase AC induction motors or direct current motors. Under certain conditions, electric locomotives provide the most powerful traction.[citation needed] They are also the cheapest to run, produce less noise and cause no local air pollution.[citation needed] However, they require high capital investments both for the overhead lines and the supporting infrastructure, as well as for the generating stations needed to produce electricity. Accordingly, electric traction is used on urban systems, lines with high traffic and high-speed rail.
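As a rough numerical illustration of that voltage/current trade-off (the figures below are assumed for illustration, not taken from the source): since electrical power is the product of voltage and current, stepping the supply voltage down for the traction motors raises the current by the same factor.

    # Illustrative only: power P = V * I is (ideally) conserved across the
    # locomotive's transformer, so lowering voltage raises current.
    # All values here are assumed example figures.
    P = 4_000_000            # power drawn by the locomotive, in watts (4 MW)
    V_line = 25_000          # assumed 25 kV AC overhead line voltage
    V_motor = 1_500          # assumed motor-side voltage
    I_line = P / V_line      # current drawn from the overhead line: 160 A
    I_motor = P / V_motor    # current delivered to the motors: ~2,667 A
    print(I_line, I_motor)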
Diesel locomotives use a diesel engine as the prime mover. The energy transmission may be either diesel-electric, diesel-mechanical or diesel-hydraulic but diesel-electric is dominant. Electro-diesel locomotives are built to run as diesel-electric on unelectrified sections and as electric locomotives on electrified sections.
Alternative methods of motive power include magnetic levitation, horse-drawn, cable, gravity, pneumatics and gas turbine.
A passenger train travels between stations where passengers may embark and disembark. The oversight of the train is the duty of a guard/train manager/conductor. Passenger trains are part of public transport and often make up the stem of the service, with buses feeding to stations. Passenger trains provide long-distance intercity travel, daily commuter trips, or local urban transit services, operating with a diversity of vehicles, operating speeds, right-of-way requirements, and service frequency. Service frequencies are often expressed as a number of trains per hour (tph).[55] Passenger train operations can usually be divided into two types: intercity railway and intracity transit. Whereas intercity railways involve higher speeds, longer routes, and lower frequency (usually scheduled), intracity transit involves lower speeds, shorter routes, and higher frequency (especially during peak hours).[56]
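As a simple illustration of the tph measure (an illustrative conversion, not from the source), a service frequency converts directly to an average headway, the time between successive trains:

    # Convert a service frequency in trains per hour (tph) to an average
    # headway in minutes. Purely illustrative.
    def headway_minutes(tph: float) -> float:
        return 60.0 / tph

    print(headway_minutes(12))  # a 12 tph metro service: a train every 5 minutes
    print(headway_minutes(2))   # a 2 tph regional service: a train every 30 minutes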
Intercity trains are long-haul trains that operate with few stops between cities. Trains typically have amenities such as a dining car. Some lines also provide over-night services with sleeping cars. Some long-haul trains have been given a specific name. Regional trains are medium distance trains that connect cities with outlying, surrounding areas, or provide a regional service, making more stops and having lower speeds. Commuter trains serve suburbs of urban areas, providing a daily commuting service. Airport rail links provide quick access from city centres to airports.
High-speed rail comprises special inter-city trains that operate at much higher speeds than conventional railways, the threshold generally being regarded as 200 to 350 kilometres per hour (120 to 220 mph). High-speed trains are used mostly for long-haul service, and most systems are in Western Europe and East Asia. Magnetic levitation trains such as the Shanghai maglev train use under-riding magnets which attract themselves upward towards the underside of a guideway, and this line has achieved somewhat higher peak speeds in day-to-day operation than conventional high-speed railways, although only over short distances. Due to their heightened speeds, route alignments for high-speed rail tend to have broader curves than conventional railways, but may have steeper grades, which are more easily climbed by trains with large kinetic energy.
Their high kinetic energy translates to higher horsepower-to-ton ratios (e.g. 20 horsepower per short ton or 16 kilowatts per tonne); this allows trains to accelerate and maintain higher speeds and to negotiate steep grades, since momentum built up on climbs is recovered on downgrades (reducing cut, fill, and tunnelling requirements). Since lateral forces act on curves, curvatures are designed with the highest possible radius. All these features are dramatically different from freight operations, justifying exclusive high-speed rail lines where economically feasible.[56]
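The unit conversion quoted in that example can be checked directly; a minimal sketch using standard conversion factors:

    # Check the quoted power-to-weight figure: 20 hp per short ton ~ 16 kW/tonne.
    HP_TO_KW = 0.7457              # one mechanical horsepower in kilowatts
    SHORT_TON_TO_TONNE = 0.9072    # one short ton (2,000 lb) in metric tonnes

    def hp_per_short_ton_to_kw_per_tonne(x: float) -> float:
        return x * HP_TO_KW / SHORT_TON_TO_TONNE

    print(hp_per_short_ton_to_kw_per_tonne(20))  # ~16.4 kW/t, matching the text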
Higher-speed rail services are intercity rail services that have top speeds higher than conventional intercity trains but the speeds are not as high as those in the high-speed rail services. These services are provided after improvements to the conventional rail infrastructure in order to support trains that can operate safely at higher speeds.
Rapid transit is an intracity system built in large cities and has the highest capacity of any passenger transport system. It is usually grade-separated and commonly built underground or elevated. At street level, smaller trams can be used. Light rail consists of upgraded trams that have step-free access, their own right-of-way and sometimes underground sections. Monorail systems are elevated, medium-capacity systems. A people mover is a driverless, grade-separated train that serves only a few stations, as a shuttle. Due to the lack of uniformity of rapid transit systems, route alignment varies, with diverse rights-of-way (private land, side of road, street median) and geometric characteristics (sharp or broad curves, steep or gentle grades). For instance, the Chicago 'L' trains are designed with extremely short cars to negotiate the sharp curves in the Loop. New Jersey's PATH has similar-sized cars to accommodate curves in the trans-Hudson tunnels. San Francisco's BART operates large cars on its routes.[56]
A freight train hauls cargo using freight cars specialized for the type of goods. Freight trains are very efficient, with economies of scale and high energy efficiency. However, their use can be limited by a lack of flexibility: where tracks do not reach the points of pick-up and delivery, cargo must be transshipped at both ends of the trip. Authorities often encourage the use of cargo rail transport due to its efficiency.[57]
Container trains have become the dominant type in the US for bulk haulage. Containers can easily be transshipped between modes, such as ships and trucks, using cranes. This has superseded the boxcar (wagon-load), in which the cargo had to be loaded and unloaded manually. The intermodal containerization of cargo has revolutionized the supply chain logistics industry, reducing shipping costs significantly. In Europe, the sliding wall wagon has largely superseded the ordinary covered wagon. Other types of cars include refrigerator cars, stock cars for livestock and autoracks for road vehicles. When rail is combined with road transport, a roadrailer allows trailers to be driven onto the train, allowing for easy transition between road and rail.
Bulk handling represents a key advantage for rail transport. Low or even zero transshipment costs combined with energy efficiency and low inventory costs allow trains to handle bulk much more cheaply than road transport. Typical bulk cargo includes coal, ore, grains and liquids. Bulk is transported in open-topped cars, hopper cars and tank cars.
Railway tracks are laid upon land owned or leased by the railway company. Owing to the desirability of maintaining modest grades, rails will often be laid in circuitous routes in hilly or mountainous terrain. Route length and grade requirements can be reduced by the use of alternating cuttings, bridges and tunnels – all of which can greatly increase the capital expenditures required to develop a right of way, while significantly reducing operating costs and allowing higher speeds on longer radius curves. In densely urbanized areas, railways are sometimes laid in tunnels to minimize the effects on existing properties.
Track consists of two parallel steel rails, anchored perpendicular to members called ties (sleepers) of timber, concrete, steel, or plastic to maintain a consistent distance apart, or rail gauge. Rail gauges are usually categorized as standard gauge (used on approximately 55% of the world's existing railway lines), broad gauge, and narrow gauge.[citation needed] In addition to the rail gauge, the tracks will be laid to conform with a loading gauge, which defines the maximum height and width for railway vehicles and their loads to ensure safe passage through bridges, tunnels and other structures.
The track guides the conical, flanged wheels, keeping the cars on the track without active steering and therefore allowing trains to be much longer than road vehicles. The rails and ties are usually placed on a foundation made of compressed earth on top of which is placed a bed of ballast to distribute the load from the ties and to prevent the track from buckling as the ground settles over time under the weight of the vehicles passing above.
The ballast also serves as a means of drainage. Some more modern track in special areas is attached by direct fixation without ballast. Track may be prefabricated or assembled in place. By welding rails together to form lengths of continuous welded rail, additional wear and tear on rolling stock caused by the small surface gap at the joints between rails can be counteracted; this also makes for a quieter ride.
On curves the outer rail may be at a higher level than the inner rail. This is called superelevation or cant. This reduces the forces tending to displace the track and makes for a more comfortable ride for standing livestock and standing or seated passengers. A given amount of superelevation is most effective over a limited range of speeds.
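The relation behind superelevation can be made concrete with the standard equilibrium-cant formula from track engineering (a textbook formula, not stated in the source): E = G·v²/(g·R), where G is the track gauge, v the train speed, R the curve radius and g gravitational acceleration. A minimal sketch with assumed example values:

    # Equilibrium cant (superelevation) for a curve; standard textbook formula.
    # All input values below are assumed examples, not figures from the source.
    def equilibrium_cant(gauge_m: float, speed_ms: float, radius_m: float) -> float:
        g = 9.81  # gravitational acceleration, m/s^2
        return gauge_m * speed_ms ** 2 / (g * radius_m)

    # Standard gauge (1.435 m), 120 km/h (~33.3 m/s), 1,500 m curve radius:
    print(equilibrium_cant(1.435, 120 / 3.6, 1500))  # ~0.108 m (108 mm) of cant

As the paragraph notes, this cant is exactly balanced only at one design speed; trains running much slower or faster experience a net lateral force.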
Turnouts, also known as points and switches, are the means of directing a train onto a diverging section of track. Laid similar to normal track, a point typically consists of a frog (common crossing), check rails and two switch rails. The switch rails may be moved left or right, under the control of the signalling system, to determine which path the train will follow.
Spikes in wooden ties can loosen over time, but split and rotten ties may be individually replaced with new wooden ties or concrete substitutes. Concrete ties can also develop cracks or splits, and can also be replaced individually. Should the rails settle due to soil subsidence, they can be lifted by specialized machinery and additional ballast tamped under the ties to level the rails.
Periodically, ballast must be removed and replaced with clean ballast to ensure adequate drainage. Culverts and other passages for water must be kept clear lest water is impounded by the trackbed, causing landslips. Where trackbeds are placed along rivers, additional protection is usually placed to prevent streambank erosion during times of high water. Bridges require inspection and maintenance, since they are subject to large surges of stress in a short period of time when a heavy train crosses.
The inspection of railway equipment is essential for the safe movement of trains. Many types of defect detectors are in use on the world's railroads. These devices utilize technologies that vary from a simple paddle and switch to infrared and laser scanning, and even ultrasonic audio analysis. Their use has prevented many rail accidents over the 70 years they have been in use.
Railway signalling is a system used to control railway traffic safely to prevent trains from colliding. Being guided by fixed rails which generate low friction, trains are uniquely susceptible to collision since they frequently operate at speeds that do not enable them to stop quickly or within the driver's sighting distance; road vehicles, which encounter a higher level of friction between their rubber tyres and the road surface, have much shorter braking distances. Most forms of train control involve movement authority being passed from those responsible for each section of a rail network to the train crew. Not all methods require the use of signals, and some systems are specific to single track railways.
The signalling process is traditionally carried out in a signal box, a small building that houses the lever frame required for the signalman to operate switches and signal equipment. These are placed at various intervals along the route of a railway, controlling specified sections of track. More recent technological developments have made such operational doctrine superfluous, with the centralization of signalling operations to regional control rooms. This has been facilitated by the increased use of computers, allowing vast sections of track to be monitored from a single location. The common method of block signalling divides the track into zones guarded by combinations of block signals, operating rules, and automatic-control devices so that only one train may be in a block at any time.
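The block principle described above can be sketched in a few lines (an illustrative toy model under assumed rules, not any real signalling system): movement authority into a block is granted only if no other train occupies it.

    # Toy model of fixed-block occupancy: at most one train per block.
    class BlockSection:
        def __init__(self, name: str):
            self.name = name
            self.occupant = None  # train currently holding the block, if any

        def request_entry(self, train: str) -> bool:
            """Grant movement authority only if the block is clear."""
            if self.occupant is None:
                self.occupant = train
                return True
            return False  # signal stays at danger; the train must wait

        def release(self) -> None:
            self.occupant = None

    block = BlockSection("B17")
    print(block.request_entry("freight 204"))  # True: the block was clear
    print(block.request_entry("express 9"))    # False: the block is occupied
    block.release()
    print(block.request_entry("express 9"))    # True once the block is released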
The electrification system provides electrical energy to the trains, so they can operate without a prime mover on board. This allows lower operating costs, but requires large capital investments along the lines. Mainline and tram systems normally have overhead wires, which hang from poles along the line. Grade-separated rapid transit sometimes uses a ground-level third rail.
Power may be fed as direct current (DC) or alternating current (AC). The most common DC voltages are 600 and 750 V for tram and rapid transit systems, and 1,500 and 3,000 V for mainlines. The two dominant AC systems are 15 kV and 25 kV.
A railway station serves as an area where passengers can board and alight from trains. A goods station is a yard which is exclusively used for loading and unloading cargo. Large passenger stations have at least one building providing conveniences for passengers, such as purchasing tickets and food. Smaller stations typically only consist of a platform. Early stations were sometimes built with both passenger and goods facilities.[58]
Platforms are used to allow easy access to the trains, and are connected to each other via underpasses, footbridges and level crossings. Some large stations are built as culs-de-sac, with trains only operating out from one direction. Smaller stations normally serve local residential areas, and may have connection to feeder bus services. Large stations, in particular central stations, serve as the main public transport hub for the city, and have transfer available between rail services, and to rapid transit, tram or bus services.
Since the 1980s, there has been an increasing trend to split up railway companies, with companies owning the rolling stock separated from those owning the infrastructure. This is particularly true in Europe, where this arrangement is required by the European Union. This has allowed open access by any train operator to any portion of the European railway network. In the UK, the railway track is state owned, with a publicly controlled body (Network Rail) running, maintaining and developing the track, while Train Operating Companies have run the trains since privatization in the 1990s.[59]
In the U.S., virtually all rail networks and infrastructure outside the Northeast Corridor are privately owned by freight lines. Passenger lines, primarily Amtrak, operate as tenants on the freight lines. Consequently, operations must be closely synchronized and coordinated between freight and passenger railroads, with passenger trains often being dispatched by the host freight railroad. Due to this shared system, both are regulated by the Federal Railroad Administration (FRA) and may follow the AREMA recommended practices for track work and AAR standards for vehicles.[56]
The main source of income for railway companies is from ticket revenue (for passenger transport) and shipment fees for cargo. Discounts and monthly passes are sometimes available for frequent travellers (e.g. season ticket and rail pass). Freight revenue may be sold per container slot or for a whole train. Sometimes, the shipper owns the cars and only rents the haulage. For passenger transport, advertisement income can be significant.
Governments may choose to give subsidies to rail operation, since rail transport has fewer externalities than other dominant modes of transport. If the railway company is state-owned, the state may simply provide direct subsidies in exchange for increased production. If operations have been privatized, several options are available. Some countries have a system where the infrastructure is owned by a government agency or company – with open access to the tracks for any company that meets safety requirements. In such cases, the state may choose to provide the tracks free of charge, or for a fee that does not cover all costs. This is seen as analogous to the government providing free access to roads. For passenger operations, a direct subsidy may be paid to a publicly owned operator, or a public service obligation tender may be held, with a time-limited contract awarded to the lowest bidder. Total EU rail subsidies amounted to €73 billion in 2005.[60]
Amtrak, the US passenger rail service, and Canada's Via Rail are private railroad companies chartered by their respective national governments. As private passenger services declined because of competition from automobiles and airlines, the private railroads became shareholders of Amtrak, either by paying a cash entrance fee or by relinquishing their locomotives and rolling stock. The government subsidizes Amtrak by supplying start-up capital and making up for losses at the end of the fiscal year.[61][page needed]
Trains can travel at very high speed, but they are heavy, are unable to deviate from the track and require a great distance to stop. Possible accidents include derailment (jumping the track), collision with another train, and collision with automobiles, other vehicles or pedestrians at level crossings. Level crossing collisions account for the majority of rail accidents and casualties. The most important safety measures to prevent accidents are strict operating rules, e.g. railway signalling and gates or grade separation at crossings. Train whistles, bells or horns warn of the presence of a train, while trackside signals maintain the distances between trains.
An important element in the safety of many high-speed inter-city networks such as Japan's Shinkansen is the fact that trains only run on dedicated railway lines, without level crossings. This effectively eliminates the potential for collision with automobiles, other vehicles or pedestrians, vastly reduces the likelihood of collision with other trains and helps ensure services remain timely.
As with any infrastructure asset, railways must keep up with periodic inspection and maintenance in order to minimize the effect of infrastructure failures that can disrupt freight revenue operations and passenger services. Because passenger trains are considered the most crucial traffic and usually operate at higher speeds, steeper grades, and higher capacity/frequency, their lines are especially important. Inspection practices include track geometry cars and walking inspections. Curve maintenance, especially for transit services, includes gauging, fastener tightening, and rail replacement.
Rail corrugation is a common issue with transit systems due to the high number of light-axle wheel passages, which grind down the wheel/rail interface. Since maintenance may overlap with operations, maintenance windows (nighttime hours, off-peak hours, altering train schedules or routes) must be closely followed. In addition, passenger safety during maintenance work (inter-track fencing, proper storage of materials, track work notices, hazards of equipment near tracks) must be regarded at all times. At times, maintenance access problems can emerge due to tunnels, elevated structures, and congested cityscapes. Here, specialized equipment or smaller versions of conventional maintenance gear are used.[56]
Unlike highways or road networks, where capacity is disaggregated into unlinked trips over individual route segments, railway capacity is fundamentally considered a network system. As a result, many components are causes and effects of system disruptions. Maintenance must take account of the route's performance (type of train service, origination/destination, seasonal impacts), the line's capacity (length, terrain, number of tracks, types of train control), train throughput (maximum speeds, acceleration/deceleration rates), and service features of shared passenger-freight tracks (sidings, terminal capacities, switching routes, and design type).[56]
Rail transport is an energy-efficient[64] but capital-intensive means of mechanized land transport. The tracks provide smooth and hard surfaces on which the wheels of the train can roll with a relatively low level of friction being generated. Moving a vehicle on and/or through a medium (land, sea, or air) requires that it overcomes resistance to its motion caused by friction. A land vehicle's total resistance (in pounds or Newtons) is a quadratic function of the vehicle's speed:
R = a + bv + cv²

where:

R = the total resistance to motion
v = the vehicle's speed
a = a constant term covering rolling and bearing resistance
b = the coefficient of the resistance that grows in proportion to speed (such as flange friction)
c = the coefficient of aerodynamic drag, which grows with the square of speed
Essentially, resistance depends on the contact between the vehicle's wheels and the surface of the roadway. Metal wheels on metal rails have a significant advantage in overcoming resistance compared to rubber-tyred wheels on any road surface (railway – 0.001g at 10 miles per hour (16 km/h) and 0.024g at 60 miles per hour (97 km/h); truck – 0.009g at 10 miles per hour (16 km/h) and 0.090g at 60 miles per hour (97 km/h)). Rail also holds an advantage in terms of the cargo capacity, combining speed and size, that can be moved in a day.
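Those resistance figures can be turned into forces directly; a small sketch using only the values quoted above (the tonnage is an assumed example):

    # Resistance figures from the text, expressed as a fraction of vehicle
    # weight ("g" units); required tractive force = fraction * weight.
    RESISTANCE_G = {
        ("rail", 10): 0.001, ("rail", 60): 0.024,    # speed in mph
        ("truck", 10): 0.009, ("truck", 60): 0.090,
    }

    def drag_force_newtons(mode: str, speed_mph: int, mass_tonnes: float) -> float:
        g = 9.81  # m/s^2
        return RESISTANCE_G[(mode, speed_mph)] * mass_tonnes * 1000 * g

    # Moving a 1,000-tonne load at 60 mph: rail needs ~235 kN, road ~883 kN.
    print(drag_force_newtons("rail", 60, 1000))
    print(drag_force_newtons("truck", 60, 1000))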
In terms of the horsepower-to-weight ratio, a slow-moving barge requires 0.2 horsepower per short ton (0.16 kW/t), a railway and a pipeline require 2.5 horsepower per short ton (2.1 kW/t), and a truck requires 10 horsepower per short ton (8.2 kW/t). However, at higher speeds, rail overtakes the barge and proves the most economical.[56]
As an example, a typical modern wagon can hold up to 113 tonnes (125 short tons) of freight on two four-wheel bogies. The track distributes the weight of the train evenly, allowing significantly greater loads per axle and wheel than in road transport, leading to less wear and tear on the permanent way. This can save energy compared with other forms of transport, such as road transport, which depends on the friction between rubber tyres and the road. Trains have a small frontal area in relation to the load they are carrying, which reduces air resistance and thus energy usage.
In addition, the presence of track guiding the wheels allows very long trains to be pulled by one or a few engines and driven by a single operator, even around curves, which allows for economies of scale in both manpower and energy use; by contrast, in road transport, more than two articulations cause fishtailing and make the vehicle unsafe.
Considering only the energy spent to move the means of transport, and using the example of the urban area of Lisbon, electric trains seem to be on average 20 times more efficient than automobiles for transportation of passengers, if we consider energy spent per passenger-distance with similar occupation ratios.[65] Taking an automobile with a consumption of around 6 l/100 km (47 mpg‑imp; 39 mpg‑US) of fuel, the average European occupancy of around 1.2 passengers per automobile (occupation ratio around 24%), and one litre of fuel amounting to about 8.8 kWh (32 MJ), the automobile averages 441 Wh (1,590 kJ) per passenger-km. This compares to a modern train with an average occupancy of 20% and a consumption of about 8.5 kW⋅h/km (31 MJ/km; 13.7 kW⋅h/mi), equating to 21.5 Wh (77 kJ) per passenger-km, 20 times less than the automobile.
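The arithmetic behind that comparison can be reproduced step by step, using only the figures given in the paragraph:

    # Reproduce the Lisbon per-passenger energy comparison from the text.
    car_l_per_km = 6 / 100       # 6 l/100 km fuel consumption
    kwh_per_litre = 8.8          # energy content of one litre of fuel
    car_occupancy = 1.2          # average passengers per automobile
    car_wh_per_pkm = car_l_per_km * kwh_per_litre * 1000 / car_occupancy
    print(car_wh_per_pkm)        # -> 440 Wh per passenger-km (the text's ~441)

    train_wh_per_pkm = 21.5      # the text's per-passenger figure for a train
    print(car_wh_per_pkm / train_wh_per_pkm)  # -> ~20x in favour of the train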
Due to these benefits, rail transport is a major form of passenger and freight transport in many countries. It is ubiquitous in Europe, with an integrated network covering virtually the whole continent. In India, China, South Korea and Japan, many millions use trains as regular transport. In North America, freight rail transport is widespread and heavily used, but intercity passenger rail transport is relatively scarce outside the Northeast Corridor, due to increased preference of other modes, particularly automobiles and airplanes.[61][page needed][66]
South Africa, northern Africa and Argentina have extensive rail networks, but some railways elsewhere in Africa and South America are isolated lines. Australia has a generally sparse network befitting its population density but has some areas with significant networks, especially in the southeast. In addition to the previously existing east–west transcontinental line in Australia, a line from north to south has been constructed. The highest railway in the world is the line to Lhasa, in Tibet,[67] partly running over permafrost territory. Western Europe has the highest railway density in the world and many individual trains there operate through several countries despite technical and organizational differences in each national network.
Railways are central to the formation of modernity and ideas of progress.[68] The process of modernization in the 19th century involved a transition from a spatially oriented world to a time-oriented world. Exact time was essential, and everyone had to know what the time was, resulting in clock towers for railway stations, clocks in public places, and pocket watches for railway workers and for travelers. Trains left on time (they never left early). By contrast, in the premodern era, passenger ships left when the captain had enough passengers. In the premodern era, local time was set at noon, when the sun was at its highest. Every place east to west had a different time, and that changed with the introduction of standard time zones. Printed timetables were a convenience for the travelers, but more elaborate timetables, called train orders, were even more essential for the train crews, the maintenance workers, the station personnel, and for the repair and maintenance crews, who knew when to expect a train would come along. Most trackage was single track, with sidings and signals to allow lower-priority trains to be sidetracked. Schedules told everyone what to do, where to be, and exactly when. If bad weather disrupted the system, telegraphers relayed immediate corrections and updates throughout the system. Just as railways as business organizations created the standards and models for modern big business, so too the railway timetable was adapted to myriad uses, such as schedules for buses, ferries, and airplanes, for radio and television programs, for school schedules, and for factory time clocks. The modern world was ruled by the clock and the timetable.[69]
According to historian Henry Adams, the system of railroads needed "the energies of a generation, for it required all the new machinery to be created – capital, banks, mines, furnaces, shops, power-houses, technical knowledge, mechanical population, together with a steady remodelling of social and political habits, ideas, and institutions to fit the new scale and suit the new conditions. The generation between 1865 and 1895 was already mortgaged to the railways, and no one knew it better than the generation itself."[70]
The impact can be examined through five aspects: shipping, finance, management, careers, and popular reaction.
First they provided a highly efficient network for shipping freight and passengers across a large national market. The result was a transforming impact on most sectors of the economy including manufacturing, retail and wholesale, agriculture, and finance. The United States now had an integrated national market practically the size of Europe, with no internal barriers or tariffs, all supported by a common language, a common financial system and a common legal system.[71]
Railroad financing provided the basis for a dramatic expansion of the private (non-governmental) financial system. Construction of railroads was far more expensive than that of factories. In 1860, the combined total of railroad stocks and bonds was $1.8 billion; by 1897 it reached $10.6 billion (compared to a total national debt of $1.2 billion).[72]
Funding came from financiers throughout the Northeast, and from Europe, especially Britain.[73] About 10 percent of the funding came from the government, especially in the form of land grants that could be realized when a certain amount of trackage was opened.[74] The emerging American financial system was based on railroad bonds. New York by 1860 was the dominant financial market. The British invested heavily in railroads around the world, but nowhere more so than in the United States; the total came to about $3 billion by 1914. In 1914–1917, they liquidated their American assets to pay for war supplies.[75][76]
Railroad management designed complex systems that could handle far more complicated simultaneous relationships than could be dreamed of by the local factory owner who could patrol every part of his own factory in a matter of hours. Civil engineers became the senior management of railroads. The leading American innovators were the Western Railroad of Massachusetts and the Baltimore and Ohio Railroad in the 1840s, the Erie in the 1850s and the Pennsylvania in the 1860s.[77]
The railroads invented the career path in the private sector for both blue-collar workers and white-collar workers. Railroading became a lifetime career for young men; women were almost never hired. A typical career path would see a young man hired at age 18 as a shop laborer, promoted to skilled mechanic at age 24, brakeman at 25, freight conductor at 27, and passenger conductor at age 57. White-collar career paths likewise were delineated. Educated young men started in clerical or statistical work and moved up to station agents or bureaucrats at the divisional or central headquarters. At each level they had more and more knowledge, experience, and human capital. They were very hard to replace, and were virtually guaranteed permanent jobs and provided with insurance and medical care. Hiring, firing, and wage rates were set not by foremen, but by central administrators, in order to minimize favoritism and personality conflicts. Everything was done by the book, whereby an increasingly complex set of rules dictated to everyone exactly what should be done in every circumstance, and exactly what their rank and pay would be. By the 1880s the career railroaders were retiring, and pension systems were invented for them.[78]
Railways contribute to social vibrancy and economic competitiveness by transporting multitudes of customers and workers to city centres and inner suburbs. Hong Kong has recognized rail as "the backbone of the public transit system" and as such developed their franchised bus system and road infrastructure in comprehensive alignment with their rail services.[79] China's large cities such as Beijing, Shanghai, and Guangzhou recognize rail transit lines as the framework and bus lines as the main body to their metropolitan transportation systems.[80] The Japanese Shinkansen was built to meet the growing traffic demand in the "heart of Japan's industry and economy" situated on the Tokyo-Kobe line.[81]
During the decade 1863–70, the heavy use of railways in the American Civil War[82] and in Germany's wars against Austria and France[83] provided a speed of movement unheard of in the days of horses. During much of the 20th century, rail was a key element of war plans for rapid military mobilization, allowing for the quick and efficient transport of large numbers of reservists to their mustering-points, and infantry soldiers to the front lines.[84] The Western Front in France during World War I required many trainloads of munitions a day.[85] Rail yards and bridges in Germany and occupied France were major targets of Allied air power in World War II.[86] However, by the 21st century, rail transport – limited to locations on the same continent, and vulnerable to air attack – had largely been displaced by the adoption of aerial transport.
Railways channel growth towards dense city agglomerations and along their arteries. This contrasts with highway expansion, characteristic of U.S. transportation policy, which encourages development of suburbs at the periphery and contributes to increased vehicle miles travelled, carbon emissions, development of greenfield spaces, and depletion of natural reserves. These arrangements revalue city spaces, local taxes,[87] and housing values, and promote mixed-use development.[88][89]
The construction of the first railway of the Austro-Hungarian empire, from Vienna to Prague, came in 1837–1842 with promises of new prosperity. Construction proved more costly than anticipated, and it brought in less revenue because local industry did not have a national market. In town after town the arrival of the railway angered the locals because of the noise, smell, and pollution caused by the trains and the damage to homes and the surrounding land caused by the engines' soot and fiery embers; and since most travel was very local, ordinary people seldom used the new line.[90]
A 2018 study found that the opening of the Beijing Metro caused a reduction in "most of the air pollutants concentrations (PM2.5, PM10, SO2, NO2, and CO) but had little effect on ozone pollution."[91]
European development economists have argued that the existence of modern rail infrastructure is a significant indicator of a country's economic advancement: this perspective is illustrated notably through the Basic Rail Transportation Infrastructure Index (known as BRTI Index).[92]
In 2014, total rail spending by China was $130 billion, and it is likely to remain at a similar level for the country's next Five-Year Plan period (2016–2020).[93]
The Indian railways are subsidized by around ₹400 billion (US$5.6 billion), of which around 60% goes to commuter rail and short-haul trips.[94][95]
According to the 2017 European Railway Performance Index, which assesses intensity of use, quality of service and safety performance, the top tier of European national rail systems consists of Switzerland, Denmark, Finland, Germany, Austria, Sweden, and France.[96] Performance levels reveal a positive correlation between public cost and a given railway system's performance, and also reveal differences in the value that countries receive in return for their public cost. Denmark, Finland, France, Germany, the Netherlands, Sweden, and Switzerland capture relatively high value for their money, while Luxembourg, Belgium, Latvia, Slovakia, Portugal, Romania, and Bulgaria underperform relative to the average ratio of performance to cost among European countries.[97]
In 2016 Russian Railways received 94.9 billion roubles (around US$1.4 billion) from the government.[108]
In 2015, funding from the U.S. federal government for Amtrak was around US$1.4 billion.[109] By 2018, appropriated funding had increased to approximately US$1.9 billion.[110]
en/1972.html.txt
ADDED
@@ -0,0 +1,69 @@
Fertility is the natural capability to produce offspring. As a measure, fertility rate is the number of offspring born per mating pair, individual or population. Fertility differs from fecundity, which is defined as the potential for reproduction (influenced by gamete production, fertilization and carrying a pregnancy to term).[1] A lack of fertility is infertility, while a lack of fecundity would be called sterility.
Human fertility depends on factors of nutrition, sexual behavior, consanguinity, culture, instinct, endocrinology, timing, economics, way of life, and emotions.
In demographic contexts, fertility refers to the actual production of offspring, rather than the physical capability to produce, which is termed fecundity.[2][3] While fertility can be measured, fecundity cannot be. Demographers measure the fertility rate in a variety of ways, which can be broadly broken into "period" measures and "cohort" measures. "Period" measures refer to a cross-section of the population in one year. "Cohort" data, on the other hand, follow the same people over a period of decades. Both period and cohort measures are widely used.[4]
A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have.[6] Factors generally associated with increased fertility include religiosity,[7] intention to have children,[8] and maternal support.[9] Factors generally associated with decreased fertility include wealth, education,[10] female labor participation,[11] urban residence,[12] cost of housing,[13] intelligence, increased female age and (to a lesser degree) increased male age.
The "Three-step Analysis" of the fertility process was introduced by Kingsley Davis and Judith Blake in 1956 and makes use of three proximate determinants:[14][15] The economic analysis of fertility is part of household economics, a field that has grown out of the New Home Economics. Influential economic analyses of fertility include Becker (1960),[16] Mincer (1963),[17] and Easterlin (1969).[18] The latter developed the Easterlin hypothesis to account for the Baby Boom.
Bongaarts proposed a model where the total fertility rate of a population can be calculated from four proximate determinants and the total fecundity (TF): the index of marriage (Cm), the index of contraception (Cc), the index of induced abortion (Ca) and the index of postpartum infecundability (Ci). These indices range from 0 to 1, and the higher each index is, the higher the resulting TFR; for example, a population where there are no induced abortions would have a Ca of 1, but a country where everybody used infallible contraception would have a Cc of 0.
TFR = TF × Cm × Ci × Ca × Cc
These four indices can also be used to calculate the total marital fertility (TMFR) and the total natural fertility (TN).
TFR = TMFR × Cm
TMFR = TN × Cc × Ca
TN = TF × Ci
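Bongaarts' decomposition as laid out above is straightforward to compute; a minimal sketch in which the index values are invented for illustration (TF = 15.3 is a commonly assumed biological maximum):

    # Bongaarts' proximate-determinants model: each index in [0, 1] scales
    # total fecundity (TF) down towards the observed total fertility rate.
    def bongaarts_tfr(TF, Cm, Ci, Ca, Cc):
        return TF * Cm * Ci * Ca * Cc

    # Hypothetical population; all index values are made up for illustration.
    print(bongaarts_tfr(15.3, Cm=0.70, Ci=0.80, Ca=0.95, Cc=0.45))  # ~3.66 births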
Women have hormonal cycles which determine when they can achieve pregnancy. The cycle is approximately twenty-eight days long, with a fertile period of five days per cycle, but can deviate greatly from this norm. Men are fertile continuously, but their sperm quality is affected by their health, frequency of ejaculation, and environmental factors.
Fertility declines with age in both sexes. In women the decline is more rapid, with complete infertility normally occurring around the age of 50.
Pregnancy rates for sexual intercourse are highest when it is done every 1 or 2 days,[19] or every 2 or 3 days.[20] Studies have shown no significant difference between different sex positions and pregnancy rate, as long as it results in ejaculation into the vagina.[21]
A woman's menstrual cycle begins, as it has been arbitrarily assigned, with menses. Next is the follicular phase, where estrogen levels build as an ovum matures (due to follicle-stimulating hormone, or FSH) within the ovary. When estrogen levels peak, this spurs a surge of luteinizing hormone (LH) which completes maturation of the ovum and enables it to break through the ovary wall.[23] This is ovulation. During the luteal phase, which follows ovulation, LH and FSH cause the post-ovulation ovary to develop into the corpus luteum, which produces progesterone. The production of progesterone inhibits the LH and FSH hormones, which (in a cycle without pregnancy) causes the corpus luteum to atrophy and menses to begin the cycle again.
Peak fertility occurs during just a few days of the cycle: usually two days before and two days after the ovulation date.[24] This fertile window varies from woman to woman, just as the ovulation date often varies from cycle to cycle for the same woman.[25] The ovule is usually capable of being fertilized for up to 48 hours after it is released from the ovary. Sperm survive inside the uterus between 48 and 72 hours on average, with the maximum being 120 hours (5 days).
These periods and intervals are important factors for couples using the rhythm method of contraception.
The average age of menarche in the United States is about 12.5 years.[26] In postmenarchal girls, about 80% of the cycles are anovulatory (ovulation does not actually take place) in the first year after menarche, 50% in the third and 10% in the sixth year.[27]
Menopause occurs during a woman's midlife (between ages 48 and 55).[28][29] During menopause, hormonal production by the ovaries is reduced, eventually causing a permanent cessation of the primary function of the ovaries, particularly the creation of the uterine lining (period). This is considered the end of the fertile phase of a woman's life.
The following effects of age on female fertility have been found in women trying to get pregnant without using fertility drugs or in vitro fertilization (data from 1670 to 1830):[30] at age 30, 75% will get pregnant within one year, and 91% within four years; at age 35, 66% will get pregnant within one year, and 84% within four years; at age 40, 44% will get pregnant within one year, and 64% within four years.
Studies of actual couples trying to conceive have come up with higher results: one 2004 study of 770 European women found that 82% of 35- to 39-year-old women conceived within a year,[31] while another in 2013 of 2,820 Danish women saw 78% of 35- to 40-year-olds conceive within a year.[32]
The use of fertility drugs and/or in vitro fertilization can increase the chances of becoming pregnant at a later age.[33] Successful pregnancies facilitated by fertility treatment have been documented in women as old as 67.[34] Studies since 2004 now show that mammals may continue to produce new eggs throughout their lives, rather than being born with a finite number as previously thought. Researchers at the Massachusetts General Hospital in Boston, US, say that if eggs are newly created each month in humans as well, all current theories about the aging of the female reproductive system will have to be overhauled, although at this time this is simply conjecture.[35][36]
According to the March of Dimes, "about 9 percent of recognized pregnancies for women aged 20 to 24 ended in miscarriage. The risk rose to about 20 percent at age 35 to 39, and more than 50 percent by age 42".[37] Birth defects, especially those involving chromosome number and arrangement, also increase with the age of the mother. According to the March of Dimes, "At age 25, your risk of having a baby with Down syndrome is 1 in 1,340. At age 30, your risk is 1 in 940. At age 35, your risk is 1 in 353. At age 40, your risk is 1 in 85. At age 45, your risk is 1 in 35."[38]
Some research suggests that increased male age is associated with a decline in semen volume, sperm motility, and sperm morphology.[39] In studies that controlled for female age, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates between 23% and 38%.[39] It is suggested that sperm count declines with age, with men aged 50–80 years producing sperm at an average rate of 75% compared with men aged 20–50 years, and that larger differences are seen in how many of the seminiferous tubules in the testes contain mature sperm.[39]
Decline in male fertility is influenced by many factors, including lifestyle, environment and psychological factors.[41]
Some research also suggests increased risks of health problems for children of older fathers, but no clear association has been proven.[42] A large-scale study in Israel suggested that the children of men 40 or older were 5.75 times more likely than children of men under 30 to have an autism spectrum disorder, controlling for year of birth, socioeconomic status, and maternal age.[43] Increased paternal age has been suggested by some to correlate directly with schizophrenia, but this is not proven.[44][45][46][47][48]
Australian researchers have found evidence to suggest that being overweight or obese may cause subtle damage to sperm and prevent a healthy pregnancy. They say fertilization was 40% less likely to succeed when the father was overweight.[49]
The American Fertility Society recommends an age limit for sperm donors of 50 years or less,[50] and many fertility clinics in the United Kingdom will not accept donations from men over 40 or 45 years of age.[51]
The French pronatalist movement from 1919 to 1945 failed to convince French couples that they had a patriotic duty to help increase their country's birthrate. Even the government was reluctant in its support of the movement. It was only between 1938 and 1939 that the French government became directly and permanently involved in the pronatalist effort. Although the birthrate started to surge in late 1941, the trend was not sustained. A falling birthrate once again became a major concern among demographers and government officials beginning in the 1970s.[52]
From 1800 to 1940, fertility fell in the US. There was a marked decline in fertility in the early 1900s, associated with improved contraceptives, greater access to contraceptives and sexuality information, and the "first" sexual revolution in the 1920s.
After 1940 fertility suddenly started going up again, reaching a new peak in 1957. After 1960, fertility started declining rapidly. In the Baby Boom years (1946–1964), women married earlier and had their babies sooner; the number of children born to mothers after age 35 did not increase.[54]
After 1960, new methods of contraception became available, and the ideal family size fell from three to two children. Couples postponed marriage and first births, and they sharply reduced the number of third and fourth births.[55]
Infertility primarily refers to the biological inability of a person to contribute to conception. Infertility may also refer to the state of a woman who is unable to carry a pregnancy to full term. There are many biological causes of infertility, including some that medical intervention can treat.[56]
This article incorporates material from the Citizendium article "Fertility (demography)", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
en/1973.html.txt
ADDED
@@ -0,0 +1,37 @@
The buttocks (singular: buttock) are two rounded portions of the exterior anatomy of most mammals, located on the posterior of the pelvic region. In humans, the buttocks are located between the lower back and the perineum. They are composed of a layer of exterior skin and underlying subcutaneous fat superimposed on the left and right gluteus maximus and gluteus medius muscles. The two gluteus maximus muscles are the largest muscles in the human body. They are responsible for achieving the upright posture when the body is bent at the waist; maintaining the body in the upright posture by keeping the hip joints extended; and propelling the body forward via further leg (hip) extension when walking or running.[1] In the seated position, the buttocks bear the weight of the upper body and take that weight off the feet.
In many cultures, the buttocks play a role in sexual attraction.[2] Some cultures, such as that of Victorian England, have also used the buttocks as a primary target for corporal punishment,[3] as the buttocks' layer of subcutaneous fat offers protection against injury while still allowing for the infliction of pain. There are several connotations of buttocks in art, fashion, culture and humor. The English language is replete with many popular synonyms that range from polite colloquialisms ("posterior", "backside" or "bottom") to vulgar slang ("arse," "ass," "bum," "butt," "booty," "prat").
The buttocks are formed by the masses of the gluteal muscles or "glutes" (the gluteus maximus muscle and the gluteus medius muscle) superimposed by a layer of fat. The superior aspect of the buttock ends at the iliac crest, and the lower aspect is outlined by the horizontal gluteal crease. The gluteus maximus has two insertion points: the superior third of the linea aspera of the femur, and the superior portion of the iliotibial tract. The masses of the gluteus maximus muscle are separated by an intermediate intergluteal cleft or "crack" in which the anus is situated.
The buttocks allow primates to sit upright without needing to rest their weight on their feet as four-legged animals do. Females of certain species of baboon have red buttocks that blush to attract males. In the case of humans, females tend to have proportionally wider and thicker buttocks due to higher subcutaneous fat and proportionally wider hips. In humans they also have a role in propelling the body in a forward motion and aiding bowel movement.[4][5]
Some baboons and all gibbons, though otherwise fur-covered, have characteristic naked callosities on their buttocks. While human children generally have smooth buttocks, mature males and females have varying degrees of hair growth, as on other parts of their body. Females may have hair growth in the gluteal cleft (including around the anus), sometimes extending laterally onto the lower aspect of the cheeks. Males may have hair growth over some or all of the buttocks.
The English word of Greek origin "callipygian" indicates someone who has beautiful buttocks.
Depending on the context, exposure of the buttocks in non-intimate situations can cause feelings of embarrassment or humiliation, and embarrassment or amusement in an onlooker (see pantsing). Willfully exposing one's own bare buttocks as a protest, a provocation, or just for fun is called mooning.
In many punitive traditions, the buttocks are a common target for corporal punishment, which can be meted out with less risk of long-term physical harm than punishment applied to other parts of the body, such as the hands, which could easily be damaged.[6] Within the Victorian school system in England, the buttocks have been described as "the place provided by nature" for this purpose.[3] A modern-day example can be seen in some Southeast Asian countries, such as Singapore, where caning is widely used as a form of judicial corporal punishment, with male convicts being sentenced to caning on their bare buttocks.
In Western and some other cultures, many comedians, writers and others rely on the buttocks as a source of amusement, camaraderie and fun. There are numerous colloquial terms for the buttocks.
In American English, phrases use the buttocks or synonyms (especially "butt" and "ass") as a synecdoche or pars pro toto for a whole person, often with a negative connotation. For example, terminating an employee may be described as "firing his ass". One might say "move your ass" or "haul ass" as an exhortation to greater haste or urgency. Expressed as a function of punishment, defeat or assault becomes "kicking one's ass". Such phrases also may suggest a person's characteristics, e.g. difficult people are termed "hard asses". In America an annoying person or any source of frustration may be termed "a pain in the ass" (a synonym for "a pain in the neck"). People deemed excessively puritanical or proper may be termed "tight asses" (in Australia and New Zealand, "tight arse" refers to someone who is excessively miserly).
Certain physical dispositions of the buttocks—particularly size—are sometimes identified, controversially, as a racial characteristic (see race). A famous example was the case of Saartjie Baartman, the so-called "Hottentot Venus".
The Latin name for the buttocks is nates (English pronunciation /ˈneɪtiːz/ NAY-teez,[7] classical pronunciation nătes [ˈnateːs][8]) which is plural; the singular, natis (buttock), is rarely used. There are many colloquial terms to refer to them, including:
The 1880s were well known for the fashion trend among women called the bustle, which made even the smallest buttocks appear huge. The popularity of this fashion is shown in the famous Georges Seurat painting A Sunday Afternoon on the Island of La Grande Jatte in the two women to the far left and right. Like long underwear with the ubiquitous "butt flap" (used to allow baring only the bottom with a simple gesture, as for hygiene), this clothing style was acknowledged in popular media such as cartoons and comics for generations afterward.
More recently, the cleavage of the buttocks is sometimes exposed by some women, deliberately or accidentally, as fashion dictated trousers be worn lower, as with hip-hugger pants.
An example of another attitude in an otherwise hardly exhibitionist culture is the Japanese fundoshi.
Jean-Jacques Lequeu (c. 1785).
Félix Vallotton (c. 1884).
en/1974.html.txt
ADDED
@@ -0,0 +1,37 @@
The buttocks (singular: buttock) are two rounded portions of the exterior anatomy of most mammals, located on the posterior of the pelvic region. In humans, the buttocks are located between the lower back and the perineum. They are composed of a layer of exterior skin and underlying subcutaneous fat superimposed on the left and right gluteus maximus and gluteus medius muscles. The two gluteus maximus muscles are the largest muscles in the human body. They are responsible for achieving the upright posture when the body is bent at the waist; maintaining the body in the upright posture by keeping the hip joints extended; and propelling the body forward via further leg (hip) extension when walking or running.[1] In the seated position, the buttocks bear the weight of the upper body and take that weight off the feet.
In many cultures, the buttocks play a role in sexual attraction.[2] Some cultures, such as that of Victorian England, have also used the buttocks as a primary target for corporal punishment,[3] as the buttocks' layer of subcutaneous fat offers protection against injury while still allowing for the infliction of pain. There are several connotations of buttocks in art, fashion, culture and humor. The English language is replete with many popular synonyms that range from polite colloquialisms ("posterior", "backside" or "bottom") to vulgar slang ("arse," "ass," "bum," "butt," "booty," "prat").
The buttocks are formed by the masses of the gluteal muscles or "glutes" (the gluteus maximus muscle and the gluteus medius muscle) superimposed by a layer of fat. The superior aspect of the buttock ends at the iliac crest, and the lower aspect is outlined by the horizontal gluteal crease. The gluteus maximus has two insertion points: the superior third of the linea aspera of the femur, and the superior portion of the iliotibial tract. The masses of the gluteus maximus muscle are separated by an intermediate intergluteal cleft or "crack" in which the anus is situated.
The buttocks allow primates to sit upright without needing to rest their weight on their feet as four-legged animals do. Females of certain species of baboon have red buttocks that blush to attract males. In the case of humans, females tend to have proportionally wider and thicker buttocks due to higher subcutaneous fat and proportionally wider hips. In humans they also have a role in propelling the body in a forward motion and aiding bowel movement.[4][5]
Some baboons and all gibbons, though otherwise fur-covered, have characteristic naked callosities on their buttocks. While human children generally have smooth buttocks, mature males and females have varying degrees of hair growth, as on other parts of their body. Females may have hair growth in the gluteal cleft (including around the anus), sometimes extending laterally onto the lower aspect of the cheeks. Males may have hair growth over some or all of the buttocks.
The English word of Greek origin "callipygian" indicates someone who has beautiful buttocks.
Depending on the context, exposure of the buttocks in non-intimate situations can cause feelings of embarrassment or humiliation, and embarrassment or amusement in an onlooker (see pantsing). Willfully exposing one's own bare buttocks as a protest, a provocation, or just for fun is called mooning.
In many punitive traditions, the buttocks are a common target for corporal punishment, which can be meted out with less risk of long-term physical harm than punishment applied to other parts of the body, such as the hands, which could easily be damaged.[6] Within the Victorian school system in England, the buttocks have been described as "the place provided by nature" for this purpose.[3] A modern-day example can be seen in some Southeast Asian countries, such as Singapore, where caning is widely used as a form of judicial corporal punishment, with male convicts being sentenced to caning on their bare buttocks.
In Western and some other cultures, many comedians, writers and others rely on the buttocks as a source of amusement, camaraderie and fun. There are numerous colloquial terms for the buttocks.
In American English, phrases use the buttocks or synonyms (especially "butt" and "ass") as a synecdoche or pars pro toto for a whole person, often with a negative connotation. For example, terminating an employee may be described as "firing his ass". One might say "move your ass" or "haul ass" as an exhortation to greater haste or urgency. Expressed as a function of punishment, defeat or assault becomes "kicking one's ass". Such phrases also may suggest a person's characteristics, e.g. difficult people are termed "hard asses". In America an annoying person or any source of frustration may be termed "a pain in the ass" (a synonym for "a pain in the neck"). People deemed excessively puritanical or proper may be termed "tight asses" (in Australia and New Zealand, "tight arse" refers to someone who is excessively miserly).
Certain physical dispositions of the buttocks—particularly size—are sometimes identified, controversially, as a racial characteristic (see race). A famous example was the case of Saartjie Baartman, the so-called "Hottentot Venus".
The Latin name for the buttocks is nates (English pronunciation /ˈneɪtiːz/ NAY-teez,[7] classical pronunciation nătes [ˈnateːs][8]) which is plural; the singular, natis (buttock), is rarely used. There are many colloquial terms to refer to them, including:
The 1880s were well known for the fashion trend among women called the bustle, which made even the smallest buttocks appear huge. The popularity of this fashion is shown in the famous Georges Seurat painting A Sunday Afternoon on the Island of La Grande Jatte in the two women to the far left and right. Like long underwear with the ubiquitous "butt flap" (used to allow baring only the bottom with a simple gesture, as for hygiene), this clothing style was acknowledged in popular media such as cartoons and comics for generations afterward.
More recently, the cleavage of the buttocks is sometimes exposed by some women, deliberately or accidentally, as fashion dictated trousers be worn lower, as with hip-hugger pants.
An example of another attitude in an otherwise hardly exhibitionist culture is the Japanese fundoshi.
Jean-Jacques Lequeu (c. 1785).
Félix Vallotton (c. 1884).
en/1975.html.txt
ADDED
@@ -0,0 +1,67 @@
A festival is an event ordinarily celebrated by a community and centering on some characteristic aspect of that community and its religion or cultures. It is often marked as a local or national holiday, mela, or eid. Festivals constitute typical cases of glocalization, as well as of the high culture-low culture interrelationship.[1] Next to religion and folklore, a significant origin is agricultural. Food is such a vital resource that many festivals are associated with harvest time. Religious commemoration and thanksgiving for good harvests are blended in events that take place in autumn, such as Halloween in the northern hemisphere and Easter in the southern.
Festivals often serve to fulfill specific communal purposes, especially in regard to commemoration of, or thanksgiving to, the gods and goddesses. Celebrations offer a sense of belonging for religious, social, or geographical groups, contributing to group cohesiveness. They may also provide entertainment, which was particularly important to local communities before the advent of mass-produced entertainment. Festivals that focus on cultural or ethnic topics also seek to inform community members of their traditions; the involvement of elders sharing stories and experience provides a means for unity among families.
In Ancient Greece and Rome, festivals such as the Saturnalia were closely associated with social organisation and political processes as well as religion.[2][3][4] In modern times, festivals may be attended by strangers such as tourists, who are attracted to some of the more eccentric or historical ones. The Philippines is one example of a modern society with many festivals, as each day of the year has at least one specific celebration. There are more than 42,000 known major and minor festivals in the country, most of which are specific to the barangay (village) level.[5]
The word "festival" was originally used as an adjective from the late fourteenth century, deriving from Latin via Old French.[6] In Middle English, a "festival dai" was a religious holiday.[7] Its first recorded used as a noun was in 1589 (as "Festifall").[6] Feast first came into usage as a noun circa 1200,[8] and its first recorded use as a verb was circa 1300.[9] The term "feast" is also used in common secular parlance as a synonym for any large or elaborate meal. When used as in the meaning of a festival, most often refers to a religious festival rather than a film or art festival. In the Philippines and many other former Spanish colonies, the Spanish word fiesta is used to denote a communal religious feast to honor a patron saint.[citation needed]
The word gala comes from the Arabic word khil'a, meaning robe of honor.[10] It was initially used to describe "festive dress" but became a synonym of festival starting in the 18th century.[11]
Many festivals have religious origins and entwine cultural and religious significance in traditional activities. The most important religious festivals such as Christmas, Rosh Hashanah, Diwali, Eid al-Fitr and Eid al-Adha serve to mark out the year. Others, such as harvest festivals, celebrate seasonal change. Events of historical significance, such as important military victories or other nation-building events also provide the impetus for a festival. An early example is the festival established by Ancient Egyptian Pharaoh Ramesses III celebrating his victory over the Libyans.[12] In many countries, royal holidays commemorate dynastic events just as agricultural holidays are about harvests. Festivals are often commemorated annually.
There are numerous types of festivals in the world and most countries celebrate important events or traditions with traditional cultural events and activities. Most culminate in the consumption of specially prepared food (showing the connection to "feasting") and they bring people together. Festivals are also strongly associated with national holidays. Lists of national festivals are published to make participation easier.[13]
Among many religions, a feast is a set of celebrations in honour of God or gods.[14] A feast and a festival are historically interchangeable. Most religions have festivals that recur annually and some, such as Passover, Easter and Eid al-Adha, are moveable feasts – that is, those that are determined either by lunar or agricultural cycles or the calendar in use at the time. The Sed festival, for example, celebrated the thirtieth year of an Egyptian pharaoh's rule and then every three (or four in one case) years after that.[15] Among the Ashanti, most traditional festivals are linked to gazetted sites which are believed to be sacred, with several rich biological resources in their pristine forms. Thus, the annual commemoration of the festivals helps in maintaining the buoyancy of the conserved natural sites, assisting in biodiversity conservation.[16]
In the Christian liturgical calendar, there are two principal feasts, properly known as the Feast of the Nativity of our Lord (Christmas) and the Feast of the Resurrection (Easter). In the Catholic, Eastern Orthodox, and Anglican liturgical calendars there are a great number of lesser feasts throughout the year commemorating saints, sacred events or doctrines. In the Philippines, each day of the year has at least one specific religious festival, either of Catholic, Islamic, or indigenous origin.[citation needed]
Buddhist religious festivals, such as Esala Perahera are held in Sri Lanka and Thailand.[17] Hindu festivals, such as Holi are very ancient. The Sikh community celebrates the Vaisakhi festival marking the new year and birth of the Khalsa.[18]
Cleaning in preparation for Passover (c.1320)
Radha celebrating Holi, Kangra, India (c. 1788)
A Christmas mass at the Church of the Nativity, in Bethlehem, Palestine (1979)
Moors and Christian festival in Villena, Spain
Decoration of god Krishna on Krishnashtami in India.
Among the many offspring of general arts festivals are also more specific types of festivals, including ones that showcase intellectual or creative achievement such as science festivals, literary festivals and music festivals.[19] Sub-categories include comedy festivals, rock festivals, jazz festivals and buskers festivals; poetry festivals,[20] theatre festivals, and storytelling festivals; and re-enactment festivals such as Renaissance fairs. In the Philippines, aside from numerous art festivals scattered throughout the year, February is known as national arts month, the culmination of all art festivals in the entire archipelago.[21]
Film festivals involve the screenings of several different films, and are usually held annually. Some of the most significant film festivals include the Berlin International Film Festival, the Venice Film Festival and the Cannes Film Festival.
Pushkin Poetry Festival, Russia
Television studio at the Hôtel Martinez during the Cannes Film Festival, France (2006)
The opening ceremony at the Woodstock rock festival, United States (1969)
A food festival is an event celebrating food or drink. These often highlight the output of producers from a certain region. Some food festivals are focused on a particular item of food, such as the National Peanut Festival in the United States, or the Galway International Oyster Festival in Ireland. There are also specific beverage festivals, such as the famous Oktoberfest in Germany for beer. Many countries hold festivals to celebrate wine. One example is the global celebration of the arrival of Beaujolais nouveau, which involves shipping the new wine around the world for its release date on the third Thursday of November each year.[22][23] Both Beaujolais nouveau and the Japanese rice wine sake are associated with harvest time. In the Philippines, there are at least two hundred festivals dedicated to food and drinks.[citation needed]
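The release-date rule for Beaujolais nouveau is easy to compute. As a minimal illustrative sketch (Python, standard library only), the third Thursday of November for any given year can be found as follows:

```python
import datetime

def beaujolais_nouveau_release(year: int) -> datetime.date:
    """Return the third Thursday of November for the given year."""
    first = datetime.date(year, 11, 1)
    # weekday(): Monday == 0 ... Thursday == 3; days until the first Thursday
    offset = (3 - first.weekday()) % 7
    # the first Thursday plus two further weeks is the third Thursday
    return first + datetime.timedelta(days=offset + 14)

print(beaujolais_nouveau_release(2020))  # 2020-11-19
```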
Soweto Wine Festival, South Africa (2009)
Holi Nepal (2011)
La Tomatina, Spain (2010)
Beer horse cart from the Hofbräuhaus brewery at Oktoberfest Germany (2013)
Seasonal festivals, such as Beltane, are determined by the solar and the lunar calendars and by the cycle of the seasons, especially because of its effect on food supply, as a result of which there is a wide range of ancient and modern harvest festivals. Ancient Egyptians relied upon the seasonal inundation caused by the Nile River, a form of irrigation, which provided fertile land for crops.[24] In the Alps, in autumn the return of the cattle from the mountain pastures to the stables in the valley is celebrated as Almabtrieb. A recognized winter festival, the Chinese New Year, is set by the lunar calendar, and celebrated from the day of the second new moon after the winter solstice. Dree Festival of the Apatanis living in Lower Subansiri District of Arunachal Pradesh is celebrated every year from July 4 to 7 by praying for a bumper crop harvest.[25]
Midsummer or St John's Day is an example of a seasonal festival, related to the feast day of a Christian saint as well as a celebration of the time of the summer solstice in the northern hemisphere, where it is particularly important in Sweden. Winter carnivals also provide the opportunity to celebrate creative or sporting activities requiring snow and ice. In the Philippines, each day of the year has at least one festival dedicated to the harvesting of crops, fishes, crustaceans, milk, and other local goods.[citation needed]
Temple Festival in India
Château de Montsoreau-Museum of Contemporary Art Sky lantern Festival, in Loire Valley
Midsummer dance by Anders Zorn, Sweden (1897)
Tanabata summer festival in Sendai, Japan
Grand Parade at the Sydney Royal Easter Show, Australia (2009)
Halloween pumpkins show the close relationship between a harvest and religious festivals
en/1976.html.txt
ADDED
@@ -0,0 +1,7 @@
A national day is a day on which celebrations mark the nationhood of a nation or state. It may be the date of independence, of becoming a republic, or a significant date for a patron saint or a ruler (such as a birthday, accession, or removal). The national day is often a public holiday. Many countries have more than one national day.
Nations that are not broadly recognized sovereign states are shown in pink. For nations that are dependent on, or part of, a sovereign state (such as federal states, autonomous regions, or colonies), the name of the sovereign state is shown in parentheses.
Days that are not fixed to the Gregorian calendar are sorted by their 2019 occurrences.
en/1977.html.txt
ADDED
@@ -0,0 +1,7 @@
A national day is a day on which celebrations mark the nationhood of a nation or state. It may be the date of independence, of becoming a republic, or a significant date for a patron saint or a ruler (such as a birthday, accession, or removal). The national day is often a public holiday. Many countries have more than one national day.
Nations that are not broadly recognized sovereign states are shown in pink. For nations that are dependent on, or part of, a sovereign state (such as federal states, autonomous regions, or colonies), the name of the sovereign state is shown in parentheses.
Days that are not fixed to the Gregorian calendar are sorted by their 2019 occurrences.
en/1978.html.txt
ADDED
@@ -0,0 +1,63 @@
Fire is the rapid oxidation of a material in the exothermic chemical process of combustion, releasing heat, light, and various reaction products.[1][a]
Fire is hot because the conversion of the weak double bond in molecular oxygen, O2, to the stronger bonds in the combustion products carbon dioxide and water releases energy (418 kJ per 32 g of O2); the bond energies of the fuel play only a minor role here.[2] At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma.[3] Depending on the substances alight, and any impurities outside, the color of the flame and the fire's intensity will be different.
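The figure of 418 kJ per 32 g of O2 translates into roughly 13 kJ per gram of oxygen consumed. A minimal arithmetic sketch in Python (methane is an assumed example fuel, not one singled out by the text):

```python
# Energy release figure from the text: 418 kJ per mole (32 g) of O2 consumed.
E_PER_MOL_O2 = 418.0  # kJ/mol
M_O2 = 32.0           # g/mol
print(f"per gram of O2: {E_PER_MOL_O2 / M_O2:.1f} kJ")  # ~13.1 kJ/g

# Illustrative cross-check with methane: CH4 + 2 O2 -> CO2 + 2 H2O,
# i.e. 2 mol O2 consumed per 16 g of CH4.
e_per_g_ch4 = 2 * E_PER_MOL_O2 / 16.0
print(f"per gram of CH4: {e_per_g_ch4:.0f} kJ")  # ~52 kJ/g, close to methane's
# tabulated heat of combustion of roughly 50-55 kJ/g
```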
Fire in its most common form can result in conflagration, which has the potential to cause physical damage through burning. Fire is an important process that affects ecological systems around the globe. The positive effects of fire include stimulating growth and maintaining various ecological systems.
Its negative effects include hazard to life and property, atmospheric pollution, and water contamination.[4] If fire removes protective vegetation, heavy rainfall may lead to an increase in soil erosion by water.[5] Also, when vegetation is burned, the nitrogen it contains is released into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, but this fecundity can potentially be recovered as molecular nitrogen in the atmosphere is "fixed" and converted to ammonia by natural phenomena such as lightning and by leguminous plants that are "nitrogen-fixing" such as clover, peas, and green beans.
Fire has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction.
Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst: a substance that is not itself consumed in any chemical reaction during combustion, but which enables the reactants to combust more readily.
Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel.
If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight.[6][7] This does not apply if oxygen is supplied to the fire by some process other than thermal convection.
Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner. The fire can be extinguished by any of the following:
In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire's own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react.
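What "stoichiometric proportions" means can be made concrete with a short sketch (again taking methane as an assumed example fuel, and a typical value of about 23% oxygen by mass in air):

```python
# Stoichiometric mixing for methane: CH4 + 2 O2 -> CO2 + 2 H2O
M_CH4, M_O2 = 16.0, 32.0      # molar masses, g/mol
O2_MASS_FRACTION_AIR = 0.232  # oxygen is ~23.2% of air by mass

o2_per_fuel = 2 * M_O2 / M_CH4                # grams of O2 per gram of CH4
air_per_fuel = o2_per_fuel / O2_MASS_FRACTION_AIR

print(f"O2:fuel mass ratio  = {o2_per_fuel:.1f}:1")   # 4.0:1
print(f"air:fuel mass ratio = {air_per_fuel:.1f}:1")  # ~17.2:1
```

A mix leaner or richer than this ratio leaves unreacted oxidizer or fuel and burns cooler, which is why matching the proportions intensifies the fire.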
A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of "fire". This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single-wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in mid-20th century as a high energy fuel for jet and rocket engines, emits intense green flame, leading to its informal nickname of "Green Dragon".
The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra. The dominant color in a flame changes with temperature. The photo of the forest fire in Canada is an excellent example of this variation. Near the ground, where most burning is occurring, the fire is white, the hottest color possible for organic material in general, or yellow. Above the yellow region, the color changes to orange, which is cooler, then red, which is cooler still. Above the red region, combustion no longer occurs, and the uncombusted carbon particles are visible as black smoke.
The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of a general flame, as in a candle in normal gravity conditions, making it yellow. In micro gravity or zero gravity,[8] such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in micro gravity, and tends to smother the flame). There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs.[9] Experiments by NASA reveal that diffusion flames in micro gravity allow more soot to be completely oxidized after they are produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in micro gravity when compared to normal gravity conditions.[10] These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency.
In combustion engines, various steps are taken to eliminate a flame. The method depends mainly on whether the fuel is oil, wood, or a high-energy fuel such as jet fuel.
It is true that objects at specific temperatures do radiate visible light. Objects whose surface is at a temperature above approximately 470 °C (878 °F) will glow, emitting light at a color that indicates the temperature of that surface. See the section on red heat for more about this effect. It is a misconception that one can judge the temperature of a fire by the color of its flames or the sparks in the flames. For many reasons, chemically and optically, these colors may not match the red/orange/yellow/white heat temperatures on the chart. Barium nitrate burns a bright green, for instance, and this is not present on the heat chart.
The "adiabatic flame temperature" of a given fuel and oxidizer pair indicates the temperature at which the gases achieve stable combustion.
Every natural ecosystem has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession.[12] Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape.
The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, 470 million years ago,[13] permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire.[14] Wildfire is first recorded in the Late Silurian fossil record, 420 million years ago, by fossils of charcoalified plants.[15][16] Apart from a controversial gap in the Late Devonian, charcoal is present ever since.[16] The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire.[17] Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around 6 to 7 million years ago;[18] this kindling provided tinder which allowed for the more rapid spread of fire.[17] These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.[17]
The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing organisms in the food.[19] The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of cooked food is found from 1.9 million years ago,[dubious – discuss] although fire was probably not used in a controlled fashion until 400,000 years ago.[20] There is some evidence that fire may have been used in a controlled fashion about 1 million years ago.[21][22] Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; interestingly, resistance to air pollution started to evolve in human populations at a similar point in time.[20] The use of fire became progressively more sophisticated, with it being used to create charcoal and to control wildlife from 'tens of thousands' of years ago.[20]
Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer.
By the Neolithic Revolution,[citation needed] during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or "cool fires",[citation needed] as opposed to uncontrolled "hot fires", which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable. Another human use for fire in regards to landscape management is its use to clear land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. "For small farmers, it is a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil", said Miguel Pinedo-Vasquez, an ecologist at the Earth Institute’s Center for Environmental Research and Conservation.[23] However this useful strategy is also problematic. Growing population, fragmentation of forests and warming climate are making the earth's surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up spirals of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.[23]
There are numerous modern applications of fire. In its broadest sense, fire is used by nearly every human being on earth in a controlled setting every day. Users of internal combustion vehicles employ fire every time they drive. Thermal power stations provide electricity for a large percentage of humanity.
The use of fire in warfare has a long history. Fire was the basis of all early thermal weapons. Homer detailed the use of fire by Greek soldiers who hid in a wooden horse to burn Troy during the Trojan war. Later the Byzantine fleet used Greek fire to attack ships and men. In the First World War, the first modern flamethrowers were used by infantry, and were successfully mounted on armoured vehicles in the Second World War. In the latter war, incendiary bombs were used by Axis and Allies alike, notably on Tokyo, Rotterdam, London, Hamburg and, notoriously, at Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city[citation needed] was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Force also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. Napalm was employed in July 1944, towards the end of the Second World War,[25] although its use did not gain public attention until the Vietnam War.[25] Molotov cocktails were also used.
Setting fuel aflame releases usable energy. Wood was a prehistoric fuel, and is still viable today. The use of fossil fuels, such as petroleum, natural gas, and coal, in power plants supplies the vast majority of the world's electricity today; the International Energy Agency states that nearly 80% of the world's power came from these sources in 2002.[27] The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly, in both external and internal combustion engines.
The unburnable solid remains of a combustible material left after a fire is called clinker if its melting point is below the flame temperature, so that it fuses and then solidifies as it cools, and ash if its melting point is above the flame temperature.
Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns.[28][29] Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.[30]
Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus and water supply resources such as water mains and fire hydrants, or they may use class A or class B foam, depending on what is feeding the fire.
Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires.[31] Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions.[32]
Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers. To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested.
Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact.[33]
Different restoration methods and measures are used depending on the type of fire damage that occurred. Restoration after fire damage can be performed by property management teams, building maintenance personnel, or by the homeowners themselves; however, contacting a certified professional fire damage restoration specialist is often regarded as the safest way to restore fire damaged property due to their training and extensive experience.[34] Most are usually listed under "Fire and Water Restoration" and they can help speed repairs, whether for individual homeowners or for the largest of institutions.[35]
Fire and Water Restoration companies are regulated by the appropriate state's Department of Consumer Affairs – usually the state contractors license board. In California, all Fire and Water Restoration companies must register with the California Contractors State License Board.[36] Presently, the California Contractors State License Board has no specific classification for "water and fire damage restoration." Hence, the Contractor's State License Board requires both an asbestos certification (ASB) as well as a demolition classification (C-21) in order to perform Fire and Water Restoration work.[37]
en/1979.html.txt
ADDED
@@ -0,0 +1,173 @@
A leaf (plural leaves) is the principal lateral appendage of the vascular plant stem,[1] usually borne above ground and specialized for photosynthesis. The leaves and stem together form the shoot.[2] Leaves are collectively referred to as foliage, as in "autumn foliage".[3][4] In most leaves, the primary photosynthetic tissue, the palisade mesophyll, is located on the upper side of the blade or lamina of the leaf[1] but in some species, including the mature foliage of Eucalyptus,[5] palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll that is essential for photosynthesis as it absorbs light energy from the sun. A leaf with white patches or edges is called a variegated leaf.
Leaves can have many different shapes, sizes, and textures. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants. In the clubmosses, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls.[6] Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems called phyllodes which differ from leaves both in their structure and origin.[4][7] Some structures of non-vascular plants look and function much like leaves. Examples include the phyllids of mosses and liverworts.
Leaves are the most important organs of most vascular plants.[8] Green plants are autotrophic, meaning that they do not obtain food from other living things but instead create their own food by photosynthesis. They capture the energy in sunlight and use it to make simple sugars, such as glucose and sucrose, from carbon dioxide and water. The sugars are then stored as starch, further processed by chemical synthesis into more complex organic molecules such as proteins or cellulose, the basic structural material in plant cell walls, or metabolized by cellular respiration to provide chemical energy to run cellular processes. The leaves draw water from the ground in the transpiration stream through a vascular conducting system known as xylem and obtain carbon dioxide from the atmosphere by diffusion through openings called stomata in the outer covering layer of the leaf (epidermis), while leaves are orientated to maximize their exposure to sunlight. Once sugar has been synthesized, it needs to be transported to areas of active growth such as the plant shoots and roots. Vascular plants transport sucrose in a special tissue called the phloem. The phloem and xylem are parallel to each other, but the transport of materials is usually in opposite directions. Within the leaf these vascular systems branch (ramify) to form veins which supply as much of the leaf as possible, ensuring that cells carrying out photosynthesis are close to the transportation system.[9]
Typically leaves are broad, flat and thin (dorsiventrally flattened), thereby maximising the surface area directly exposed to light and enabling the light to penetrate the tissues and reach the chloroplasts, thus promoting photosynthesis. They are arranged on the plant so as to expose their surfaces to light as efficiently as possible without shading each other, but there are many exceptions and complications. For instance, plants adapted to windy conditions may have pendent leaves, such as in many willows and eucalypts. The flat, or laminar, shape also maximizes thermal contact with the surrounding air, promoting cooling. Functionally, in addition to carrying out photosynthesis, the leaf is the principal site of transpiration, providing the energy required to draw the transpiration stream up from the roots, and guttation.
Many gymnosperms have thin needle-like or scale-like leaves that can be advantageous in cold climates with frequent snow and frost.[10] These are interpreted as reduced from megaphyllous leaves of their Devonian ancestors.[6] Some leaf forms are adapted to modulate the amount of light they absorb to avoid or mitigate excessive heat, ultraviolet damage, or desiccation, or to sacrifice light-absorption efficiency in favor of protection from herbivory. For xerophytes the major constraint is not light flux or intensity, but drought.[11] Some window plants, such as Fenestraria species, some Haworthia species such as Haworthia tesselata and Haworthia truncata, and Bulbine mesembryanthemoides, are examples of xerophytes.[12][13]
Leaves also function to store chemical energy and water (especially in succulents) and may become specialized organs serving other functions, such as tendrils of peas and other legumes, the protective spines of cacti and the insect traps in carnivorous plants such as Nepenthes and Sarracenia.[14] Leaves are the fundamental structural units from which cones are constructed in gymnosperms (each cone scale is a modified megaphyll leaf known as a sporophyll)[6]:408 and from which flowers are constructed in flowering plants.[6]:445
The internal organization of most kinds of leaves has evolved to maximize exposure of the photosynthetic organelles, the chloroplasts, to light and to increase the absorption of carbon dioxide while at the same time controlling water loss. Their surfaces are waterproofed by the plant cuticle, and gas exchange between the mesophyll cells and the atmosphere is controlled by minute (length and width measured in tens of µm) openings called stomata which open or close to regulate the rate of exchange of carbon dioxide, oxygen, and water vapor into and out of the internal intercellular space system. Stomatal opening is controlled by the turgor pressure in a pair of guard cells that surround the stomatal aperture. In any square centimeter of a plant leaf, there may be from 1,000 to 100,000 stomata.[15]
The shape and structure of leaves vary considerably from species to species of plant, depending largely on their adaptation to climate and available light, but also to other factors such as grazing animals (such as deer), available nutrients, and ecological competition from other plants. Considerable changes in leaf type occur within species, too, for example as a plant matures; as a case in point Eucalyptus species commonly have isobilateral, pendent leaves when mature and dominating their neighbors; however, such trees tend to have erect or horizontal dorsiventral leaves as seedlings, when their growth is limited by the available light.[16] Other factors include the need to balance water loss at high temperature and low humidity against the need to absorb atmospheric carbon dioxide. In most plants, leaves also are the primary organs responsible for transpiration and guttation (beads of fluid forming at leaf margins).
Leaves can also store food and water, and are modified accordingly to meet these functions, for example in the leaves of succulent plants and in bulb scales. The concentration of photosynthetic structures in leaves requires that they be richer in protein, minerals, and sugars than, say, woody stem tissues. Accordingly, leaves are prominent in the diet of many animals.
Correspondingly, leaves represent heavy investment on the part of the plants bearing them, and their retention or disposition are the subject of elaborate strategies for dealing with pest pressures, seasonal conditions, and protective measures such as the growth of thorns and the production of phytoliths, lignins, tannins and poisons.
Deciduous plants in frigid or cold temperate regions typically shed their leaves in autumn, whereas in areas with a severe dry season, some plants may shed their leaves until the dry season ends. In either case, the shed leaves may be expected to contribute their retained nutrients to the soil where they fall.
In contrast, many other non-seasonal plants, such as palms and conifers, retain their leaves for long periods; Welwitschia retains its two main leaves throughout a lifetime that may exceed a thousand years.
The leaf-like organs of bryophytes (e.g., mosses and liverworts), known as phyllids, differ morphologically from the leaves of vascular plants in that they lack vascular tissue, are usually only a single cell thick, and have no cuticle, stomata, or internal system of intercellular spaces. The leaves of bryophytes are only present on the gametophytes, while in contrast the leaves of vascular plants are only present on the sporophytes, and are associated with buds (immature shoot systems in the leaf axils). These can further develop into either vegetative or reproductive structures.[14]
Simple, vascularized leaves (microphylls), such as those of the early Devonian lycopsid Baragwanathia, first evolved as enations, extensions of the stem. True leaves or euphylls of larger size and with more complex venation did not become widespread in other groups until the Devonian period, by which time the carbon dioxide concentration in the atmosphere had dropped significantly. This occurred independently in several separate lineages of vascular plants, in progymnosperms like Archaeopteris, in Sphenopsida, ferns and later in the gymnosperms and angiosperms. Euphylls are also referred to as macrophylls or megaphylls (large leaves).[6]
A structurally complete leaf of an angiosperm consists of a petiole (leaf stalk), a lamina (leaf blade), stipules (small structures located to either side of the base of the petiole) and a sheath. Not every species produces leaves with all of these structural components. The proximal stalk or petiole is called a stipe in ferns. The lamina is the expanded, flat component of the leaf which contains the chloroplasts. The sheath is a structure, typically at the base, that fully or partially clasps the stem above the node where the latter is attached. Leaf sheaths typically occur in grasses and Apiaceae (umbellifers). Between the sheath and the lamina, there may be a pseudopetiole, a petiole-like structure. Pseudopetioles occur in some monocotyledons including bananas, palms and bamboos.[18] Stipules may be conspicuous (e.g. beans and roses), soon falling or otherwise not obvious as in Moraceae, or absent altogether as in the Magnoliaceae. A petiole may be absent (apetiolate), or the blade may not be laminar (flattened). The tremendous variety shown in leaf structure (anatomy) from species to species is presented in detail below under morphology. The petiole mechanically links the leaf to the plant and provides the route for transfer of water and sugars to and from the leaf. The lamina is typically the location of the majority of photosynthesis. The upper (adaxial) angle between a leaf and a stem is known as the axil of the leaf. It is often the location of a bud. Structures located there are called "axillary".
|
32 |
+
|
33 |
+
External leaf characteristics, such as shape, margin, hairs, the petiole, and the presence of stipules and glands, are frequently important for identifying plants to family, genus or species levels, and botanists have developed a rich terminology for describing leaf characteristics. Leaves almost always have determinate growth. They grow to a specific pattern and shape and then stop. Other plant parts like stems or roots have non-determinate growth, and will usually continue to grow as long as they have the resources to do so.
The type of leaf is usually characteristic of a species (monomorphic), although some species produce more than one type of leaf (dimorphic or polymorphic). The longest leaves are those of the Raffia palm, R. regalis, which may be up to 25 m (82 ft) long and 3 m (9.8 ft) wide.[19] The terminology associated with the description of leaf morphology is presented, in illustrated form, at Wikibooks.
Where leaves are basal, and lie on the ground, they are referred to as prostrate.
Different terms are usually used to describe the arrangement of leaves on the stem (phyllotaxis):
As a stem grows, leaves tend to appear arranged around the stem in a way that optimizes yield of light. In essence, leaves form a helix pattern centered around the stem, either clockwise or counterclockwise, with (depending upon the species) the same angle of divergence. There is a regularity in these angles and they follow the numbers in a Fibonacci sequence: 1/2, 2/3, 3/5, 5/8, 8/13, 13/21, 21/34, 34/55, 55/89. This series tends to the golden angle, which is approximately 360° × 34/89 ≈ 137.52° ≈ 137° 30′. In the series, the numerator indicates the number of complete turns or "gyres" until a leaf arrives at the initial position and the denominator indicates the number of leaves in the arrangement. This can be demonstrated by the following:
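One way to make the limit explicit (a sketch using the golden ratio φ = (1 + √5)/2, a standard phyllotaxis identity rather than something stated in the text above): the fraction 34/89 is a ratio of Fibonacci numbers two positions apart (F_9/F_11), and such ratios converge to 1/φ², so that

\lim_{n \to \infty} \frac{F_n}{F_{n+2}} = \frac{1}{\varphi^{2}} \approx 0.38197, \qquad 360^\circ \times \frac{1}{\varphi^{2}} \approx 137.508^\circ \approx 137^\circ 30',

in agreement with the approximation 360° × 34/89 ≈ 137.53° quoted above.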
Two basic forms of leaves can be described considering the way the blade (lamina) is divided. A simple leaf has an undivided blade; the blade may be dissected to form lobes, but the gaps between the lobes do not reach the main vein. A compound leaf has a fully subdivided blade, each leaflet of the blade being separated along a main or secondary vein. The leaflets may have petiolules and stipels, the equivalents of the petioles and stipules of leaves. Because each leaflet can appear to be a simple leaf, it is important to recognize where the petiole occurs to identify a compound leaf. Compound leaves are a characteristic of some families of higher plants, such as the Fabaceae. The middle vein of a compound leaf or a frond, when it is present, is called a rachis.
Petiolated leaves have a petiole (leaf stalk), and are said to be petiolate.
Sessile (epetiolate) leaves have no petiole and the blade attaches directly to the stem. Subpetiolate leaves are nearly petiolate or have an extremely short petiole and may appear to be sessile.
In clasping or decurrent leaves, the blade partially surrounds the stem.
When the leaf base completely surrounds the stem, the leaves are said to be perfoliate, such as in Eupatorium perfoliatum.
In peltate leaves, the petiole attaches to the blade inside the blade margin.
In some Acacia species, such as the koa tree (Acacia koa), the petioles are expanded or broadened and function like leaf blades; these are called phyllodes. There may or may not be normal pinnate leaves at the tip of the phyllode.
A stipule, present on the leaves of many dicotyledons, is an appendage on each side at the base of the petiole, resembling a small leaf. Stipules may be lasting and not be shed (a stipulate leaf, such as in roses and beans), or be shed as the leaf expands, leaving a stipule scar on the twig (an exstipulate leaf).
The position, arrangement, and structure of the stipules are collectively called the "stipulation".
Veins (sometimes referred to as nerves) constitute one of the more visible leaf traits or characteristics. The veins in a leaf represent the vascular structure of the organ, extending into the leaf via the petiole and providing transportation of water and nutrients between leaf and stem; they play a crucial role in the maintenance of leaf water status and photosynthetic capacity, and also in the mechanical support of the leaf.[20][21] Within the lamina of the leaf, while some vascular plants possess only a single vein, in most this vasculature generally divides (ramifies) according to a variety of patterns (venation) and forms cylindrical bundles, usually lying in the median plane of the mesophyll, between the two layers of epidermis.[22] This pattern is often specific to taxa; angiosperms possess two main types, parallel and reticulate (net-like). In general, parallel venation is typical of monocots, while reticulate is more typical of eudicots and magnoliids ("dicots"), though there are many exceptions.[23][22][24]
The vein or veins entering the leaf from the petiole are called primary or first order veins. The veins branching from these are secondary or second order veins. These primary and secondary veins are considered major veins or lower order veins, though some authors include third order.[25] Each subsequent branching is sequentially numbered, and these are the higher order veins, each branching being associated with a narrower vein diameter.[26] In parallel veined leaves, the primary veins run parallel and equidistant to each other for most of the length of the leaf and then converge or fuse (anastomose) towards the apex. Usually, many smaller minor veins interconnect these primary veins, but may terminate with very fine vein endings in the mesophyll. Minor veins are more typical of angiosperms, which may have as many as four higher orders.[25] In contrast, in leaves with reticulate venation there is a single (sometimes more than one) primary vein in the centre of the leaf, referred to as the midrib or costa, which is continuous with the vasculature of the petiole more proximally. The midrib then branches into a number of smaller secondary veins, also known as second order veins, that extend toward the leaf margins. These often terminate in a hydathode, a secretory organ, at the margin. In turn, smaller veins branch from the secondary veins, known as tertiary or third order (or higher order) veins, forming a dense reticulate pattern. The areas or islands of mesophyll lying between the higher order veins are called areoles. Some of the smallest veins (veinlets) may have their endings in the areoles, a process known as areolation.[26] These minor veins act as the sites of exchange between the mesophyll and the plant's vascular system.[21] Thus, minor veins collect the products of photosynthesis (photosynthate) from the cells where it takes place, while major veins are responsible for its transport outside of the leaf. At the same time water is being transported in the opposite direction.[27][23][22]
The number of vein endings is very variable, as is whether second order veins end at the margin or link back to other veins.[24] There are many elaborate variations on the patterns that the leaf veins form, and these have functional implications. Of these, angiosperms have the greatest diversity.[25] Within these, the major veins function as the support and distribution network for leaves and are correlated with leaf shape. For instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base, reticulate venation is seen in simple entire leaves, and digitate leaves typically have venation in which three or more primary veins diverge radially from a single point.[28][21][26][29]
In evolutionary terms, early emerging taxa tend to have dichotomous branching, with reticulate systems emerging later. Veins appeared in the Permian period (299–252 mya), prior to the appearance of angiosperms in the Triassic (252–201 mya), during which vein hierarchy appeared, enabling higher function, larger leaf size and adaptation to a wider variety of climatic conditions.[25] Although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. A pseudo-reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some Melanthiaceae, which are monocots; e.g., Paris quadrifolia (True-lover's Knot). In leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves.[30]
Leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. Many leaves are covered in trichomes (small hairs) which have diverse structures and functions.
The major tissue systems present are the epidermis, the mesophyll (ground tissue) and the vascular tissue (the veins), each described in turn below.
These three tissue systems typically form a regular organization at the cellular scale. Specialized cells that differ markedly from surrounding cells, and which often synthesize specialized products such as crystals, are termed idioblasts.[31]
[Figure: cross-section of a leaf, with the epidermal cells and spongy mesophyll cells labeled]
The epidermis is the outer layer of cells covering the leaf. It is covered with a waxy cuticle which is impermeable to liquid water and water vapor and forms the boundary separating the plant's inner cells from the external world. The cuticle is in some cases thinner on the lower epidermis than on the upper epidermis, and is generally thicker on leaves from dry climates as compared with those from wet climates.[32] The epidermis serves several functions: protection against water loss by way of transpiration, regulation of gas exchange and secretion of metabolic compounds. Most leaves show dorsoventral anatomy: The upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions.
The epidermis tissue includes several differentiated cell types: epidermal cells, epidermal hair cells (trichomes), and the cells of the stomatal complex, namely guard cells and subsidiary cells. The epidermal cells are the most numerous, largest, and least specialized and form the majority of the epidermis. They are typically more elongated in the leaves of monocots than in those of dicots.
Chloroplasts are generally absent in epidermal cells, the exception being the guard cells of the stomata. The stomatal pores perforate the epidermis and are surrounded on each side by chloroplast-containing guard cells, and two to four subsidiary cells that lack chloroplasts, forming a specialized cell group known as the stomatal complex. The opening and closing of the stomatal aperture is controlled by the stomatal complex and regulates the exchange of gases and water vapor between the outside air and the interior of the leaf. Stomata therefore play an important role in allowing photosynthesis without letting the leaf dry out. In a typical leaf, the stomata are more numerous over the abaxial (lower) epidermis than the adaxial (upper) epidermis and are more numerous in plants from cooler climates.
Most of the interior of the leaf between the upper and lower layers of epidermis is a parenchyma (ground tissue) or chlorenchyma tissue called the mesophyll (Greek for "middle leaf"). This assimilation tissue is the primary location of photosynthesis in the plant. The products of photosynthesis are called "assimilates".
In ferns and most flowering plants, the mesophyll is divided into two layers: an upper palisade layer of tightly packed, vertically elongated cells and a lower spongy layer of irregularly shaped cells separated by large intercellular air spaces.
Leaves are normally green, due to chlorophyll in chloroplasts in the mesophyll cells. Plants that lack chlorophyll cannot photosynthesize.
The veins are the vascular tissue of the leaf and are located in the spongy layer of the mesophyll. The pattern of the veins is called venation. In angiosperms the venation is typically parallel in monocotyledons and forms an interconnecting network in broad-leaved plants. They were once thought to be typical examples of pattern formation through ramification, but they may instead exemplify a pattern formed in a stress tensor field.[33][34][35]
A vein is made up of a vascular bundle. At the core of each bundle are clusters of two distinct types of conducting cells: xylem and phloem.
The xylem typically lies on the adaxial side of the vascular bundle and the phloem typically lies on the abaxial side. Both are embedded in a dense parenchyma tissue, called the sheath, which usually includes some structural collenchyma tissue.
According to Agnes Arber's partial-shoot theory of the leaf, leaves are partial shoots,[36] being derived from leaf primordia of the shoot apex. Early in development they are dorsiventrally flattened with both dorsal and ventral surfaces.[14] Compound leaves are closer to shoots than simple leaves. Developmental studies have shown that compound leaves, like shoots, may branch in three dimensions.[37][38] On the basis of molecular genetics, Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties."[39]
Plants respond and adapt to environmental factors, such as light and mechanical stress from wind. Leaves need to support their own mass and align themselves in such a way as to optimize their exposure to the sun, generally more or less horizontally. However, horizontal alignment maximizes exposure to bending forces and failure from stresses such as wind, snow, hail, falling debris, animals, and abrasion from surrounding foliage and plant structures. Overall, leaves are relatively flimsy compared with other plant structures such as stems, branches and roots.[40]
Both leaf blade and petiole structure influence the leaf's response to forces such as wind, allowing a degree of repositioning to minimize drag and damage, as opposed to resistance. Leaf movement like this may also increase turbulence of the air close to the surface of the leaf, which thins the boundary layer of air immediately adjacent to the surface, increasing the capacity for gas and heat exchange, as well as photosynthesis. Strong wind forces may result in diminished leaf number and surface area, which, while reducing drag, involves a trade-off of also reducing photosynthesis. Thus, leaf design may involve compromise between carbon gain, thermoregulation and water loss on the one hand, and the cost of sustaining both static and dynamic loads on the other. In vascular plants, perpendicular forces are spread over a larger area, and leaves are relatively flexible in both bending and torsion, enabling elastic deformation without damage.[40]
Many leaves rely on hydrostatic support arranged around a skeleton of vascular tissue for their strength, which depends on maintaining leaf water status. Both the mechanics and architecture of the leaf reflect the need for transportation and support. Read and Stokes (2006) consider two basic models, the "hydrostatic" and "I-beam leaf" form.[40] Hydrostatic leaves such as in Prostanthera lasianthos are large and thin, and may involve the need for multiple leaves rather than single large leaves because of the number of veins needed to support the periphery of large leaves. But large leaf size favors efficiency in photosynthesis and water conservation, involving further trade-offs. On the other hand, I-beam leaves such as Banksia marginata involve specialized structures to stiffen them. These I-beams are formed from bundle sheath extensions of sclerenchyma meeting stiffened sub-epidermal layers. This shifts the balance from reliance on hydrostatic pressure to structural support, an obvious advantage where water is relatively scarce.[40] Long narrow leaves bend more easily than ovate leaf blades of the same area. Monocots typically have such linear leaves, which maximize surface area while minimising self-shading. In these, a high proportion of longitudinal main veins provide additional support.[40]
Although not as nutritious as other organs such as fruit, leaves provide a food source for many organisms. The leaf is a vital source of energy production for the plant, and plants have evolved protection against animals that consume leaves, such as tannins, chemicals which hinder the digestion of proteins and have an unpleasant taste. Animals that are specialized to eat leaves are known as folivores.
Some species have cryptic adaptations by which they use leaves in avoiding predators. For example, the caterpillars of some leaf-roller moths will create a small home in the leaf by folding it over themselves. Some sawflies similarly roll the leaves of their food plants into tubes. Females of the Attelabidae, so-called leaf-rolling weevils, lay their eggs into leaves that they then roll up as means of protection. Other herbivores and their predators mimic the appearance of the leaf. Reptiles such as some chameleons, and insects such as some katydids, also mimic the oscillating movements of leaves in the wind, moving from side to side or back and forth while evading a possible threat.
Leaves in temperate, boreal, and seasonally dry zones may be seasonally deciduous (falling off or dying for the inclement season). This mechanism to shed leaves is called abscission. When the leaf is shed, it leaves a leaf scar on the twig. In cold autumns, they sometimes change color, and turn yellow, bright-orange, or red, as various accessory pigments (carotenoids and xanthophylls) are revealed when the tree responds to cold and reduced sunlight by curtailing chlorophyll production. Red anthocyanin pigments are now thought to be produced in the leaf as it dies, possibly to mask the yellow hue left when the chlorophyll is lost—yellow leaves appear to attract herbivores such as aphids.[41] Optical masking of chlorophyll by anthocyanins reduces risk of photo-oxidative damage to leaf cells as they senesce, which otherwise may lower the efficiency of nutrient retrieval from senescing autumn leaves.[42]
In the course of evolution, leaves have adapted to different environments in the following ways:
Leaf margins may be coarsely dentate, having large teeth, or glandular dentate, having teeth which bear glands.
The leaf surface is also host to a large variety of microorganisms; in this context it is referred to as the phyllosphere.
"Hairs" on plants are properly called trichomes. Leaves can show several degrees of hairiness. The meaning of several of the following terms can overlap.
A number of different classification systems of the patterns of leaf veins (venation or veination) have been described,[24] starting with Ettingshausen (1861),[45] together with many different descriptive terms, and the terminology has been described as "formidable".[24] One of the commonest among these is the Hickey system, originally developed for "dicotyledons" and using a number of Ettingshausen's terms derived from Greek (1973–1979):[46][47][48] (see also: Simpson Figure 9.12, p. 468)[24]
Types 4–6 may similarly be subclassified as basal (primaries joined at the base of the blade) or suprabasal (diverging above the blade base), and perfect or imperfect, but also flabellate.
At about the same time, Melville (1976) described a system applicable to all Angiosperms and using Latin and English terminology.[49] Melville also had six divisions, based on the order in which veins develop.
A modified form of the Hickey system was later incorporated into the Smithsonian classification (1999), which proposed seven main types of venation based on the architecture of the primary veins, adding flabellate as an additional main type. Further classification was then made on the basis of the secondary veins, with 12 further types, terms which had been used as subtypes in the original Hickey system.[50]
Further descriptions included the higher order, or minor veins and the patterns of areoles (see Leaf Architecture Working Group, Figures 28–29).[50]
Analyses of vein patterns often consider the vein orders, primary vein type, secondary vein type (major veins), and minor vein density. A number of authors have adopted simplified versions of these schemes.[51][24] At its simplest, the primary vein types can be considered in three or four groups, depending on the plant divisions being considered:
where palmate refers to multiple primary veins that radiate from the petiole, as opposed to branching from the central main vein in the pinnate form, and encompasses both of Hickey types 4 and 5, which are preserved as subtypes; e.g., palmate-acrodromous (see National Park Service Leaf Guide).[52]
Alternatively, Simpson uses:[24]
However, these simplified systems allow for further division into multiple subtypes. Simpson[24] (and others)[54] divide parallel and netted (and some use only these two terms for angiosperms)[55] on the basis of the number of primary veins (costa) as follows:
These complex systems are not used much in morphological descriptions of taxa, but have usefulness in plant identification,[24] although criticized as being unduly burdened with jargon.[58]
An older, even simpler system, used in some floras,[59] uses only two categories, open and closed.[60]
There are also many other descriptive terms, often with very specialized usage and confined to specific taxonomic groups.[61] The conspicuousness of veins depends on a number of features, including the width of the veins, their prominence in relation to the lamina surface and the degree of opacity of the surface, which may hide finer veins. In this regard, veins are called obscure, with descriptions further specifying which orders of veins are obscured and whether this applies to the upper, lower or both surfaces.[62][53]
Terms that describe vein prominence include bullate, channelled, flat, guttered, impressed, prominent and recessed (Fig. 6.1 Hawthorne & Lawrence 2013).[58][63] Veins may show different types of prominence in different areas of the leaf. For instance, Pimenta racemosa has a channelled midrib on the upper surface, but this is prominent on the lower surface.[58]
Describing vein prominence:
Describing other features:
The terms megaphyll, macrophyll, mesophyll, notophyll, microphyll, nanophyll and leptophyll are used to describe leaf sizes (in descending order), in a classification devised in 1934 by Christen C. Raunkiær and since modified by others.[70]
en/198.html.txt
ADDED
@@ -0,0 +1,290 @@
South America is a continent in the Western Hemisphere, mostly in the Southern Hemisphere, with a relatively small portion in the Northern Hemisphere. It may also be considered a subcontinent of the Americas,[6][7] which is how it is viewed in the Spanish and Portuguese-speaking regions of the Americas. The reference to South America instead of other regions (like Latin America or the Southern Cone) has increased in recent decades due to changing geopolitical dynamics (in particular, the rise of Brazil).[8]
It is bordered on the west by the Pacific Ocean and on the north and east by the Atlantic Ocean; North America and the Caribbean Sea lie to the northwest. It includes twelve sovereign states: Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador, Guyana, Paraguay, Peru, Suriname, Uruguay and Venezuela, as well as a part of France: French Guiana. In addition, the ABC islands of the Kingdom of the Netherlands, the Falkland Islands (a British Overseas Territory), Trinidad and Tobago, and Panama may also be considered part of South America.
South America has an area of 17,840,000 square kilometers (6,890,000 sq mi). Its population as of 2018 has been estimated at more than 423 million.[1][2] South America ranks fourth in area (after Asia, Africa, and North America) and fifth in population (after Asia, Africa, Europe, and North America). Brazil is by far the most populous South American country, with more than half of the continent's population, followed by Colombia, Argentina, Venezuela and Peru. In recent decades Brazil has also accounted for half of the region's GDP and has become the leading regional power.[8]
Most of the population lives near the continent's western or eastern coasts while the interior and the far south are sparsely populated. The geography of western South America is dominated by the Andes mountains; in contrast, the eastern part contains both highland regions and vast lowlands where rivers such as the Amazon, Orinoco, and Paraná flow. Most of the continent lies in the tropics.
The continent's cultural and ethnic outlook has its origin with the interaction of indigenous peoples with European conquerors and immigrants and, more locally, with African slaves. Given a long history of colonialism, the overwhelming majority of South Americans speak Portuguese or Spanish, and societies and states reflect Western traditions.
South America occupies the southern portion of the Americas. The continent is generally delimited on the northwest by the Darién watershed along the Colombia–Panama border, although some may consider the border instead to be the Panama Canal. Geopolitically and geographically[9] all of Panama – including the segment east of the Panama Canal in the isthmus – is typically included in North America alone[10][11][12] and among the countries of Central America.[13][14] Almost all of mainland South America sits on the South American Plate.
South America is home to the world's highest uninterrupted waterfall, Angel Falls in Venezuela; the highest single drop waterfall Kaieteur Falls in Guyana; the largest river by volume, the Amazon River; the longest mountain range, the Andes (whose highest mountain is Aconcagua at 6,962 m or 22,841 ft); the driest non-polar place on earth, the Atacama Desert;[15][16][17] the largest rainforest, the Amazon Rainforest; the highest capital city, La Paz, Bolivia; the highest commercially navigable lake in the world, Lake Titicaca; and, excluding research stations in Antarctica, the world's southernmost permanently inhabited community, Puerto Toro, Chile.
South America's major mineral resources are gold, silver, copper, iron ore, tin, and petroleum. These resources have brought high income to its countries, especially in times of war or of rapid economic growth by industrialized countries elsewhere. However, the concentration on producing one major export commodity has often hindered the development of diversified economies. The fluctuation in the price of commodities in the international markets has led historically to major highs and lows in the economies of South American states, often causing extreme political instability. This has led to efforts to diversify production and move away from economies dedicated to one major export.
South America is one of the most biodiverse continents on earth. It is home to many interesting and unique species of animals including the llama, anaconda, piranha, jaguar, vicuña, and tapir. The Amazon rainforests possess high biodiversity, containing a major proportion of the Earth's species.
Brazil is the largest country in South America, encompassing around half of the continent's land area and population. The remaining countries and territories are divided among three regions: the Andean States, the Guianas and the Southern Cone.
Traditionally, South America also includes some of the nearby islands. Aruba, Bonaire, Curaçao, Trinidad, Tobago, and the federal dependencies of Venezuela sit on the northerly South American continental shelf and are often considered part of the continent. Geopolitically, the island states and overseas territories of the Caribbean are generally grouped as a part or subregion of North America, since they are more distant and sit on the Caribbean Plate, even though San Andres and Providencia are politically part of Colombia and Aves Island is controlled by Venezuela.[12][18][19]
Other islands that are included with South America are the Galápagos Islands, which belong to Ecuador, and Easter Island (in Oceania but belonging to Chile), Robinson Crusoe Island, Chiloé (both Chilean) and Tierra del Fuego (split between Chile and Argentina). In the Atlantic, Brazil owns Fernando de Noronha, Trindade and Martim Vaz, and the Saint Peter and Saint Paul Archipelago, while the Falkland Islands are governed by the United Kingdom, whose sovereignty over the islands is disputed by Argentina. South Georgia and the South Sandwich Islands may be associated with either South America or Antarctica.[20]
The distribution of average temperatures in the region shows a constant regularity south of latitude 30° S, where the isotherms tend, more and more, to coincide with the parallels of latitude.[22]
In temperate latitudes, winters are milder and summers warmer than in North America. Because the most extensive part of the continent lies in the equatorial zone, the region has more areas of equatorial plains than any other region.[22]
The average annual temperatures in the Amazon basin oscillate around 27 °C (81 °F), with low thermal amplitudes and high rainfall indices. Between Lake Maracaibo and the mouth of the Orinoco, an equatorial climate of the Congolese type predominates, one that also takes in parts of Brazilian territory.[22]
The east-central Brazilian plateau has a humid and warm tropical climate. The northern and eastern parts of the Argentine pampas have a humid subtropical climate with dry winters and humid summers of the Chinese type, while the western and eastern ranges have a subtropical climate of the Dinaric type. At the highest points of the Andean region, climates are colder than those occurring at the highest points of the Norwegian fjords. In the Andean plateaus, a warm climate prevails, although tempered by the altitude, while the coastal strip has an equatorial climate of the Guinean type. From this point to the north of the Chilean coast there appear, successively, a Mediterranean oceanic climate, a temperate climate of the Breton type and, in Tierra del Fuego, a cold climate of the Siberian type.[22]
The distribution of rainfall is related to the regime of winds and air masses. In most of the tropical region east of the Andes, winds blowing from the northeast, east and southeast carry moisture from the Atlantic, causing abundant rainfall. However, due to consistently strong wind shear and a weak Intertropical Convergence Zone, South Atlantic tropical cyclones are rare.[23] In the Orinoco Llanos and in the Guianas plateau, precipitation levels go from moderate to high. The Pacific coast of Colombia and northern Ecuador are rainy regions, with Chocó in Colombia being the rainiest place in the world, along with the northern slopes of the Indian Himalayas.[24] The Atacama Desert, along this stretch of coast, is one of the driest regions in the world. The central and southern parts of Chile are subject to extratropical cyclones, and most of Argentine Patagonia is desert. In the pampas of Argentina, Uruguay and the south of Brazil rainfall is moderate, with rains well distributed throughout the year. The moderately dry conditions of the Chaco contrast with the intense rainfall of the eastern region of Paraguay. On the semiarid coast of the Brazilian Northeast, the rains are linked to a monsoon regime.[22]
Important factors in the determination of climates are sea currents, such as the Humboldt and Falklands currents. The equatorial current of the South Atlantic strikes the coast of the Northeast and there divides in two: the Brazil Current and a coastal current that flows northwest towards the Antilles, where it turns northeast, thus forming the most important and famous ocean current in the world, the Gulf Stream.[22][25]
South America is believed to have been joined with Africa from the late Paleozoic Era to the early Mesozoic Era, until the supercontinent Pangaea began to rift and break apart about 225 million years ago. Therefore, South America and Africa share similar fossils and rock layers.
South America is thought to have been first inhabited by humans when people were crossing the Bering Land Bridge (now the Bering Strait) at least 15,000 years ago from the territory that is present-day Russia. They migrated south through North America, and eventually reached South America through the Isthmus of Panama.
The first evidence for the existence of the human race in South America dates back to about 9000 BC, when squashes, chili peppers and beans began to be cultivated for food in the highlands of the Amazon Basin. Pottery evidence further suggests that manioc, which remains a staple food today, was being cultivated as early as 2000 BC.[26]
By 2000 BC, many agrarian communities had been settled throughout the Andes and the surrounding regions. Fishing became a widespread practice along the coast, helping establish fish as a primary source of food. Irrigation systems were also developed at this time, which aided in the rise of an agrarian society.[26]
South American cultures began domesticating llamas, vicuñas, guanacos, and alpacas in the highlands of the Andes circa 3500 BC. Besides their use as sources of meat and wool, these animals were used for transportation of goods.[26]
The rise of plant growing and the subsequent appearance of permanent human settlements allowed for the multiple and overlapping beginnings of civilizations in South America.
One of the earliest known South American civilizations was at Norte Chico, on the central Peruvian coast. Though a pre-ceramic culture, the monumental architecture of Norte Chico is contemporaneous with the pyramids of Ancient Egypt. The Norte Chico governing class established a trade network and developed agriculture, and was then followed by Chavín by 900 BC, according to some estimates and archaeological finds. Artifacts were found at a site called Chavín de Huantar in modern Peru at an elevation of 3,177 meters (10,423 ft). Chavín civilization spanned 900 BC to 300 BC.
On the central coast of Peru, around the beginning of the 1st millennium AD, the Moche (100 BC – 700 AD, on the northern coast of Peru), Paracas and Nazca (400 BC – 800 AD, Peru) cultures flourished as centralized states with permanent militias, improving agriculture through irrigation and developing new styles of ceramic art. On the Altiplano, Tiahuanaco or Tiwanaku (100 BC – 1200 AD, Bolivia) managed a large commercial network based on religion.
Around the 7th century, both Tiahuanaco and the Wari or Huari Empire (600–1200, central and northern Peru) expanded their influence across the whole Andean region, imposing Huari urbanism and Tiahuanaco religious iconography.
The Muisca were the main indigenous civilization in what is now Colombia. They established the Muisca Confederation of many clans, or cacicazgos, that had a free trade network among themselves. They were goldsmiths and farmers.
Other important Pre-Columbian cultures include: the Cañaris (in south central Ecuador), the Chimú Empire (1300–1470, Peruvian northern coast), the Chachapoyas, and the Aymaran kingdoms (1000–1450, western Bolivia and southern Peru).
Holding their capital at the great city of Cusco, the Inca civilization dominated the Andes region from 1438 to 1533. Known as Tawantinsuyu, "the land of the four regions", in Quechua, the Inca Empire was highly distinct and developed. Inca rule extended to nearly a hundred linguistic or ethnic communities, some nine to fourteen million people connected by a 25,000-kilometer road system. Cities were built with precise, unmatched stonework, constructed over many levels of mountain terrain. Terrace farming was a useful form of agriculture.
The Mapuche in Central and Southern Chile resisted the European and Chilean settlers, waging the Arauco War for more than 300 years.
In 1494, Portugal and Spain, the two great maritime European powers of that time, on the expectation of new lands being discovered in the west, signed the Treaty of Tordesillas, by which they agreed, with the support of the Pope, that all the land outside Europe should be an exclusive duopoly between the two countries.
The treaty established an imaginary line along a north–south meridian 370 leagues west of the Cape Verde Islands, roughly 46° 37' W. In terms of the treaty, all land to the west of the line (known to comprise most of the South American soil) would belong to Spain, and all land to the east, to Portugal. As accurate measurements of longitude were impossible at that time, the line was not strictly enforced, resulting in a Portuguese expansion of Brazil across the meridian.
Beginning in the 1530s, the people and natural resources of South America were repeatedly exploited by foreign conquistadors, first from Spain and later from Portugal. These competing colonial nations claimed the land and resources as their own and divided it into colonies.
European infectious diseases (smallpox, influenza, measles, and typhus) – to which the native populations had no immune resistance – caused large-scale depopulation of the native population under Spanish control. Systems of forced labor, such as the haciendas and mining industry's mit'a also contributed to the depopulation. After this, African slaves, who had developed immunities to these diseases, were quickly brought in to replace them.
The Spaniards were committed to converting their native subjects to Christianity and were quick to purge any native cultural practices that hindered this end; however, many initial attempts at this were only partially successful, as native groups simply blended Catholicism with their established beliefs and practices. Furthermore, the Spaniards brought their language to the same degree as their religion, although the Roman Catholic Church's evangelization in Quechua, Aymara, and Guaraní actually contributed to the continuous use of these native languages, albeit only in oral form.
Eventually, the natives and the Spaniards interbred, forming a mestizo class. At the beginning, many mestizos of the Andean region were offspring of Amerindian mothers and Spanish fathers. After independence, most mestizos had native fathers and European or mestizo mothers.
Many native artworks were considered pagan idols and destroyed by Spanish explorers; this included many gold and silver sculptures and other artifacts found in South America, which were melted down before their transport to Spain or Portugal. Spaniards and Portuguese brought the western European architectural style to the continent, and helped to improve infrastructure like bridges, roads, and the sewer systems of the cities they discovered or conquered. They also significantly increased economic and trade relations, not just between the old and new world but between the different South American regions and peoples. Finally, with the expansion of the Portuguese and Spanish languages, many cultures that were previously separated became united through a common Latin American culture.
Guyana was first a Dutch, and then a British colony, though there was a brief period during the Napoleonic Wars when it was colonized by the French. The country was once partitioned into three parts, each being controlled by one of the colonial powers until the country was finally taken over fully by the British.
The indigenous peoples of the Americas in various European colonies were forced to work in European plantations and mines, along with African slaves who were introduced in the following centuries. The colonists were heavily dependent on indigenous labor during the initial phases of European settlement to maintain the subsistence economy, and natives were often captured by expeditions. The importation of African slaves began midway through the 16th century, but the enslavement of indigenous peoples continued well into the 17th and 18th centuries. The Atlantic slave trade brought African slaves primarily to South American colonies, beginning with the Portuguese in 1502.[27] The main destinations of this phase were the Caribbean colonies and Brazil, as European nations built up economically slave-dependent colonies in the New World. Nearly 40% of all African slaves trafficked to the Americas went to Brazil. An estimated 4.9 million slaves from Africa came to Brazil during the period from 1501 to 1866.[28][29]
While the Portuguese, English, French and Dutch settlers enslaved mainly African blacks, the Spaniards relied heavily on the forced labor of the natives. In 1750 Portugal abolished native slavery in the colonies because it considered the natives unfit for labour and began to import even more African slaves. Slaves were brought to the mainland on slave ships, under inhuman conditions and ill-treatment, and those who survived were sold into the slave markets.
After independence, all South American countries maintained slavery for some time. The first South American country to abolish slavery was Chile in 1823, followed by Uruguay in 1830, Bolivia in 1831, Colombia and Ecuador in 1851, Argentina in 1853, Peru and Venezuela in 1854, Suriname in 1863 and Paraguay in 1869; in 1888 Brazil became the last South American nation, and the last country in the western world, to abolish slavery.
The European Peninsular War (1807–1814), a theater of the Napoleonic Wars, changed the political situation of both the Spanish and Portuguese colonies. First, Napoleon invaded Portugal, but the House of Braganza avoided capture by escaping to Brazil. Napoleon also captured King Ferdinand VII of Spain, and appointed his own brother instead. This appointment provoked severe popular resistance, which created Juntas to rule in the name of the captured king.
Many cities in the Spanish colonies, however, considered themselves equally authorized to appoint local Juntas like those of Spain. This began the Spanish American wars of independence between the patriots, who promoted such autonomy, and the royalists, who supported Spanish authority over the Americas. The Juntas, in both Spain and the Americas, promoted the ideas of the Enlightenment. Five years after the beginning of the war, Ferdinand VII returned to the throne and began the Absolutist Restoration as the royalists got the upper hand in the conflict.
The independence of South America was secured by Simón Bolívar (Venezuela) and José de San Martín (Argentina), the two most important Libertadores. Bolívar led a great uprising in the north, then led his army southward towards Lima, the capital of the Viceroyalty of Peru. Meanwhile, San Martín led an army across the Andes Mountains, along with Chilean expatriates, and liberated Chile. He organized a fleet to reach Peru by sea, and sought the military support of various rebels from the Viceroyalty of Peru. The two armies finally met in Guayaquil, Ecuador, where they cornered the Royal Army of the Spanish Crown and forced its surrender.
In the Portuguese Kingdom of Brazil, Dom Pedro I (also Pedro IV of Portugal), son of the Portuguese King Dom João VI, proclaimed the independent Kingdom of Brazil in 1822, which later became the Empire of Brazil. Despite the Portuguese loyalties of garrisons in Bahia, Cisplatina and Pará, independence was diplomatically accepted by the crown in Portugal in 1825, on condition of a high compensation paid by Brazil, mediated by the United Kingdom.
The newly independent nations began a process of fragmentation, with several civil and international wars. However, the fragmentation was not as extensive as in Central America. Some countries created from provinces of larger countries stayed as such up to modern times (such as Paraguay or Uruguay), while others were reconquered and reincorporated into their former countries (such as the Republic of Entre Ríos and the Riograndense Republic).
The first separatist attempt was in 1820 by the Argentine province of Entre Ríos, led by a caudillo.[30] In spite of the "Republic" in its title, General Ramírez, its caudillo, never really intended to declare an independent Entre Ríos. Rather, he was making a political statement in opposition to the monarchist and centralist ideas that back then permeated Buenos Aires politics. The "country" was reincorporated into the United Provinces in 1821.
In 1825 the Cisplatine Province declared its independence from the Empire of Brazil, which led to the Cisplatine War between the imperial forces and the Argentines of the United Provinces of the Río de la Plata for control of the region. Three years later, the United Kingdom intervened in the question by proclaiming a tie and creating from the former Cisplatina a new independent country: the Oriental Republic of Uruguay.
Later, in 1836, while Brazil was experiencing the chaos of the regency, Rio Grande do Sul proclaimed its independence, motivated by a tax crisis. With the anticipation of the coronation of Pedro II to the throne of Brazil, the country could stabilize and fight the separatists, whom the province of Santa Catarina had joined in 1839. The conflict came to an end through a process of compromise by which both the Riograndense Republic and the Juliana Republic were reincorporated as provinces in 1845.[31][32]
The Peru–Bolivian Confederation, a short-lived union of Peru and Bolivia, was blocked by Chile in the War of the Confederation (1836–1839) and again during the War of the Pacific (1879–1883). Paraguay was virtually destroyed by Argentina, Brazil and Uruguay in the Paraguayan War.
Despite the Spanish American wars of independence and the Brazilian War of Independence, the new nations quickly began to suffer from internal conflicts and wars among themselves.
In 1825 the proclamation of independence of Cisplatina led to the Cisplatine War between historical rivals the Empire of Brazil and the United Provinces of the Río de la Plata, Argentina's predecessor. The result was a stalemate, ending with the British arranging for the independence of Uruguay. Soon after, another Brazilian province proclaimed its independence, leading to the Ragamuffin War, which Brazil won.
Between 1836 and 1839 the War of the Confederation broke out between the short-lived Peru–Bolivian Confederation and Chile, with the support of the Argentine Confederation. The war was fought mostly in present-day Peru and ended with a Confederate defeat, the dissolution of the Confederacy and the annexation of many territories by Argentina.
Meanwhile, the Argentine Civil Wars had plagued Argentina since its independence. The conflict was mainly between those who defended the centralization of power in Buenos Aires and those who defended a confederation. During this period it can be said that "there were two Argentinas": the Argentine Confederation and the Argentine Republic. At the same time, political instability in Uruguay led to the Uruguayan Civil War among the main political factions of the country. All this instability in the platine region interfered with the goals of other countries such as Brazil, which was soon forced to take sides. In 1851 the Brazilian Empire, supporting the centralizing unitarians, and the Uruguayan government invaded Argentina and deposed the caudillo Juan Manuel Rosas, who ruled the confederation with an iron hand. Although the Platine War did not put an end to the political chaos and civil war in Argentina, it brought temporary peace to Uruguay, where the Colorados faction won, supported by the Brazilian Empire, the British Empire, the French Empire and the Unitarian Party of Argentina.[33]
Peace lasted only a short time: in 1864 the Uruguayan factions faced each other again in the Uruguayan War. The Blancos, supported by Paraguay, started to attack Brazilian and Argentine farmers near the borders. The Empire made an initial attempt to settle the dispute between Blancos and Colorados without success. In 1864, after a Brazilian ultimatum was refused, the imperial government declared that Brazil's military would begin reprisals. Brazil declined to acknowledge a formal state of war, and, for most of its duration, the Uruguayan–Brazilian armed conflict was an undeclared war which led to the deposition of the Blancos and the rise of the pro-Brazilian Colorados to power again. This angered the Paraguayan government, which even before the end of the war invaded Brazil, beginning the biggest and deadliest war in both South American and Latin American history: the Paraguayan War.
The Paraguayan War began when the Paraguayan dictator Francisco Solano López ordered the invasion of the Brazilian provinces of Mato Grosso and Rio Grande do Sul. His attempt to cross Argentinian territory without Argentinian approval led the pro-Brazilian Argentine government into the war. The pro-Brazilian Uruguayan government showed its support by sending troops. In 1865 the three countries signed the Treaty of the Triple Alliance against Paraguay. At the beginning of the war, the Paraguayans took the lead with several victories, until the Triple Alliance organized to repel the invaders and fight effectively. This was the second total war experience in the world, after the American Civil War. It was deemed the greatest war effort in the history of all participating countries, lasting almost six years and ending with the complete devastation of Paraguay. The country lost 40% of its territory to Brazil and Argentina and lost 60% of its population, including 90% of the men. The dictator López was killed in battle and a new government was instituted in alliance with Brazil, which maintained occupation forces in the country until 1876.[34]
The last South American war in the 19th century was the War of the Pacific with Bolivia and Peru on one side and Chile on the other. In 1879 the war began with Chilean troops occupying Bolivian ports, followed by Bolivia declaring war on Chile which activated an alliance treaty with Peru. The Bolivians were completely defeated in 1880 and Lima was occupied in 1881. The peace was signed with Peru in 1883 while a truce was signed with Bolivia in 1884. Chile annexed territories of both countries leaving Bolivia with no path to the sea.[35]
In the new century, as wars became less violent and less frequent, Brazil entered into a small conflict with Bolivia for the possession of the Acre, which was acquired by Brazil in 1902. In 1917 Brazil declared war on the Central Powers, joined the Allied side in World War I and sent a small fleet to the Mediterranean Sea and some troops to be integrated with the British and French forces. Brazil was the only South American country that fought in WWI.[36][37] Later, in 1932, Colombia and Peru entered a short armed conflict over territory in the Amazon. In the same year Paraguay declared war on Bolivia for possession of the Chaco, in a conflict that ended three years later with Paraguay's victory. Between 1941 and 1942 Peru and Ecuador fought decisively over territories claimed by both, which were annexed by Peru, depriving Ecuador of its frontier with Brazil.[38]
Also in this period the first naval battle of World War II was fought on the continent, in the River Plate, between British forces and the German pocket battleship Admiral Graf Spee.[39] The Germans later made numerous attacks on Brazilian ships on the coast, causing Brazil to declare war on the Axis powers in 1942, being the only South American country to fight in this war (and in both World Wars). Brazil sent naval and air forces to combat German and Italian submarines off the continent and throughout the South Atlantic, in addition to sending an expeditionary force to fight in the Italian Campaign.[40][41]
A brief war was fought between Argentina and the UK in 1982, following an Argentine invasion of the Falkland Islands, which ended with an Argentine defeat. The last international war to be fought on South American soil was the 1995 Cenepa War between Ecuador and Peru along their mutual border.
Wars became less frequent in the 20th century, with Bolivia-Paraguay and Peru-Ecuador fighting the last inter-state wars. Early in the 20th century, the three wealthiest South American countries engaged in a vastly expensive naval arms race which was catalyzed by the introduction of a new warship type, the "dreadnought". At one point, the Argentine government was spending a fifth of its entire yearly budget for just two dreadnoughts, a price that did not include later in-service costs, which for the Brazilian dreadnoughts was sixty percent of the initial purchase.[42][43]
The continent became a battlefield of the Cold War in the late 20th century. Some democratically elected governments of Argentina, Brazil, Chile, Uruguay and Paraguay were overthrown or displaced by military dictatorships in the 1960s and 1970s. To curtail opposition, their governments detained tens of thousands of political prisoners, many of whom were tortured and/or killed through inter-state collaboration. Economically, they began a transition to neoliberal economic policies. They placed their own actions within the US Cold War doctrine of "National Security" against internal subversion. Throughout the 1980s and 1990s, Peru suffered from an internal conflict.
Argentina and Britain fought the Falklands War in 1982. The conflict lasted 74 days and ended with an Argentine surrender, returning the occupied Falkland islands to British control.
Colombia has had an ongoing, though diminished, internal conflict, which started in 1964 with the creation of Marxist guerrillas (FARC-EP) and then involved several illegal armed groups of leftist-leaning ideology as well as the private armies of powerful drug lords. Many of these are now defunct, and only a small portion of the ELN remains, along with the stronger, though also greatly reduced, FARC.
Revolutionary movements and right-wing military dictatorships became common after World War II, but since the 1980s a wave of democratization has passed through the continent, and democratic rule is now widespread.[44] Nonetheless, allegations of corruption remain very common, and several countries have seen crises that forced the resignation of their governments, although on most occasions regular civilian succession has continued.
International indebtedness turned into a severe problem in the late 1980s, and some countries, despite having strong democracies, have not yet developed political institutions capable of handling such crises without resorting to unorthodox economic policies, as most recently illustrated by Argentina's default in the early 21st century.[45][neutrality is disputed] The last twenty years have seen an increased push towards regional integration, with the creation of uniquely South American institutions such as the Andean Community, Mercosur and Unasur. Notably, starting with the election of Hugo Chávez in Venezuela in 1998, the region experienced what has been termed a pink tide – the election of several leftist and center-left administrations to most countries of the area, except for the Guianas and Colombia.
Historically, the Hispanic countries were founded as republican dictatorships led by caudillos. Brazil was the only exception, being a constitutional monarchy for its first 67 years of independence, until a coup d'état proclaimed a republic. In the late 19th century, the most democratic countries were Brazil,[49][full citation needed] Chile, Argentina and Uruguay.[50]
In the interwar period, nationalism grew stronger on the continent, influenced by countries like Nazi Germany and Fascist Italy. A series of authoritarian regimes arose in South American countries with views bringing them closer to the Axis Powers,[51] like Vargas's Brazil. In the late 20th century, during the Cold War, many countries became military dictatorships under American tutelage in attempts to avoid the influence of the Soviet Union. After the fall of these authoritarian regimes, the countries became democratic republics.
During the first decade of the 21st century, South American governments drifted to the political left, with leftist leaders elected in Chile, Uruguay, Brazil, Argentina, Ecuador, Bolivia, Paraguay, Peru and Venezuela. The gross domestic product of each of those countries, however, dropped over that timeframe. Consequently, most South American countries have made increasing use of protectionist policies to help local economic development.
All South American countries are presidential republics with the exception of Suriname, a parliamentary republic. French Guiana is a French overseas department, while the Falkland Islands and South Georgia and the South Sandwich Islands are British overseas territories. It is currently the only inhabited continent in the world without monarchies; the Empire of Brazil existed during the 19th century and there was an unsuccessful attempt to establish a Kingdom of Araucanía and Patagonia in southern Argentina and Chile. In the twentieth century, Suriname was a constituent country of the Kingdom of the Netherlands, and Guyana retained the British monarch as head of state for four years after its independence.
Recently, an intergovernmental entity has been formed which aims to merge the two existing customs unions, Mercosur and the Andean Community, thus forming the third-largest trade bloc in the world.[52] This new political organization, known as the Union of South American Nations, seeks to establish free movement of people, economic development, a common defense policy and the elimination of tariffs.
South America has over 423 million[1][2] inhabitants and a population growth rate of about 0.6% per year. There are several areas of sparse population, such as tropical forests, the Atacama Desert and the icy portions of Patagonia. On the other hand, the continent has regions of high population density, such as the great urban centers. The population is formed by descendants of Europeans (mainly Spaniards, Portuguese and Italians), Africans and Indigenous peoples. There is a high percentage of mestizos who vary greatly in composition by place. There is also a minor population of Asians,[further explanation needed] especially in Brazil. The two main languages are by far Spanish and Portuguese, followed by French, English and Dutch in smaller numbers.
Spanish and Portuguese are the most spoken languages in South America, with approximately 200 million speakers each. Spanish is the official language of most countries, along with other native languages in some countries. Portuguese is the official language of Brazil. Dutch is the official language of Suriname; English is the official language of Guyana, although there are at least twelve other languages spoken in the country, including Portuguese, Chinese, Hindustani and several native languages.[53] English is also spoken in the Falkland Islands. French is the official language of French Guiana and the second language in Amapá, Brazil.
Indigenous languages of South America include Quechua in Peru, Bolivia, Ecuador, Chile and Colombia; Wayuunaiki in northern Colombia (La Guajira) and northwestern Venezuela (Zulia); Guaraní in Paraguay and, to a much lesser extent, in Bolivia; Aymara in Bolivia, Peru, and less often in Chile; and Mapudungun is spoken in certain pockets of southern Chile. At least three South American indigenous languages (Quechua, Aymara, and Guarani) are recognized along with Spanish as national languages.
Other languages found in South America include Hindustani and Javanese in Suriname; Italian in Argentina, Brazil, Uruguay and Venezuela; and German in certain pockets of Argentina and Brazil. German is also spoken in many regions of the southern states of Brazil, Riograndenser Hunsrückisch being the most widely spoken German dialect in the country; among other Germanic dialects, a Brazilian form of East Pomeranian is also well represented and is experiencing a revival. Welsh remains spoken and written in the historic towns of Trelew and Rawson in Argentine Patagonia. There are also small clusters of Japanese speakers in Brazil, Colombia and Peru. Arabic speakers, often of Lebanese, Syrian, or Palestinian descent, can be found in Arab communities in Argentina, Colombia, Brazil, Venezuela and Paraguay.[54]
An estimated 90% of South Americans are Christians[55] (82% Roman Catholic, 8% other Christian denominations, mainly traditional Protestants and Evangelicals but also Orthodox), accounting for c. 19% of Christians worldwide.
Religions of African origin and Indigenous religions are also common throughout South America; some examples are Santo Daime, Candomblé, Umbanda and Encantados.
Crypto-Jews or Marranos, conversos, and Anusim were an important part of colonial life in Latin America.
Both Buenos Aires, Argentina and São Paulo, Brazil rank among the urban areas with the largest Jewish populations.
East Asian religions such as Japanese Buddhism, Shintoism, and Shinto-derived Japanese New Religions are common in Brazil and Peru. Korean Confucianism is found especially in Brazil, while Chinese Buddhism and Chinese Confucianism have spread throughout the continent.
Kardecist Spiritism can be found in several countries.
[Chart: Religions in South America (2013)[56]]
Genetic admixture occurs at very high levels in South America. In Argentina, the European influence accounts for 65–79% of the genetic background, Amerindian for 17–31% and sub-Saharan African for 2–4%. In Colombia, the sub-Saharan African genetic background varied from 1% to 89%, while the European genetic background varied from 20% to 79%, depending on the region. In Peru, European ancestries ranged from 1% to 31%, while the African contribution was only 1% to 3%.[57] The Genographic Project determined the average Peruvian from Lima had about 28% European ancestry, 68% Native American, 2% Asian ancestry and 2% sub-Saharan African.[58]
Descendants of indigenous peoples, such as the Quechua and Aymara, or the Urarina[59] of Amazonia, make up the majority of the population in Bolivia (56%) and, per some sources, in Peru (44%).[60][61] In Ecuador, Amerindians are a large minority comprising two-fifths of the population. The native European population is also a significant element in most other former Portuguese colonies.
People who identify as being of primarily or totally European descent, or whose phenotype corresponds to such a group, are a majority in Argentina[62] and Uruguay,[63] and account for 64.7% of the population of Chile[64] and 48.4% of that of Brazil.[65][66][67] In Venezuela, according to the national census, 42% of the population is primarily of native Spanish, Italian and Portuguese descent.[68] In Colombia, people who identify as being of European descent are about 37%.[69][70] In Peru, European descendants are the third-largest group (15%).[71]
Mestizos (mixed European and Amerindian) are the largest ethnic group in Bolivia, Paraguay, Venezuela, Colombia[69] and Ecuador, and the second-largest group in Peru and Chile.
South America is also home to one of the world's largest populations of African descent. This group is significantly present in Brazil, Colombia, Guyana, Suriname, French Guiana, Venezuela and Ecuador.
Brazil, followed by Peru, has the largest Japanese, Korean and Chinese communities in South America; Lima has the largest ethnic Chinese community in Latin America.[72] Guyana and Suriname have the largest ethnic East Indian communities.
In many places indigenous people still practice a traditional lifestyle based on subsistence agriculture or as hunter-gatherers. There are still some uncontacted tribes residing in the Amazon Rainforest.[75]
The most populous country in South America is Brazil, with 209.5 million people. The second most populous is Colombia, with a population of 49,661,048, and Argentina is third, with 44,361,150.
While Brazil, Argentina, and Colombia maintain the largest populations, large city populations are not restricted to those nations. The largest cities in South America, by far, are São Paulo, Rio de Janeiro, Buenos Aires, Santiago, Lima, and Bogotá. These are the only cities on the continent to exceed eight million inhabitants, and three of them rank among the five largest in the Americas. Next in size are Caracas, Belo Horizonte, Medellín and Salvador.
Five of the top ten metropolitan areas are in Brazil. These metropolitan areas all have populations above 4 million and include the São Paulo, Rio de Janeiro, and Belo Horizonte metropolitan areas. Whilst the majority of the largest metropolitan areas are within Brazil, Argentina hosts the second-largest metropolitan area by population in South America: the Buenos Aires metropolitan region has more than 13 million inhabitants.
South America has also witnessed the growth of megapolitan areas. In Brazil, four megaregions exist, including the Expanded Metropolitan Complex of São Paulo, with more than 32 million inhabitants; the others are Greater Rio, Greater Belo Horizonte and Greater Porto Alegre. Colombia also has four megaregions, which comprise 72% of its population, followed by Venezuela, Argentina and Peru, which are also home to megaregions.
[Table: the ten largest South American metropolitan areas by population, based on 2015 national census figures from each country]
South America relies less on the export of both manufactured goods and natural resources than the world average; merchandise exports from the continent were 16% of GDP on an exchange rate basis, compared to 25% for the world as a whole.[76] Brazil (the seventh largest economy in the world and the largest in South America) leads in terms of merchandise exports at $251 billion, followed by Venezuela at $93 billion, Chile at $86 billion, and Argentina at $84 billion.[76]
Since 1930, the continent has experienced remarkable growth and diversification in most economic sectors. Most agricultural and livestock products are destined for the domestic market and local consumption. However, the export of agricultural products is essential for the balance of trade in most countries.[77]
The main agrarian crops are export crops, such as soy and wheat. The production of staple foods such as vegetables, corn or beans is large, but focused on domestic consumption. Livestock raising for meat exports is important in Argentina, Paraguay, Uruguay and Colombia. In tropical regions the most important crops are coffee, cocoa and bananas, mainly in Brazil, Colombia and Ecuador. Traditionally, the countries producing sugar for export are Peru, Guyana and Suriname, and in Brazil, sugar cane is also used to make ethanol. On the coast of Peru and in northeastern and southern Brazil, cotton is grown. Fifty percent of the South American surface is covered by forests, but timber industries are small and directed to domestic markets. In recent years, however, transnational companies have been settling in the Amazon to exploit valuable hardwoods destined for export. The Pacific coastal waters of South America are the most important for commercial fishing. The anchovy catch reaches thousands of tons, and tuna is also abundant (Peru is a major exporter). The catch of crustaceans is notable, particularly in northeastern Brazil and Chile.[77]
Only Brazil and Argentina are part of the G20 (industrial countries), while only Brazil is part of the G8+5 (the most powerful and influential nations in the world). In the tourism sector, a series of negotiations began in 2005 to promote tourism and increase air connections within the region. Punta del Este, Florianópolis and Mar del Plata are among the most important resorts in South America.[77]
The most industrialized countries in South America are Brazil, Argentina, Chile, Colombia, Venezuela and Uruguay, in that order. These countries alone account for more than 75 percent of the region's economy and add up to a GDP of more than US$3.0 trillion. Industry began to drive the region's economies from the 1930s, when the Great Depression in the United States and elsewhere boosted industrial production on the continent. From that period the region left its agricultural focus behind and began to achieve high rates of economic growth, which lasted until the early 1990s, when growth slowed due to political instability, economic crises and neoliberal policies.[77]
Since the end of the economic crisis in Brazil and Argentina that lasted from 1998 to 2002, which led to economic recession, rising unemployment and falling incomes, the industrial and service sectors have been recovering rapidly. Chile, Argentina and Brazil have recovered fastest, growing at an average of 5% per year. After this period, all of South America has been recovering and showing good signs of economic stability, with controlled inflation and exchange rates, continuous growth, and decreases in social inequality and unemployment, factors that favor industry.[77]
The main industries include electronics, textiles, food processing, automobiles, metallurgy, aviation, shipbuilding, clothing, beverages, steel, tobacco, timber and chemicals, among others. Exports reach almost US$400 billion annually, with Brazil accounting for half of this.[77]
The economic gap between the rich and poor in most South American nations is larger than on most other continents. The richest 10% receive over 40% of the nation's income in Bolivia, Brazil, Chile, Colombia, and Paraguay,[78] while the poorest 20% receive 4% or less in Bolivia, Brazil, and Colombia.[79] This wide gap can be seen in many large South American cities where makeshift shacks and slums lie in the vicinity of skyscrapers and upper-class luxury apartments; nearly one in nine South Americans live on less than $2 per day (on a purchasing power parity basis).[80]
Tourism has increasingly become a significant source of income for many South American countries.[86][87]
Historical relics, architectural and natural wonders, a diverse range of foods and culture, vibrant and colorful cities, and stunning landscapes attract millions of tourists every year to South America. Some of the most visited places in the region are Iguazu Falls, Recife, Olinda, Machu Picchu, Bariloche, the Amazon rainforest, Rio de Janeiro, São Luís, Salvador, Fortaleza, Maceió, Buenos Aires, Florianópolis, San Ignacio Miní, Isla Margarita, Natal, Lima, São Paulo, Angel Falls, Brasília, the Nazca Lines, Cuzco, Belo Horizonte, Lake Titicaca, Salar de Uyuni, La Paz, the Jesuit Missions of Chiquitos, the Los Roques archipelago, Gran Sabana, Patagonia, Tayrona National Natural Park, Santa Marta, Bogotá, Cali, Medellín, Cartagena, the Perito Moreno Glacier and the Galápagos Islands.[88][89] Brazil hosted the 2016 Summer Olympics.
South Americans are culturally influenced by their indigenous peoples, the historic connection with the Iberian Peninsula and Africa, and waves of immigrants from around the globe.
South American nations have a rich variety of music. Some of the most famous genres include vallenato and cumbia from Colombia, pasillo from Colombia and Ecuador, samba, bossa nova and música sertaneja from Brazil, and tango from Argentina and Uruguay. Also well known is the non-commercial folk genre of the Nueva Canción movement, which was founded in Argentina and Chile and quickly spread to the rest of Latin America.
People on the Peruvian coast created the fine guitar and cajón duos and trios in the most mestizo (mixed) of South American rhythms, such as the Marinera (from Lima), the Tondero (from Piura), the 19th-century popular Creole Valse or Peruvian Valse, the soulful Arequipan Yaravi, and the early 20th-century Paraguayan Guarania. In the late 20th century, Spanish-language rock emerged, played by young musicians influenced by British pop and American rock. Brazil has a Portuguese-language pop rock industry as well as a great variety of other music genres. In the central and western regions of Bolivia, Andean and folklore music such as Diablada, Caporales and Morenada are the most representative of the country, having originated from European, Aymara and Quechua influences.
The literature of South America has attracted considerable critical and popular acclaim, especially with the Latin American Boom of the 1960s and 1970s and the rise of authors such as Mario Vargas Llosa and Gabriel García Márquez in novels, and Jorge Luis Borges and Pablo Neruda in other genres. The Brazilians Machado de Assis and João Guimarães Rosa are widely regarded as the greatest Brazilian writers.
Because of South America's broad ethnic mix, South American cuisine has African, South American Indian, South Asian, East Asian, and European influences. Bahia, Brazil, is especially well known for its West African–influenced cuisine. Argentines, Chileans, Uruguayans, Brazilians, Bolivians, and Venezuelans regularly consume wine. People in Argentina, Paraguay, Uruguay, southern Chile, Bolivia and Brazil drink mate, a brewed herbal infusion. The Paraguayan version, terere, differs from other forms of mate in that it is served cold. Pisco is a liquor distilled from grapes in Peru and Chile. Peruvian cuisine mixes elements from Chinese, Japanese, Spanish, Italian, African, Arab, Andean, and Amazonic food.
The Ecuadorian artist Oswaldo Guayasamín (1919–1999) represented in his painting style the feeling of the peoples of Latin America,[90] highlighting social injustices in various parts of the world. The Colombian Fernando Botero (born 1932) is one of the greatest exponents of painting and sculpture; he remains active and has developed a recognizable style of his own.[91] For his part, the Venezuelan Carlos Cruz-Diez has contributed significantly to contemporary art,[92] with works present around the world.
Currently, several emerging South American artists are recognized by international art critics: Guillermo Lorca, a Chilean painter;[93][94] Teddy Cobeña, an Ecuadorian sculptor and recipient of an international sculpture award in France;[95][96][97] and the Argentine artist Adrián Villar Rojas,[98][99] winner of the Zurich Museum Art Award, among many others.
A wide range of sports are played in the continent of South America, with football being the most popular overall, while baseball is the most popular in Venezuela.
Other sports include basketball, cycling, polo, volleyball, futsal, motorsports, rugby (mostly in Argentina and Uruguay), handball, tennis, golf, field hockey, boxing and cricket.
South America hosted its first Olympic Games in Rio de Janeiro, Brazil in 2016 and will host the Youth Olympic Games in Buenos Aires, Argentina in 2018.
South America shares with Europe supremacy over the sport of football, as all winners in FIFA World Cup history and all winning teams in the FIFA Club World Cup have come from these two continents. Brazil holds the record at the FIFA World Cup with five titles in total. Argentina and Uruguay have two titles each. So far four South American nations have hosted the tournament, including the first edition in Uruguay (1930). The other three were Brazil (1950, 2014), Chile (1962), and Argentina (1978).
South America is home to the longest running international football tournament, the Copa América, which has been regularly contested since 1916. Uruguay has won the Copa América a record 15 times, surpassing Argentina by winning the 2011 tournament, which Argentina hosted (the two had previously been equal at 14 titles each).
The South American Games, a multi-sport event, are also held in South America every four years. The first edition was held in La Paz in 1978 and the most recent took place in Santiago in 2014.
The South American Cricket Championship is an international limited-overs cricket tournament played since 1995, featuring national teams from South America and certain other invited sides, including teams from North America. It is currently played annually, but until 2013 it was usually played every two seasons.
Due to the diversity of topography and rainfall conditions, the region's water resources vary enormously in different areas. In the Andes, navigation possibilities are limited, except on the Magdalena River, Lake Titicaca and the lakes of the southern regions of Chile and Argentina. Irrigation is an important factor for agriculture from northwestern Peru to Patagonia. Less than 10% of the known hydroelectric potential of the Andes had been exploited by the mid-1960s.
The Brazilian Highlands have a much higher hydroelectric potential than the Andean region, and the possibilities of exploiting it are greater owing to several large rivers with high banks and great drops in elevation, forming huge waterfalls such as those of Paulo Afonso and Iguaçu. The Amazon River system has about 13,000 km of waterways, but its possibilities for hydroelectric use are still unknown.
Most of the continent's energy is generated through hydroelectric power plants, but there is also an important share of thermoelectric and wind energy. Brazil and Argentina are the only South American countries that generate nuclear power, each with two nuclear power plants. In 1991 these countries signed a peaceful nuclear cooperation agreement.
South American transportation systems are still deficient, with low route densities. The region has about 1,700,000 km of highways and 100,000 km of railways, which are concentrated in the coastal strip, while the interior still largely lacks transport links.
Only two railroads are continental: the Transandine, which connects Buenos Aires, in Argentina, to Valparaíso, in Chile; and the Brazil–Bolivia Railroad, which connects the port of Santos in Brazil with the city of Santa Cruz de la Sierra, in Bolivia. In addition, there is the Pan-American Highway, which crosses the Andean countries from north to south, although some stretches are unfinished.[100]
Two areas of greater density occur in the railway sector: the Platine network, which developed around the Platine region and largely belongs to Argentina, with more than 45,000 km of track; and the Southeast Brazil network, which mainly serves the states of São Paulo, Rio de Janeiro and Minas Gerais. Brazil and Argentina also stand out in the road sector. In addition to the modern roads that extend through northern Argentina and southern and southeastern Brazil, a vast road complex aims to link Brasília, the federal capital, to the South, Southeast, Northeast and Northern regions of Brazil.
The Port of Callao is the main port of Peru.
South America has one of the largest networks of navigable inland waterways in the world, represented mainly by the Amazon, Platine, São Francisco and Orinoco basins; Brazil has about 54,000 km of navigable waterways, while Argentina has 6,500 km and Venezuela 1,200 km.
The two main merchant fleets also belong to Brazil and Argentina, followed by those of Chile, Venezuela, Peru and Colombia. The largest ports in commercial movement are those of Buenos Aires, Santos, Rio de Janeiro, Bahía Blanca, Rosario, Valparaíso, Recife, Salvador, Montevideo, Paranaguá, Rio Grande, Fortaleza, Belém and Maracaibo.
Commercial aviation in South America has ample room for expansion. The Rio de Janeiro–São Paulo corridor is one of the densest air-traffic routes in the world, and the continent has large airports such as Congonhas, São Paulo–Guarulhos International and Viracopos (São Paulo); Rio de Janeiro International and Santos Dumont (Rio de Janeiro); El Dorado (Bogotá); Ezeiza (Buenos Aires); Tancredo Neves International Airport (Belo Horizonte); Curitiba International Airport (Curitiba); Viru Viru International Airport (Santa Cruz de la Sierra); Salgado Filho International Airport (Porto Alegre); and those of Brasília, Caracas, Montevideo, Lima, Recife, Salvador, Fortaleza, Manaus and Belém.
The main public transport in major cities is the bus. Many cities also have metro and subway systems, the first of which was the Buenos Aires subte, opened in 1913.[101] The Santiago subway[102] is the largest network in South America, at 103 km, while the São Paulo subway carries the most passengers, with more than 4.6 million per day,[103] and was voted the best in the Americas. The continent's first railroad was built in Rio de Janeiro in 1854; today the city has a vast and diversified system of metropolitan trains, integrated with buses and the subway. The city also recently inaugurated a light rail system called the VLT, with small, low-speed electric trams, while São Paulo inaugurated its monorail, the first in South America.[citation needed] In Brazil, an express bus system called Bus Rapid Transit (BRT), which operates in several cities, has also been developed. Mi Teleférico, also known as Teleférico La Paz–El Alto (La Paz–El Alto Cable Car), is an aerial cable car urban transit system serving the La Paz–El Alto metropolitan area in Bolivia.
^ Continent model: In some parts of the world South America is viewed as a subcontinent of the Americas[104] (a single continent in these areas), for example Latin America, Latin Europe, and Iran. In most of the countries with English as an official language, however, it is considered a continent; see Americas (terminology).[clarification needed]
en/1980.html.txt
ADDED
@@ -0,0 +1,173 @@
A leaf (plural leaves) is the principal lateral appendage of the vascular plant stem,[1] usually borne above ground and specialized for photosynthesis. The leaves and stem together form the shoot.[2] Leaves are collectively referred to as foliage, as in "autumn foliage".[3][4] In most leaves, the primary photosynthetic tissue, the palisade mesophyll, is located on the upper side of the blade or lamina of the leaf[1] but in some species, including the mature foliage of Eucalyptus,[5] palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll that is essential for photosynthesis as it absorbs light energy from the sun. A leaf with white patches or edges is called a variegated leaf.
Leaves can have many different shapes, sizes, and textures. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants. In the clubmosses, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls.[6] Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems called phyllodes which differ from leaves both in their structure and origin.[4][7] Some structures of non-vascular plants look and function much like leaves. Examples include the phyllids of mosses and liverworts.
Leaves are the most important organs of most vascular plants.[8] Green plants are autotrophic, meaning that they do not obtain food from other living things but instead create their own food by photosynthesis. They capture the energy in sunlight and use it to make simple sugars, such as glucose and sucrose, from carbon dioxide and water. The sugars are then stored as starch, further processed by chemical synthesis into more complex organic molecules such as proteins or cellulose, the basic structural material in plant cell walls, or metabolized by cellular respiration to provide chemical energy to run cellular processes. The leaves draw water from the ground in the transpiration stream through a vascular conducting system known as xylem and obtain carbon dioxide from the atmosphere by diffusion through openings called stomata in the outer covering layer of the leaf (epidermis), while leaves are orientated to maximize their exposure to sunlight. Once sugar has been synthesized, it needs to be transported to areas of active growth such as the plant shoots and roots. Vascular plants transport sucrose in a special tissue called the phloem. The phloem and xylem are parallel to each other, but the transport of materials is usually in opposite directions. Within the leaf these vascular systems branch (ramify) to form veins which supply as much of the leaf as possible, ensuring that cells carrying out photosynthesis are close to the transportation system.[9]
Typically leaves are broad, flat and thin (dorsiventrally flattened), thereby maximising the surface area directly exposed to light and enabling the light to penetrate the tissues and reach the chloroplasts, thus promoting photosynthesis. They are arranged on the plant so as to expose their surfaces to light as efficiently as possible without shading each other, but there are many exceptions and complications. For instance, plants adapted to windy conditions may have pendent leaves, such as in many willows and eucalypts. The flat, or laminar, shape also maximizes thermal contact with the surrounding air, promoting cooling. Functionally, in addition to carrying out photosynthesis, the leaf is the principal site of transpiration, providing the energy required to draw the transpiration stream up from the roots, and guttation.
Many gymnosperms have thin needle-like or scale-like leaves that can be advantageous in cold climates with frequent snow and frost.[10] These are interpreted as reduced from megaphyllous leaves of their Devonian ancestors.[6] Some leaf forms are adapted to modulate the amount of light they absorb to avoid or mitigate excessive heat, ultraviolet damage, or desiccation, or to sacrifice light-absorption efficiency in favor of protection from herbivory. For xerophytes the major constraint is not light flux or intensity, but drought.[11] Some window plants such as Fenestraria species and some Haworthia species such as Haworthia tesselata and Haworthia truncata are examples of xerophytes,[12] as is Bulbine mesembryanthemoides.[13]
Leaves also function to store chemical energy and water (especially in succulents) and may become specialized organs serving other functions, such as tendrils of peas and other legumes, the protective spines of cacti and the insect traps in carnivorous plants such as Nepenthes and Sarracenia.[14] Leaves are the fundamental structural units from which cones are constructed in gymnosperms (each cone scale is a modified megaphyll leaf known as a sporophyll)[6]:408 and from which flowers are constructed in flowering plants.[6]:445
The internal organization of most kinds of leaves has evolved to maximize exposure of the photosynthetic organelles, the chloroplasts, to light and to increase the absorption of carbon dioxide while at the same time controlling water loss. Their surfaces are waterproofed by the plant cuticle, and gas exchange between the mesophyll cells and the atmosphere is controlled by minute (length and width measured in tens of µm) openings called stomata which open or close to regulate the rate of exchange of carbon dioxide, oxygen, and water vapor into and out of the internal intercellular space system. Stomatal opening is controlled by the turgor pressure in a pair of guard cells that surround the stomatal aperture. In any square centimeter of a plant leaf, there may be from 1,000 to 100,000 stomata.[15]
The shape and structure of leaves vary considerably from species to species of plant, depending largely on their adaptation to climate and available light, but also to other factors such as grazing animals (such as deer), available nutrients, and ecological competition from other plants. Considerable changes in leaf type occur within species, too, for example as a plant matures; as a case in point Eucalyptus species commonly have isobilateral, pendent leaves when mature and dominating their neighbors; however, such trees tend to have erect or horizontal dorsiventral leaves as seedlings, when their growth is limited by the available light.[16] Other factors include the need to balance water loss at high temperature and low humidity against the need to absorb atmospheric carbon dioxide. In most plants, leaves also are the primary organs responsible for transpiration and guttation (beads of fluid forming at leaf margins).
Leaves can also store food and water, and are modified accordingly to meet these functions, for example in the leaves of succulent plants and in bulb scales. The concentration of photosynthetic structures in leaves requires that they be richer in protein, minerals, and sugars than, say, woody stem tissues. Accordingly, leaves are prominent in the diet of many animals.
Correspondingly, leaves represent heavy investment on the part of the plants bearing them, and their retention or disposition are the subject of elaborate strategies for dealing with pest pressures, seasonal conditions, and protective measures such as the growth of thorns and the production of phytoliths, lignins, tannins and poisons.
Deciduous plants in frigid or cold temperate regions typically shed their leaves in autumn, whereas in areas with a severe dry season, some plants may shed their leaves until the dry season ends. In either case, the shed leaves may be expected to contribute their retained nutrients to the soil where they fall.
In contrast, many other non-seasonal plants, such as palms and conifers, retain their leaves for long periods; Welwitschia retains its two main leaves throughout a lifetime that may exceed a thousand years.
The leaf-like organs of bryophytes (e.g., mosses and liverworts), known as phyllids, differ morphologically from the leaves of vascular plants in that they lack vascular tissue, are usually only a single cell thick, and have no cuticle, stomata or internal system of intercellular spaces. The leaves of bryophytes are only present on the gametophytes, while in contrast the leaves of vascular plants are only present on the sporophytes, and are associated with buds (immature shoot systems in the leaf axils). These can further develop into either vegetative or reproductive structures.[14]
Simple, vascularized leaves (microphylls), such as those of the early Devonian lycopsid Baragwanathia, first evolved as enations, extensions of the stem. True leaves or euphylls of larger size and with more complex venation did not become widespread in other groups until the Devonian period, by which time the carbon dioxide concentration in the atmosphere had dropped significantly. This occurred independently in several separate lineages of vascular plants, in progymnosperms like Archaeopteris, in Sphenopsida, ferns and later in the gymnosperms and angiosperms. Euphylls are also referred to as macrophylls or megaphylls (large leaves).[6]
A structurally complete leaf of an angiosperm consists of a petiole (leaf stalk), a lamina (leaf blade), stipules (small structures located to either side of the base of the petiole) and a sheath. Not every species produces leaves with all of these structural components. The proximal stalk or petiole is called a stipe in ferns. The lamina is the expanded, flat component of the leaf which contains the chloroplasts. The sheath is a structure, typically at the base, that fully or partially clasps the stem above the node where the leaf is attached. Leaf sheaths typically occur in grasses and Apiaceae (umbellifers). Between the sheath and the lamina, there may be a pseudopetiole, a petiole-like structure. Pseudopetioles occur in some monocotyledons including bananas, palms and bamboos.[18] Stipules may be conspicuous (e.g. beans and roses), soon falling or otherwise not obvious as in Moraceae, or absent altogether as in the Magnoliaceae. A petiole may be absent (apetiolate), or the blade may not be laminar (flattened). The tremendous variety shown in leaf structure (anatomy) from species to species is presented in detail below under morphology. The petiole mechanically links the leaf to the plant and provides the route for transfer of water and sugars to and from the leaf. The lamina is typically the location of the majority of photosynthesis. The upper (adaxial) angle between a leaf and a stem is known as the axil of the leaf. It is often the location of a bud. Structures located there are called "axillary".
External leaf characteristics, such as shape, margin, hairs, the petiole, and the presence of stipules and glands, are frequently important for identifying plants to family, genus or species levels, and botanists have developed a rich terminology for describing leaf characteristics. Leaves almost always have determinate growth. They grow to a specific pattern and shape and then stop. Other plant parts like stems or roots have non-determinate growth, and will usually continue to grow as long as they have the resources to do so.
The type of leaf is usually characteristic of a species (monomorphic), although some species produce more than one type of leaf (dimorphic or polymorphic). The longest leaves are those of the Raffia palm, Raphia regalis, which may be up to 25 m (82 ft) long and 3 m (9.8 ft) wide.[19] The terminology associated with the description of leaf morphology is presented, in illustrated form, at Wikibooks.
Where leaves are basal, and lie on the ground, they are referred to as prostrate.
Different terms are usually used to describe the arrangement of leaves on the stem (phyllotaxis):
As a stem grows, leaves tend to appear arranged around the stem in a way that optimizes yield of light. In essence, leaves form a helix pattern centered around the stem, either clockwise or counterclockwise, with (depending upon the species) the same angle of divergence. There is a regularity in these angles and they follow the numbers in a Fibonacci sequence: 1/2, 1/3, 2/5, 3/8, 5/13, 8/21, 13/34, 21/55, 34/89. This series tends to the golden angle, which is approximately 360° × 34/89 ≈ 137.52° ≈ 137° 30′. In the series, the numerator indicates the number of complete turns or "gyres" until a leaf arrives at the initial position and the denominator indicates the number of leaves in the arrangement. This can be demonstrated by the following:
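The original demonstration does not survive in this extraction; as an illustrative sketch only (Python, using the divergence fractions quoted above), the convergence toward the golden angle can be checked numerically:

from math import sqrt

# Divergence fractions t/n from the series above: t complete turns
# ("gyres") for every n leaves around the stem.
fractions = [(1, 2), (1, 3), (2, 5), (3, 8), (5, 13),
             (8, 21), (13, 34), (21, 55), (34, 89)]

# Golden angle: 360 degrees divided by the square of the golden ratio,
# equivalently 360 x (1 - 1/phi), about 137.5078 degrees.
phi = (1 + sqrt(5)) / 2
golden_angle = 360 / phi ** 2

for turns, leaves in fractions:
    angle = 360 * turns / leaves
    print(f"{turns}/{leaves}: {angle:9.4f} deg  "
          f"(difference from golden angle: {abs(angle - golden_angle):.4f} deg)")

Run, this shows the divergence angles alternately over- and undershooting the limit (180°, 120°, 144°, 135°, ...), narrowing toward about 137.5°, with 360° × 34/89 ≈ 137.53° already within 0.03° of it.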
Two basic forms of leaves can be described considering the way the blade (lamina) is divided. A simple leaf has an undivided blade. However, the leaf may be dissected to form lobes, but the gaps between lobes do not reach to the main vein. A compound leaf has a fully subdivided blade, each leaflet of the blade being separated along a main or secondary vein. The leaflets may have petiolules and stipels, the equivalents of the petioles and stipules of leaves. Because each leaflet can appear to be a simple leaf, it is important to recognize where the petiole occurs to identify a compound leaf. Compound leaves are a characteristic of some families of higher plants, such as the Fabaceae. The middle vein of a compound leaf or a frond, when it is present, is called a rachis.
Petiolated leaves have a petiole (leaf stalk), and are said to be petiolate.
Sessile (epetiolate) leaves have no petiole and the blade attaches directly to the stem. Subpetiolate leaves are nearly petiolate or have an extremely short petiole and may appear to be sessile.
In clasping or decurrent leaves, the blade partially surrounds the stem.
When the leaf base completely surrounds the stem, the leaves are said to be perfoliate, such as in Eupatorium perfoliatum.
In peltate leaves, the petiole attaches to the blade inside the blade margin.
In some Acacia species, such as the koa tree (Acacia koa), the petioles are expanded or broadened and function like leaf blades; these are called phyllodes. There may or may not be normal pinnate leaves at the tip of the phyllode.
A stipule, present on the leaves of many dicotyledons, is an appendage on each side at the base of the petiole, resembling a small leaf. Stipules may be persistent and not shed (a stipulate leaf, such as in roses and beans), or be shed as the leaf expands, leaving a stipule scar on the twig (an exstipulate leaf). The situation, arrangement, and structure of the stipules is called the "stipulation".
Veins (sometimes referred to as nerves) constitute one of the more visible leaf traits or characteristics. The veins in a leaf represent the vascular structure of the organ, extending into the leaf via the petiole and providing transportation of water and nutrients between leaf and stem; they play a crucial role in the maintenance of leaf water status and photosynthetic capacity, and also in the mechanical support of the leaf.[20][21] Within the lamina of the leaf, while some vascular plants possess only a single vein, in most this vasculature generally divides (ramifies) according to a variety of patterns (venation), forming cylindrical bundles that usually lie in the median plane of the mesophyll, between the two layers of epidermis.[22] This pattern is often specific to taxa; angiosperms possess two main types, parallel and reticulate (net-like). In general, parallel venation is typical of monocots, while reticulate is more typical of eudicots and magnoliids ("dicots"), though there are many exceptions.[23][22][24]
The vein or veins entering the leaf from the petiole are called primary or first-order veins. The veins branching from these are secondary or second-order veins. These primary and secondary veins are considered major veins or lower-order veins, though some authors include third-order veins.[25] Each subsequent branching is sequentially numbered, and these are the higher-order veins, each branching being associated with a narrower vein diameter.[26] In parallel-veined leaves, the primary veins run parallel and equidistant to each other for most of the length of the leaf and then converge or fuse (anastomose) towards the apex. Usually, many smaller minor veins interconnect these primary veins, but they may terminate with very fine vein endings in the mesophyll. Minor veins are more typical of angiosperms, which may have as many as four higher orders.[25] In contrast, in leaves with reticulate venation there is a single (sometimes more than one) primary vein in the centre of the leaf, referred to as the midrib or costa, which is continuous with the vasculature of the petiole more proximally. The midrib then branches into a number of smaller secondary veins, also known as second-order veins, that extend toward the leaf margins. These often terminate in a hydathode, a secretory organ, at the margin. In turn, smaller veins branch from the secondary veins, known as tertiary or third-order (or higher-order) veins, forming a dense reticulate pattern. The areas or islands of mesophyll lying between the higher-order veins are called areoles. Some of the smallest veins (veinlets) may have their endings in the areoles, a process known as areolation.[26] These minor veins act as the sites of exchange between the mesophyll and the plant's vascular system.[21] Thus, minor veins collect the products of photosynthesis (photosynthate) from the cells where it takes place, while major veins are responsible for its transport outside of the leaf. At the same time water is being transported in the opposite direction.[27][23][22]
The number of vein endings is very variable, as is whether second-order veins end at the margin or link back to other veins.[24] There are many elaborate variations on the patterns that the leaf veins form, and these have functional implications. Of these, angiosperms have the greatest diversity.[25] Within these, the major veins function as the support and distribution network for leaves and are correlated with leaf shape. For instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base, reticulate venation is seen in simple entire leaves, and digitate leaves typically have venation in which three or more primary veins diverge radially from a single point.[28][21][26][29]
In evolutionary terms, early emerging taxa tend to have dichotomous branching, with reticulate systems emerging later. Veins appeared in the Permian period (299–252 mya), prior to the appearance of angiosperms in the Triassic (252–201 mya), during which vein hierarchy appeared, enabling higher function, larger leaf size and adaptation to a wider variety of climatic conditions.[25] Although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. A pseudo-reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some Melanthiaceae, which are monocots; e.g., Paris quadrifolia (True-lover's Knot). In leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves.[30]
Leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. Many leaves are covered in trichomes (small hairs) which have diverse structures and functions.
The major tissue systems present are
These three tissue systems typically form a regular organization at the cellular scale. Specialized cells that differ markedly from surrounding cells, and which often synthesize specialized products such as crystals, are termed idioblasts.[31]
[Figure: cross-section of a leaf, showing epidermal cells and spongy mesophyll cells]
The epidermis is the outer layer of cells covering the leaf. It is covered with a waxy cuticle which is impermeable to liquid water and water vapor and forms the boundary separating the plant's inner cells from the external world. The cuticle is in some cases thinner on the lower epidermis than on the upper epidermis, and is generally thicker on leaves from dry climates as compared with those from wet climates.[32] The epidermis serves several functions: protection against water loss by way of transpiration, regulation of gas exchange and secretion of metabolic compounds. Most leaves show dorsoventral anatomy: The upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions.
The epidermis tissue includes several differentiated cell types: epidermal cells, epidermal hair cells (trichomes), and the cells of the stomatal complex, namely guard cells and subsidiary cells. The epidermal cells are the most numerous, largest, and least specialized, and form the majority of the epidermis. They are typically more elongated in the leaves of monocots than in those of dicots.
Chloroplasts are generally absent in epidermal cells, the exception being the guard cells of the stomata. The stomatal pores perforate the epidermis and are surrounded on each side by chloroplast-containing guard cells, and two to four subsidiary cells that lack chloroplasts, forming a specialized cell group known as the stomatal complex. The opening and closing of the stomatal aperture is controlled by the stomatal complex and regulates the exchange of gases and water vapor between the outside air and the interior of the leaf. Stomata therefore play an important role in allowing photosynthesis without letting the leaf dry out. In a typical leaf, the stomata are more numerous over the abaxial (lower) epidermis than the adaxial (upper) epidermis and are more numerous in plants from cooler climates.
Most of the interior of the leaf between the upper and lower layers of epidermis is a parenchyma (ground tissue) or chlorenchyma tissue called the mesophyll (Greek for "middle leaf"). This assimilation tissue is the primary location of photosynthesis in the plant. The products of photosynthesis are called "assimilates".
In ferns and most flowering plants, the mesophyll is divided into two layers: an upper palisade layer of tightly packed, vertically elongated cells, and a lower spongy layer of irregularly shaped cells separated by air spaces.
Leaves are normally green, due to chlorophyll in chloroplasts in the mesophyll cells. Plants that lack chlorophyll cannot photosynthesize.
The veins are the vascular tissue of the leaf and are located in the spongy layer of the mesophyll. The pattern of the veins is called venation. In angiosperms the venation is typically parallel in monocotyledons and forms an interconnecting network in broad-leaved plants. They were once thought to be typical examples of pattern formation through ramification, but they may instead exemplify a pattern formed in a stress tensor field.[33][34][35]
A vein is made up of a vascular bundle. At the core of each bundle are clusters of two distinct types of conducting cells: the xylem, which conducts water, and the phloem, which transports sugars.
The xylem typically lies on the adaxial side of the vascular bundle and the phloem typically lies on the abaxial side. Both are embedded in a dense parenchyma tissue, called the sheath, which usually includes some structural collenchyma tissue.
According to Agnes Arber's partial-shoot theory of the leaf, leaves are partial shoots,[36] being derived from leaf primordia of the shoot apex. Early in development they are dorsiventrally flattened with both dorsal and ventral surfaces.[14] Compound leaves are closer to shoots than simple leaves. Developmental studies have shown that compound leaves, like shoots, may branch in three dimensions.[37][38] On the basis of molecular genetics, Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties."[39]
Plants respond and adapt to environmental factors, such as light and mechanical stress from wind. Leaves need to support their own mass and align themselves in such a way as to optimize their exposure to the sun, generally more or less horizontally. However, horizontal alignment maximizes exposure to bending forces and failure from stresses such as wind, snow, hail, falling debris, animals, and abrasion from surrounding foliage and plant structures. Overall leaves are relatively flimsy with regard to other plant structures such as stems, branches and roots.[40]
Both leaf blade and petiole structure influence the leaf's response to forces such as wind, allowing a degree of repositioning to minimize drag and damage, as opposed to resistance. Leaf movement like this may also increase turbulence of the air close to the surface of the leaf, which thins the boundary layer of air immediately adjacent to the surface, increasing the capacity for gas and heat exchange, as well as photosynthesis. Strong wind forces may result in diminished leaf number and surface area, which while reducing drag, involves a trade off of also reducing photosynthesis. Thus, leaf design may involve compromise between carbon gain, thermoregulation and water loss on the one hand, and the cost of sustaining both static and dynamic loads. In vascular plants, perpendicular forces are spread over a larger area and are relatively flexible in both bending and torsion, enabling elastic deforming without damage.[40]
Many leaves rely on hydrostatic support arranged around a skeleton of vascular tissue for their strength, which depends on maintaining leaf water status. Both the mechanics and architecture of the leaf reflect the need for transportation and support. Read and Stokes (2006) consider two basic models, the "hydrostatic" and "I-beam leaf" form (see Fig 1).[40] Hydrostatic leaves such as in Prostanthera lasianthos are large and thin, and may involve the need for multiple leaves rather single large leaves because of the amount of veins needed to support the periphery of large leaves. But large leaf size favors efficiency in photosynthesis and water conservation, involving further trade offs. On the other hand, I-beam leaves such as Banksia marginata involve specialized structures to stiffen them. These I-beams are formed from bundle sheath extensions of sclerenchyma meeting stiffened sub-epidermal layers. This shifts the balance from reliance on hydrostatic pressure to structural support, an obvious advantage where water is relatively scarce.
|
110 |
+
[40] Long narrow leaves bend more easily than ovate leaf blades of the same area. Monocots typically have such linear leaves that maximize surface area while minimising self-shading. In these a high proportion of longitudinal main veins provide additional support.[40]
Although not as nutritious as other organs such as fruit, leaves provide a food source for many organisms. The leaf is a vital source of energy production for the plant, and plants have evolved protection against animals that consume leaves, such as tannins, chemicals which hinder the digestion of proteins and have an unpleasant taste. Animals that are specialized to eat leaves are known as folivores.
Some species have cryptic adaptations by which they use leaves in avoiding predators. For example, the caterpillars of some leaf-roller moths will create a small home in the leaf by folding it over themselves. Some sawflies similarly roll the leaves of their food plants into tubes. Females of the Attelabidae, so-called leaf-rolling weevils, lay their eggs into leaves that they then roll up as a means of protection. Other herbivores and their predators mimic the appearance of the leaf. Reptiles such as some chameleons, and insects such as some katydids, also mimic the oscillating movements of leaves in the wind, moving from side to side or back and forth while evading a possible threat.
Leaves in temperate, boreal, and seasonally dry zones may be seasonally deciduous (falling off or dying for the inclement season). This mechanism to shed leaves is called abscission. When the leaf is shed, it leaves a leaf scar on the twig. In cold autumns, they sometimes change color, and turn yellow, bright-orange, or red, as various accessory pigments (carotenoids and xanthophylls) are revealed when the tree responds to cold and reduced sunlight by curtailing chlorophyll production. Red anthocyanin pigments are now thought to be produced in the leaf as it dies, possibly to mask the yellow hue left when the chlorophyll is lost—yellow leaves appear to attract herbivores such as aphids.[41] Optical masking of chlorophyll by anthocyanins reduces risk of photo-oxidative damage to leaf cells as they senesce, which otherwise may lower the efficiency of nutrient retrieval from senescing autumn leaves.[42]
In the course of evolution, leaves have adapted to different environments in the following ways:[citation needed]
A dentate (toothed) leaf margin may be coarsely dentate, having large teeth, or glandular dentate, having teeth which bear glands.
The leaf surface is also host to a large variety of microorganisms; in this context it is referred to as the phyllosphere.
"Hairs" on plants are properly called trichomes. Leaves can show several degrees of hairiness. The meaning of several of the following terms can overlap.
A number of different classification systems of the patterns of leaf veins (venation or veination) have been described,[24] starting with Ettingshausen (1861),[45] together with many different descriptive terms, and the terminology has been described as "formidable".[24] One of the commonest among these is the Hickey system, originally developed for "dicotyledons" and using a number of Ettingshausen's terms derived from Greek (1973–1979):[46][47][48] (see also: Simpson Figure 9.12, p. 468)[24]
Types 4–6 may similarly be subclassified as basal (primaries joined at the base of the blade) or suprabasal (diverging above the blade base), and perfect or imperfect, but also flabellate.
At about the same time, Melville (1976) described a system applicable to all Angiosperms and using Latin and English terminology.[49] Melville also had six divisions, based on the order in which veins develop.
A modified form of the Hickey system was later incorporated into the Smithsonian classification (1999), which proposed seven main types of venation based on the architecture of the primary veins, adding flabellate as an additional main type. Further classification was then made on the basis of the secondary veins, with 12 further types, terms which had been used as subtypes in the original Hickey system.[50]
Further descriptions included the higher order, or minor veins and the patterns of areoles (see Leaf Architecture Working Group, Figures 28–29).[50]
Analyses of vein patterns often take into consideration the vein orders, the primary vein type, the secondary vein type (major veins), and the minor vein density. A number of authors have adopted simplified versions of these schemes.[51][24] At its simplest, the primary vein types can be considered in three or four groups, depending on the plant divisions being considered. Here, palmate refers to multiple primary veins that radiate from the petiole, as opposed to branching from the central main vein in the pinnate form, and encompasses both Hickey types 4 and 5, which are preserved as subtypes; e.g., palmate-acrodromous (see National Park Service Leaf Guide).[52]
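As a rough illustration of such a simplified grouping, the sketch below encodes one common three-or-four-way split of primary vein types. The group names and one-line descriptions are assumptions chosen for illustration, not a quotation of any single author's scheme.

```python
# A hedged sketch (Python) of a simplified primary-venation grouping.
# The group names below are a common choice, not the definitive set
# used by any one of the classification systems cited in the text.

PRIMARY_VENATION = {
    "pinnate":     "secondary veins branch from a single central main vein (midrib)",
    "palmate":     "several primary veins radiate from the petiole "
                   "(covering Hickey types 4 and 5 as subtypes)",
    "parallel":    "primary veins run side by side along the blade, as in most monocots",
    "dichotomous": "veins fork repeatedly in pairs, as in Ginkgo and many ferns",
}

def describe_venation(venation_type):
    """Look up the short description for a primary venation group."""
    return PRIMARY_VENATION[venation_type.lower()]

print(describe_venation("palmate"))
```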
Alternatively, Simpson uses:[24]
However, these simplified systems allow for further division into multiple subtypes. Simpson[24] (and others)[54] divides parallel and netted venation (and some use only these two terms for angiosperms)[55] on the basis of the number of primary veins (costae) as follows:
These complex systems are not used much in morphological descriptions of taxa, but are useful in plant identification,[24] although they have been criticized as being unduly burdened with jargon.[58]
An older, even simpler system, used in some floras,[59] uses only two categories, open and closed.[60]
There are also many other descriptive terms, often with very specialized usage and confined to specific taxonomic groups.[61] The conspicuousness of veins depends on a number of features, including the width of the veins, their prominence in relation to the lamina surface and the degree of opacity of the surface, which may hide finer veins. Veins hidden in this way are described as obscure, and the order of veins that are obscured, and whether they are obscured on the upper, lower or both surfaces, may be further specified.[62][53]
Terms that describe vein prominence include bullate, channelled, flat, guttered, impressed, prominent and recessed (Fig. 6.1 Hawthorne & Lawrence 2013).[58][63] Veins may show different types of prominence in different areas of the leaf. For instance, Pimenta racemosa has a channelled midrib on the upper surface, but this is prominent on the lower surface.[58]
Describing vein prominence:
Describing other features:
The terms megaphyll, macrophyll, mesophyll, notophyll, microphyll, nanophyll and leptophyll are used to describe leaf sizes (in descending order), in a classification devised in 1934 by Christen C. Raunkiær and since modified by others.[70]
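A minimal sketch of that size classification is given below. The area thresholds (in mm²) follow the commonly cited Webb (1959) modification of Raunkiær's scheme; they are assumptions supplied for illustration, not values stated in this text.

```python
# Hedged sketch of the Raunkiaer leaf-size classes named above.
# Thresholds are blade areas in mm^2 (assumed Webb-modified values).

LEAF_SIZE_CLASSES = [
    (25, "leptophyll"),
    (225, "nanophyll"),
    (2_025, "microphyll"),
    (4_500, "notophyll"),
    (18_225, "mesophyll"),
    (164_025, "macrophyll"),
    (float("inf"), "megaphyll"),
]

def classify_leaf(area_mm2):
    """Return the size-class name for a given leaf blade area in mm^2."""
    for upper_bound, name in LEAF_SIZE_CLASSES:
        if area_mm2 <= upper_bound:
            return name

print(classify_leaf(10))       # leptophyll
print(classify_leaf(3_000))    # notophyll
print(classify_leaf(500_000))  # megaphyll
```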
en/1981.html.txt
ADDED
@@ -0,0 +1,173 @@
A leaf (plural leaves) is the principal lateral appendage of the vascular plant stem,[1] usually borne above ground and specialized for photosynthesis. The leaves and stem together form the shoot.[2] Leaves are collectively referred to as foliage, as in "autumn foliage".[3][4] In most leaves, the primary photosynthetic tissue, the palisade mesophyll, is located on the upper side of the blade or lamina of the leaf[1] but in some species, including the mature foliage of Eucalyptus,[5] palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll that is essential for photosynthesis as it absorbs light energy from the sun. A leaf with white patches or edges is called a variegated leaf.
Leaves can have many different shapes, sizes, and textures. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants. In the clubmosses, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls.[6] Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems called phyllodes which differ from leaves both in their structure and origin.[4][7] Some structures of non-vascular plants look and function much like leaves. Examples include the phyllids of mosses and liverworts.
Leaves are the most important organs of most vascular plants.[8] Green plants are autotrophic, meaning that they do not obtain food from other living things but instead create their own food by photosynthesis. They capture the energy in sunlight and use it to make simple sugars, such as glucose and sucrose, from carbon dioxide and water. The sugars are then stored as starch, further processed by chemical synthesis into more complex organic molecules such as proteins or cellulose, the basic structural material in plant cell walls, or metabolized by cellular respiration to provide chemical energy to run cellular processes. The leaves draw water from the ground in the transpiration stream through a vascular conducting system known as xylem and obtain carbon dioxide from the atmosphere by diffusion through openings called stomata in the outer covering layer of the leaf (epidermis), while leaves are orientated to maximize their exposure to sunlight. Once sugar has been synthesized, it needs to be transported to areas of active growth such as the plant shoots and roots. Vascular plants transport sucrose in a special tissue called the phloem. The phloem and xylem are parallel to each other, but the transport of materials is usually in opposite directions. Within the leaf these vascular systems branch (ramify) to form veins which supply as much of the leaf as possible, ensuring that cells carrying out photosynthesis are close to the transportation system.[9]
Typically leaves are broad, flat and thin (dorsiventrally flattened), thereby maximising the surface area directly exposed to light and enabling the light to penetrate the tissues and reach the chloroplasts, thus promoting photosynthesis. They are arranged on the plant so as to expose their surfaces to light as efficiently as possible without shading each other, but there are many exceptions and complications. For instance, plants adapted to windy conditions may have pendent leaves, such as in many willows and eucalypts. The flat, or laminar, shape also maximizes thermal contact with the surrounding air, promoting cooling. Functionally, in addition to carrying out photosynthesis, the leaf is the principal site of transpiration, providing the energy required to draw the transpiration stream up from the roots, and guttation.
Many gymnosperms have thin needle-like or scale-like leaves that can be advantageous in cold climates with frequent snow and frost.[10] These are interpreted as reduced from megaphyllous leaves of their Devonian ancestors.[6] Some leaf forms are adapted to modulate the amount of light they absorb to avoid or mitigate excessive heat, ultraviolet damage, or desiccation, or to sacrifice light-absorption efficiency in favor of protection from herbivory. For xerophytes the major constraint is not light flux or intensity, but drought.[11] Some window plants such as Fenestraria species and some Haworthia species such as Haworthia tesselata and Haworthia truncata are examples of xerophytes,[12] as is Bulbine mesembryanthemoides.[13]
Leaves also function to store chemical energy and water (especially in succulents) and may become specialized organs serving other functions, such as tendrils of peas and other legumes, the protective spines of cacti and the insect traps in carnivorous plants such as Nepenthes and Sarracenia.[14] Leaves are the fundamental structural units from which cones are constructed in gymnosperms (each cone scale is a modified megaphyll leaf known as a sporophyll)[6]:408 and from which flowers are constructed in flowering plants.[6]:445
The internal organization of most kinds of leaves has evolved to maximize exposure of the photosynthetic organelles, the chloroplasts, to light and to increase the absorption of carbon dioxide while at the same time controlling water loss. Their surfaces are waterproofed by the plant cuticle, and gas exchange between the mesophyll cells and the atmosphere is controlled by minute (length and width measured in tens of µm) openings called stomata which open or close to regulate the rate of exchange of carbon dioxide, oxygen, and water vapor into and out of the internal intercellular space system. Stomatal opening is controlled by the turgor pressure in a pair of guard cells that surround the stomatal aperture. In any square centimeter of a plant leaf, there may be from 1,000 to 100,000 stomata.[15]
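To put the quoted densities in perspective, the short sketch below scales the 1,000 to 100,000 stomata per square centimeter range up to a whole blade; the 50 cm² leaf area is an arbitrary assumption for illustration.

```python
# Simple arithmetic on the stomatal densities quoted above
# (1,000 to 100,000 stomata per cm^2 of leaf surface).

LOW_DENSITY = 1_000     # stomata per cm^2
HIGH_DENSITY = 100_000  # stomata per cm^2

leaf_area_cm2 = 50.0    # assumed blade area of a mid-sized leaf

print(f"{LOW_DENSITY * leaf_area_cm2:,.0f} to "
      f"{HIGH_DENSITY * leaf_area_cm2:,.0f} stomata on the whole blade")
# -> 50,000 to 5,000,000 stomata
```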
The shape and structure of leaves vary considerably from species to species of plant, depending largely on their adaptation to climate and available light, but also to other factors such as grazing animals (such as deer), available nutrients, and ecological competition from other plants. Considerable changes in leaf type occur within species, too, for example as a plant matures; as a case in point Eucalyptus species commonly have isobilateral, pendent leaves when mature and dominating their neighbors; however, such trees tend to have erect or horizontal dorsiventral leaves as seedlings, when their growth is limited by the available light.[16] Other factors include the need to balance water loss at high temperature and low humidity against the need to absorb atmospheric carbon dioxide. In most plants, leaves also are the primary organs responsible for transpiration and guttation (beads of fluid forming at leaf margins).
Leaves can also store food and water, and are modified accordingly to meet these functions, for example in the leaves of succulent plants and in bulb scales. The concentration of photosynthetic structures in leaves requires that they be richer in protein, minerals, and sugars than, say, woody stem tissues. Accordingly, leaves are prominent in the diet of many animals.
Correspondingly, leaves represent heavy investment on the part of the plants bearing them, and their retention or disposition are the subject of elaborate strategies for dealing with pest pressures, seasonal conditions, and protective measures such as the growth of thorns and the production of phytoliths, lignins, tannins and poisons.
Deciduous plants in frigid or cold temperate regions typically shed their leaves in autumn, whereas in areas with a severe dry season, some plants may shed their leaves until the dry season ends. In either case, the shed leaves may be expected to contribute their retained nutrients to the soil where they fall.
In contrast, many other non-seasonal plants, such as palms and conifers, retain their leaves for long periods; Welwitschia retains its two main leaves throughout a lifetime that may exceed a thousand years.
The leaf-like organs of bryophytes (e.g., mosses and liverworts), known as phyllids, differ morphologically from the leaves of vascular plants in that they lack vascular tissue, are usually only a single cell thick, and have no cuticle, stomata, or internal system of intercellular spaces. The leaves of bryophytes are only present on the gametophytes, while in contrast the leaves of vascular plants are only present on the sporophytes, and are associated with buds (immature shoot systems in the leaf axils). These can further develop into either vegetative or reproductive structures.[14]
Simple, vascularized leaves (microphylls), such as those of the early Devonian lycopsid Baragwanathia, first evolved as enations, extensions of the stem. True leaves or euphylls of larger size and with more complex venation did not become widespread in other groups until the Devonian period, by which time the carbon dioxide concentration in the atmosphere had dropped significantly. This occurred independently in several separate lineages of vascular plants, in progymnosperms like Archaeopteris, in Sphenopsida, ferns and later in the gymnosperms and angiosperms. Euphylls are also referred to as macrophylls or megaphylls (large leaves).[6]
A structurally complete leaf of an angiosperm consists of a petiole (leaf stalk), a lamina (leaf blade), stipules (small structures located to either side of the base of the petiole) and a sheath. Not every species produces leaves with all of these structural components. The proximal stalk or petiole is called a stipe in ferns. The lamina is the expanded, flat component of the leaf which contains the chloroplasts. The sheath is a structure, typically at the base, that fully or partially clasps the stem above the node where the petiole is attached. Leaf sheaths typically occur in grasses and Apiaceae (umbellifers). Between the sheath and the lamina, there may be a pseudopetiole, a petiole-like structure. Pseudopetioles occur in some monocotyledons including bananas, palms and bamboos.[18] Stipules may be conspicuous (e.g. beans and roses), soon falling or otherwise not obvious as in Moraceae, or absent altogether as in the Magnoliaceae. A petiole may be absent (apetiolate), or the blade may not be laminar (flattened). The tremendous variety shown in leaf structure (anatomy) from species to species is presented in detail below under morphology. The petiole mechanically links the leaf to the plant and provides the route for transfer of water and sugars to and from the leaf. The lamina is typically the location of the majority of photosynthesis. The upper (adaxial) angle between a leaf and a stem is known as the axil of the leaf. It is often the location of a bud. Structures located there are called "axillary".
External leaf characteristics, such as shape, margin, hairs, the petiole, and the presence of stipules and glands, are frequently important for identifying plants to family, genus or species levels, and botanists have developed a rich terminology for describing leaf characteristics. Leaves almost always have determinate growth. They grow to a specific pattern and shape and then stop. Other plant parts like stems or roots have non-determinate growth, and will usually continue to grow as long as they have the resources to do so.
The type of leaf is usually characteristic of a species (monomorphic), although some species produce more than one type of leaf (dimorphic or polymorphic). The longest leaves are those of the raffia palm, Raphia regalis, which may be up to 25 m (82 ft) long and 3 m (9.8 ft) wide.[19] The terminology associated with the description of leaf morphology is presented, in illustrated form, at Wikibooks.
Where leaves are basal, and lie on the ground, they are referred to as prostrate.
Different terms are usually used to describe the arrangement of leaves on the stem (phyllotaxis):
As a stem grows, leaves tend to appear arranged around the stem in a way that optimizes yield of light. In essence, leaves form a helix pattern centered around the stem, either clockwise or counterclockwise, with (depending upon the species) the same angle of divergence. There is a regularity in these angles, which follow the fractions of a Fibonacci sequence: 1/2, 1/3, 2/5, 3/8, 5/13, 8/21, 13/34, 21/55, 34/89. This series tends to the golden angle, which is approximately 360° × 34/89 ≈ 137.52° ≈ 137° 30′. In the series, the numerator indicates the number of complete turns or "gyres" until a leaf arrives at the initial position and the denominator indicates the number of leaves in the arrangement. This can be demonstrated by the following:
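One way to demonstrate it is numerically. The minimal sketch below assumes the convention just described, in which each phyllotactic fraction is a Fibonacci number divided by the Fibonacci number two places later; it computes the resulting divergence angles and shows them converging on roughly 137.5°.

```python
# Sketch of the golden-angle convergence described above: divergence
# angle = 360 degrees * (turns per cycle) / (leaves per cycle), with
# the fractions taken as F(n) / F(n+2) from the Fibonacci sequence.

def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting 1, 1."""
    sequence = [1, 1]
    while len(sequence) < count:
        sequence.append(sequence[-1] + sequence[-2])
    return sequence

fib = fibonacci(12)  # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144

for n in range(len(fib) - 2):
    turns, leaves = fib[n], fib[n + 2]   # gyres per cycle, leaves per cycle
    angle = 360.0 * turns / leaves       # divergence angle in degrees
    print(f"{turns}/{leaves}: {angle:.2f} degrees")

# The printed angles settle on ~137.5 degrees (e.g. 34/89 -> 137.53),
# matching the golden angle quoted in the text.
```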
Two basic forms of leaves can be described considering the way the blade (lamina) is divided. A simple leaf has an undivided blade. However, the leaf may be dissected to form lobes, but the gaps between lobes do not reach to the main vein. A compound leaf has a fully subdivided blade, each leaflet of the blade being separated along a main or secondary vein. The leaflets may have petiolules and stipels, the equivalents of the petioles and stipules of leaves. Because each leaflet can appear to be a simple leaf, it is important to recognize where the petiole occurs to identify a compound leaf. Compound leaves are a characteristic of some families of higher plants, such as the Fabaceae. The middle vein of a compound leaf or a frond, when it is present, is called a rachis.
Petiolated leaves have a petiole (leaf stalk), and are said to be petiolate.
Sessile (epetiolate) leaves have no petiole and the blade attaches directly to the stem. Subpetiolate leaves are nearly petiolate or have an extremely short petiole and may appear to be sessile.
In clasping or decurrent leaves, the blade partially surrounds the stem.
When the leaf base completely surrounds the stem, the leaves are said to be perfoliate, such as in Eupatorium perfoliatum.
In peltate leaves, the petiole attaches to the blade inside the blade margin.
In some Acacia species, such as the koa tree (Acacia koa), the petioles are expanded or broadened and function like leaf blades; these are called phyllodes. There may or may not be normal pinnate leaves at the tip of the phyllode.
A stipule, present on the leaves of many dicotyledons, is an appendage on each side at the base of the petiole, resembling a small leaf. Stipules may be lasting and not be shed (a stipulate leaf, such as in roses and beans), or be shed as the leaf expands, leaving a stipule scar on the twig (an exstipulate leaf).
The situation, arrangement, and structure of the stipules is called the "stipulation".
Veins (sometimes referred to as nerves) constitute one of the more visible leaf traits or characteristics. The veins in a leaf represent the vascular structure of the organ, extending into the leaf via the petiole and providing transportation of water and nutrients between leaf and stem; they play a crucial role in the maintenance of leaf water status and photosynthetic capacity, and also in the mechanical support of the leaf.[20][21] Within the lamina of the leaf, while some vascular plants possess only a single vein, in most this vasculature generally divides (ramifies) according to a variety of patterns (venation), forming cylindrical bundles that usually lie in the median plane of the mesophyll, between the two layers of epidermis.[22] This pattern is often specific to taxa; angiosperms possess two main types, parallel and reticulate (net-like). In general, parallel venation is typical of monocots, while reticulate is more typical of eudicots and magnoliids ("dicots"), though there are many exceptions.[23][22][24]
The vein or veins entering the leaf from the petiole are called primary or first order veins. The veins branching from these are secondary or second order veins. These primary and secondary veins are considered major veins or lower order veins, though some authors include third order veins.[25] Each subsequent branching is sequentially numbered, and these are the higher order veins, each branching being associated with a narrower vein diameter.[26] In parallel veined leaves, the primary veins run parallel and equidistant to each other for most of the length of the leaf and then converge or fuse (anastomose) towards the apex. Usually, many smaller minor veins interconnect these primary veins, but may terminate with very fine vein endings in the mesophyll. Minor veins are more typical of angiosperms, which may have as many as four higher orders.[25] In contrast, in leaves with reticulate venation there is a single (sometimes more than one) primary vein in the centre of the leaf, referred to as the midrib or costa, which is continuous with the vasculature of the petiole more proximally. The midrib then branches into a number of smaller secondary veins, also known as second order veins, that extend toward the leaf margins. These often terminate in a hydathode, a secretory organ, at the margin. In turn, smaller veins branch from the secondary veins, known as tertiary or third order (or higher order) veins, forming a dense reticulate pattern. The areas or islands of mesophyll lying between the higher order veins are called areoles. Some of the smallest veins (veinlets) may have their endings in the areoles, a process known as areolation.[26] These minor veins act as the sites of exchange between the mesophyll and the plant's vascular system.[21] Thus, minor veins collect the products of photosynthesis (photosynthate) from the cells where it takes place, while major veins are responsible for its transport out of the leaf. At the same time water is being transported in the opposite direction.[27][23][22]
The number of vein endings is very variable, as is whether second order veins end at the margin or link back to other veins.[24] There are many elaborate variations on the patterns that the leaf veins form, and these have functional implications. Of these, angiosperms have the greatest diversity.[25] Within these the major veins function as the support and distribution network for leaves and are correlated with leaf shape. For instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base, while reticulate venation is seen in simple entire leaves; digitate leaves typically have venation in which three or more primary veins diverge radially from a single point.[28][21][26][29]
In evolutionary terms, early emerging taxa tend to have dichotomous branching, with reticulate systems emerging later. Veins appeared in the Permian period (299–252 mya), prior to the appearance of angiosperms in the Triassic (252–201 mya), during which vein hierarchy appeared, enabling higher function, larger leaf size and adaptation to a wider variety of climatic conditions.[25] Although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. A pseudo-reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some Melanthiaceae, which are monocots; e.g., Paris quadrifolia (True-lover's Knot). In leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves.[30]
Leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. Many leaves are covered in trichomes (small hairs) which have diverse structures and functions.
The major tissue systems present are the epidermis, the mesophyll (ground tissue) and the vascular tissue.
These three tissue systems typically form a regular organization at the cellular scale. Specialized cells that differ markedly from surrounding cells, and which often synthesize specialized products such as crystals, are termed idioblasts.[31]
(Figure: cross-section of a leaf, showing epidermal cells and spongy mesophyll cells.)
The epidermis is the outer layer of cells covering the leaf. It is covered with a waxy cuticle which is impermeable to liquid water and water vapor and forms the boundary separating the plant's inner cells from the external world. The cuticle is in some cases thinner on the lower epidermis than on the upper epidermis, and is generally thicker on leaves from dry climates as compared with those from wet climates.[32] The epidermis serves several functions: protection against water loss by way of transpiration, regulation of gas exchange and secretion of metabolic compounds. Most leaves show dorsoventral anatomy: The upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions.
The epidermis tissue includes several differentiated cell types: epidermal cells, epidermal hair cells (trichomes), and the cells of the stomatal complex, namely guard cells and subsidiary cells. The epidermal cells are the most numerous, largest, and least specialized, and form the majority of the epidermis. They are typically more elongated in the leaves of monocots than in those of dicots.
Chloroplasts are generally absent in epidermal cells, the exception being the guard cells of the stomata. The stomatal pores perforate the epidermis and are surrounded on each side by chloroplast-containing guard cells, and two to four subsidiary cells that lack chloroplasts, forming a specialized cell group known as the stomatal complex. The opening and closing of the stomatal aperture is controlled by the stomatal complex and regulates the exchange of gases and water vapor between the outside air and the interior of the leaf. Stomata therefore play an important role in allowing photosynthesis without letting the leaf dry out. In a typical leaf, the stomata are more numerous over the abaxial (lower) epidermis than the adaxial (upper) epidermis and are more numerous in plants from cooler climates.
en/1982.html.txt
ADDED
@@ -0,0 +1,173 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
A leaf (plural leaves) is the principal lateral appendage of the vascular plant stem,[1] usually borne above ground and specialized for photosynthesis. The leaves and stem together form the shoot.[2] Leaves are collectively referred to as foliage, as in "autumn foliage".[3][4] In most leaves, the primary photosynthetic tissue, the palisade mesophyll, is located on the upper side of the blade or lamina of the leaf[1] but in some species, including the mature foliage of Eucalyptus,[5] palisade mesophyll is present on both sides and the leaves are said to be isobilateral. Most leaves are flattened and have distinct upper (adaxial) and lower (abaxial) surfaces that differ in color, hairiness, the number of stomata (pores that intake and output gases), the amount and structure of epicuticular wax and other features. Leaves are mostly green in color due to the presence of a compound called chlorophyll that is essential for photosynthesis as it absorbs light energy from the sun. A leaf with white patches or edges is called a variegated leaf.
|
4 |
+
|
5 |
+
Leaves can have many different shapes, sizes, and textures. The broad, flat leaves with complex venation of flowering plants are known as megaphylls and the species that bear them, the majority, as broad-leaved or megaphyllous plants. In the clubmosses, with different evolutionary origins, the leaves are simple (with only a single vein) and are known as microphylls.[6] Some leaves, such as bulb scales, are not above ground. In many aquatic species, the leaves are submerged in water. Succulent plants often have thick juicy leaves, but some leaves are without major photosynthetic function and may be dead at maturity, as in some cataphylls and spines. Furthermore, several kinds of leaf-like structures found in vascular plants are not totally homologous with them. Examples include flattened plant stems called phylloclades and cladodes, and flattened leaf stems called phyllodes which differ from leaves both in their structure and origin.[4][7] Some structures of non-vascular plants look and function much like leaves. Examples include the phyllids of mosses and liverworts.
|
6 |
+
|
7 |
+
Leaves are the most important organs of most vascular plants.[8] Green plants are autotrophic, meaning that they do not obtain food from other living things but instead create their own food by photosynthesis. They capture the energy in sunlight and use it to make simple sugars, such as glucose and sucrose, from carbon dioxide and water. The sugars are then stored as starch, further processed by chemical synthesis into more complex organic molecules such as proteins or cellulose, the basic structural material in plant cell walls, or metabolized by cellular respiration to provide chemical energy to run cellular processes. The leaves draw water from the ground in the transpiration stream through a vascular conducting system known as xylem and obtain carbon dioxide from the atmosphere by diffusion through openings called stomata in the outer covering layer of the leaf (epidermis), while leaves are orientated to maximize their exposure to sunlight. Once sugar has been synthesized, it needs to be transported to areas of active growth such as the plant shoots and roots. Vascular plants transport sucrose in a special tissue called the phloem. The phloem and xylem are parallel to each other, but the transport of materials is usually in opposite directions. Within the leaf these vascular systems branch (ramify) to form veins which supply as much of the leaf as possible, ensuring that cells carrying out photosynthesis are close to the transportation system.[9]
|
8 |
+
|
9 |
+
Typically leaves are broad, flat and thin (dorsiventrally flattened), thereby maximising the surface area directly exposed to light and enabling the light to penetrate the tissues and reach the chloroplasts, thus promoting photosynthesis. They are arranged on the plant so as to expose their surfaces to light as efficiently as possible without shading each other, but there are many exceptions and complications. For instance, plants adapted to windy conditions may have pendent leaves, such as in many willows and eucalypts. The flat, or laminar, shape also maximizes thermal contact with the surrounding air, promoting cooling. Functionally, in addition to carrying out photosynthesis, the leaf is the principal site of transpiration, providing the energy required to draw the transpiration stream up from the roots, and guttation.
|
10 |
+
|
11 |
+
Many gymnosperms have thin needle-like or scale-like leaves that can be advantageous in cold climates with frequent snow and frost.[10] These are interpreted as reduced from megaphyllous leaves of their Devonian ancestors.[6] Some leaf forms are adapted to modulate the amount of light they absorb to avoid or mitigate excessive heat, ultraviolet damage, or desiccation, or to sacrifice light-absorption efficiency in favor of protection from herbivory. For xerophytes the major constraint is not light flux or intensity, but drought.[11] Some window plants such as Fenestraria species and some Haworthia species such as Haworthia tesselata and Haworthia truncata are examples of xerophytes.[12] and Bulbine mesembryanthemoides.[13]
|
12 |
+
|
13 |
+
Leaves also function to store chemical energy and water (especially in succulents) and may become specialized organs serving other functions, such as tendrils of peas and other legumes, the protective spines of cacti and the insect traps in carnivorous plants such as Nepenthes and Sarracenia.[14] Leaves are the fundamental structural units from which cones are constructed in gymnosperms (each cone scale is a modified megaphyll leaf known as a sporophyll)[6]:408 and from which flowers are constructed in flowering plants.[6]:445
|
14 |
+
|
15 |
+
The internal organization of most kinds of leaves has evolved to maximize exposure of the photosynthetic organelles, the chloroplasts, to light and to increase the absorption of carbon dioxide while at the same time controlling water loss. Their surfaces are waterproofed by the plant cuticle and gas exchange between the mesophyll cells and the atmosphere is controlled by minute (length and width measured in tens of µm) openings called stomata which open or close to regulate the rate exchange of carbon dioxide, oxygen, and water vapor into and out of the internal intercellular space system. Stomatal opening is controlled by the turgor pressure in a pair of guard cells that surround the stomatal aperture. In any square centimeter of a plant leaf, there may be from 1,000 to 100,000 stomata.[15]
|
16 |
+
|
17 |
+
The shape and structure of leaves vary considerably from species to species of plant, depending largely on their adaptation to climate and available light, but also to other factors such as grazing animals (such as deer), available nutrients, and ecological competition from other plants. Considerable changes in leaf type occur within species, too, for example as a plant matures; as a case in point Eucalyptus species commonly have isobilateral, pendent leaves when mature and dominating their neighbors; however, such trees tend to have erect or horizontal dorsiventral leaves as seedlings, when their growth is limited by the available light.[16] Other factors include the need to balance water loss at high temperature and low humidity against the need to absorb atmospheric carbon dioxide. In most plants, leaves also are the primary organs responsible for transpiration and guttation (beads of fluid forming at leaf margins).
|
18 |
+
|
19 |
+
Leaves can also store food and water, and are modified accordingly to meet these functions, for example in the leaves of succulent plants and in bulb scales. The concentration of photosynthetic structures in leaves requires that they be richer in protein, minerals, and sugars than, say, woody stem tissues. Accordingly, leaves are prominent in the diet of many animals.
|
20 |
+
|
21 |
+
Correspondingly, leaves represent heavy investment on the part of the plants bearing them, and their retention or disposition are the subject of elaborate strategies for dealing with pest pressures, seasonal conditions, and protective measures such as the growth of thorns and the production of phytoliths, lignins, tannins and poisons.
|
22 |
+
|
23 |
+
Deciduous plants in frigid or cold temperate regions typically shed their leaves in autumn, whereas in areas with a severe dry season, some plants may shed their leaves until the dry season ends. In either case, the shed leaves may be expected to contribute their retained nutrients to the soil where they fall.
|
24 |
+
|
25 |
+
In contrast, many other non-seasonal plants, such as palms and conifers, retain their leaves for long periods; Welwitschia retains its two main leaves throughout a lifetime that may exceed a thousand years.
|
26 |
+
|
27 |
+
The leaf-like organs of bryophytes (e.g., mosses and liverworts), known as phyllids, differ morphologically from the leaves of vascular plants in that they lack vascular tissue, are usually only a single cell thick and have no cuticle stomata or internal system of intercellular spaces. The leaves of bryophytes are only present on the gametophytes, while in contrast the leaves of vascular plants are only present on the sporophytes, and are associated with buds (immature shoot systems in the leaf axils). These can further develop into either vegetative or reproductive structures.[14]
|
28 |
+
|
29 |
+
Simple, vascularized leaves (microphylls), such as those of the early Devonian lycopsid Baragwanathia, first evolved as enations, extensions of the stem. True leaves or euphylls of larger size and with more complex venation did not become widespread in other groups until the Devonian period, by which time the carbon dioxide concentration in the atmosphere had dropped significantly. This occurred independently in several separate lineages of vascular plants, in progymnosperms like Archaeopteris, in Sphenopsida, ferns and later in the gymnosperms and angiosperms. Euphylls are also referred to as macrophylls or megaphylls (large leaves).[6]
|
30 |
+
|
31 |
+
A structurally complete leaf of an angiosperm consists of a petiole (leaf stalk), a lamina (leaf blade), stipules (small structures located to either side of the base of the petiole) and a sheath. Not every species produces leaves with all of these structural components. The proximal stalk or petiole is called a stipe in ferns. The lamina is the expanded, flat component of the leaf which contains the chloroplasts. The sheath is a structure, typically at the base that fully or partially clasps the stem above the node, where the latter is attached. Leaf sheathes typically occur in grasses and Apiaceae (umbellifers). Between the sheath and the lamina, there may be a pseudopetiole, a petiole like structure. Pseudopetioles occur in some monocotyledons including bananas, palms and bamboos.[18] Stipules may be conspicuous (e.g. beans and roses), soon falling or otherwise not obvious as in Moraceae or absent altogether as in the Magnoliaceae. A petiole may be absent (apetiolate), or the blade may not be laminar (flattened). The tremendous variety shown in leaf structure (anatomy) from species to species is presented in detail below under morphology. The petiole mechanically links the leaf to the plant and provides the route for transfer of water and sugars to and from the leaf. The lamina is typically the location of the majority of photosynthesis. The upper (adaxial) angle between a leaf and a stem is known as the axil of the leaf. It is often the location of a bud. Structures located there are called "axillary".
|
32 |
+
|
33 |
+
External leaf characteristics, such as shape, margin, hairs, the petiole, and the presence of stipules and glands, are frequently important for identifying plants to family, genus or species levels, and botanists have developed a rich terminology for describing leaf characteristics. Leaves almost always have determinate growth. They grow to a specific pattern and shape and then stop. Other plant parts like stems or roots have non-determinate growth, and will usually continue to grow as long as they have the resources to do so.
The type of leaf is usually characteristic of a species (monomorphic), although some species produce more than one type of leaf (dimorphic or polymorphic). The longest leaves are those of the Raffia palm, R. regalis which may be up to 25 m (82 ft) long and 3 m (9.8 ft) wide.[19] The terminology associated with the description of leaf morphology is presented, in illustrated form, at Wikibooks.
Where leaves are basal, and lie on the ground, they are referred to as prostrate.
Different terms are usually used to describe the arrangement of leaves on the stem (phyllotaxis):
As a stem grows, leaves tend to appear arranged around the stem in a way that optimizes yield of light. In essence, leaves form a helix pattern centered around the stem, either clockwise or counterclockwise, with (depending upon the species) the same angle of divergence. There is a regularity in these angles and they follow the numbers in a Fibonacci sequence: 1/2, 2/3, 3/5, 5/8, 8/13, 13/21, 21/34, 34/55, 55/89. This series tends to the golden angle, which is approximately 360° × 34/89 ≈ 137.52° ≈ 137° 30′. In the series, the numerator indicates the number of complete turns or "gyres" until a leaf arrives at the initial position and the denominator indicates the number of leaves in the arrangement. This can be demonstrated by the following:
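The demonstration itself has not survived in this text, but the convergence is easy to verify numerically. The following Python sketch (an illustration added here, not part of the source) prints the divergence angle, measured the shorter way around the stem, for each fraction in the series:

    # numerator = complete turns ("gyres"), denominator = leaves per cycle
    fractions = [(1, 2), (2, 3), (3, 5), (5, 8), (8, 13),
                 (13, 21), (21, 34), (34, 55), (55, 89)]
    for gyres, leaves in fractions:
        angle = 360 * (1 - gyres / leaves)  # shorter way around the stem
        print(f"{gyres}/{leaves}: {angle:.2f} deg")
    # The values settle toward 360 deg * 34/89, about 137.5 deg, the golden angle.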
Two basic forms of leaves can be described considering the way the blade (lamina) is divided. A simple leaf has an undivided blade. However, the leaf may be dissected to form lobes, but the gaps between lobes do not reach to the main vein. A compound leaf has a fully subdivided blade, each leaflet of the blade being separated along a main or secondary vein. The leaflets may have petiolules and stipels, the equivalents of the petioles and stipules of leaves. Because each leaflet can appear to be a simple leaf, it is important to recognize where the petiole occurs to identify a compound leaf. Compound leaves are a characteristic of some families of higher plants, such as the Fabaceae. The middle vein of a compound leaf or a frond, when it is present, is called a rachis.
Petiolated leaves have a petiole (leaf stalk), and are said to be petiolate.
Sessile (epetiolate) leaves have no petiole and the blade attaches directly to the stem. Subpetiolate leaves are nearly petiolate or have an extremely short petiole and may appear to be sessile.
In clasping or decurrent leaves, the blade partially surrounds the stem.
When the leaf base completely surrounds the stem, the leaves are said to be perfoliate, such as in Eupatorium perfoliatum.
In peltate leaves, the petiole attaches to the blade inside the blade margin.
In some Acacia species, such as the koa tree (Acacia koa), the petioles are expanded or broadened and function like leaf blades; these are called phyllodes. There may or may not be normal pinnate leaves at the tip of the phyllode.
A stipule, present on the leaves of many dicotyledons, is an appendage on each side at the base of the petiole, resembling a small leaf. Stipules may be lasting and not be shed (a stipulate leaf, such as in roses and beans), or be shed as the leaf expands, leaving a stipule scar on the twig (an exstipulate leaf).
The situation, arrangement, and structure of the stipules is called the "stipulation".
Veins (sometimes referred to as nerves) constitute one of the more visible leaf traits or characteristics. The veins in a leaf represent the vascular structure of the organ, extending into the leaf via the petiole and providing transportation of water and nutrients between leaf and stem; they play a crucial role in the maintenance of leaf water status and photosynthetic capacity, and also in the mechanical support of the leaf.[20][21] Within the lamina of the leaf, while some vascular plants possess only a single vein, in most this vasculature generally divides (ramifies) according to a variety of patterns (venation) and forms cylindrical bundles, usually lying in the median plane of the mesophyll, between the two layers of epidermis.[22] This pattern is often specific to taxa; angiosperms possess two main types, parallel and reticulate (net-like). In general, parallel venation is typical of monocots, while reticulate is more typical of eudicots and magnoliids ("dicots"), though there are many exceptions.[23][22][24]
The vein or veins entering the leaf from the petiole are called primary or first-order veins. The veins branching from these are secondary or second-order veins. These primary and secondary veins are considered major veins or lower-order veins, though some authors include third order.[25] Each subsequent branching is sequentially numbered, and these are the higher-order veins, each branching being associated with a narrower vein diameter.[26] In parallel-veined leaves, the primary veins run parallel and equidistant to each other for most of the length of the leaf and then converge or fuse (anastomose) towards the apex. Usually, many smaller minor veins interconnect these primary veins, but they may terminate with very fine vein endings in the mesophyll. Minor veins are more typical of angiosperms, which may have as many as four higher orders.[25] In contrast, in leaves with reticulate venation there is a single (sometimes more than one) primary vein in the centre of the leaf, referred to as the midrib or costa, which is continuous with the vasculature of the petiole more proximally. The midrib then branches into a number of smaller secondary veins, also known as second-order veins, that extend toward the leaf margins. These often terminate in a hydathode, a secretory organ, at the margin. In turn, smaller veins branch from the secondary veins, known as tertiary or third-order (or higher-order) veins, forming a dense reticulate pattern. The areas or islands of mesophyll lying between the higher-order veins are called areoles. Some of the smallest veins (veinlets) may have their endings in the areoles, a process known as areolation.[26] These minor veins act as the sites of exchange between the mesophyll and the plant's vascular system.[21] Thus, minor veins collect the products of photosynthesis (photosynthate) from the cells where it takes place, while major veins are responsible for its transport outside of the leaf. At the same time water is being transported in the opposite direction.[27][23][22]
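The order-numbering convention is easy to mimic with a small recursive structure. The sketch below is an invented illustration of the numbering scheme only, not a model of real leaf development:

    # A vein is a dict with a list of child branches; order = depth from the petiole.
    def label_orders(vein, order=1):
        vein["order"] = order
        for branch in vein.get("branches", []):
            label_orders(branch, order + 1)
        return vein

    midrib = {"branches": [{"branches": [{}]}, {"branches": []}]}
    label_orders(midrib)
    print(midrib["order"], [b["order"] for b in midrib["branches"]])  # 1 [2, 2]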
The number of vein endings is very variable, as is whether second-order veins end at the margin or link back to other veins.[24] There are many elaborate variations on the patterns that the leaf veins form, and these have functional implications. Of these, angiosperms have the greatest diversity.[25] Within these, the major veins function as the support and distribution network for leaves and are correlated with leaf shape. For instance, the parallel venation found in most monocots correlates with their elongated leaf shape and wide leaf base; reticulate venation is seen in simple entire leaves, whereas digitate leaves typically have venation in which three or more primary veins diverge radially from a single point.[28][21][26][29]
In evolutionary terms, early-emerging taxa tend to have dichotomous branching, with reticulate systems emerging later. Veins appeared in the Permian period (299–252 mya), prior to the appearance of angiosperms in the Triassic (252–201 mya), during which vein hierarchy appeared, enabling higher function, larger leaf size and adaptation to a wider variety of climatic conditions.[25] Although it is the more complex pattern, branching veins appear to be plesiomorphic and in some form were present in ancient seed plants as long as 250 million years ago. A pseudo-reticulate venation that is actually a highly modified penniparallel one is an autapomorphy of some Melanthiaceae, which are monocots; e.g., Paris quadrifolia (True-lover's Knot). In leaves with reticulate venation, veins form a scaffolding matrix imparting mechanical rigidity to leaves.[30]
Leaves are normally extensively vascularized and typically have networks of vascular bundles containing xylem, which supplies water for photosynthesis, and phloem, which transports the sugars produced by photosynthesis. Many leaves are covered in trichomes (small hairs) which have diverse structures and functions.
The major tissue systems present are
These three tissue systems typically form a regular organization at the cellular scale. Specialized cells that differ markedly from surrounding cells, and which often synthesize specialized products such as crystals, are termed idioblasts.[31]
[Figure: cross-section of a leaf, showing epidermal cells and spongy mesophyll cells]
The epidermis is the outer layer of cells covering the leaf. It is covered with a waxy cuticle which is impermeable to liquid water and water vapor and forms the boundary separating the plant's inner cells from the external world. The cuticle is in some cases thinner on the lower epidermis than on the upper epidermis, and is generally thicker on leaves from dry climates as compared with those from wet climates.[32] The epidermis serves several functions: protection against water loss by way of transpiration, regulation of gas exchange and secretion of metabolic compounds. Most leaves show dorsoventral anatomy: The upper (adaxial) and lower (abaxial) surfaces have somewhat different construction and may serve different functions.
The epidermis tissue includes several differentiated cell types: epidermal cells, epidermal hair cells (trichomes), and the cells of the stomatal complex, namely guard cells and subsidiary cells. The epidermal cells are the most numerous, largest, and least specialized, and form the majority of the epidermis. They are typically more elongated in the leaves of monocots than in those of dicots.
Chloroplasts are generally absent in epidermal cells, the exception being the guard cells of the stomata. The stomatal pores perforate the epidermis and are surrounded on each side by chloroplast-containing guard cells, and two to four subsidiary cells that lack chloroplasts, forming a specialized cell group known as the stomatal complex. The opening and closing of the stomatal aperture is controlled by the stomatal complex and regulates the exchange of gases and water vapor between the outside air and the interior of the leaf. Stomata therefore play an important role in allowing photosynthesis without letting the leaf dry out. In a typical leaf, the stomata are more numerous over the abaxial (lower) epidermis than the adaxial (upper) epidermis and are more numerous in plants from cooler climates.
Most of the interior of the leaf between the upper and lower layers of epidermis is a parenchyma (ground tissue) or chlorenchyma tissue called the mesophyll (Greek for "middle leaf"). This assimilation tissue is the primary location of photosynthesis in the plant. The products of photosynthesis are called "assimilates".
In ferns and most flowering plants, the mesophyll is divided into two layers: an upper palisade layer of tightly packed, vertically elongated cells, and below it a spongy layer of more irregular, loosely packed cells separated by large intercellular air spaces.
Leaves are normally green, due to chlorophyll in chloroplasts in the mesophyll cells. Plants that lack chlorophyll cannot photosynthesize.
The veins are the vascular tissue of the leaf and are located in the spongy layer of the mesophyll. The pattern of the veins is called venation. In angiosperms the venation is typically parallel in monocotyledons and forms an interconnecting network in broad-leaved plants. They were once thought to be typical examples of pattern formation through ramification, but they may instead exemplify a pattern formed in a stress tensor field.[33][34][35]
A vein is made up of a vascular bundle. At the core of each bundle are clusters of two distinct types of conducting cells: xylem and phloem.
The xylem typically lies on the adaxial side of the vascular bundle and the phloem typically lies on the abaxial side. Both are embedded in a dense parenchyma tissue, called the sheath, which usually includes some structural collenchyma tissue.
According to Agnes Arber's partial-shoot theory of the leaf, leaves are partial shoots,[36] being derived from leaf primordia of the shoot apex. Early in development they are dorsiventrally flattened with both dorsal and ventral surfaces.[14] Compound leaves are closer to shoots than simple leaves. Developmental studies have shown that compound leaves, like shoots, may branch in three dimensions.[37][38] On the basis of molecular genetics, Eckardt and Baum (2010) concluded that "it is now generally accepted that compound leaves express both leaf and shoot properties."[39]
Plants respond and adapt to environmental factors, such as light and mechanical stress from wind. Leaves need to support their own mass and align themselves in such a way as to optimize their exposure to the sun, generally more or less horizontally. However, horizontal alignment maximizes exposure to bending forces and failure from stresses such as wind, snow, hail, falling debris, animals, and abrasion from surrounding foliage and plant structures. Overall leaves are relatively flimsy with regard to other plant structures such as stems, branches and roots.[40]
Both leaf blade and petiole structure influence the leaf's response to forces such as wind, allowing a degree of repositioning to minimize drag and damage, as opposed to resistance. Leaf movement like this may also increase turbulence of the air close to the surface of the leaf, which thins the boundary layer of air immediately adjacent to the surface, increasing the capacity for gas and heat exchange, as well as photosynthesis. Strong wind forces may result in diminished leaf number and surface area, which while reducing drag, involves a trade off of also reducing photosynthesis. Thus, leaf design may involve compromise between carbon gain, thermoregulation and water loss on the one hand, and the cost of sustaining both static and dynamic loads. In vascular plants, perpendicular forces are spread over a larger area and are relatively flexible in both bending and torsion, enabling elastic deforming without damage.[40]
Many leaves rely on hydrostatic support arranged around a skeleton of vascular tissue for their strength, which depends on maintaining leaf water status. Both the mechanics and architecture of the leaf reflect the need for transportation and support. Read and Stokes (2006) consider two basic models, the "hydrostatic" and "I-beam leaf" form (see Fig 1).[40] Hydrostatic leaves such as in Prostanthera lasianthos are large and thin, and may involve the need for multiple leaves rather than single large leaves because of the amount of veins needed to support the periphery of large leaves. But large leaf size favors efficiency in photosynthesis and water conservation, involving further trade-offs. On the other hand, I-beam leaves such as Banksia marginata involve specialized structures to stiffen them. These I-beams are formed from bundle sheath extensions of sclerenchyma meeting stiffened sub-epidermal layers. This shifts the balance from reliance on hydrostatic pressure to structural support, an obvious advantage where water is relatively scarce.[40] Long narrow leaves bend more easily than ovate leaf blades of the same area. Monocots typically have such linear leaves that maximize surface area while minimising self-shading. In these a high proportion of longitudinal main veins provide additional support.[40]
Although not as nutritious as other organs such as fruit, leaves provide a food source for many organisms. The leaf is a vital source of energy production for the plant, and plants have evolved protection against animals that consume leaves, such as tannins, chemicals which hinder the digestion of proteins and have an unpleasant taste. Animals that are specialized to eat leaves are known as folivores.
Some species have cryptic adaptations by which they use leaves in avoiding predators. For example, the caterpillars of some leaf-roller moths will create a small home in the leaf by folding it over themselves. Some sawflies similarly roll the leaves of their food plants into tubes. Females of the Attelabidae, so-called leaf-rolling weevils, lay their eggs into leaves that they then roll up as means of protection. Other herbivores and their predators mimic the appearance of the leaf. Reptiles such as some chameleons, and insects such as some katydids, also mimic the oscillating movements of leaves in the wind, moving from side to side or back and forth while evading a possible threat.
Leaves in temperate, boreal, and seasonally dry zones may be seasonally deciduous (falling off or dying for the inclement season). This mechanism to shed leaves is called abscission. When the leaf is shed, it leaves a leaf scar on the twig. In cold autumns, they sometimes change color, and turn yellow, bright-orange, or red, as various accessory pigments (carotenoids and xanthophylls) are revealed when the tree responds to cold and reduced sunlight by curtailing chlorophyll production. Red anthocyanin pigments are now thought to be produced in the leaf as it dies, possibly to mask the yellow hue left when the chlorophyll is lost—yellow leaves appear to attract herbivores such as aphids.[41] Optical masking of chlorophyll by anthocyanins reduces risk of photo-oxidative damage to leaf cells as they senesce, which otherwise may lower the efficiency of nutrient retrieval from senescing autumn leaves.[42]
In the course of evolution, leaves have adapted to different environments in the following ways:[citation needed]
May be coarsely dentate, having large teeth, or glandular dentate, having teeth which bear glands.
The leaf surface is also host to a large variety of microorganisms; in this context it is referred to as the phyllosphere.
"Hairs" on plants are properly called trichomes. Leaves can show several degrees of hairiness. The meaning of several of the following terms can overlap.
A number of different classification systems of the patterns of leaf veins (venation or veination) have been described,[24] starting with Ettingshausen (1861),[45] together with many different descriptive terms, and the terminology has been described as "formidable".[24] One of the commonest among these is the Hickey system, originally developed for "dicotyledons" and using a number of Ettingshausen's terms derived from Greek (1973–1979):[46][47][48] (see also: Simpson Figure 9.12, p. 468)[24]
Types 4–6 may similarly be subclassified as basal (primaries joined at the base of the blade) or suprabasal (diverging above the blade base), and perfect or imperfect, but also flabellate.
At about the same time, Melville (1976) described a system applicable to all Angiosperms and using Latin and English terminology.[49] Melville also had six divisions, based on the order in which veins develop.
A modified form of the Hickey system was later incorporated into the Smithsonian classification (1999), which proposed seven main types of venation based on the architecture of the primary veins, adding flabellate as an additional main type. Further classification was then made on the basis of secondary veins, with twelve further types, using terms which had been used as subtypes in the original Hickey system.[50]
Further descriptions included the higher order, or minor veins and the patterns of areoles (see Leaf Architecture Working Group, Figures 28–29).[50]
Analyses of vein patterns often fall into consideration of the vein orders, primary vein type, secondary vein type (major veins), and minor vein density. A number of authors have adopted simplified versions of these schemes.[51][24] At its simplest, the primary vein types can be considered in three or four groups depending on the plant divisions being considered:
where palmate refers to multiple primary veins that radiate from the petiole, as opposed to branching from the central main vein in the pinnate form, and encompasses both of Hickey types 4 and 5, which are preserved as subtypes; e.g., palmate-acrodromous (see National Park Service Leaf Guide).[52]
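As a rough illustration of how such a simplified grouping might be applied mechanically, consider the sketch below; the function and its labels are hypothetical, written for this text rather than taken from any published key:

    def primary_venation_group(primary_vein_count, radiating_from_petiole):
        # Simplified grouping by primary veins only; real keys use many more characters.
        if primary_vein_count == 1:
            return "pinnate"   # one midrib with secondaries branching from it
        if radiating_from_petiole:
            return "palmate"   # several primaries radiating from one point
        return "parallel"      # several primaries running side by side

    print(primary_venation_group(1, False))  # pinnate
    print(primary_venation_group(5, True))   # palmate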
Alternatively, Simpson uses:[24]
However, these simplified systems allow for further division into multiple subtypes. Simpson[24] (and others)[54] divides parallel and netted venation (and some use only these two terms for angiosperms)[55] on the basis of the number of primary veins (costa) as follows:
These complex systems are not used much in morphological descriptions of taxa, but have usefulness in plant identification,[24] although criticized as being unduly burdened with jargon.[58]
An older, even simpler system, used in some floras,[59] uses only two categories, open and closed.[60]
There are also many other descriptive terms, often with very specialized usage and confined to specific taxonomic groups.[61] The conspicuousness of veins depends on a number of features. These include the width of the veins, their prominence in relation to the lamina surface and the degree of opacity of the surface, which may hide finer veins. In this regard, veins are called obscure and the order of veins that are obscured and whether upper, lower or both surfaces, further specified.[62][53]
Terms that describe vein prominence include bullate, channelled, flat, guttered, impressed, prominent and recessed (Fig. 6.1 Hawthorne & Lawrence 2013).[58][63] Veins may show different types of prominence in different areas of the leaf. For instance, Pimenta racemosa has a channelled midrib on the upper surface, but this is prominent on the lower surface.[58]
Describing vein prominence:
Describing other features:
The terms megaphyll, macrophyll, mesophyll, notophyll, microphyll, nanophyll and leptophyll are used to describe leaf sizes (in descending order), in a classification devised in 1934 by Christen C. Raunkiær and since modified by others.[70]
en/1983.html.txt
ADDED
@@ -0,0 +1,63 @@
Fire is the rapid oxidation of a material in the exothermic chemical process of combustion, releasing heat, light, and various reaction products.[1][a]
Fire is hot because the conversion of the weak double bond in molecular oxygen, O2, to the stronger bonds in the combustion products carbon dioxide and water releases energy (418 kJ per 32 g of O2); the bond energies of the fuel play only a minor role here.[2] At a certain point in the combustion reaction, called the ignition point, flames are produced. The flame is the visible portion of the fire. Flames consist primarily of carbon dioxide, water vapor, oxygen and nitrogen. If hot enough, the gases may become ionized to produce plasma.[3] Depending on the substances alight, and any impurities outside, the color of the flame and the fire's intensity will be different.
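A quick arithmetic check of the figure quoted above (the 418 kJ and 32 g come from the text; the per-gram value is derived here only for illustration):

    energy_kj = 418.0   # energy released per mole of O2 consumed (from the text)
    mass_o2_g = 32.0    # mass of one mole of molecular oxygen, in grams
    print(f"{energy_kj / mass_o2_g:.1f} kJ released per gram of O2")  # about 13.1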
Fire in its most common form can result in conflagration, which has the potential to cause physical damage through burning. Fire is an important process that affects ecological systems around the globe. The positive effects of fire include stimulating growth and maintaining various ecological systems.
Its negative effects include hazard to life and property, atmospheric pollution, and water contamination.[4] If fire removes protective vegetation, heavy rainfall may lead to an increase in soil erosion by water.[5] Also, when vegetation is burned, the nitrogen it contains is released into the atmosphere, unlike elements such as potassium and phosphorus which remain in the ash and are quickly recycled into the soil. This loss of nitrogen caused by a fire produces a long-term reduction in the fertility of the soil, but this fecundity can potentially be recovered as molecular nitrogen in the atmosphere is "fixed" and converted to ammonia by natural phenomena such as lightning and by leguminous plants that are "nitrogen-fixing" such as clover, peas, and green beans.
Fire has been used by humans in rituals, in agriculture for clearing land, for cooking, generating heat and light, for signaling, propulsion purposes, smelting, forging, incineration of waste, cremation, and as a weapon or mode of destruction.
Fires start when a flammable or a combustible material, in combination with a sufficient quantity of an oxidizer such as oxygen gas or another oxygen-rich compound (though non-oxygen oxidizers exist), is exposed to a source of heat or ambient temperature above the flash point for the fuel/oxidizer mix, and is able to sustain a rate of rapid oxidation that produces a chain reaction. This is commonly called the fire tetrahedron. Fire cannot exist without all of these elements in place and in the right proportions. For example, a flammable liquid will start burning only if the fuel and oxygen are in the right proportions. Some fuel-oxygen mixes may require a catalyst, a substance that is not itself consumed in any chemical reaction during combustion, but which enables the reactants to combust more readily.
Once ignited, a chain reaction must take place whereby fires can sustain their own heat by the further release of heat energy in the process of combustion and may propagate, provided there is a continuous supply of an oxidizer and fuel.
If the oxidizer is oxygen from the surrounding air, the presence of a force of gravity, or of some similar force caused by acceleration, is necessary to produce convection, which removes combustion products and brings a supply of oxygen to the fire. Without gravity, a fire rapidly surrounds itself with its own combustion products and non-oxidizing gases from the air, which exclude oxygen and extinguish the fire. Because of this, the risk of fire in a spacecraft is small when it is coasting in inertial flight.[6][7] This does not apply if oxygen is supplied to the fire by some process other than thermal convection.
Fire can be extinguished by removing any one of the elements of the fire tetrahedron. Consider a natural gas flame, such as from a stove-top burner. The fire can be extinguished by any of the following:
In contrast, fire is intensified by increasing the overall rate of combustion. Methods to do this include balancing the input of fuel and oxidizer to stoichiometric proportions, increasing fuel and oxidizer input in this balanced mix, increasing the ambient temperature so the fire's own heat is better able to sustain combustion, or providing a catalyst, a non-reactant medium in which the fuel and oxidizer can more readily react.
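The extinguish/intensify logic of the fire tetrahedron can be caricatured in a few lines of Python. This is a toy boolean model written for illustration only; real combustion is a matter of continuous rates and proportions, not on/off elements:

    REQUIRED = {"fuel", "oxidizer", "heat", "chain_reaction"}

    def burns(elements):
        # Fire persists only while every element of the tetrahedron is present.
        return REQUIRED.issubset(elements)

    stove_flame = set(REQUIRED)
    print(burns(stove_flame))                  # True
    print(burns(stove_flame - {"oxidizer"}))   # False: smothering extinguishes it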
A flame is a mixture of reacting gases and solids emitting visible, infrared, and sometimes ultraviolet light, the frequency spectrum of which depends on the chemical composition of the burning material and intermediate reaction products. In many cases, such as the burning of organic matter, for example wood, or the incomplete combustion of gas, incandescent solid particles called soot produce the familiar red-orange glow of "fire". This light has a continuous spectrum. Complete combustion of gas has a dim blue color due to the emission of single-wavelength radiation from various electron transitions in the excited molecules formed in the flame. Usually oxygen is involved, but hydrogen burning in chlorine also produces a flame, producing hydrogen chloride (HCl). Other possible combinations producing flames, amongst many, are fluorine and hydrogen, and hydrazine and nitrogen tetroxide. Hydrogen and hydrazine/UDMH flames are similarly pale blue, while burning boron and its compounds, evaluated in mid-20th century as a high energy fuel for jet and rocket engines, emits intense green flame, leading to its informal nickname of "Green Dragon".
The glow of a flame is complex. Black-body radiation is emitted from soot, gas, and fuel particles, though the soot particles are too small to behave like perfect blackbodies. There is also photon emission by de-excited atoms and molecules in the gases. Much of the radiation is emitted in the visible and infrared bands. The color depends on temperature for the black-body radiation, and on chemical makeup for the emission spectra. The dominant color in a flame changes with temperature. The photo of the forest fire in Canada is an excellent example of this variation. Near the ground, where most burning is occurring, the fire is white, the hottest color possible for organic material in general, or yellow. Above the yellow region, the color changes to orange, which is cooler, then red, which is cooler still. Above the red region, combustion no longer occurs, and the uncombusted carbon particles are visible as black smoke.
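The temperature dependence of black-body color can be made concrete with Wien's displacement law, standard physics rather than anything specific to this article; the temperatures below are illustrative:

    WIEN_B = 2.898e-3  # Wien's displacement constant, metre-kelvins

    def peak_wavelength_nm(temp_k):
        # Wavelength at which an ideal black body radiates most strongly.
        return WIEN_B / temp_k * 1e9

    for temp_k in (1000, 1500, 2000):  # roughly red- to white-hot
        print(temp_k, "K ->", round(peak_wavelength_nm(temp_k)), "nm")
    # All three peaks lie in the infrared; the visible glow of a flame comes
    # from the short-wavelength tail of the spectrum, plus soot incandescence.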
The common distribution of a flame under normal gravity conditions depends on convection, as soot tends to rise to the top of a general flame, as in a candle in normal gravity conditions, making it yellow. In micro gravity or zero gravity,[8] such as an environment in outer space, convection no longer occurs, and the flame becomes spherical, with a tendency to become more blue and more efficient (although it may go out if not moved steadily, as the CO2 from combustion does not disperse as readily in micro gravity, and tends to smother the flame). There are several possible explanations for this difference, of which the most likely is that the temperature is sufficiently evenly distributed that soot is not formed and complete combustion occurs.[9] Experiments by NASA reveal that diffusion flames in micro gravity allow more soot to be completely oxidized after they are produced than diffusion flames on Earth, because of a series of mechanisms that behave differently in micro gravity when compared to normal gravity conditions.[10] These discoveries have potential applications in applied science and industry, especially concerning fuel efficiency.
In combustion engines, various steps are taken to eliminate a flame. The method depends mainly on whether the fuel is oil, wood, or a high-energy fuel such as jet fuel.
It is true that objects at specific temperatures do radiate visible light. Objects whose surface is at a temperature above approximately 470 °C (878 °F) will glow, emitting light at a color that indicates the temperature of that surface. See the section on red heat for more about this effect. It is a misconception that one can judge the temperature of a fire by the color of its flames or the sparks in the flames. For many reasons, chemically and optically, these colors may not match the red/orange/yellow/white heat temperatures on the chart. Barium nitrate burns a bright green, for instance, and this is not present on the heat chart.
The "adiabatic flame temperature" of a given fuel and oxidizer pair indicates the temperature at which the gases achieve stable combustion.
Every natural ecosystem has its own fire regime, and the organisms in those ecosystems are adapted to or dependent upon that fire regime. Fire creates a mosaic of different habitat patches, each at a different stage of succession.[12] Different species of plants, animals, and microbes specialize in exploiting a particular stage, and by creating these different types of patches, fire allows a greater number of species to exist within a landscape.
The fossil record of fire first appears with the establishment of a land-based flora in the Middle Ordovician period, 470 million years ago,[13] permitting the accumulation of oxygen in the atmosphere as never before, as the new hordes of land plants pumped it out as a waste product. When this concentration rose above 13%, it permitted the possibility of wildfire.[14] Wildfire is first recorded in the Late Silurian fossil record, 420 million years ago, by fossils of charcoalified plants.[15][16] Apart from a controversial gap in the Late Devonian, charcoal is present ever since.[16] The level of atmospheric oxygen is closely related to the prevalence of charcoal: clearly oxygen is the key factor in the abundance of wildfire.[17] Fire also became more abundant when grasses radiated and became the dominant component of many ecosystems, around 6 to 7 million years ago;[18] this kindling provided tinder which allowed for the more rapid spread of fire.[17] These widespread fires may have initiated a positive feedback process, whereby they produced a warmer, drier climate more conducive to fire.[17]
The ability to control fire was a dramatic change in the habits of early humans. Making fire to generate heat and light made it possible for people to cook food, simultaneously increasing the variety and availability of nutrients and reducing disease by killing organisms in the food.[19] The heat produced would also help people stay warm in cold weather, enabling them to live in cooler climates. Fire also kept nocturnal predators at bay. Evidence of cooked food is found from 1.9 million years ago,[dubious – discuss] although fire was probably not used in a controlled fashion until 400,000 years ago.[20] There is some evidence that fire may have been used in a controlled fashion about 1 million years ago.[21][22] Evidence becomes widespread around 50 to 100 thousand years ago, suggesting regular use from this time; interestingly, resistance to air pollution started to evolve in human populations at a similar point in time.[20] The use of fire became progressively more sophisticated, with it being used to create charcoal and to control wildlife from 'tens of thousands' of years ago.[20]
Fire has also been used for centuries as a method of torture and execution, as evidenced by death by burning as well as torture devices such as the iron boot, which could be filled with water, oil, or even lead and then heated over an open fire to the agony of the wearer.
By the Neolithic Revolution,[citation needed] during the introduction of grain-based agriculture, people all over the world used fire as a tool in landscape management. These fires were typically controlled burns or "cool fires",[citation needed] as opposed to uncontrolled "hot fires", which damage the soil. Hot fires destroy plants and animals, and endanger communities. This is especially a problem in the forests of today where traditional burning is prevented in order to encourage the growth of timber crops. Cool fires are generally conducted in the spring and autumn. They clear undergrowth, burning up biomass that could trigger a hot fire should it get too dense. They provide a greater variety of environments, which encourages game and plant diversity. For humans, they make dense, impassable forests traversable. Another human use for fire in regards to landscape management is its use to clear land for agriculture. Slash-and-burn agriculture is still common across much of tropical Africa, Asia and South America. "For small farmers, it is a convenient way to clear overgrown areas and release nutrients from standing vegetation back into the soil", said Miguel Pinedo-Vasquez, an ecologist at the Earth Institute’s Center for Environmental Research and Conservation.[23] However this useful strategy is also problematic. Growing population, fragmentation of forests and warming climate are making the earth's surface more prone to ever-larger escaped fires. These harm ecosystems and human infrastructure, cause health problems, and send up spirals of carbon and soot that may encourage even more warming of the atmosphere – and thus feed back into more fires. Globally today, as much as 5 million square kilometres – an area more than half the size of the United States – burns in a given year.[23]
There are numerous modern applications of fire. In its broadest sense, fire is used by nearly every human being on earth in a controlled setting every day. Users of internal combustion vehicles employ fire every time they drive. Thermal power stations provide electricity for a large percentage of humanity.
The use of fire in warfare has a long history. Fire was the basis of all early thermal weapons. Homer detailed the use of fire by Greek soldiers who hid in a wooden horse to burn Troy during the Trojan war. Later the Byzantine fleet used Greek fire to attack ships and men. In the First World War, the first modern flamethrowers were used by infantry, and were successfully mounted on armoured vehicles in the Second World War. In the latter war, incendiary bombs were used by Axis and Allies alike, notably on Tokyo, Rotterdam, London, Hamburg and, notoriously, at Dresden; in the latter two cases firestorms were deliberately caused in which a ring of fire surrounding each city[citation needed] was drawn inward by an updraft caused by a central cluster of fires. The United States Army Air Force also extensively used incendiaries against Japanese targets in the latter months of the war, devastating entire cities constructed primarily of wood and paper houses. The use of napalm was employed in July 1944, towards the end of the Second World War;[25] although its use did not gain public attention until the Vietnam War.[25] Molotov cocktails were also used.
Setting fuel aflame releases usable energy. Wood was a prehistoric fuel, and is still viable today. The use of fossil fuels, such as petroleum, natural gas, and coal, in power plants supplies the vast majority of the world's electricity today; the International Energy Agency states that nearly 80% of the world's power came from these sources in 2002.[27] The fire in a power station is used to heat water, creating steam that drives turbines. The turbines then spin an electric generator to produce electricity. Fire is also used to provide mechanical work directly, in both external and internal combustion engines.
The unburnable solid remains of a combustible material left after a fire is called clinker if its melting point is below the flame temperature, so that it fuses and then solidifies as it cools, and ash if its melting point is above the flame temperature.
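The clinker/ash distinction described here is a simple threshold rule, shown below as a short illustrative function (names invented for this sketch):

    def solid_residue(melting_point_k, flame_temp_k):
        # Fuses and re-solidifies on cooling (clinker) if it melted in the flame.
        return "clinker" if melting_point_k < flame_temp_k else "ash"

    print(solid_residue(1200, 1800))  # clinker
    print(solid_residue(2300, 1800))  # ash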
Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns.[28][29] Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.[30]
Fire fighting services are provided in most developed areas to extinguish or contain uncontrolled fires. Trained firefighters use fire apparatus, water supply resources such as water mains and fire hydrants or they might use A and B class foam depending on what is feeding the fire.
Fire prevention is intended to reduce sources of ignition. Fire prevention also includes education to teach people how to avoid causing fires.[31] Buildings, especially schools and tall buildings, often conduct fire drills to inform and prepare citizens on how to react to a building fire. Purposely starting destructive fires constitutes arson and is a crime in most jurisdictions.[32]
Model building codes require passive fire protection and active fire protection systems to minimize damage resulting from a fire. The most common form of active fire protection is fire sprinklers. To maximize passive fire protection of buildings, building materials and furnishings in most developed countries are tested for fire-resistance, combustibility and flammability. Upholstery, carpeting and plastics used in vehicles and vessels are also tested.
Where fire prevention and fire protection have failed to prevent damage, fire insurance can mitigate the financial impact.[33]
Different restoration methods and measures are used depending on the type of fire damage that occurred. Restoration after fire damage can be performed by property management teams, building maintenance personnel, or by the homeowners themselves; however, contacting a certified professional fire damage restoration specialist is often regarded as the safest way to restore fire damaged property due to their training and extensive experience.[34] Most are usually listed under "Fire and Water Restoration" and they can help speed repairs, whether for individual homeowners or for the largest of institutions.[35]
Fire and Water Restoration companies are regulated by the appropriate state's Department of Consumer Affairs – usually the state contractors license board. In California, all Fire and Water Restoration companies must register with the California Contractors State License Board.[36] Presently, the California Contractors State License Board has no specific classification for "water and fire damage restoration." Hence, the Contractor's State License Board requires both an asbestos certification (ASB) as well as a demolition classification (C-21) in order to perform Fire and Water Restoration work.[37]
en/1984.html.txt
ADDED
@@ -0,0 +1,61 @@
February is the second month of the year in the Julian and Gregorian calendars. The month has 28 days in common years and 29 days in leap years, with the quadrennial 29th day being called the leap day. It is the first of five months to have fewer than 31 days (the other four being April, June, September, and November) and the only one to have fewer than 30 days. The other seven months have 31 days. February starts on the same day of the week as March and November in common years and August in leap years. It ends on the same day of the week as October in all years. In leap years preceding common years or common years preceding leap years, it begins on the same day of the week as May of the following year and ends on the same day of the week as April of the following year. In common years preceding common years, it begins on the same day of the week as August of the following year and ends on the same day of the week as July of the following year. It also begins on the same day of the week as June of the previous year and ends on the same day of the week as August and November of the previous year. In 2020, February had 29 days.
February is the third and last month of meteorological winter in the Northern Hemisphere. In the Southern Hemisphere, February is the third and last month of summer (being the seasonal equivalent of what is August in the Northern Hemisphere).
February is pronounced either as /ˈfɛbjuɛri/ (listen) FEB-yoo-err-ee or /ˈfɛbruɛri/ FEB-roo-err-ee. Many people drop the first "r", replacing it with /j/, as if it were spelled "Febuary". This comes about by analogy with "January" (/ˈdʒænjuɛri/ (listen)), as well as by a dissimilation effect whereby having two "r"s close to each other causes one to change for ease of pronunciation.[1]
The Roman month Februarius was named after the Latin term februum, which means purification, via the purification ritual Februa held on February 15 (full moon) in the old lunar Roman calendar. January and February were the last two months to be added to the Roman calendar, since the Romans originally considered winter a monthless period. They were added by Numa Pompilius about 713 BC. February remained the last month of the calendar year until the time of the decemvirs (c. 450 BC), when it became the second month. At certain times February was truncated to 23 or 24 days, and a 27-day intercalary month, Intercalaris, was occasionally inserted immediately after February to realign the year with the seasons.
February observances in Ancient Rome included Amburbium (precise date unknown), Sementivae (February 2), Februa (February 13–15), Lupercalia (February 13–15), Parentalia (February 13–22), Quirinalia (February 17), Feralia (February 21), Caristia (February 22), Terminalia (February 23), Regifugium (February 24), and Agonium Martiale (February 27). These days do not correspond to the modern Gregorian calendar.
Under the reforms that instituted the Julian calendar, Intercalaris was abolished, leap years occurred regularly every fourth year, and in leap years February gained a 29th day. Thereafter, it remained the second month of the calendar year, meaning the order that months are displayed (January, February, March, ..., December) within a year-at-a-glance calendar. Even during the Middle Ages, when the numbered Anno Domini year began on March 25 or December 25, the second month was February whenever all twelve months were displayed in order. The Gregorian calendar reforms made slight changes to the system for determining which years were leap years, but also contained a 29-day February.
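The resulting Gregorian rule for determining leap years is compact enough to state in code; this sketch is equivalent to Python's built-in calendar.isleap:

    def is_leap(year):
        # Gregorian rule: every 4th year, except century years not divisible by 400.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    print(is_leap(2020), is_leap(1900), is_leap(2000))  # True False True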
Historical names for February include the Old English terms Solmonath (mud month) and Kale-monath (named for cabbage) as well as Charlemagne's designation Hornung. In Finnish, the month is called helmikuu, meaning "month of the pearl"; when snow melts on tree branches, it forms droplets, and as these freeze again, they are like pearls of ice. In Polish and Ukrainian, respectively, the month is called luty or лютий ("lyutiy"), meaning the month of ice or hard frost. In Macedonian the month is sechko (сечко), meaning month of cutting (wood). In Czech, it is called únor, meaning month of submerging (of river ice).
In Slovene, February is traditionally called svečan, related to icicles or Candlemas.[2] This name originates from sičan,[3] written as svičan in the New Carniolan Almanac from 1775 and changed to its final form by Franc Metelko in his New Almanac from 1824. The name was also spelled sečan, meaning "the month of cutting down of trees".[2]
In 1848, a proposal was put forward in Kmetijske in rokodelske novice by the Slovene Society of Ljubljana to call this month talnik (related to ice melting), but it did not stick. The idea was proposed by a priest, Blaž Potočnik.[4] Another name of February in Slovene was vesnar, after the mythological character Vesna.[5]
Having only 28 days in common years, February is the only month of the year that can pass without a single full moon. Using Coordinated Universal Time as the basis for determining the date and time of a full moon, this last happened in 2018 and will next happen in 2037.[6][7] The same is true regarding a new moon: again using Coordinated Universal Time as the basis, this last happened in 2014 and will next happen in 2033.[8][9]
February is also the only month of the calendar that, once every six years and twice every 11 years consecutively, either back into the past or forward into the future, has four full 7-day weeks. In countries that start their week on a Monday, it occurs as part of a common year starting on Friday, in which February 1st is a Monday and the 28th is a Sunday; this occurred in 1965, 1971, 1982, 1993, 1999 and 2010, and will occur again in 2021. In countries that start their week on a Sunday, it occurs in a common year starting on Thursday, with the next occurrence in 2026, and previous occurrences in 1987, 1998, 2009 and 2015. The pattern is broken by a skipped leap year, but no leap year has been skipped since 1900 and no others will be skipped until 2100.
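That pattern can be checked directly with Python's standard library. The sketch below, written for this text, takes the Monday-start convention, so a "full week" here means Monday through Sunday:

    import calendar

    def feb_has_four_full_weeks(year):
        # True when February has exactly 28 days and February 1 falls on a Monday.
        return (not calendar.isleap(year)
                and calendar.weekday(year, 2, 1) == calendar.MONDAY)

    print([y for y in range(1965, 2022) if feb_has_four_full_weeks(y)])
    # -> [1965, 1971, 1982, 1993, 1999, 2010, 2021]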
February meteor showers include the Alpha Centaurids (appearing in early February), the Beta Leonids, also known as the March Virginids (lasting from February 14 to April 25, peaking around March 20), the Delta Cancrids (appearing December 14 to February 14, peaking on January 17), the Omicron Centaurids (late January through February, peaking in mid-February), Theta Centaurids (January 23 – March 12, only visible in the southern hemisphere), Eta Virginids (February 24 and March 27, peaking around March 18), and Pi Virginids (February 13 and April 8, peaking between March 3 and March 9).
The western zodiac signs of February are Aquarius (until February 18 in 2020) and Pisces (from February 19, 2020 onwards). In 2021 the boundary will shift to February 17–18 due to the leap day in 2020.[10][11]
This list does not necessarily imply either official status or general observance.
(Please note that all Baha'i, Islamic, and Jewish observances begin at the sundown prior to the date listed, and end at sundown of the date in question unless otherwise noted.)
First Saturday: February 1
First Sunday: February 2
First Week of February (first Monday, ending on Sunday): February 2–9
First Monday: February 3
First Friday: February 7
Second Saturday: February 8
Second Sunday: February 9
Second Monday: February 10
Second Tuesday: February 11
Week of February 22: February 16–22
Third Monday: February 17
Third Thursday: February 20
Third Friday: February 21
Last Friday: February 28
Last Saturday: February 29
Last day of February: February 29
en/1985.html.txt
ADDED
@@ -0,0 +1,30 @@
Fiction generally is a narrative form, in any medium, consisting of people, events, or places that are imaginary—in other words, not based strictly on history or fact.[1][2][3] In its most narrow usage, fiction refers to written narratives in prose and often specifically novels,[4][5] though also novellas and short stories. More broadly, fiction has come to encompass stories with imaginary elements in any format, including not just writings but also most live theatrical performances, films, television programs, radio dramas, comics, role-playing games, and video games.
A work of fiction implies the inventive construction of an imaginary world and, most commonly, its fictionality is publicly acknowledged, so its audience typically expects it to deviate in some ways from the real world rather than presenting only characters who are actual people or descriptions that are factually true.[6] Fiction is generally understood as not adhering precisely to the real world, which also opens it up to various interpretations.[7] Characters and events within a fictional work may even be set in their own context entirely separate from the known universe: an independent fictional universe.
In contrast to fiction is its traditional opposite: non-fiction, in which the creator assumes responsibility for presenting only the historical and factual truth. Despite the usual distinction between fiction and non-fiction, some fiction creators certainly attempt to make their audience believe the work is non-fiction or otherwise blur the boundary, often through forms of experimental fiction (including some postmodern fiction and autofiction)[8] or even through deliberate literary fraud.[9]
Traditionally, fiction includes novels, short stories, fables, legends, myths, fairy tales, epic and narrative poetry, plays (including operas, musicals, dramas, puppet plays, and various kinds of theatrical dances). However, fiction may also encompass comic books, and many animated cartoons, stop motions, anime, manga, films, video games, radio programs, television programs (comedies and dramas), etc.
The Internet has had a major impact on the creation and distribution of fiction, calling into question the feasibility of copyright as a means to ensure royalties are paid to copyright holders.[10] Also, digital libraries such as Project Gutenberg make public domain texts more readily available. The combination of inexpensive home computers, the Internet, and the creativity of its users has also led to new forms of fiction, such as interactive computer games or computer-generated comics. Countless forums for fan fiction can be found online, where loyal followers of specific fictional realms create and distribute derivative stories. The Internet is also used for the development of blog fiction, where a story is delivered through a blog either as flash fiction or serial blog, and collaborative fiction, where a story is written sequentially by different authors, or the entire text can be revised by anyone using a wiki.
Types of literary fiction in prose are distinguished by relative length and include:[11]
Fiction is commonly broken down into a variety of genres: subsets of fiction, each differentiated by a particular unifying tone or style; set of narrative techniques, archetypes, or other tropes; media content; or other popularly defined criterion. Science fiction, for example, predicts or supposes technologies that are not realities at the time of the work's creation: Jules Verne's novel From the Earth to the Moon was published in 1865, but only in 1969 did astronaut Neil Armstrong become the first human to land on the Moon.
Historical fiction places imaginary characters into real historical events. In the early historical novel Waverley, Sir Walter Scott's fictional character Edward Waverley meets a figure from history, Bonnie Prince Charlie, and takes part in the Battle of Prestonpans. Some works of fiction are slightly or greatly re-imagined based on some originally true story, or a reconstructed biography.[14] Often, even when the fictional story is based on fact, there may be additions and subtractions from the true story to make it more interesting. An example is Tim O'Brien's The Things They Carried, a series of short stories about the Vietnam War.
Fictional works that explicitly involve supernatural, magical, or scientifically impossible elements are often classified under the genre of fantasy, including Lewis Carroll's Alice In Wonderland, J. K. Rowling's Harry Potter series, and J. R. R. Tolkien's The Lord of the Rings. Creators of fantasy sometimes introduce imaginary creatures and beings such as dragons and fairies.[3]
Literary fiction is a term used in the book trade to distinguish novels that are regarded as having literary merit from most commercial or "genre" fiction.
Neal Stephenson has suggested that while any definition will be simplistic there is today a general cultural difference between literary and genre fiction. On the one hand literary authors nowadays are frequently supported by patronage, with employment at a university or a similar institution, and with the continuation of such positions determined not by book sales but by critical acclaim by other established literary authors and critics. On the other hand, he suggests, genre fiction writers tend to support themselves by book sales.[15] However, in an interview, John Updike lamented that "the category of 'literary fiction' has sprung up recently to torment people like me who just set out to write books, and if anybody wanted to read them, terrific, the more the merrier. ... I'm a genre writer of a sort. I write literary fiction, which is like spy fiction or chick lit".[16] Likewise, on The Charlie Rose Show, he argued that this term, when applied to his work, greatly limited him and his expectations of what might come of his writing, so he does not really like it. He suggested that all his works are literary, simply because "they are written in words".[17]
Literary fiction often involves social commentary, political criticism, or reflection on the human condition.[18] In general it focuses on "introspective, in-depth character studies" of "interesting, complex and developed" characters.[18][19] This contrasts with genre fiction where plot is the central concern.[20] Usually in literary fiction the focus is on the "inner story" of the characters who drive the plot, with detailed motivations to elicit "emotional involvement" in the reader.[21][22] The style of literary fiction is often described as "elegantly written, lyrical, and ... layered".[23] The tone of literary fiction can be darker than genre fiction,[24] while the pacing of literary fiction may be slower than popular fiction.[24] As Terrence Rafferty notes, "literary fiction, by its nature, allows itself to dawdle, to linger on stray beauties even at the risk of losing its way".[25]
Realistic fiction typically involves a story whose basic setting (time and location in the world) is real and whose events could feasibly happen in a real-world setting; non-realistic fiction involves a story where the opposite is the case, often being set in an entirely imaginary universe, an alternative history of the world other than that currently understood as true, or some other non-existent location or time-period, sometimes even presenting impossible technology or defiance of the currently understood laws of nature. However, all types of fiction arguably invite their audience to explore real ideas, issues, or possibilities in an otherwise imaginary setting or using what is understood about reality to mentally construct something similar to reality, though still distinct from it.[note 1][note 2]
In terms of the traditional separation between fiction and non-fiction, the lines are now commonly understood as blurred, showing more overlap than mutual exclusion. Even fiction usually has elements of, or grounding in, truth. The distinction between the two may be best defined from the perspective of the audience, according to whom a work is regarded as non-fiction if its people, places, and events are all historically or factually real, while a work is regarded as fiction if it deviates from reality in any of those areas. The distinction between fiction and non-fiction is further obscured by an understanding, on the one hand, that the truth can be presented through imaginary channels and constructions, while, on the other hand, imagination can just as well bring about significant conclusions about truth and reality.[citation needed]
Literary critic James Wood argues that "fiction is both artifice and verisimilitude", meaning that it requires both creative inventions as well as some acceptable degree of believability,[28] a notion often encapsulated in poet Samuel Taylor Coleridge's term: willing suspension of disbelief. Also, infinite fictional possibilities themselves signal the impossibility of fully knowing reality, provocatively demonstrating that there is no criterion to measure constructs of reality.[29]
en/1986.html.txt
ADDED
@@ -0,0 +1,30 @@
Fiction generally is a narrative form, in any medium, consisting of people, events, or places that are imaginary—in other words, not based strictly on history or fact.[1][2][3] In its most narrow usage, fiction refers to written narratives in prose and often specifically novels,[4][5] though also novellas and short stories. More broadly, fiction has come to encompass stories with imaginary elements in any format, including not just writings but also most live theatrical performances, films, television programs, radio dramas, comics, role-playing games, and video games.
A work of fiction implies the inventive construction of an imaginary world and, most commonly, its fictionality is publicly acknowledged, so its audience typically expects it to deviate in some ways from the real world rather than presenting only characters who are actual people or descriptions that are factually true.[6] Fiction is generally understood as not adhering precisely to the real world, which also opens it up to various interpretations.[7] Characters and events within a fictional work may even be set in their own context entirely separate from the known universe: an independent fictional universe.
In contrast to fiction is its traditional opposite: non-fiction, in which the creator assumes responsibility for presenting only the historical and factual truth. Despite the usual distinction between fiction and non-fiction, some fiction creators certainly attempt to make their audience believe the work is non-fiction or otherwise blur the boundary, often through forms of experimental fiction (including some postmodern fiction and autofiction)[8] or even through deliberate literary fraud.[9]
Traditionally, fiction includes novels, short stories, fables, legends, myths, fairy tales, epic and narrative poetry, plays (including operas, musicals, dramas, puppet plays, and various kinds of theatrical dances). However, fiction may also encompass comic books, and many animated cartoons, stop motions, anime, manga, films, video games, radio programs, television programs (comedies and dramas), etc.
The Internet has had a major impact on the creation and distribution of fiction, calling into question the feasibility of copyright as a means to ensure royalties are paid to copyright holders.[10] Also, digital libraries such as Project Gutenberg make public domain texts more readily available. The combination of inexpensive home computers, the Internet, and the creativity of its users has also led to new forms of fiction, such as interactive computer games or computer-generated comics. Countless forums for fan fiction can be found online, where loyal followers of specific fictional realms create and distribute derivative stories. The Internet is also used for the development of blog fiction, where a story is delivered through a blog either as flash fiction or serial blog, and collaborative fiction, where a story is written sequentially by different authors, or the entire text can be revised by anyone using a wiki.
Types of literary fiction in prose are distinguished by relative length and include:[11]
Fiction is commonly broken down into a variety of genres: subsets of fiction, each differentiated by a particular unifying tone or style; set of narrative techniques, archetypes, or other tropes; media content; or other popularly defined criterion. Science fiction, for example, predicts or supposes technologies that are not realities at the time of the work's creation: Jules Verne's novel From the Earth to the Moon was published in 1865, and only in 1969 did astronaut Neil Armstrong become the first person to land on the Moon.
Historical fiction places imaginary characters into real historical events. In the early historical novel Waverley, Sir Walter Scott's fictional character Edward Waverley meets a figure from history, Bonnie Prince Charlie, and takes part in the Battle of Prestonpans. Some works of fiction are slightly or greatly re-imagined based on some originally true story, or a reconstructed biography.[14] Often, even when the fictional story is based on fact, there may be additions and subtractions from the true story to make it more interesting. An example is Tim O'Brien's The Things They Carried, a series of short stories about the Vietnam War.
Fictional works that explicitly involve supernatural, magical, or scientifically impossible elements are often classified under the genre of fantasy, including Lewis Carroll's Alice in Wonderland, J. K. Rowling's Harry Potter series, and J. R. R. Tolkien's The Lord of the Rings. Creators of fantasy sometimes introduce imaginary creatures and beings such as dragons and fairies.[3]
Literary fiction is a term used in the book-trade to distinguish novels that are regarded as having literary merit from most commercial or "genre" fiction.
Neal Stephenson has suggested that while any definition will be simplistic there is today a general cultural difference between literary and genre fiction. On the one hand literary authors nowadays are frequently supported by patronage, with employment at a university or a similar institution, and with the continuation of such positions determined not by book sales but by critical acclaim by other established literary authors and critics. On the other hand, he suggests, genre fiction writers tend to support themselves by book sales.[15] However, in an interview, John Updike lamented that "the category of 'literary fiction' has sprung up recently to torment people like me who just set out to write books, and if anybody wanted to read them, terrific, the more the merrier. ... I'm a genre writer of a sort. I write literary fiction, which is like spy fiction or chick lit".[16] Likewise, on The Charlie Rose Show, he argued that this term, when applied to his work, greatly limited him and his expectations of what might come of his writing, so he does not really like it. He suggested that all his works are literary, simply because "they are written in words".[17]
Literary fiction often involves social commentary, political criticism, or reflection on the human condition.[18] In general it focuses on "introspective, in-depth character studies" of "interesting, complex and developed" characters.[18][19] This contrasts with genre fiction where plot is the central concern.[20] Usually in literary fiction the focus is on the "inner story" of the characters who drive the plot, with detailed motivations to elicit "emotional involvement" in the reader.[21][22] The style of literary fiction is often described as "elegantly written, lyrical, and ... layered".[23] The tone of literary fiction can be darker than genre fiction,[24] while the pacing of literary fiction may be slower than popular fiction.[24] As Terrence Rafferty notes, "literary fiction, by its nature, allows itself to dawdle, to linger on stray beauties even at the risk of losing its way".[25]
Realistic fiction typically involves a story whose basic setting (time and location in the world) is real and whose events could feasibly happen in a real-world setting; non-realistic fiction involves a story where the opposite is the case, often being set in an entirely imaginary universe, an alternative history of the world other than that currently understood as true, or some other non-existent location or time-period, sometimes even presenting impossible technology or defiance of the currently understood laws of nature. However, all types of fiction arguably invite their audience to explore real ideas, issues, or possibilities in an otherwise imaginary setting or using what is understood about reality to mentally construct something similar to reality, though still distinct from it.[note 1][note 2]
In terms of the traditional separation between fiction and non-fiction, the lines are now commonly understood as blurred, showing more overlap than mutual exclusion. Even fiction usually has elements of, or grounding in, truth. The distinction between the two may be best defined from the perspective of the audience, according to whom a work is regarded as non-fiction if its people, places, and events are all historically or factually real, while a work is regarded as fiction if it deviates from reality in any of those areas. The distinction between fiction and non-fiction is further obscured by an understanding, on the one hand, that the truth can be presented through imaginary channels and constructions, while, on the other hand, imagination can just as well bring about significant conclusions about truth and reality.[citation needed]
Literary critic James Wood argues that "fiction is both artifice and verisimilitude", meaning that it requires both creative inventions as well as some acceptable degree of believability,[28] a notion often encapsulated in poet Samuel Taylor Coleridge's term: willing suspension of disbelief. Also, infinite fictional possibilities themselves signal the impossibility of fully knowing reality, provocatively demonstrating that there is no criterion to measure constructs of reality.[29]
en/1987.html.txt
ADDED
The diff for this file is too large to render.
en/1988.html.txt
ADDED
@@ -0,0 +1,90 @@
Fever, also referred to as pyrexia, is defined as having a temperature above the normal range due to an increase in the body's temperature set point.[1][6][7] There is not a single agreed-upon upper limit for normal temperature, with sources using values between 37.2 and 38.3 °C (99.0 and 100.9 °F) in humans.[1][2][8] The increase in set point triggers increased muscle contractions and causes a feeling of cold.[3] This results in greater heat production and efforts to conserve heat.[4] When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat.[4] Rarely, a fever may trigger a febrile seizure, this being more common in young children.[5] Fevers do not typically go higher than 41 to 42 °C (105.8 to 107.6 °F).[7]
A fever can be caused by many medical conditions ranging from non-serious to life-threatening.[12] This includes viral, bacterial, and parasitic infections—such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, COVID-19, and malaria.[12][13] Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication, and cancer.[12][14] It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.[2]
Treatment to reduce fever is generally not required.[3][9] Treatment of associated pain and inflammation, however, may be useful and help a person rest.[9] Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature.[9][10] Measures such as putting a cool damp cloth on the forehead and having a slightly warm bath are not useful and may simply make a person more uncomfortable.[9] Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms.[15] Hyperthermia does require treatment.[3]
Fever is one of the most common medical signs.[3] It is part of about 30% of healthcare visits by children[3] and occurs in up to 75% of adults who are seriously sick.[11] While fever evolved as a defense mechanism, treating fever does not appear to worsen outcomes.[16][17] Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as fever phobia.[3][18]
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams".[19]
A range for normal temperatures has been found.[8] Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures.[25]
Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
In adults, the normal range of oral temperatures in healthy individuals is 33.2–38.2 °C (91.8–100.8 °F), while when taken rectally it is 34.4–37.8 °C (93.9–100.0 °F), for ear measurement it is 35.4–37.8 °C (95.7–100.0 °F), and for armpit (axillary) measurement it is 35.5–37.0 °C (95.9–98.6 °F).[29] Harrison's Principles of Internal Medicine defines a fever as a morning oral temperature of >37.2 °C (>98.9 °F) or an afternoon oral temperature of >37.7 °C (>99.9 °F) although normal daily temperature variation has been described as 0.5 °C (0.9 °F).[1]:4012[verification needed] Normal body temperatures vary depending on many factors, including age, sex, time of day, ambient temperature, activity level, and more.[30][31] A raised temperature is not always a fever; for example, the temperature of a healthy person rises when he or she exercises, but this is not considered a fever, as the set point is normal.[citation needed] On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of 37.3 °C (99.1 °F) may represent a clinically significant fever.[citation needed]
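All of the paired Celsius/Fahrenheit values quoted above follow from the standard conversion formula (plain arithmetic, not specific to the cited sources); as a worked check against the upper end of the armpit range:

$$ T_{\mathrm{F}} = \tfrac{9}{5}\,T_{\mathrm{C}} + 32, \qquad \tfrac{9}{5} \times 37.0 + 32 = 98.6\ {}^{\circ}\mathrm{F}. $$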
Hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.[2] It is an example of a high temperature phenomenon that is not a fever; rather, it occurs from a number of causes including heatstroke, neuroleptic malignant syndrome, malignant hyperthermia, responses to stimulants such as substituted amphetamines and cocaine, idiosyncratic drug reactions, and serotonin syndrome.[32][1]:117–121[verification needed] Hyperthermia differs from hyperpyrexia; see the following section.
Various patterns of measured patient temperatures have been observed, some of which may be indicative of a particular medical diagnosis:
Among the types of intermittent fever are ones specific to cases of malaria caused by different pathogens. These are:[34][35]
In addition, there is disagreement regarding whether a specific fever pattern is associated with Hodgkin's lymphoma—the Pel–Ebstein fever, in which patients are said to present a high temperature for one week, followed by a low temperature for the next week, and so on; the generality of this pattern is debated.[36][needs update]
Persistent fever that cannot be explained after repeated routine clinical inquiries is called fever of unknown origin.[1] A neutropenic fever, also called febrile neutropenia, is a fever in the absence of normal immune system function.[citation needed] Because of the lack of infection-fighting neutrophils, a bacterial infection can spread rapidly; this fever is, therefore, usually considered to require urgent medical attention.[citation needed] This kind of fever is more commonly seen in people receiving immune-suppressing chemotherapy than in apparently healthy people.[citation needed]
An old term, febricula, has been used to refer to low-grade fever, especially if the cause is unknown, no other symptoms are present, and the patient recovers fully in less than a week.[37][better source needed]
Hyperpyrexia is an extreme elevation of body temperature which, depending upon the source, is classified as a core body temperature greater than or equal to 40.0 or 41.0 °C (104.0 or 105.8 °F); the range of hyperpyrexias includes cases considered severe (≥ 40 °C) and extreme (≥ 42 °C).[1][38][39] It differs from hyperthermia in that the thermoregulatory set point for body temperature is raised above normal and heat is then generated to reach it. In contrast, hyperthermia involves body temperature rising above its set point due to outside factors.[1][40] The high temperatures of hyperpyrexia are considered medical emergencies, as they may indicate a serious underlying condition or lead to severe morbidity (including permanent brain damage), or to death.[41] A common cause of hyperpyrexia is an intracranial hemorrhage.[1] Other causes in emergency room settings include sepsis, Kawasaki syndrome,[42] neuroleptic malignant syndrome, drug overdose, serotonin syndrome, and thyroid storm.[41]
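To make the severity bands named above concrete, here is a minimal, illustrative Python sketch; the function name and output strings are hypothetical, the thresholds are the ones stated in the paragraph (which vary by source), and this is not clinical guidance:

def grade_hyperpyrexia(core_temp_c: float) -> str:
    # Grade a core body temperature (degrees Celsius) against the bands
    # described above. Illustrative only; thresholds vary by source.
    if core_temp_c >= 42.0:
        return "extreme hyperpyrexia (>= 42.0 C / 107.6 F)"
    if core_temp_c >= 40.0:
        return "severe hyperpyrexia (>= 40.0 C / 104.0 F)"
    return "below the hyperpyrexia range"

# Example: 41.0 C (105.8 F) falls in the severe band.
print(grade_hyperpyrexia(41.0))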
Fever is a common symptom of many medical conditions:
Adult and pediatric manifestations for the same disease may differ; for instance, in COVID-19, one metastudy describes 92.8% of adults versus 43.9% of children presenting with fever.[13]
In addition, fever can result from a reaction to an incompatible blood product.[45]
Teething is not a cause of fever.[46]
Scholars viewing fever from an organismal and evolutionary perspective note the value to an organism of having a fever response, in particular in response to infective disease.[16][47][48] On the other hand, while fever evolved as a defense mechanism, treating fever does not appear to worsen outcomes.[16][17] Studies using warm-blooded vertebrates suggest that they recover more rapidly from infections or critical illness due to fever.[49] Other studies suggest reduced mortality in bacterial infections when fever was present.[50] Fever is thought to contribute to host defense,[16] as the reproduction of pathogens with strict temperature requirements can be hindered, and the rates of some important immunological reactions[clarification needed] are increased by temperature.[51] Fever has been described in teaching texts as assisting the healing process in various ways, including:
Temperature is regulated in the hypothalamus. The trigger of a fever, called a pyrogen, results in the release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new higher temperature set point. Hence, the hypothalamus can be seen as working like a thermostat.[1] When the set point is raised, the body increases its temperature through both active generation of heat and retention of heat. Peripheral vasoconstriction both reduces heat loss through the skin and causes the person to feel cold. Norepinephrine increases thermogenesis in brown adipose tissue, and muscle contraction through shivering raises the metabolic rate.[54]
If these measures are insufficient to make the blood temperature in the brain match the new set point in the hypothalamus, the brain orchestrates heat effector mechanisms via the autonomic nervous system or primary motor center for shivering. These may be:[citation needed]
When the hypothalamic set point moves back to baseline—either spontaneously or via medication—normal functions such as sweating, and the reverse of the foregoing processes (e.g., vasodilation, end of shivering, and nonshivering heat production) are used to cool the body to the new, lower setting.[citation needed]
This contrasts with hyperthermia, in which the normal setting remains, and the body overheats through undesirable retention of excess heat or over-production of heat. Hyperthermia is usually the result of an excessively hot environment (heat stroke) or an adverse reaction to drugs. Fever can be differentiated from hyperthermia by the circumstances surrounding it and its response to anti-pyretic medications.[1][verification needed]
In infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-exercise-associated thermogenesis, also known as non-shivering thermogenesis).[citation needed]
Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.[citation needed]
A pyrogen is a substance that induces fever.[55] In the presence of an infectious agent, such as bacteria, viruses, viroids, etc., the immune response of the body is to inhibit their growth and eliminate them. The most common pyrogens are endotoxins, which are lipopolysaccharides (LPS) produced by Gram-negative bacteria such as E. coli. But pyrogens include non-endotoxic substances (derived from microorganisms other than gram-negative bacteria, or from chemical substances) as well.[56] The types of pyrogens include internal (endogenous) and external (exogenous) to the body.
The "pyrogenicity" of given pyrogens varies: in extreme cases, bacterial pyrogens can act as superantigens and cause rapid and dangerous fevers.[citation needed]
Endogenous pyrogens are cytokines released from monocytes (which are part of the immune system).[57] In general, they stimulate chemical responses, often in the presence of an antigen, leading to a fever. Whilst they can be a product of external factors like exogenous pyrogens, they can also be induced by internal factors such as damage-associated molecular patterns, as in conditions like rheumatoid arthritis or lupus.[58]
Major endogenous pyrogens are interleukin 1 (α and β)[59]:1237–1248 and interleukin 6 (IL-6).[60] Minor endogenous pyrogens include interleukin-8, tumor necrosis factor-β, macrophage inflammatory protein-α and macrophage inflammatory protein-β as well as interferon-α, interferon-β, and interferon-γ.[59]:1237–1248 Tumor necrosis factor-α (TNF) also acts as a pyrogen, mediated by interleukin 1 (IL-1) release.[61] These cytokine factors are released into general circulation, where they migrate to the brain's circumventricular organs, where they are more easily absorbed than in areas protected by the blood–brain barrier.[citation needed] The cytokines then bind to endothelial receptors on vessel walls or to receptors on microglial cells, resulting in activation of the arachidonic acid pathway.[citation needed]
Of these, IL-1β, TNF, and IL-6 are able to raise the temperature setpoint of an organism and cause fever. These proteins produce a cyclooxygenase which induces the hypothalamic production of PGE2 which then stimulates the release of neurotransmitters such as cyclic adenosine monophosphate and increases body temperature.[62]
Exogenous pyrogens are external to the body and are of microbial origin. In general, these pyrogens, including bacterial cell wall products, may act on Toll-like receptors in the hypothalamus and elevate the thermoregulatory setpoint.[63]
An example of a class of exogenous pyrogens are bacterial lipopolysaccharides (LPS) present in the cell wall of gram-negative bacteria. According to one mechanism of pyrogen action, an immune system protein, lipopolysaccharide-binding protein (LBP), binds to LPS, and the LBP–LPS complex then binds to a CD14 receptor on a macrophage. The LBP-LPS binding to CD14 results in cellular synthesis and release of various endogenous cytokines, e.g., interleukin 1 (IL-1), interleukin 6 (IL-6), and tumor necrosis factor-alpha (TNFα). A further downstream event is activation of the arachidonic acid pathway.[64]
PGE2 release comes from the arachidonic acid pathway. This pathway, as it relates to fever, is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.
PGE2 is the ultimate mediator of the febrile response. The set point temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus. Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving the pituitary gland and various endocrine organs.
Fever does not necessarily need to be treated,[65] and most febrile cases recover without specific medical attention.[66] Although it is unpleasant, fever rarely rises to a dangerous level even if untreated. Damage to the brain generally does not occur until temperatures reach 42 °C (107.6 °F), and it is rare for an untreated fever to exceed 40.6 °C (105 °F).[67] Treating fever in people with sepsis does not affect outcomes.[68]
Limited evidence supports sponging or bathing feverish children with tepid water.[69] The use of a fan or air conditioning may somewhat reduce the temperature and increase comfort. If the temperature reaches the extremely high level of hyperpyrexia, aggressive cooling is required (generally produced mechanically via conduction by applying numerous ice packs across most of the body or direct submersion in ice water).[41] In general, people are advised to keep adequately hydrated.[70] Whether increased fluid intake improves symptoms or shortens respiratory illnesses such as the common cold is not known.[71]
Medications that lower fevers are called antipyretics. The antipyretic ibuprofen is effective in reducing fevers in children.[72] It is more effective than acetaminophen (paracetamol) in children.[72] Ibuprofen and acetaminophen may be safely used together in children with fevers.[73][74] The efficacy of acetaminophen by itself in children with fevers has been questioned.[75] Ibuprofen is also superior to aspirin in children with fevers.[76] Additionally, aspirin is not recommended in children and young adults (those under the age of 16 or 19 depending on the country) due to the risk of Reye's syndrome.[77]
Using both paracetamol and ibuprofen at the same time or alternating between the two is more effective at decreasing fever than using only paracetamol or ibuprofen.[78] It is not clear if it increases child comfort.[78] Response or nonresponse to medications does not predict whether or not a child has a serious illness.[79]
With respect to the effect of antipyretics on the risk of death in those with infection, studies have found mixed results as of 2019.[80] Animal models have found worsened outcomes with the use of antipyretics in influenza as of 2010, but antipyretics have not been studied for this use in humans.[81]
Fever is one of the most common medical signs.[3] It is part of about 30% of healthcare visits by children,[3] and occurs in up to 75% of adults who are seriously sick.[11] About 5% of people who go to an emergency room have a fever.[82]
A number of types of fever were known as early as 460–370 BC, when Hippocrates was practicing medicine, including fever due to malaria (tertian, or every 2 days, and quartan, or every 3 days).[83] It also became clear around this time that fever was a symptom of disease rather than a disease in and of itself.[83]
Fevers were a major source of mortality in humans for about 200,000 years. Until the late nineteenth century, approximately half of all humans died from fever before the age of fifteen.[84]
Fever is often viewed with greater concern by parents and healthcare professionals than might be deserved, a phenomenon known as fever phobia,[3][85] which is based in both caregivers' and parents' misconceptions about fever in children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified "normal" number marked on a thermometer is a clinically significant fever.[85] They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers.[85] The underlying problem, according to professor of pediatrics Barton D. Schmitt, is "as parents we tend to suspect that our children's brains may melt."[86] As a result of these misconceptions, parents are anxious, give the child fever-reducing medicine when the temperature is technically normal or only slightly elevated, and interfere with the child's sleep to give the child more medicine.[85]
Fever is an important feature for the diagnosis of disease in domestic animals. The body temperature of animals, which is taken rectally, is different from one species to another. For example, a horse is said to have a fever above 101 °F (38.3 °C).[87] In species that allow the body to have a wide range of "normal" temperatures, such as camels,[88] it is sometimes difficult to determine a febrile stage.[citation needed] Fever can also be behaviorally induced by invertebrates that do not have immune-system based fever. For instance, some species of grasshopper will thermoregulate to achieve body temperatures that are 2–5 °C higher than normal in order to inhibit the growth of fungal pathogens such as Beauveria bassiana and Metarhizium acridum.[89] Honeybee colonies are also able to induce a fever in response to a fungal parasite Ascosphaera apis.[89]
en/1989.html.txt
ADDED
@@ -0,0 +1,17 @@
A figure of speech or rhetorical figure is an intentional deviation from ordinary language, chosen to produce a rhetorical effect.[1] Figures of speech are traditionally classified into schemes, which vary the ordinary sequence or pattern of words, and tropes, where words are made to carry a meaning other than what they ordinarily signify. A type of scheme is polysyndeton, the repeating of a conjunction before every element in a list, where normally the conjunction would appear only before the last element, as in "Lions and tigers and bears, oh my!"—emphasizing the danger and number of animals more than the prosaic wording with only the second "and". A type of trope is metaphor, describing one thing as something that it clearly is not, in order to lead the mind to compare them, in "All the world's a stage."
Classical rhetoricians classified figures of speech into four categories or quadripartita ratio:[2]
These categories are often still used. The earliest known text listing them, though not explicitly as a system, is the Rhetorica ad Herennium, of unknown authorship, where they are called πλεονασμός (pleonasmos - addition), ἔνδεια (endeia - omission), μετάθεσις (metathesis - transposition) and ἐναλλαγή (enallage - permutation).[3] Quintilian then mentioned them in Institutio Oratoria.[4] Philo of Alexandria also listed them as addition (πρόσθεσις - prosthesis), subtraction (ἀφαίρεσις - afairesis), transposition (μετάθεσις - metathesis), and transmutation (ἀλλοίωσις - alloiosis).[5]
Figures of speech come in many varieties. The aim is to use the language inventively to accentuate the effect of what is being said. A few examples follow:
Scholars of classical Western rhetoric have divided figures of speech into two main categories: schemes and tropes. Schemes (from the Greek schēma, 'form or shape') are figures of speech that change the ordinary or expected pattern of words. For example, the phrase, "John, my best friend" uses the scheme known as apposition. Tropes (from Greek trepein, 'to turn') change the general meaning of words. An example of a trope is irony, which is the use of words to convey the opposite of their usual meaning ("For Brutus is an honorable man; / So are they all, all honorable men").
During the Renaissance, scholars meticulously enumerated and classified figures of speech. Henry Peacham, for example, in his The Garden of Eloquence (1577), enumerated 184 different figures of speech. Professor Robert DiYanni, in his book Literature: Reading Fiction, Poetry, Drama and the Essay[6] wrote: "Rhetoricians have catalogued more than 250 different figures of speech, expressions or ways of using words in a nonliteral sense."
For simplicity, this article divides the figures between schemes and tropes, but does not further sub-classify them (e.g., "Figures of Disorder"). Within each category, words are listed alphabetically. Most entries link to a page that provides greater detail and relevant examples, but a short definition is placed here for convenience. Some of those listed may be considered rhetorical devices, which are similar in many ways.
Using these formulas, a pupil could render the same subject or theme in a myriad of ways. For the mature author, this principle offered a set of tools to rework source texts into a new creation. In short, the quadripartita ratio offered the student or author a ready-made framework, whether for changing words or the transformation of entire texts. Since it concerned relatively mechanical procedures of adaptation that for the most part could be learned, the techniques concerned could be taught at school at a relatively early age, for example in the improvement of pupils’ own writing.